Feasibility of dynamic T2 *‐based oxygen‐enhanced lung MRI at 3T
Abstract Purpose To demonstrate proof-of-concept of a T2*-sensitized oxygen-enhanced MRI (OE-MRI) method at 3T by assessing signal characteristics, repeatability, and reproducibility of dynamic lung OE-MRI metrics in healthy volunteers. Methods We performed sequence-specific simulations for protocol optimisation and acquired free-breathing OE-MRI data from 16 healthy subjects using a dual-echo RF-spoiled gradient echo approach at 3T across two institutions. Non-linear registration and tissue density correction were applied. Derived metrics included percent signal enhancement (PSE), ΔR2*, and wash-in time normalized for breathing rate (τ-nBR). Inter-scanner reproducibility and intra-scanner repeatability were evaluated using the intra-class correlation coefficient (ICC), repeatability coefficient, reproducibility coefficient, and Bland–Altman analysis. Results Simulations and experimental data show negative contrast upon oxygen inhalation, due to substantial dominance of ΔR2* at TE > 0.2 ms. Density correction reduced signal fluctuations. Density-corrected mean PSE values, aligned with simulations, display TE-dependence and an anterior-to-posterior PSE reduction trend at TE1. ΔR2* maps exhibit spatial heterogeneity in oxygen delivery, featuring an anterior-to-posterior R2* increase. Mean T2* values across 32 scans were 0.68 and 0.62 ms for pre- and post-O2 inhalation, respectively. Excellent or good agreement emerged from all intra-scanner, inter-scanner, and inter-rater variability tests for PSE and ΔR2*. However, ICC values for τ-nBR demonstrated limited agreement between repeated measures. Conclusion Our results demonstrate the feasibility of a T2*-weighted method utilizing a dual-echo RF-spoiled gradient echo approach, simultaneously capturing PSE, ΔR2* changes, and oxygen wash-in during free-breathing. The excellent or good repeatability and reproducibility of intra- and inter-scanner PSE and ΔR2* suggest potential utility in multi-center clinical applications.
INTRODUCTION
Oxygen-enhanced MRI (OE-MRI) is a method that has been demonstrated for imaging lung function. 1,2 To date, the majority of OE-MRI studies have made use of T1-weighted acquisitions, which enable regional investigation of oxygen delivery to the tissues and blood pool via ventilation and gas exchange across the alveolar epithelium into the bloodstream, since a change in T1 occurs due to the paramagnetic nature of oxygen dissolved in the parenchyma. [5][6][7][8][9] To date, most OE-MRI studies in lungs have been performed at field strengths of 1.5T or lower, 10 while there is a scarcity of literature on OE-MRI methods at 3T. Previous studies at 3T used T1-weighted single-slice non-selective inversion-recovery half-Fourier acquisition single-shot turbo spin echo (HASTE), 11 a 3D radial UTE pulse sequence with HASTE acquisition, 12 and 3D T1-weighted fast-field echo (FFE). 13 While all these methods used separate free-breathing acquisitions at 21% and 100% O2, methodologies for free-breathing lung OE-MRI over the entire time course, enabling dynamic parametrisation, have not been established at 3T.
There are inherent difficulties in conducting lung MRI at 3T. The magnetic susceptibility differences at the numerous air-tissue interfaces within the lung are greater than at lower field strengths and significantly shorten T2* in the parenchyma, thereby reducing the signal available for gradient echo-based methods. Additionally, the T1 relaxivity of oxygen decreases with increasing field strength, 14 further diminishing the sensitivity of the commonly used ΔR1-based OE-MRI methods. Moreover, when employing gradient echo-based methods, the competing ΔR2* effect becomes substantial and dominates over the ΔR1 effect at 3T, even at short TE. 15 Nevertheless, spoiled gradient echo pulse sequences are the most widely used methods for dynamic MRI data collection, enabling rapid acquisition of images with good spatial coverage and resolution. Although spin echo-based and ultrashort echo time-based methods are not compromised by T2* effects, they may be limited to T1-sensitized "static" (e.g., breath-hold or gated) OE-MRI due to relatively low temporal resolution.
Given the increasing clinical availability of 3T MRI, the above technical challenges underscore the necessity for novel methodological advancements aiming to facilitate the widespread adoption of dynamic OE-MRI at 3T. We hypothesized that T2*-sensitized dynamic OE-MRI, characterized by a dual-echo acquisition, can enhance the sensitivity of lung signal detection, and this work is motivated by a need to evaluate the performance of our proposed method. Furthermore, in order for OE-MRI to find application in clinical research and, ultimately, clinical practice, it is important to harmonize protocols across centers and vendors. Additionally, any derived biomarkers must exhibit satisfactory levels of repeatability and reproducibility. 16 The primary objective of the present study was to demonstrate proof-of-concept of the T2*-sensitized method and an initial assessment of its robustness. Specifically, we aimed (1) to use simulations to characterize the OE-MRI signal across a range of achievable sequence parameters at 3T; (2) to evaluate the feasibility of the T2*-sensitized OE-MRI method at 3T in healthy volunteers; and (3) to assess the repeatability and reproducibility of the dynamic OE-MRI metrics in healthy volunteers across two sites and two vendors.

(Mina Kim and Josephine H. Naish contributed equally to this study.)
METHODS
For dynamic multi-slice OE-MRI acquisition, we implemented a dual-echo RF-spoiled gradient echo sequence to enable estimation of T2*. We aimed to obtain images with a high temporal resolution to minimize motion artifact during free-breathing while maximizing lung coverage and enabling reasonable spatial resolution. We determined that TR = 16 ms and matrix size = 96 × 96 would enable a dynamic temporal resolution <2 s and acquisition of six slices. Both TEs for the dual-echo acquisition should be as short as possible to avoid losing signal due to the low T2*, and the flip angle (FA) should be chosen to maximize the signal difference between normoxia and hyperoxia. The human data experimental workflow is outlined in Figure S1 (Supporting Information).
Simulations
We simulated the signal behavior of the dual-echo RF-spoiled gradient echo sequence at our chosen TR (16 ms) over a range of FA and TE values to match the experimental sequence and protocol, described as follows. 17 First, the expected signal difference between air breathing and 100% oxygen breathing (ΔS) and the percent signal enhancement (PSE; 100% × ΔS/S(air)) in the lung were simulated as a function of FA up to 30° and TE up to 3 ms using the following parameters: T1(air) = 1281 ms, 18 T1(100% O2) = 1102 ms, 18 T2*(air) = 0.68 ms, and T2*(100% O2) = 0.62 ms. Given the absence of previously reported lung hyperoxic T2* values in the literature, we computed T2* values for all subjects in our study, then averaged those values for use in the simulations (Table S1). Then, the expected ΔS values were simulated as a function of TE using an FA of 5°, which was shown to maximize the absolute signal difference between the 21% and 100% oxygen images. Simulations were performed in MATLAB R2022b (MathWorks, Natick, MA).
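For illustration, the steady-state RF-spoiled gradient echo signal and the resulting PSE can be computed directly from the parameters above. The following minimal Python sketch uses the Ernst equation with mono-exponential T2* decay; the study's simulations were performed in MATLAB and may include additional sequence-specific factors, and the TE sampling range here is arbitrary.

```python
import numpy as np

def spgr_signal(fa_deg, tr_ms, te_ms, t1_ms, t2s_ms, m0=1.0):
    """Steady-state RF-spoiled gradient echo signal (Ernst equation) with mono-exponential T2* decay."""
    fa = np.deg2rad(fa_deg)
    e1 = np.exp(-tr_ms / t1_ms)
    return m0 * np.sin(fa) * (1.0 - e1) / (1.0 - np.cos(fa) * e1) * np.exp(-te_ms / t2s_ms)

# Parameter values quoted in the text.
T1_AIR, T1_O2 = 1281.0, 1102.0    # ms
T2S_AIR, T2S_O2 = 0.68, 0.62      # ms
TR, FA = 16.0, 5.0                # ms, degrees

te = np.linspace(0.05, 3.0, 60)   # ms (sampling range chosen for illustration)
s_air = spgr_signal(FA, TR, te, T1_AIR, T2S_AIR)
s_o2 = spgr_signal(FA, TR, te, T1_O2, T2S_O2)

delta_s = s_o2 - s_air            # expected OE signal change (negative beyond very short TE)
pse = 100.0 * delta_s / s_air     # percent signal enhancement
print(f"PSE at TE = 0.71 ms: {np.interp(0.71, te, pse):.1f}%")
```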
Participants
Following
MRI acquisition
Where possible, protocols for the two different scanner manufacturers utilized identical acquisition parameters for a 2D interleaved multi-slice dual-echo RF-spoiled gradient echo sequence, while some options required manufacturer-specific parameters. Site-independent parameters included: six coronal slices of 10 mm thickness with a 4 mm gap and phase-encoding right/left; in-plane resolution 4.69 × 4.69 mm2; FOV covering the entire lungs in all 16 volunteers (except for the inter-slice gaps in the anterior/posterior direction); TR = 16 ms; matrix size = 96 × 96; and dynamic temporal resolution = 1.54 s. Selection of the FA (5°) was based on our simulations (Figure 1). The shortest TE values available for the chosen acquisition were selected for each scanner (L, Philips in London; M, Siemens in Manchester): TE1L = 0.71 ms, TE2L = 1.2 ms; TE1M = 0.81 ms, TE2M = 1.51 ms. Scan parameters for each vendor are listed in Table 1. Subjects were fitted with a disposable, MRI-compatible non-rebreathing mask (Intersurgical, Berkshire, UK) to allow for medical air and 100% oxygen delivery while lying supine in the scanner. Piped gases were delivered to the subject at 15 L/min using a standard low-flow oxygen blender (Inspiration Healthcare, Leicestershire, UK). The initial 60 dynamic acquisitions were obtained while breathing medical air. The gas supply was then switched to 100% O2 for the following 150 dynamic acquisitions, after which the supply was returned to medical air for a further 130 acquisitions. Images were acquired during uncontrolled free-breathing to minimize participant burden and avoid interrupting gas delivery. Total scanning time for the dynamic series was approximately 9 min.
Data analysis
For motion correction, non-linear image registration was performed on the dynamic time series data using Advanced Normalization Tools (ANTs). 19,20 Subsequently, the lung parenchyma, excluding the central major vasculature, was manually segmented from the registered images. For an initial exploration of the data, first, image registration and density correction were performed as described below. Secondly, averaged hyperoxia images (61st to 210th) were subtracted from averaged normoxia images (10th to 60th). Last, mean PSE maps were calculated from the subtracted images normalized to the averaged normoxia images.
For our main data analysis, the dynamic series were fitted using exponential functions to characterize oxygen wash-in (encompassing the downslope between the plateau regions of the curve) and wash-out (the upslope and return to baseline). The baseline for the exponential fit was defined as the averaged signal intensity across all normoxia time points before O2 inhalation, as described in Eq. 1. The curve was fitted with the two functional forms described in Eq. 2 for the downslope and Eq. 3 for the upslope, where A1(x) and A2(x) are the baseline and the fitted negative maximum hyperoxia intensity (or plateau value) at position x, respectively, and τ, tp1, and tp2 are the fitted wash-in time and the provided gas switching time points (tp1, air to O2; tp2, O2 to air). Maximum PSE maps were produced by subtracting the baseline from the negative maximum hyperoxia value (A2 − A1), normalized to the baseline A1. We additionally defined a breathing rate-normalized wash-in time, τ-nBR, as the product of τ and the average breathing rate over the dynamic series.
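As an illustration of this fitting step, the sketch below assumes a mono-exponential approach from the baseline A1 toward the plateau A2 with time constant τ after the first gas switch tp1. The exact functional forms of Eqs. 1–3 are not reproduced here, and the initial guesses, helper names, and breathing-rate handling are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def wash_in(t, a2, tau, a1, tp1):
    """Baseline a1 before the switch at tp1, then a mono-exponential approach toward the plateau a2."""
    s = np.full_like(t, a1, dtype=float)
    post = t >= tp1
    s[post] = a2 + (a1 - a2) * np.exp(-(t[post] - tp1) / tau)
    return s

def fit_wash_in(t, sig, tp1, breathing_rate):
    """Fit the downslope and derive the maximum PSE and the breathing-rate-normalised wash-in time."""
    a1 = sig[t < tp1].mean()                         # baseline: mean of all normoxia time points (Eq. 1)
    model = lambda tt, a2, tau: wash_in(tt, a2, tau, a1, tp1)
    p0 = [sig[t >= tp1][-20:].mean(), 30.0]          # rough guesses for the plateau and wash-in time (s)
    (a2, tau), _ = curve_fit(model, t, sig, p0=p0)
    pse_max = 100.0 * (a2 - a1) / a1                 # maximum PSE (negative for this T2*-weighted method)
    tau_nbr = tau * breathing_rate                   # tau multiplied by the average breathing rate
    return pse_max, tau, tau_nbr
```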
As differences in lung tissue density can influence the measured signal enhancement between normoxia and hyperoxia, time-varying PSE maps were calculated twice, with and without a voxel-wise tissue density correction.
Uncorrected PSE values were calculated by subtracting the normoxia signal (S21) from the hyperoxia signal (S100) and normalizing to S21.
Following the adapted sponge model approach, [21][22][23][24] the whole-lung fractional volume change V was calculated at each time point by averaging the Jacobian determinant from the registration over all voxels in the lung mask across all slices. The Jacobian determinant was only used to obtain an estimate of lung volume change, with density correction based on the signal intensity variation associated with the lung volume change, as described further below. The respiratory index αlocal was estimated voxel-wise (locally at position x) by linear regression of the observed signal intensity S as a function of V. The αlocal values were then applied as a voxel-wise density correction to give the corrected signal SC(t, x). Corrected PSE values were quantified from S100c(t, x) and S21c(t, x), the corrected S100(t, x) and S21(t, x), respectively. To compare pre- and post-density correction, we calculated median PSE values within masks at each TE twice, either across all six slices or across the two most posterior slices, excluding anterior slices with poor SNR. The median PSE value for each slice was then averaged across all subjects. For intra-scanner repeatability, median PSE values were averaged over all six slices at each TE. The R2* of each voxel was quantified analytically from the magnitude-reconstructed signal of the masked lung images acquired at TE1 and TE2 after tissue density correction, as described in Eqs. 5 and 6. ΔR2* maps were calculated by subtracting the mean normoxia R2* map across multiple time points (30th to 60th time series acquisitions) from the mean hyperoxia R2* map across multiple time points (120th to 180th). Median ΔR2* values were averaged over the two most posterior slices for the multi-site comparison between Manchester and London, and over all six slices for the scan-rescan comparison in London.
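A minimal sketch of the two-point R2* estimation and the ΔR2* map computation is given below; the array shape, time-point indices, and default London TE values follow the description above, while the small epsilon guard and function names are illustrative assumptions.

```python
import numpy as np

def r2star_map(s_te1, s_te2, te1_ms, te2_ms, eps=1e-12):
    """Analytic two-point R2* (ms^-1) from dual-echo magnitude images, assuming mono-exponential decay."""
    return np.log((s_te1 + eps) / (s_te2 + eps)) / (te2_ms - te1_ms)

def delta_r2star(series_te1, series_te2, te1_ms=0.71, te2_ms=1.2):
    """ΔR2* map from a density-corrected dynamic series of shape (n_dynamics, ny, nx)."""
    r2s = r2star_map(series_te1, series_te2, te1_ms, te2_ms)
    r2s_normoxia = r2s[29:60].mean(axis=0)     # mean over the 30th-60th dynamics (air breathing)
    r2s_hyperoxia = r2s[119:180].mean(axis=0)  # mean over the 120th-180th dynamics (100% O2)
    return r2s_hyperoxia - r2s_normoxia
```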
Data were analyzed by an experienced (>10 years) MRI physicist using a computational pipeline written in MATLAB R2022b (MathWorks, Natick, MA), taking ∼60 min per subject visit, primarily due to motion correction.
Statistical analysis
Normality was assessed for all metrics using the Shapiro-Wilk test. For non-normally distributed metrics, we log-transformed the data before statistical analyses (Figure S1). We used Bland-Altman plots with 95% limits of agreement (LOA) and derived the repeatability coefficient (RC), reproducibility coefficient (RDC), and intraclass correlation coefficient (ICC) to evaluate the agreement of repeated measures, as recommended in the QIBA guidelines. 25 For the log-transformed metric, we calculated the asymmetric cut-points for RC and RDC through back-transformation, 26 and ICC values were computed as described in Pleil et al. 27 An inter-rater ICC analysis was conducted on London data from eight volunteers at the initial time point, using additional lung masks outlined by a second rater. The agreement levels were: excellent for ICC > 0.74, good for ICC 0.6-0.74, fair for ICC 0.4-0.59, and poor for ICC < 0.4. 28 The coefficient of variation (CV) was calculated across all subjects at each TE. All statistical analyses were performed using SPSS v28.0 (SPSS Inc, Chicago, IL).
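For reference, the sketch below shows one common way to compute Bland-Altman limits of agreement and a repeatability coefficient from paired scan-rescan measurements. The 1.96·√2·wSD form of the RC is a standard QIBA-style definition and is an assumption here, not a reproduction of the exact SPSS computation used in the study.

```python
import numpy as np

def bland_altman_loa(x1, x2):
    """Mean bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def repeatability_coefficient(x1, x2):
    """RC = 1.96 * sqrt(2) * within-subject SD, estimated from two repeated measurements."""
    d = np.asarray(x2, dtype=float) - np.asarray(x1, dtype=float)
    wsd = np.sqrt(np.mean(d ** 2) / 2.0)   # within-subject SD for two replicates per subject
    return 1.96 * np.sqrt(2.0) * wsd
```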
RESULTS

The effect of oxygen on signal intensity: simulations
Simulations show that the ΔT2*-induced negative enhancement (ΔS) for our chosen TR = 16 ms is maximal at FA ≈ 5°, independent of the choice of TE (Figure 1A). The amplitude of the TE dependence of ΔS reduces with smaller FA (Figure 1B), with the negative-going signal change occurring at shorter TE. The magnitude of the negative PSE increases with lower FA and longer TE, while PSE values are closer to 0 at shorter TE and high FA (Figure 1C). We also observed that ΔT2* dominates the signal change and produces negative contrast at TE > 0.2 ms for FA = 5° (Figure 1D). The expected signal change at TE1L (0.71 ms) is about 55% more sensitive to changes in ΔT1 and about 21% more sensitive to changes in ΔT2* than at TE2L (1.2 ms) (Figure 1D).
The effect of oxygen on signal intensity: experimental
The typical location of the acquired images is shown in Figure 2A. PSE maps at both TEs demonstrate uniform PSE across the parenchyma (Figure 2B,D). As expected, due to the dominant effect of T2* changes, time course plots of mean PSE from the masked lungs exhibit negative contrast induced by 100% O2 inhalation (Figure 2C,E), in agreement with the simulated results (Figure 1D). The median signal intensity is higher in posterior slices due to greater proton density, associated with the subjects' supine position (Figures 2C,E and S2).
The effect of density correction
The time course plots (Figure 2C,E) post-density correction (red solid line) show smaller-magnitude signal fluctuation than pre-density correction (blue solid line) due to the reduced impact of respiratory motion-induced signal changes. Example signal time courses with their downslope and upslope fits (Eqs. 2 and 3) also show improvement in the wash-in fitting with tissue density correction (Figure S3).
Table 2 summarizes the mean PSE and CV before and after applying density correction for the evaluation of inter-scanner TE-dependence. Across eight healthy volunteers, the magnitudes of all negative PSE values were reduced by applying tissue density correction. Moreover, this correction enhanced the linearity of the reduction in negative PSE magnitudes. In addition, the CV of the mean PSE also decreased for all values except TE1M PSE (mean PSE values over the two posterior slices) and TE2M PSE (mean PSE values over all six slices). All repeatability metrics were improved by the density correction step (Figure S4), as also described in Section 3.7.
TE dependence
While the mean signal intensity is higher at TE1L than at TE2L (Figure 2C,E), the PSE is greater at TE2L than at TE1L (Figure 2B,D), again in agreement with our simulations (Figure 1B,C). The TE dependence of PSE expected from the simulations (Figure 1C) is also observed in the density-corrected PSE values from the four separate TEs at the two sites (Table 2).
ΔR2* quantification
Figure 3 shows examples of plateau ΔR2* maps across six slices from anterior to posterior and the corresponding time course plots of the median R2* from the maps of the masked lungs for each slice. Median ΔR2* maps illustrate clear O2 delivery in the entire lung (Figure 3A,C,E), with a spatial distribution that is heterogeneous compared to the patterns observed in the PSE maps (Figure 2). Median R2* time course plots show that R2* is largely unaffected by density correction (because the calculation of R2* normalizes for density) except for the last posterior slice. We observed that the discrepancy between the tissue density corrected (red) and uncorrected (blue) ΔR2* plots (Figure 3B) is a common occurrence among volunteers with smaller lung volumes. In such cases, the final posterior slice is aligned with the rear of the lungs, adjacent to the ribcage (Figure 3A), and appears to be influenced by partial volume effects.
Signal variation with slice position in the lung
The PSE at TE1L gradually decreases from anterior to posterior slices across all subjects (Figure 4A), whereas the PSE at TE2L does not noticeably change (Figure 4B). ΔR2* shows a gradual increase from anterior to posterior.
Repeatability and reproducibility
All metrics were normally distributed except a subset of τ-nBR (Table S2).
Example intra-scanner PSE maps show relatively homogeneous enhancement at both TEs (Figure 5A,B).The mean PSE values from eight healthy volunteers varied little between the repeat scans.
The plateau ΔR2* maps from the same data set show a relatively heterogeneous ΔR2* distribution, wherein certain structures, particularly areas of major vasculature, do not appear to respond to 100% O2 inhalation (Figure 5C). This is expected, as the R2* change is mainly due to gaseous oxygen in the alveoli rather than dissolved oxygen, as previously reported. 15 The Bland-Altman analyses of the repeated measurements of PSE and ΔR2* indicate little and insignificant bias between the two intra-scanner measurements in London, as shown by the 95% LOA (Figure 5D-F). The ICC and RC measurements of PSE at TE1L and TE2L, and of intra-scanner ΔR2*, show excellent intra-scanner repeatability (Table 3).
The PSE maps from inter-scanner traveling volunteers' scans show similar spatial distribution of enhancement at TE 1 .However, the PSE observed at TE 2 from the Manchester site exhibits more pronounced noise, which could be attributed to the signal approaching the noise floor at the longer TE (Figure 6A).Plots of the combined PSE values at four separate TEs from the two MRI systems display the expected TE dependence of the signal (Figure 6B, Table 2), similar to simulation (Figure 1C).The △R 2 * maps display a lack of enhancement near major vasculature (Figure 6C).Notably, the △R 2 * maps obtained from the Manchester site continue to show increased noise, a pattern consistent with the TE 2M PSE maps.Nevertheless, △R 2 * comparison between two different scanners again shows little evidence of bias, with the 95% LOA measurements being (−0.06%, 0.06%) (Figure 6D).The values of ICC and RDC observed for inter-scanner △R 2 * comparisons reflect good reproducibility (Table 2).
The ICCinter measurement of the log-transformed τ-nBR values shows fair repeatability and reproducibility, while the ICCintra, RC, and RDC values are in the poor range due to the existence of outliers, as shown in Figure S5.
DISCUSSION
In recent years, installations of clinical 3T MR systems have increased significantly worldwide, often motivated by the higher SNR relative to lower field systems. However, the viability of dynamic lung OE-MRI at 3T has not to date been investigated. In this work, we demonstrate the feasibility of detecting dynamic OE signal change and quantifying ΔR2* due to oxygen breathing at 3T. To progress the translation of these biomarkers toward clinical use, we also evaluate the intra-scanner repeatability and the inter-scanner/cross-site reproducibility of the proposed method. While a limited number of studies to date have demonstrated the feasibility of 3T T1-weighted OE-MRI, [11][12][13] our present study is the first report to analyze detailed T2*-weighted dynamic signal enhancement behavior, repeatability, and reproducibility at 3T. Additionally, this investigation simultaneously entails the quantification of O2-induced ΔR2*. Our motivation for focusing on T2*-related contrast at 3T is twofold. First, the longitudinal relaxivity of O2 is approximately 20% lower at 3T than at 1.5T, 14 leading to a proportionately smaller achievable ΔR1 at 3T. Secondly, T2* in the lung decreases with field strength, meaning that SNR in T1-weighted OE-MRI is much reduced. T2*-based OE-MRI has been proposed to counter some of these detrimental effects, although previously developed methods employed non-standard acquisition methods. 29 Of note, T2*-related signal is potentially more specific to ventilation, as it is expected to be an effect of changing concentrations of oxygen gas in the alveoli rather than dissolved oxygen. 15 In the present study, we optimized a multi-slice dual-echo RF-spoiled gradient echo acquisition; this method enables measurement of dynamic OE signal change at high temporal resolution with controllable T2*-weighting, and monitoring of dynamic ΔR2*, simultaneously, while requiring no or minimal pulse programming. This easy implementation on standard clinical platforms is intended to assist the clinical translation of this technique.
T 2 *-weighting allows good oxygen delivery contrast at 3T
Our simulations (Figure 1D) show the expected dependence of the spoiled gradient echo PSE on both ΔT1 and ΔT2*, which can lead to reduced oxygen-related signal change if TE and FA are not optimized. Maximum (negative) PSE for our chosen TR of 16 ms is found with an FA of ≈5° across a wide range of TE (Figure 1A). Our simulations also indicate that the negative PSE at TE longer than approximately 0.23 ms when using this FA is due to the significant oxygen-level-dependent ΔR2* effect dominating the signal change in the lungs (Figure 1B,D). Our experimental data are consistent with our simulations, with negative PSE observed throughout the lung parenchyma at levels that are in agreement with simulations. Importantly, this allows the generation of visually high-quality mean PSE maps at the TEs used in this study (Figures 2, 5, and 6). The mean PSE values of the traveling healthy volunteers (Table 2) are consistent with the expected trend of PSE with variable TEs from our simulations (Figure 1C). Specifically, the simulated PSE plots, derived from the individual T2* values of the eight healthy volunteers, demonstrate the significant impact of the chosen T2* values on the variability of the simulated PSE values (Figure S6). Nonetheless, the experimental PSE values remain within the range of PSE values predicted by the simulations.
The mean T 2 * value across all healthy volunteers for 21% O 2 inhalation (0.68 ± 0.05 ms) aligns closely with the literature-reported values (0.74 ± 0.1 ms) at 3T. 30 We observed that upon 100% O 2 inhalation, the mean T 2 * value decreased by about 9% relative to normoxia, resulting in ΔR 2 * of 0.14 ± 0.03 ms −1 .To the best of our knowledge, this is the first study that reports mean values of hyperoxic T 2 * and ΔR 2 * of healthy human lungs at 3T.While the influence of T 2 * on PSE is large at 3T, the effect will also be present at lower field strengths when using gradient echo methods and should be accounted for when interpreting nominally T 1 -weighted OE-MRI. 17
T 2 *-weighted OE-MRI demonstrates good repeatability and reproducibility
The Bland-Altman, RC, and ICC analyses of the repeated measurements of PSE and ΔR2* suggest high intra-scanner repeatability (Table 3, Figure 5). They also demonstrate that comparable dynamic OE-MRI protocols for the lung can be implemented at 3T across different sites and scanners with good repeatability and reproducibility for ΔR2*. While the maximum gradient strength and maximum gradient slew rate of the two systems from the different manufacturers are identical, matching TE and bandwidth between scanners proved challenging, which resulted in variability of the OE signal enhancement. For this reason, direct reproducibility assessments for PSE were not feasible, although the variation in PSE with TE between scanners closely aligned with our simulations (Figures 1C and 6B). We were able to assess ΔR2* reproducibility, as T2* signal decay is, to the best of our knowledge, monoexponential with TE in the lung. In this work, we derived a new metric, τ-nBR, to compensate for differences in individual participants' breathing patterns between scans, which have a direct impact on ventilation. The τ-nBR metric showed fair to poor reliability of the dynamic parameter, and this disparity may be attributed to inaccuracies in the gas switching time points, affecting the fitting, as the gas blender was manually operated. Therefore, further investigation is necessary to optimize this measurement (see Section 4.5 for details).
Density correction improves repeatability
While previous studies have demonstrated that density variation due to respiration could provide useful physiological parameters, in the current study, our motivation was to optimize the OE signal.We, therefore, utilized the adapted sponge model, which was introduced by Zha et al. 22 to correct for density variation.In line with the previous reports, [21][22][23][24] our results demonstrate that the density correction significantly improves quantification of OE-MRI metrics by decreasing fluctuation due to respiratory motion-induced signal changes (Table 2, Figure S4).This is particularly useful in posterior slices (in supine position) where fluctuation of signal changes is greater (Figure S2).
The accuracy of the sponge model for density correction depends on the assumption that all signal change is due to density variation associated with ventilation.In practice, it is likely that other factors, such as changes in blood volume and local alveolar susceptibility profiles, also influence the signal change during the breathing cycle, and that these factors may vary depending on disease status.Nevertheless, the clear reduction in breathing-related signal variation after correction provides evidence that the density correction is largely successful in our experiments.Furthermore, our results show that the proposed method at 3T yields excellent intra-scanner repeatability after correction of pixel-wise signal intensity using the deformation fields from image registration (Table 3).Previous studies utilizing a non-Cartesian UTE approach with free-breathing at 1.5T 22 or breath-held acquisitions at 0.55T 10 similarly demonstrated improvement of repeatability in both mean PSE and the low-enhancement percent.
Signal variation with position in the lung
Mean signal intensity for both the baseline and the O2-induced change is observed to be higher in posterior slices across all subjects due to the greater proton density in the subjects' supine position (Figures 2C,E and S2). Interestingly, we also observed that the absolute PSE at TE1 gradually decreases from anterior to posterior slices across all subjects (Figure 4A), whereas the PSE at TE2 does not noticeably change (Figure 4B). While PSE combines both ΔT1 and ΔT2* effects, the PSE at TE1 contains a stronger ΔT1 effect than that at TE2, as shown in the simulation (Figure 1D). On the contrary, the ΔT2* effect is more substantial and dominates over the ΔT1 effect in the PSE at TE2. Thus, this trend may be attributed to an increasing effect of ΔT1 from anterior to posterior, possibly due to increased vessel density and/or blood pooling due to gravity, which requires further investigation. The trend of increasing ΔR2* from anterior to posterior slices may reflect the expected predominant sensitivity of ΔR2* to ventilation, as more ventilation is expected in the posterior slices when the subject is in the supine position. This is consistent with a previous report that T2*-related signal is potentially more specific to ventilation due to an effect of changing concentrations of oxygen gas in the alveoli. 15
Limitations and future directions
The present study has several limitations. First, we employed a 2D multi-slice readout, which was designed to prioritize relatively high temporal resolution and allow reasonable lung coverage while accommodating free-breathing for participant comfort. Although the temporal resolution of 1.54 s is currently the highest achievable in dynamic lung OE-MRI, it still cannot resolve all cardiac and respiratory motion-related artifacts during acquisition. Moreover, a 2D interleaved multi-slice excitation affects the signal variation due to through-slice respiratory motion and inflow of blood, which are likely to be a source of noise for our T2*-sensitized signal. Additionally, the slice gaps and the limited number of slices may mean that some localized pathology could be missed. This could be mitigated by increasing the number of slices, either by increasing TR (thereby lowering temporal resolution and leading to more motion-related image blurring and artifacts) or by employing acceleration methods. Since there may be inconsistencies between slice positions, a multi-slice acquisition also leads to challenges for the inter-scanner, inter-session image registration that is essential for voxel-wise comparison. 3D non-Cartesian UTE OE-MRI methods have been demonstrated at 1.5T and lower field strengths to allow isotropic-resolution whole-lung OE-MRI measurements. 10,17,22,31 However, those studies are limited to static acquisitions, which employed either breath-hold or two separate free-breathing sessions of normoxia and hyperoxia. Dynamic OE-MRI using such methods may be possible by employing temporal view-sharing methods, but we are unaware of any studies to date that have made use of this strategy. Although currently existing dynamic methods utilize respiratory gating approaches, resulting in longer temporal resolution compared to our proposed method, it is worth noting that these 3D dynamic methods offer enhanced spatial resolution and SNR. Consequently, future investigations are needed to explore the implementation of a 3D UTE acquisition in our proposed method while maintaining a reasonable temporal resolution.
Second, our study design lacked a reference standard due to the absence of established dynamic OE-MRI methods at 3T.This also aligns with a key motivation of the present study, which focuses on developing a reliable protocol tailored for 3T.Future investigations could compare our methods with OE-MRI at lower field strengths or with other functional lung MRI methods.
Thirdly, the low SNR in the current study leads to poor performance in extracting dynamic parameters, particularly the wash-in time. Although our results show that the PSE and ΔR2* measures are repeatable, these largely reflect steady-state conditions. Nonetheless, our primary aim was to explore a complete free-breathing acquisition approach that spans the entire gas delivery time course for both the air and O2 phases. This approach not only enhances subject comfort but also maintains physiological realism, features that breath-hold or separate free-breathing methods for each gas phase may lack. Therefore, the feasibility of the proposed method with a full free-breathing acquisition suggests potential for future development of OE-MRI for evaluating dynamic parameters. Furthermore, optimizing the methodological approach, such as new hardware for gas administration or additional monitoring to track breathing patterns, might improve reproducibility in this measurement.
Lastly, being a proof-of-concept investigation, our current study is limited by a small sample size and the absence of individuals with disease.Future studies will provide a more comprehensive understanding of the method's applicability in such settings.
CONCLUSIONS
Our study establishes the viability of dynamic lung OE-MRI at 3T, optimizing a dual-echo RF-spoiled gradient echo acquisition for simultaneous PSE, R 2 * changes, and oxygen wash-in measurement during free-breathing, offering functional information.Excellent intra-scanner repeatability and good inter-scanner reproducibility of the metrics suggest multi-center clinical application will be feasible.Future studies in respiratory diseases may allow us to better understand the method's potential.
FIGURE 1 (A) The predicted OE signal change ΔS plotted as a function of flip angle at multiple TEs, with TR = 16 ms. (B) The predicted signal change ΔS plotted as a function of TE at multiple flip angles, with TR = 16 ms. (C) PSE plotted as a function of TE at multiple flip angles, with TR = 16 ms. (D) The expected OE signal change for the T1-weighted RF-spoiled gradient echo acquisition at 3T due to ΔT1 alone (red dashed line), ΔT2* alone (green dotted line), and both (blue solid line), assuming literature-reported values for T1 and measured T2* in the lungs at 21% oxygen and 100% oxygen, flip angle = 5°, and TR = 16 ms.
FIGURE 2 (A) The typical location of the six slices from anterior to posterior. Example subject data show unmasked percentage signal change maps obtained with (B) TE1L (0.71 ms) and (D) TE2L (1.2 ms), and (C, E) the corresponding time course curves of the median signal intensity from the masked, registered lung for each slice. Blue lines show the uncorrected signal; red lines show the signal after density correction.
FIGURE 3 Examples of three subjects: (A, C, E) the plateau ΔR2* maps of the masked lung, six slices from anterior to posterior, and (B, D, F) the corresponding time course curves of the median R2* from the masked lung for each slice. The increase of R2* due to 100% O2 inhalation is visible in all slices but clearer in posterior slices.

FIGURE 4 The PSE from the masked, registered, tissue density corrected lung for each slice of eight individual subjects scanned in London with (A) TE1L (0.71 ms) and (B) TE2L (1.2 ms). (C) Equivalent plot for ΔR2*.
FIGURE 5 Example of one subject and Bland-Altman analysis comparing PSE and ΔR2* between two separate sessions (repeatability) in London. (A) Mean PSE with TE = 0.71 ms, (B) mean PSE with TE = 1.2 ms, (C) mean ΔR2*, and (D, E) Bland-Altman plots for the repeated measurements of PSE from the first and second TE and (F) ΔR2* (intra-scanner, intra-subject). (A, B) The mean PSE values from eight healthy volunteers varied little between the repeat scans (−7.39% ± 1.61% and −7.79% ± 1.58% at TE1L; −13.71% ± 2.96% and −14.27% ± 2.31% at TE2L). (D-F) The data points correspond to individual participants, showing the difference between the two visits to the London site.
FIGURE 6 (A) An example of inter-scanner intra-subject reproducibility of PSE at TE1 and TE2 scanned using two MRI systems at two sites (London and Manchester). (B) PSE obtained at four separate TEs (TE1L = 0.71 ms, TE2L = 1.2 ms in London and TE1M = 0.81 ms, TE2M = 1.51 ms in Manchester). The combined PSE values from the two MRI systems show a similar trend as a function of TE as the PSE simulation (Figure 1C). (C) Inter-scanner intra-subject reproducibility of ΔR2* from the same subject. (D) Bland-Altman analysis comparing ΔR2* between the two scanners for the same subjects.
TABLE 2 Inter-scanner TE-dependence assessment of uncorrected and corrected mean (±SD) PSE measurements from eight traveling healthy volunteers at four different TEs (TE1L and TE2L in London; TE1M and TE2M in Manchester). The mean PSE and CV were calculated across all subjects at each TE based on the measurements from the two most posterior slices and from all six slices. Columns: TE (ms); mean PSE (%) ± SD (%) and CV (%), averaged over the two posterior slices and over all six slices.
TABLE 3 Note: First column: mean ± SD of each metric. PSE values were computed from the two repeated sessions in London (16 data sets in total), while ΔR2* and the wash-in time normalized for breathing rate (τ-nBR) were computed from the traveling volunteers at both sites and the two repeated sessions in London (32 data sets in total). The mean value for each metric was averaged across all six slices, except for ΔR2*, which was averaged across the two posterior slices. Middle columns: mean difference between two sessions, the Bland-Altman 95% LOA for inter- and intra-scanner comparisons, RC for intra-scanner comparisons, and RDC for inter-scanner comparisons. Last three columns: ICC for intra-scanner variation (ICCintra), inter-scanner variation (ICCinter), and inter-rater variation (ICCinter-rater), based on absolute agreement, two-way mixed-effects model. a Unit for PSE and associated RC and RDC: %. b Unit for ΔR2* and associated RC and RDC: ms−1. c Log-transformed τ-nBR and associated RC and RDC are unitless. Notably, the ICCinter-rater values exceeded 0.99 for all metrics; this was expected, as the only manual step is lung segmentation, and the median voxel value for all reported measurements is largely insensitive to differences in lung outlining.
Figure S1. A workflow diagram summarizing the experimental study population and data analysis.
Figure S2. Example time course curves of the median signal intensity (SI) and R2* from the masked, registered lung for each slice of a single traveling subject obtained in London (A) and Manchester (B) with TE1L (0.71 ms), TE2L (1.2 ms), TE1M (0.81 ms), and TE2M (1.51 ms), pre- (blue line) and post-tissue density correction (red line).
Figure S3. (A) Pre- and (B) post-density corrected example time courses (blue dashed lines) and fits (red solid lines) for downslopes and upslopes from an individual voxel.
Figure S4. The Bland-Altman plots for the repeated measurements of percent signal change (PSE) averaged over two posterior slices from the 1st and 2nd TE before (A, C for TE1L and TE2L, respectively) and after tissue density correction (B, D for TE1L and TE2L, respectively). The 95% LOA decreased from (−7.22%, 4.55%) to (−2.36%, 1.33%) for TE1L and from (−7.53%, 6.20%) to (−2.99%, 1.84%) for TE2L. Similarly, additional statistical metrics display significantly reduced RC (69% and 65% for TE1L and TE2L, respectively) and increased ICCintra (94% and 75% for TE1L and TE2L, respectively) with tissue density correction compared to pre-density correction.
Figure S5. Bland-Altman analysis plots illustrating the repeated measurements of wash-in time normalized for breathing rate (non-transformed τ-nBR; unitless), representing the breath count during τ, for the 1st and 2nd TE. The data points correspond to individual participants for the inter-scanner difference between two visits to the Manchester and London sites (A, B) and the intra-scanner difference between two scans at the London site (C, D). See also Figure S6.
Figure S6. Simulated versus experimental percent signal change (PSE) values plotted as a function of TE. For the simulation, we utilized literature-reported values for T1 (1281 ms for air and 1102 ms for 100% O2) and incorporated measured T2* values from the lungs at air and 100% O2 breathing, acquired using the same experimental protocol as detailed in this study (Table S1). For each traveling volunteer, we averaged T2* values across the two sites (London and Manchester) from the two posterior slices. These averaged T2* values for eight volunteers, listed in Table S1 of the supporting information, were used to simulate eight individual PSE plots. The experimental PSE values were obtained at four separate echo times (TE1L = 0.71 ms, TE2L = 1.2 ms in London and TE1M = 0.81 ms, TE2M = 1.51 ms in Manchester) and averaged across either (A) the two posterior slices or (B) all six slices for each traveling volunteer. The collective PSE results from the two MRI systems exhibit a comparable trend in relation to TE, while the simulated PSE plots show variability influenced by individual T2* values.
"Medicine",
"Engineering",
"Physics"
] |
A Complete Framework for a Behavioral Planner with Automated Vehicles: A Car-Sharing Fleet Relocation Approach
Currently, research on automated vehicles is strongly related to technological advances to achieve a safe, more comfortable driving process in different circumstances. The main achievements are focused mainly on highway and interurban scenarios. The urban environment remains a complex scenario due to the number of decisions to be made in a restrictive context. In this context, one of the main challenges is the automation of the relocation process of car-sharing in urban areas, where the management of the platooning and automatic parking and de-parking maneuvers needs a solution from the decision point of view. In this work, a novel behavioral planner framework based on a Finite State Machine (FSM) is proposed for car-sharing applications in urban environments. The approach considers four basic maneuvers: platoon following, parking, de-parking, and platoon joining. In addition, a basic V2V communication protocol is proposed to manage the platoon. Maneuver execution is achieved by implementing both classical (i.e., PID) and Model-based Predictive Control (i.e., MPC) for the longitudinal and lateral control problems. The proposed behavioral planner was implemented in an urban scenario with several vehicles using the Carla Simulator, demonstrating that the proposed planner can be helpful to solve the car-sharing fleet relocation problem in cities.
Introduction
Population growth in urban locations has created several transportation-related issues. Road congestion and parking spot shortages, as well as maintenance costs [1], are changing the traditional vision of becoming a car owner. However, there is still an appreciation for the flexibility a private car gives in comparison to public transport. The results of the Central Statistics Office show that almost 75% of the people in Ireland use a bus less than once a month [2], with the main reasons being a lack of service or the inconvenience of the stops. In this context, car-sharing services have emerged as a new business model, which combines the benefits of private transport and the elimination of parking and maintenance management for the driver [3,4].
The car-sharing model requires managing a fleet of vehicles in urban environments. Typically, vehicles are parked in several reserved spots around a city ("one-way trip") [5], so that vehicles can be rented and parked in another spot after the rental period has ended. However, this leads to imbalances between parking spots, requiring approaches to reallocate the fleet. Reallocation is carried out by staff, who manually move vehicles from one spot to another [6,7]. Hence, the optimization of this process is critical for reducing costs [8]. In this field, most works focus on the optimization of the trajectories to be performed by the staff [9,10], while other works have proposed optimizing the number of workers by using trucks [11]. However, this is still an open research area.
In this context, automated vehicles (AVs) offer a potential framework for the development of new shared vehicle fleet management approaches. Among the different strategies, merging or joining maneuvers in platooning scenarios are usually cooperative maneuvers that let two vehicles agree on which position the entering vehicle should take and generate a trajectory according to the decision made. Most works in this area focus on merging scenarios at highway on-ramps [27,45], focusing on lateral path planning [46] or even negotiation with V2V communication [47].
The aforementioned maneuvers have been studied and proposed for specific applications. In the case of platooning-related works, most focus on inter-urban or highway scenarios. However, few works in the literature focus on urban scenarios, and to the best of the authors' knowledge, there is a lack of literature related to the application of these approaches to car-sharing applications in urban environments. Therefore, this work aims to provide insight into this area by means of the following contributions:
• A behavioral planner framework for the car-sharing relocation problem using platooning in urban scenarios.
• The definition of a basic V2V protocol integrated into the framework, specifically designed for car-sharing applications.
• An adapted urban testing environment using the Carla Simulator for the specified use case.
The proposed framework considers a scenario in which a human drives a leading vehicle and gathers a set of AVs, which make use of the proposed behavioral planner to relocate them within urban environments. This paper is structured as follows. In Section 2, the proposed framework, the behavioral planner, and the implementation of each maneuver are described. Section 3 describes the simulation tools used for the validation process. Section 4 shows the results of the simulations obtained when executing the proposal. Finally, in Section 5, some conclusions are gathered, as well as future work planned.
Overview
The approach proposed in this work is focused on car-sharing applications, in which a human driver leads a leader vehicle and relocates a set of AVs, which will be used to provide car-sharing services. To define the behavioral planner, first, a brief explanation of the cycle of the relocated AVs is detailed. Figure 1 summarizes the process of relocating vehicles. First, a set of vehicles is assumed to be parked in predefined spots, so that a global planner can use these spots, as well as the information of the map, or even the current traffic, to define a global path that allows the relocation of the different vehicles and that will be provided to the driver of the leader vehicle. Please note that the definition of this global planner is not the focus of this work and that the optimal path will be considered known in the use case. When the leader vehicle (in blue) approaches the parking spot of the AV (1), a signal is sent to the AV (in orange) to start the platoon merging process. For that purpose, the AV will perform a de-parking maneuver, followed by a merging maneuver. Once the AV has merged with the platoon, it operates in vehicle-following mode (2). The platoon will continue to pick up the different vehicles until the relocation parking spot defined by the global planner is approached (3). In this case, the AV (in orange) will perform a de-merging maneuver, followed by a parking maneuver.
As the aforementioned functionality is well structured and the transition flags are clear, a Finite State Machine (FSM) is defined, in which each maneuver is represented by a state, as can be seen in Figure 2. There are five states a relocated vehicle can be in: platoon-following, parking, waiting, de-parking, and joining. The state machine is designed to reproduce the cycle of AVs in car-sharing applications. To implement it, basic V2V communication was assumed to exist between the leader and the follower AVs. This way, the initial state of an AV is waiting in a parking spot (Figure 1 (1)), broadcasting a communication message containing its position, state, and ID. Once the leader approaches, it will receive the broadcast message and will communicate to the AV whether joining is possible, assigning it a platoon position at the end of the platoon. Note that joining may not be possible if other vehicles are blocking the AV for the de-parking maneuver.
In order to join the platoon, the AV enters the de-parking state and orients itself with the nearest lane. Depending on its position relative to the rest of the platoon, a joining process may be required (if the platoon is in another lane or the distance to the platoon is greater than the desired platoon inter-vehicle distance), activating this state.
Once the AV reaches the desired distance to the last member of the platoon, the vehicle is considered merged, and the follower AV sends a message to the leader so that the platoon list can be updated. Then, the AV enters the following state (Figure 1 (2)). In this state, each AV calculates its control values, making use of the position and velocity of the preceding vehicle. When the desired parking spot is reached (Figure 1 (3)), the leader signals the last vehicle in the platoon to park; the vehicle enters the parking state and jumps to the waiting state once finished, to start the cycle again.
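The state cycle described above can be captured compactly in code. The following minimal Python sketch is illustrative only: the state names follow the paper, but the trigger flags, class name, and transition API are assumptions rather than the authors' implementation.

```python
from enum import Enum, auto

class State(Enum):
    WAITING = auto()
    DE_PARKING = auto()
    JOINING = auto()
    FOLLOWING = auto()
    PARKING = auto()

class FollowerFSM:
    """Behavioral planner cycle for a relocated AV: waiting -> de-parking -> joining -> following -> parking."""

    def __init__(self):
        self.state = State.WAITING

    def step(self, join_request, deparked, merged, park_request, parked):
        if self.state is State.WAITING and join_request:
            self.state = State.DE_PARKING      # leader confirmed a slot at the end of the platoon
        elif self.state is State.DE_PARKING and deparked:
            self.state = State.JOINING         # aligned with the lane, close the gap to the platoon
        elif self.state is State.JOINING and merged:
            self.state = State.FOLLOWING       # desired inter-vehicle distance reached, platoon list updated
        elif self.state is State.FOLLOWING and park_request:
            self.state = State.PARKING         # leader assigned a relocation parking spot
        elif self.state is State.PARKING and parked:
            self.state = State.WAITING         # cycle restarts at the new spot
        return self.state
```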
To implement the aforementioned approach, it was assumed there exists connectivity between the leader and the AV followers at small inter-vehicle distances. The literature has proven that Vehicle-to-vehicle (V2V) communication can increase string stability if shared information is used for the longitudinal and lateral control [48].
In this regard, the proposed approach considers that all vehicles share information (Figure 3). The information flow topology is predecessor-leader following [49], which means every vehicle in the platoon will receive information from both the leader and its predecessor. Additionally, the last vehicle shares its position and velocity values with the leader so it can manage traffic light situations. The proposed basic messaging system is centralized by the leader, which is responsible for managing the platoon. Since, in this scenario, the leader of the platoon is driven by a human driver, possible wrong decisions due to communication mistakes are not considered in this paper. Messages are broadcast with enough information for each follower to interpret. Each message contains the following information (a minimal message structure is sketched after this list):
• Vehicle state: this value represents whether the follower is in the platoon or parked in a parking spot.
• Platoon position: this is the relative position of the vehicle in the platoon.
• Parking spot: if an AV follower is to be parked, the leader of the platoon sends the position of the parking spot to the assigned AV follower.
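A minimal sketch of such a broadcast message is shown below; the field names, types, and the optional position and velocity fields (which the text indicates are shared for control and traffic-light management) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PlatoonMessage:
    vehicle_id: int                                       # sender ID
    state: int                                            # e.g., 0 = parked/waiting, 1 = in platoon
    platoon_position: Optional[int]                       # relative slot in the platoon, None if not joined
    position: Tuple[float, float]                         # (x, y) in the map frame
    velocity: float                                       # longitudinal velocity (m/s)
    parking_spot: Optional[Tuple[float, float]] = None    # target spot sent by the leader when parking
```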
Next, based on the defined behavioral planner and considering the communications framework, the different maneuvers will be detailed. Note that, in the proposed study case, each maneuver can implement the particular control strategy that best fits its requirements. Although the development of particular control approaches is not in the scope of this paper, a comparative study was carried out using different control configurations to define the one selected for the study case implemented in this work. This way, three of the most-used approaches in the literature were evaluated:
• PID: longitudinal and lateral controls were considered independent, using a fixed velocity reference for the longitudinal control and a PID-based controller for both.
• Longitudinal PID and lateral MPC: longitudinal and lateral controls were considered independent. A fixed speed reference was used for the longitudinal control, in which a PID was implemented, while a bicycle-model-based MPC was used for steering.
• MPC: both the speed reference and the steering input were generated by a single MPC, which considers a bicycle model for its prediction. The MPC generates the velocity reference, which is followed by a PID.
The results are shown in Table 1, where the Root Mean Square (RMS) of the lateral error was selected as a performance metric. Figure 4 shows the trajectory performance of each controller for the two scenarios evaluated (parallel and battery parking). The controller parameters evaluated are detailed in the caption. Based on the aforementioned table and considering the particular study case analyzed in this work, the control approaches detailed in Table 2 were selected to implement the maneuvers. Please note, however, that this does not imply that the proposed approach is limited to these controllers, as it can consider different ones. More detail regarding each controller will be given in the next sections.
Platoon Following
In the platoon-following state, the AV followers have to track the trajectory defined by the preceding vehicle. In the literature, this approach is implemented by considering two controllers [50]: longitudinal and lateral control. The first generates the inputs for the throttle and brake, and is responsible for the longitudinal acceleration and, by extension, the speed of the vehicle. The second controller generates the steering inputs to follow a specific path.
In the following subsections, the controllers chosen for the platoon-following maneuver will be explained in further detail.
Longitudinal Control
This area has been thoroughly studied in the literature, with two major research lines: ensuring string stability, that is, ensuring that the distance between the platoon members remains bounded and stable; and optimizing fuel consumption. The first is typically implemented using controllers that do not require a model of the vehicle, such as PID-based approaches [25,51], while the latter requires considering the model of the vehicles to perform the minimization. In these cases, Model-based Predictive Controllers (MPCs) are commonly used [52][53][54].
Traditionally, vehicle following control is labeled as Adaptive Cruise Control (ACC) in the literature. This is meant to be an evolution of the classic Cruise Control (CC) technology, which uses an estimation of the velocity of the preceding vehicle to generate a speed reference for the controller to follow. If V2V communication is available between vehicles, more accurate data can be used, provided by the preceding vehicle, and longitudinal control is labeled as Cooperative Adaptive Cruise Control (CACC), which is the strategy implemented in this work.
Since urban scenarios are dynamic and many decisions have to be made during a single driving session, a string-stability-oriented solution was chosen, where a PID-based CACC is used to generate a velocity reference for the lower-level PID controller to act upon (Figure 5). The control law therefore follows the classic PID formulation, where v_ref is the reference velocity for the lower controller, K_p, K_i, and K_d are adjustable parameters, and e is the predicted error of the distance between the controlled vehicle and the preceding one. The predicted distance makes use of the information from the preceding vehicle, assuming the acceleration remains constant through the next time step, where d is the distance between the controlled and preceding vehicles along the trajectory, v and v_p are the longitudinal velocities of the controlled and the preceding vehicles, respectively, and a and a_p are the longitudinal accelerations of the controlled and the preceding vehicles, respectively.
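A minimal sketch of this CACC logic is given below. It predicts the inter-vehicle gap one control step ahead under the constant-acceleration assumption and applies a PID to the resulting spacing error; the gains, the desired gap, and the way the correction is added to the current velocity to form v_ref are illustrative assumptions, not the paper's exact control law.

```python
class CACCController:
    """PID on the predicted inter-vehicle spacing error, producing a velocity reference (sketch)."""

    def __init__(self, kp=0.8, ki=0.05, kd=0.1, d_des=6.0, ts=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd     # illustrative PID gains
        self.d_des, self.ts = d_des, ts            # assumed desired gap (m) and control period (s)
        self.e_int = 0.0
        self.e_prev = 0.0

    def velocity_reference(self, d, v, a, v_p, a_p):
        # Predict the gap one step ahead, assuming both accelerations stay constant (V2V data).
        d_pred = d + (v_p - v) * self.ts + 0.5 * (a_p - a) * self.ts ** 2
        e = d_pred - self.d_des                    # positive error: gap too large, speed up
        self.e_int += e * self.ts
        e_dot = (e - self.e_prev) / self.ts
        self.e_prev = e
        # One plausible realisation: correct the ego velocity by the PID output to obtain v_ref.
        return v + self.kp * e + self.ki * self.e_int + self.kd * e_dot
```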
Lateral Control
When it comes to lateral control in AVs, the main goal is to follow a previously defined trajectory. Perception systems provide the information needed for this task. This procedure can be applied to several AV functionalities, such as lane keeping, lane changing, or obstacle avoidance. In this work, the reference trajectory is defined by the leader vehicle, so the followers are controlled to maintain the same track as the leader. The most common approaches in the literature are PID- [55,56], fuzzy- [30,31], and MPC-based [57,58] controllers. However, MPC controllers seem to be more predominant thanks to the possibility of introducing vehicle dynamics and safety and comfort constraints into the control law.
In urban scenarios, a traditional assumption in the literature is to neglect tire slip due to low lateral accelerations. Hence, a model-based predictive controller based on the so-called kinematic bicycle model of the vehicle can be used as the lateral controller (Figure 5):

x_{k+1} = x_k + v_k cos(φ_k) T_s
y_{k+1} = y_k + v_k sin(φ_k) T_s
φ_{k+1} = φ_k + (v_k / L) tan(δ_k) T_s

where x_k and y_k are the coordinates of the center of the rear axle of the vehicle at the k ∈ [0, h]th time step, represented in the reference system of the vehicle at time k = 0, with h being the prediction horizon; v_k is the longitudinal velocity, which is considered constant within the horizon; φ_k is the yaw angle; L is the wheelbase of the vehicle and T_s the step time; and δ_k is the equivalent steering angle in radians transported to the center of the front track for a supposed 100% Ackermann configuration.
If Δu = [Δδ_k, …, Δδ_{k+h−1}]^T is the controller output and x(k) is the state of the system, as defined in Equation (4), then the cost function to be optimized by the MPC is the quadratic form J = (ŷ − y_ref)^T Q (ŷ − y_ref) + Δu^T R Δu, where ŷ is the y coordinate of the vehicle along the horizon h, represented in the vehicle's initial reference system, and y_ref is the y coordinate of the reference trajectory represented in the same reference system. Q and R are the weighting matrices used to tune the controller. The steering input is limited to −0.7 rad < δ_k < 0.7 rad due to physical restrictions. The lateral reference is generated from the positions shared periodically by the leader. For that purpose, the last points provided by the leader are used to calculate the local path to be followed. This is better explained in Figure 6 with a simplified case using only three points. As can be seen, at each iteration a new point is added to the path if the ego vehicle has traveled a certain distance from the last point. At the same time, the oldest point in the list is removed if it is behind the controlled vehicle. The reference for the controller is obtained by resampling this path into equidistant points spaced d = v T_s apart, where v is the current longitudinal speed of the vehicle and T_s the control period (a sketch of this path handling follows).
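The path handling described above can be sketched as follows. The spacing threshold and the dot-product test used to decide whether the oldest point lies behind the vehicle are simplifying assumptions, not the paper's exact criteria.

```python
import numpy as np

def update_path(points, ego_xy, new_point, min_spacing=1.0):
    """Maintain the local reference path built from the leader's shared positions."""
    pts = [np.asarray(p, dtype=float) for p in points]
    ego = np.asarray(ego_xy, dtype=float)
    # Add the newest leader position once the ego has moved far enough from the last point.
    if not pts or np.hypot(*(ego - pts[-1])) > min_spacing:
        pts.append(np.asarray(new_point, dtype=float))
    # Drop the oldest point once the ego vehicle has passed it along the path direction.
    if len(pts) > 2 and np.dot(pts[1] - pts[0], ego - pts[0]) > 0:
        pts.pop(0)
    return pts

def resample(points, v, ts):
    """Resample the path into equidistant points spaced d = v * ts apart."""
    pts = np.asarray(points)
    seg = np.diff(pts, axis=0)
    s = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    d = max(v * ts, 1e-6)
    targets = np.arange(0.0, s[-1], d)
    return np.column_stack([np.interp(targets, s, pts[:, 0]),
                            np.interp(targets, s, pts[:, 1])])
```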
Parking/De-Parking
The parking or de-parking maneuver is executed in the states with the same name and allows picking up (or leaving) a follower in the platoon. A geometric approach is used to generate a path to the parking spot, combined with an MPC approach to track the defined trajectory, as explained next.
Note that only parking will be detailed, as de-parking will consist of an identical approach, but in reverse order.
Path Generation
To define a path to park a follower, two scenarios were considered: battery parking, which covers all cases in which the orientation of the parked vehicle is not parallel to the orientation of the road; and parallel parking, in which the parking spot is parallel to the road.
When considering battery parking, the typical human driver maneuver consists of: (1) lining up the vehicle with the parking spot from the initial position (P_ini); (2) guiding it in a straight line to reach the final position. Geometrically, the optimal way of generating the described trajectory is by combining an arc tangent to the initial and final orientation axes, followed by a straight line between the tangent point (P_tang) and the final position (P_end).
Based on this behavior, a geometric approach has been implemented for battery parking, summarized in Figure 7, in which two different trajectories are defined (green and orange). In addition, several constraints have to be considered when defining the trajectories: the steering angle needed to perform the curve must be within the limits of the controlled vehicle, and no collision may occur with nearby obstacles (vehicles, pedestrians, trees, etc.). The latter is ensured by checking the intersection between a bounding box around each obstacle and the controlled vehicle. To do so, additional paths are generated from each vertex of the bounding box around the controlled vehicle, and the intersection between them and the edges of the bounding boxes surrounding the obstacles is checked (Figure 8); a sketch of this check is given below. If an intersection is confirmed, the generated trajectory is discarded. This method allows a Safety Coefficient (SC) to be applied to the offline collision checking to enhance the robustness of the approach; an SC of δ = 1.05 was used in the testing simulations. Notice that any number of bounding boxes can be added to the trajectory generator, even as a way to avoid invading certain spaces, such as the opposite-direction lane.
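A minimal sketch of this intersection test is shown below. The helper names are hypothetical, and applying the safety coefficient by inflating the obstacle boxes about their centroids is an assumption, since the paper does not detail how the SC enters the check.

```python
def _ccw(a, b, c):
    """True if the points a, b, c are arranged counter-clockwise."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Standard orientation test; degenerate (collinear) cases are ignored here."""
    return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)) and (_ccw(p1, p2, q1) != _ccw(p1, p2, q2))

def inflate(box, sc):
    """Scale a rectangle about its centroid by the safety coefficient sc."""
    cx = sum(x for x, _ in box) / len(box)
    cy = sum(y for _, y in box) / len(box)
    return [(cx + sc * (x - cx), cy + sc * (y - cy)) for x, y in box]

def trajectory_collides(vertex_paths, obstacle_boxes, sc=1.05):
    """Check the candidate parking trajectory against obstacle bounding boxes.

    vertex_paths: one polyline per vertex of the ego bounding box (lists of (x, y));
    obstacle_boxes: one list of 4 corner points per obstacle.
    """
    for path in vertex_paths:
        for a, b in zip(path[:-1], path[1:]):           # each segment swept by a vertex
            for box in obstacle_boxes:
                corners = inflate(box, sc)
                edges = zip(corners, corners[1:] + corners[:1])
                if any(segments_intersect(a, b, e1, e2) for e1, e2 in edges):
                    return True                          # candidate trajectory discarded
    return False
```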
If no trajectory is found that fulfills the required conditions, another starting point is chosen at a distance in front of or behind the vehicle, entering an iterative process that improves the success rate of the algorithm. Since, in a parking maneuver, a vehicle can move either forward or backward, each point in the trajectory is assigned a direction value, ψ = 1 for the forward direction and ψ = −1 for the backward direction. Additionally, the trajectory is segmented according to the driving direction, so that the transition is smoother.
Once a trajectory is chosen, it is necessary to represent it in the reference system of the vehicle and obtain equidistant points at distance d_s = v_d T_s, with v_d the desired parking speed. In the case of parallel parking, two tangent circular segments are defined to perform the parking maneuver, as depicted in Figure 9. The result is not restricted to a single trajectory, so the first feasible one is chosen as the reference. As in the previous case, constraints regarding steering and bounding boxes to avoid collisions are considered. Figure 10 illustrates an example of the aforementioned procedure.
Parking/De-Parking Tracking Controller
An MPC-based controller was implemented to follow the trajectories defined for these maneuvers (Figure 11). Unlike the platoon-following case, both velocity and steering are controlled by the same controller, which allows the parking vehicle to consider both variables during the maneuver.
To implement the MPC, a kinematic bicycle model was considered, as defined previously. The MPC controls both the steering angle δ and the longitudinal velocity reference v_r(t), which are followed by a PID-based low-level controller (Figure 11).
Similar to the vehicle-following case, a quadratic cost function J is defined, where ê_p = [e_p(k + 1), …, e_p(k + N + 1)]^T collects the distance between the vehicle and the reference position at each time step, with e_p given by (x_k − x_ref,k)² + (y_k − y_ref,k)², and Δu(t) = [δ(t), v_r(t)]^T comprises both the steering and the longitudinal velocity reference. The control problem takes the general form defined in Equation (4), with the following input constraints: v < 8.33 m/s, δ < 0.7 rad, and δ > −0.7 rad.
Joining
Due to the high traffic density in urban scenarios, an external vehicle may interrupt the formation of a platoon if the gap is too big. In order to reduce this effect, a follower should join the platoon as soon as possible once the de-parking has been executed. This is carried out by defining an additional state in the behavioral framework, designed to quickly reduce the distance between a follower and the preceding vehicle by increasing its speed until a certain distance is reached. For this purpose, the CACC controller defined for the vehicle-following state is used with more aggressive parameters, while limiting the reference value to the speed limits of urban scenarios. The lateral control is the same as that used in the aforementioned maneuver.
Note that this state will only be executed if the relative distance between the follower vehicle and the preceding one after de-parking exceeds a predefined value. If not, the state will directly change to vehicle following (Figure 2).
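A compact sketch of this state decision is given below; the state names and the threshold are placeholders chosen only to illustrate the logic.

```python
def follower_state(state, gap, gap_ref, join_threshold):
    """Return the next state of a follower after de-parking (illustrative only)."""
    if state == "de_parking_done":
        # Join at higher speed only if the gap left after de-parking is too large.
        return "joining" if gap > join_threshold else "vehicle_following"
    if state == "joining" and gap <= gap_ref:
        return "vehicle_following"
    return state
```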
Simulation Framework
Control and behavioral planning algorithms are easily tested by coding them in an Integrated Development Environment (IDE). However, this approach has limitations. Perception systems are key in automated vehicles, since they are responsible for positioning the controlled vehicle and recognizing objects. The information obtained by these algorithms, along with the messages received via communication, is then used to make the safest decision that fulfills the current goal. Therefore, there is a need to build a virtual environment that can simulate all the platoon-related systems, so that this technology can be tested and validated safely, and that even supports an incremental approach to the design and testing of the whole framework.
Thanks to the evolution of technology, 3D environment simulators such as rFpro [59], Congata [60], and Nvidia's Drive Constellation [61], among others, have been proposed recently. Features such as dynamic lighting, weather changes, or sensor simulation, tied with state-of-the-art vehicle dynamic models and realistic assets, have proven to be a great way of testing new autonomous-driving-related technology. Furthermore, the easy accessibility of game engines has made possible the release of open-source projects such as LGSVL [62] or the CARLA Simulator [63].
This work used the functionalities provided by the CARLA Simulator, in which a client-server configuration is used (Figure 12). The server handles the 3D urban open-world scenario defined in the simulation, along with all the actors present in it (objects such as traffic lights, roads, etc.), pedestrians, and vehicles. The behavioral planner framework and related functionalities are executed on a client, which interacts with the vehicles in the simulation. Each vehicle in the platoon was considered a single object with its own control script, able to communicate with the rest of the vehicles by simulating the communication approach defined in the previous section. Additionally, the Global Navigation Satellite System (GNSS) virtual sensor provided by the CARLA Simulator environment was used to obtain the positions of the vehicles. Hence, the framework was designed to manage a decentralized control system where each vehicle can make its own decisions based on the information received from both perception and communication. The ability to obtain kinematic information from the server makes it possible to bypass the perception algorithms for the sake of simplicity and provides a basis on which perception algorithms can later be integrated to test the overall approach.
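The client-side setup can be sketched with the standard CARLA Python API as below. The host, port, blueprint choices, and control values are illustrative; only the synchronous mode and the fixed step time of 0.05 s mirror the configuration described here.

```python
import carla

client = carla.Client("localhost", 2000)       # default CARLA server endpoint
client.set_timeout(10.0)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True                # the client drives the simulation clock
settings.fixed_delta_seconds = 0.05             # fixed step time T_s
world.apply_settings(settings)

bp_lib = world.get_blueprint_library()
spawn = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(bp_lib.filter("vehicle.*")[0], spawn)

gnss_bp = bp_lib.find("sensor.other.gnss")
gnss = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=vehicle)
gnss.listen(lambda data: print(data.latitude, data.longitude))   # vehicle positioning

for _ in range(200):                            # one tick per control period
    vehicle.apply_control(carla.VehicleControl(throttle=0.3, steer=0.0))
    world.tick()
```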
Validation and Use Case
The relocation problem was simulated in the environment detailed in the previous section to test the validity of the proposed behavioral planner framework. Three actors take part in the proposed use case: a leader vehicle and two follower vehicles.
The leader vehicle is supposed to be driven by a staff member of the car-sharing company. To emulate a human driver, the proposed MPC lateral and longitudinal controllers were used to follow a predefined path. The two follower vehicles are to be relocated: the first one is parked at point P1 and the second one at P2 (Figure 13). In addition, two other parking spots exist (P3 and P4). P1 and P3 are line parking spots, while P2 and P4 are battery parking spots. The global trajectory of the leader vehicle is depicted in blue and its starting position in green; this trajectory was assumed to be provided by a global planner, which is not the focus of this work. For this simulation, the velocity of the leader vehicle was set to 30 km/h, the usual velocity limit in residential urban environments. The distance gap for the platooning was set to 7 m, this value representing the distance between the centers of the preceding and following vehicles. A fixed step time of T_s = 0.05 s was used in the simulations, using the synchronization feature of the CARLA Simulator. The parameters of the MPC controllers detailed in Section 2 are gathered in Table 3. The values of the PID controllers used for the CACC feature and the velocity-following control are listed in Table 4. Table 4. PID controller parameters. Units for the CACC controller: K_p (1/s), K_i (1/s²), and K_d (-). Units for the low-level controller: K_p (%·s/m), K_i (%/m), and K_d (%·s²/m).
The overall performance of the controllers can be seen in Figures 14 and 15. Figure 14a shows the velocity of the leader vehicle and the two followers during the whole scenario, positive values meaning the vehicle is moving forward and negative values meaning it is moving backwards. Figure 14b, on the other hand, details the active state of each follower. Finally, Figure 15 illustrates the relative distance of the followers, measured point to point to the predecessor vehicle in the platoon. A distance can only be measured when a predecessor is defined, that is, when a vehicle is in the joining or following states; therefore, Figure 15 only shows values in the time intervals where the followers are part of the platoon.
As can be seen in Figures 14a and 15, the errors of the first follower around steps 600 and 1050 were due to two sharp turns in the trajectory; this error grows as the curvature of the path increases. However, the controller used in this specific use case was able to overcome the error and follow the established velocity once again. The picking-up process starts when the leader detects a vehicle parked near its position at point P1 (Figure 16a). The message sent by the leader makes the follower vehicle change from the waiting state to the de-parking state (Figures 14 and 16a). The resulting maneuver places the follower vehicle behind the leader. However, as seen in Figure 15, the relative distance between both vehicles is higher than the defined threshold of 7 m; hence, the follower vehicle enters the joining state (Figure 14b), increasing its speed (Figure 14a) until the desired relative distance is achieved (Figure 15). This allows entering the following state, and the CACC longitudinal control is activated (Figure 16a). From this point on, the follower maintains the relative distance to the leader. Slight deviations occur at steps 560 and 1020 due to the 90° curves taken; since the inter-vehicle distance is calculated point to point, the curves introduce a slight error, which can be neglected, as can be seen in Figure 14. The pick-up process of the second vehicle at point P2 is performed similarly to the first one, as depicted in Figures 14, 15 and 17. Note that the second follower takes the relative distance from its preceding vehicle, that is, the first follower. In this case, as in the previous one, a joining state is required after de-parking to place the follower at the desired distance in the platoon. This is depicted at step 104, where a velocity increase is required to catch up with the platoon after de-parking (Figure 14a). Once both followers have been picked up, the platoon advances under the guidance of the leader vehicle. Note that, as depicted in Figure 15, the relative distances are properly followed by both followers.
The leaving process at P3 is performed similarly to the pick-up process. Note that, in the proposed approach, since all vehicles are similar, the last one in the platoon is the one supposed to disengage. In this case, the leader sends the message when it detects an empty parking spot near the area where it is supposed to leave the relocated vehicles (Figure 18a). Follower 2 then changes to the parking state, first stopping the vehicle and calculating the parking maneuver for line parking. The parking trajectory controller then follows the defined maneuver, as depicted in Figure 19. Once the vehicle is properly stopped, it enters the waiting state; see Figure 14.
Finally, Figure 20 illustrates the same process for Follower 1, the last element in the platoon, for the case of a battery parking spot at P4. The trajectory calculated and executed by the follower in the parking state is depicted in Figure 21. Based on the aforementioned simulations, the proposed framework performs properly and hence constitutes an appropriate approach on which to further develop urban car-sharing applications.
Conclusions and Future Work
In this work, a behavioral planner framework was proposed for car-sharing applications in urban environments, which includes the definition of a finite state machine and a basic communication protocol. The proposed work aimed to provide insight into the development of new automated-vehicle-based solutions in urban environments for car-sharing applications.
In order to test the approach, a simulation framework based on the CARLA simulator was used to test a scenario in which a leader vehicle relocates two vehicles at a low speed, as if they were assets from a car-sharing business model. Car following, lane following, and parking maneuvers were validated for this purpose. In the simulation, the control framework performed correctly.
However, other information, such as traffic lights and vulnerable road users, could be considered. These can have a big impact on the state management of connected and automated vehicles and, thus, will be considered in further iterations of the proposed framework. Moreover, in this work, an optimal fleet relocation planner that defines the global path to be followed was assumed, which will be included in future work by the authors.
"Engineering"
] |
Vehicle License Plate Recognition System Based on Deep Learning in Natural Scene
Abstract: With the popularity of intelligent transportation systems, license plate recognition systems have been widely used in the management of vehicles entering and leaving closed communities. However, in natural environments such as video monitoring, recognition performance and accuracy are not ideal. In this paper, an improved AlexNet convolutional neural network is used to remove false license plates from a large range of suspected license plate areas, and projection and Hough transformations are then used to correct inclined license plates, so as to build an efficient license plate recognition system for natural environments. The proposed system has the advantages of removing interfering objects over a large area and accurately locating the license plate. The experimental results show that the localization success rate is 98%, and that our system is feasible and efficient.
Introduction
With the development of China's transportation in recent years, there are more and more private cars, and how to manage them efficiently has become an urgent problem. At present, automatic license plate recognition technology is booming. This technology can realize, for example, the management of vehicles entering and leaving a community, the identification of truck license plates at highway crossings, and the collection of parking fees. License plate recognition in simple environments already has practical value.
However, in terms of current research, there are still many difficulties for automatic license plate recognition in natural environments: for example, bad weather such as haze, rain, and snow; different lighting conditions, as shown in Fig. 1; different shooting distances and angles; vehicle speed in the image; and so on. These complex factors cause problems such as blurring, overexposure, and underexposure in the final captured image, and license plate recognition in natural environments is still not ideal. The key to solving the slow speed and low accuracy of license plate recognition lies in license plate positioning.
In this paper, the license plate positioning part of an existing license plate recognition system is optimized, and a new positioning algorithm is used to improve accuracy, shorten processing time, and reduce resource overhead. First, the original image is processed through graying and binarization, and the optimized Canny operator is then used to perform edge detection. The optimized Canny operator further improves the accuracy and interference resistance of edge detection [1], with fast processing speed and small resource overhead. Combined with morphological processing, a large suspected license plate area is extracted; after erosion and dilation, AlexNet is used to remove fake license plates and accurately locate the license plate area. Finally, a threshold is set for cutting, and template matching is used to segment and recognize the plate. The advantage of this system is that the interference factors of the natural environment are handled accurately, and the license plate is quickly identified once it has been accurately located.
Related Work
For the license plate recognition in the natural environment, the key to the solution lies in the positioning of the license plate. At present, there are four main exploration directions for license plate positioning: edge-based, color-based, texture-based, and character-based algorithms [2].
The edge-based principle is to find areas whose edge density is higher than that of the rest of the image to be identified, since the license plate region forms edges with a characteristic density. Generally, edge-based methods detect rectangular contours, because the license plate has a certain aspect ratio [3]. These methods are characterized by fast recognition speed and can quickly extract suspected license plate regions, but they are easily affected by the natural environment and their accuracy is insufficient. Tan, Jinn-Li combined edge detection with morphology and mathematical operations [4], applying dilation and erosion to the image according to changes in the brightness and area of the license plate, to find rectangles that may be license plates. This combined method solves the problem of low recognition accuracy to a certain extent and has certain feasibility. The edge-based method is characterized by higher detection speed, but it is sensitive to unwanted edges and it is not easy to handle complex images.
The color-based method recognizes license plates by locating them in the image through their color information. Shi [5] and Zayed et al. [6] proposed a color model classifier; this method uses the color information of the license plate for localization and achieves a good recognition rate for photos with a clear license plate color and a simple background, but it is sensitive to lighting conditions. Jia et al. used a combination of rectangular features, aspect ratio, and edge density to determine the license plate candidate region, and then used a shift algorithm to segment the color image in the region [7]. This method improves recognition in complex scenes, but the operation is complicated and the recognition speed decreases. Because such methods are sensitive to light and shadow and are prone to false detections, there is now less research on them.
The texture-based method detects the required area from the pixel distribution of the license plate image. After graying the color license plate image, it scans rows and columns to count the number of grayscale transitions, and the position of the license plate is finally obtained by comparison. A typical method is to use an SVM to analyze the color features of the license plate texture [8]: the image data are converted into vectors and an SVM model is built for analysis according to specific training parameters. Training takes a long time, but the trained model can analyze license plate textures quickly and accurately. Wang, Shen-Zheng uses the AdaBoost algorithm [9] to build a classifier cascade, using efficient rectangular features and integral-image methods for the underlying feature extraction, extracting Haar-like features and training Haar classifiers; this method has extremely high license plate recognition accuracy. In general, the texture-based method involves complicated calculations and has poor noise resistance: in complex backgrounds, or when the texture characteristics of the license plate and the car body are similar, it is difficult to locate the plate based on texture alone.
The character-based method treats the license plate as a character string and locates the plate by checking for the presence of characters in the image. For example, scale-space analysis can be used to extract characters [10]: exploiting scale and rotation invariance, an image pyramid is built using Gaussian convolution kernels for recognition. This approach achieves excellent recognition speed under appropriate parameters, but it is difficult to find the right scale in the image and to choose the appropriate parameters. The character-based method is faster in recognition, but text in the background of the image will greatly affect the performance of the algorithm.
Natural Scene License Plate Recognition System Based on Deep Learning
As shown in Fig. 2, the system implemented in this paper is divided into four parts: image preprocessing, license plate positioning, character segmentation, and pattern recognition.
Image Preprocessing
As shown in Fig. 3, because it is very difficult to accurately extract license plate boundaries from the original color image in natural scenes, the original image is first converted to grayscale and then contrast-enhanced and binarized. The result of grayscale processing is shown in Fig. 4. Edge detection is then performed on the processed image with the Canny operator; compared with other operators, the Canny operator yields image edges that are well defined and completely preserved.
The standard Canny operator [11] passes the original image through a Gaussian filter for low-pass filtering. However, Gaussian filtering does not suppress all noise, and impulse noise is easily detected as an edge in a complex environment; therefore, median filtering is used here instead of Gaussian filtering. Double thresholds are then used to detect and link edges, finally forming accurate edges. Median filtering significantly suppresses image noise, better preserves edge information, and achieves a smoothing effect. When computing gradients, first-order partial derivatives over a 3 × 3 neighborhood, rather than the usual 2 × 2 one, are used to obtain the gradient magnitude and direction; introducing the differences in the horizontal and vertical directions yields more accurate edges and suppresses noise interference.
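A short OpenCV sketch of this preprocessing chain is given below; the kernel size and the two Canny thresholds are placeholder values, not those used in the paper.

```python
import cv2

def detect_edges(bgr_image, ksize=5, low=50, high=150):
    """Grayscale + contrast enhancement, median filtering in place of the Gaussian
    blur, then Canny with double (hysteresis) thresholds."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)              # simple contrast enhancement
    smoothed = cv2.medianBlur(gray, ksize)     # median filter suppresses impulse noise
    return cv2.Canny(smoothed, low, high)      # double thresholds link the edges
```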
Figure 4: Grayscale image
After several experimental comparisons, as shown in Fig. 5, the edges obtained by the optimized edge detection are more complete and the detailed information is better preserved.
License Plate Positioning
The pre-processed image still contains a lot of noise. In order to enhance the detail and coherence of the image, morphological processing is needed to reduce it; for the problem of broken and overly small candidate regions, dilation and erosion are required. Erosion takes image data and a structuring element: the output value of a pixel is 1 only where the structuring element is completely contained in the foreground of the image. The eroded image data are returned, achieving the purpose of removing irrelevant structures.
Most of the structures in the eroded image are scattered and incoherent. In order to determine the license plate position later, the images shown in Fig. 5 should be further smoothed. Here, a closing operation is adopted, that is, dilation followed by erosion. The principle of the dilation process is as follows: the foreground of the binary image is 1 and the background is 0. Assuming there is a foreground object in the original image, dilating it with a structuring element consists of traversing each pixel of the original image, aligning the center point of the structuring element with the pixel currently being traversed, taking the maximum value of all pixels in the region of the original image covered by the structuring element, and replacing the current pixel value with this maximum. Since the maximum value of a binary image is 1, the pixel is replaced with 1 and becomes part of the white foreground object. For small breaks in the foreground object, if the structuring element is of comparable size, these breaks will be connected. The results of erosion and dilation are shown in Fig. 6. After this initial coarse positioning, the color image to be recognized has become a binary image with the license plate as the main structure. However, there are still many suspected license plate areas in the figure; here, deep learning is used to remove the fake license plates. Experiments prove that this method has high accuracy and simple steps.
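The closing step and the coarse extraction of plate-like regions can be sketched as follows; the kernel size and the aspect-ratio and area filters are illustrative assumptions.

```python
import cv2

def candidate_regions(edge_image):
    """Morphological closing of the edge map and extraction of plate-like boxes.

    The kernel size and the aspect-ratio/area filters are placeholder values.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 5))
    closed = cv2.morphologyEx(edge_image, cv2.MORPH_CLOSE, kernel)  # dilate then erode
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w * h > 1000:  # rough plate aspect ratio
            boxes.append((x, y, w, h))
    return closed, boxes
```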
AlexNet is a classic CNN model in the field of image recognition [12]. Its structure is shown in Fig. 8: the network has 8 weighted layers, the first 5 being convolutional layers and the remaining 3 fully connected layers; the output of the last fully connected layer is the input of a 1000-dimensional Softmax, which produces a distribution over 1000 labels. To address the slow training convergence caused by sigmoid gradient saturation, AlexNet uses Rectified Linear Units (ReLUs). ReLU is a piecewise linear function: if the input is less than or equal to 0, the output is 0; if it is greater than 0, the output equals the input. Training with ReLUs is much faster than with the equivalent Tanh. According to the needs of our classification task, we reduce the number of output neurons of the classifier from 1000 to 3. To prevent overfitting during training, the three RGB channels of the image are processed. Starting from the first convolutional layer, 96 three-dimensional convolution kernels of size 11 × 11 and stride 4 are applied, yielding 96 feature maps of size 55 × 55, which are then passed through the activation function.
After the activation, these data are downsampled by a max-pooling operation of size 3 × 3 with a stride of 2. Local response normalization is then applied, b^i_{x,y} = a^i_{x,y} / (k + α Σ_j (a^j_{x,y})²)^β, where the sum runs over n adjacent feature maps, N is the number of convolution kernels in the convolutional layer, and k, n, α, β are constants whose values can be assigned according to the specific experiment; in this paper, k = 2, n = 7, α = 10⁻⁴, and β = 0.75. Experiments show that training is better after normalization. Due to the large number of network layers and feature maps, the test results only show part of the features of the third layer, in Fig. 10 and Fig. 11. For the inclination of the license plate after positioning, a commonly used Radon-transform-based license plate correction is applied to correct the tilt, so that a precise license plate positioning result can be obtained.
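A PyTorch sketch of the modified classifier is given below. It is only an approximation of the network described above: torchvision's AlexNet uses 64 first-layer kernels and omits the original local response normalization, so an LRN layer (with the constants quoted in the text) is inserted manually, and the final layer is replaced to output 3 classes.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet()                         # stock AlexNet, randomly initialised
model.classifier[6] = nn.Linear(4096, 3)         # blue plate / yellow plate / non-plate

# Local response normalization with the constants quoted above (k=2, n=7, alpha=1e-4,
# beta=0.75), inserted after the first ReLU as in the original AlexNet.
lrn = nn.LocalResponseNorm(size=7, alpha=1e-4, beta=0.75, k=2.0)
features = list(model.features)
features.insert(2, lrn)
model.features = nn.Sequential(*features)

patch = torch.randn(1, 3, 224, 224)              # a resized candidate plate region
probs = torch.softmax(model(patch), dim=1)       # class probabilities for the region
```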
Character Segmentation and Recognition
For the precisely positioned license plate, the binarized plate image is used to find blocks of continuous text; if a block's length is greater than a set threshold, it is cut, completing the character segmentation. The characters that make up a license plate are relatively few and all printed; therefore, the template matching algorithm [13] is used to recognize them, and a high recognition accuracy can be obtained. The character template library is shown in Fig. 12, and the final result of character segmentation is shown in Fig. 13.
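A minimal template-matching sketch is shown below; the function name and the normalised cross-correlation criterion are assumptions for illustration.

```python
import cv2

def recognise_character(char_img, templates):
    """Match a segmented, binarised character against printed-font templates.

    char_img: grayscale character patch; templates: dict label -> template image,
    assumed resized to the same size as char_img. Returns the best label and score.
    """
    best_label, best_score = None, -1.0
    for label, tpl in templates.items():
        score = float(cv2.matchTemplate(char_img, tpl, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```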
Experimental Results and Analysis
A large number of experiments were used to test the improvements proposed in this article. The AlexNet classifier was tested on 2,810 samples, including 1,022 blue license plates, 991 yellow license plates, and 797 fake license plates. According to the experimental data of Tab. 1 and Tab. 2, compared with the SVM scheme, the accurate positioning rate of the final license plate reached 98.3%, and the wrong positioning rate was reduced to 2.58%, proving that the algorithm has good recognition performance and that the system is feasible and effective. Because Canny edge detection combined with morphological processing is used for preliminary positioning, the time the AlexNet classifier spends scanning the full image is greatly reduced. Because the license plate recognition system in this paper is based on two different kinds of features, the two stages can be executed simultaneously on a multi-core CPU without affecting each other, reducing time and increasing operating speed. This method alleviates, to a certain extent, the shortcomings of current license plate positioning technology in complex environments, such as low accuracy and long processing time, and enables higher positioning accuracy in natural scenes with many vehicles, many people, and large viewing angles. Moreover, as CPU performance continues to improve, the computational cost of the AlexNet classifier will decrease, so the system still has room for improvement in engineering applications.
Conclusion
Incorporating deep learning theory and the AlexNet model into the license plate location system not only improves the accuracy of license plate location, but also reduces the time spent on full-image scanning by the AlexNet classifier when Canny edge detection combined with morphological processing is used for preliminary positioning. A large number of experimental results prove that the system has a high positioning success rate, uses less time and fewer resources, and has practical value.
Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.
"Computer Science"
] |
Ultimate squeezing through coherent quantum feedback: A fair comparison with measurement-based schemes
We develop a general framework to describe interferometric coherent feedback loops and prove that, under any such scheme, the steady-state squeezing of a bosonic mode subject to a rotating wave coupling with a white noise environment and to any quadratic Hamiltonian must abide by a noise-dependent bound that reduces to the 3dB limit at zero temperature. Such a finding is contrasted, at fixed dynamical parameters, with the performance of homodyne continuous monitoring of the output modes. The latter allows one to beat coherent feedback and the 3dB limit under certain dynamical conditions, which will be determined exactly.
Introduction -Feedback is one of the main avenues to exert and refine control on physical systems. In quantum mechanics, feedback control may be applied in two, radically different, fashions: as measurement-based feedback [1], where measurements are used to purify the system and to inform operations on it, or as coherent feedback [2], where only deterministic manipulations of subsystems coupled to the system of interest (part of the latter's environment) are performed. While in measurementbased feedback the quantum information is turned into classical information by the act of measuring, in coherent feedback the information stays quantum at all stages of the control loop.
It might be argued that, since the manipulations involved are deterministic, coherent feedback loops should not be considered as feedback control at all, but rather as a class of open-loop control strategies where only certain auxiliary degrees of freedom are accessible. However, quantum optics allows us to disregard such a terminological dispute (although the discussion of this issue in [3] is worth mentioning), through the adoption of the input-output formalism, which is tailored to describe the interaction of a countable set of localised modes (e.g., a set of cavity modes) with a neighbouring field's continuum (the electromagnetic field outside a cavity). We shall then, as customary in the context of quantum optics, define a coherent feedback loop as one where a set of output modes, interacting with a system at an input-output interface, may be manipulated through quantum CP-maps and then fed back into the system as input modes at another input-output interface. This approach is similar to that used in the established field of 'cascaded quantum systems', where the output of one system is used as the input of another [4][5][6], and of the related "all optical" feedback [7], the difference being that here the output is fed back into the system whence it came.
Notice that the input-output paradigm finds successful and broad application to a number of quantum set-ups where a high degree of coherent control is achievable, ranging from purely optical set-ups to optomechanics [8], nanoelectromechanics, atomic ensembles, and cavity QED waveguides [9], to mention but a few. Given the impressive recent advances in the realisation of quantum technologies, connectivities (especially via fibres and waveguides) are quickly nearing a point where quantum control loops will be feasible and pivotal in harnessing quantum resources, such as quantum coherence and entanglement. Indeed, coherent control loops have been demonstrated in optical [10] and solid-state [11] systems, whilst measurement-based feedback has been by now applied to a variety of systems, with the aim of performing quantum operations, of enhancing cooling routines [12] or of entangling quantum systems [13]. It is therefore paramount to understand the ultimate limits of feedback strategies as well as which class of loops, coherent or measurement-based, is advantageous to perform a given task or optimise a given figure of merit.
After the seminal study [7] - which, at variance with the present enquiry, does not deal with a squeezing Hamiltonian acting on the system, but rather with more general forms of coupling between the system cavity and the feedback loop - such a theoretical comparison has been addressed only for specific tasks in finite-dimensional scenarios [3,14], or for the realisation of protocols involving out-of-loop degrees of freedom [15] (i.e., concerning the relationship between input and output degrees of freedom). Treatment [15], in particular, adopts a framework that is wholly analogous to ours, and establishes a few remarkable impossibilities for measurement-based feedback. In our study, the input-output formalism, compounded with general linear operations, will yield a framework for a fair comparison between the two classes of control schemes, which may be contrasted at given connectivities and other technical and environmental parameters (such as detection efficiencies and temperature).
Note that the set of operations encompassed by coherent and measurement-based feedback differ, since the stochastic dynamics originating from measurements cannot be reduced to deterministic operations, which is the essence of the so called "measurement problem" of quantum mechanics. Nevertheless, coherent feedback has been proven superior in a number of tasks and contexts, so that our ability to demonstrate situations where measurement-based schemes do in principle prove superior is all the more striking and consequential. Measurement-based strategies, it turns out, prove particularly effective in stabilising in-loop figures of merit (i.e., quantities pertaining to the localised modes).
Specifically, in this paper we shall consider a single bosonic mode coupled to a white noise continuum at finite temperature through the input-output formalism. We shall assume the mode to be subject to a squeezing Hamiltonian and, as a significant case study, shall adopt the optimisation of steady-state squeezing as our figure of merit. First, as proof of principle, we shall present a simple coherent feedback loop using a single feedback mode subject to losses, show that it can enhance the achievable squeezing, and contrast it with what is achievable through (feasible) homodyne measurements of the output field. Then, we consider the most general possible coherent feedback setup, letting an arbitrary number of output modes at one interface undergo the most general deterministic Gaussian CP-map not involving any source of squeezing (i.e., the most general open, passive optical transformation, corresponding in practice to leakage, beam splitters and phase shifters), before being fed back into the system. This will prove that the simple scheme we considered is indeed optimal, and that our comparison is therefore conclusive.
Continuous variable systems - A system of n bosonic modes can be described by a vector of operators r̂ = (x̂_1, p̂_1, …, x̂_n, p̂_n)^T. These obey the canonical commutation relations (CCR) [x̂_i, p̂_j] = i δ_{ij} 𝟙, where we have set ħ = 1. The CCR for multiple modes can be described using the symmetrised version of the commutator, [r̂, r̂^T] = r̂ r̂^T − (r̂ r̂^T)^T = iΩ_n, where Ω_n is a 2n × 2n matrix known as the symplectic form: Ω_n = ⊕_{j=1}^n Ω_1, with Ω_1 = ((0, 1), (−1, 0)). In the rest of this paper, we will omit the subscript from Ω, letting the context specify the appropriate dimension. For a quantum state ρ̂, the expectation value of the observable x̂ is given by ⟨x̂⟩ = Tr[ρ̂ x̂]. Using vector notation, this can be generalised to the first and second statistical moments of a state, r̄ = Tr[ρ̂ r̂] and σ_{jk} = Tr[ρ̂ {(r̂_j − r̄_j), (r̂_k − r̄_k)}], where {·, ·} denotes the anticommutator. The above definition leads to a real, symmetric covariance matrix σ.
The steady states we will focus on are Gaussian states, which may be defined as the ground and thermal states of quadratic Hamiltonians. Such states are fully characterised by the first and second statistical moments defined above. Unitary operations which map Gaussian states into Gaussian states are those generated by quadratic Hamiltonians. The effect of such an operation on the vector of operators is a symplectic transformation r̂ → S r̂, where S is a 2n × 2n real matrix satisfying S Ω S^T = Ω. The corresponding effect on the covariance matrix of the system is the transformation σ → S σ S^T. In this study, we will make use of so-called 'passive' transformations, which do not add any energy to the system and therefore do not perform any squeezing. Passive transformations must satisfy the extra constraint that S is orthogonal, i.e., S S^T = 𝟙.
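As a small numerical illustration (not part of the paper), the snippet below builds a two-mode beam splitter and checks that it is both symplectic and orthogonal, i.e., a passive transformation in the sense just defined.

```python
import numpy as np

eta = 0.7                                    # beam-splitter transmissivity (arbitrary)
t, r = np.sqrt(eta), np.sqrt(1 - eta)
S = np.block([[ t * np.eye(2), r * np.eye(2)],
              [-r * np.eye(2), t * np.eye(2)]])   # acts on (x1, p1, x2, p2)

Omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.block([[Omega1, np.zeros((2, 2))],
                  [np.zeros((2, 2)), Omega1]])

print(np.allclose(S @ Omega @ S.T, Omega))   # symplectic: True
print(np.allclose(S @ S.T, np.eye(4)))       # orthogonal, hence passive: True
```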
The Input-Output Formalism - The input-output formalism is a method for dealing with the evolution of systems coupled to a noisy environment consisting of a continuum of modes (e.g., the free electromagnetic field). The interaction of the system with such an environment can be modelled as a series of instantaneous interactions with different modes at different times [16]. The incoming mode which interacts with the system at time t is known as the input mode and is labelled x̂_in(t); the mode scattered at time t is labelled x̂_out(t) and is known as the output mode. The input modes satisfy the continuous CCR [r̂_in(t), r̂_in(t')^T] = iΩ δ(t − t'). The coupling of the system to the input fields is given by a Hamiltonian Ĥ_C = (1/2) r̂_SB^T H_C r̂_SB, where r̂_SB = (x̂_1, p̂_1, …, x̂_n, p̂_n, x̂_in,1, p̂_in,1, …, x̂_in,m, p̂_in,m)^T is the total vector of the n system modes and m bath modes. The square matrix H_C is the coupling Hamiltonian matrix, whose off-diagonal block C, a 2n × 2m matrix, is known as the coupling matrix. The Heisenberg evolution of the system operators is given by a stochastic differential equation known as the quantum Langevin equation [17], dr̂/dt = A r̂ + ΩC r̂_in(t). The matrix A is known as the drift matrix of the system and is given by A = Ω_n H_S + (1/2) ΩCΩC^T. The symmetric square matrix H_S specifies the system Hamiltonian Ĥ_S = (1/2) r̂^T H_S r̂. The vector r̂_in(t) is a stochastic process known as a quantum Wiener process which, in analogy with the classical Wiener process, obeys ⟨{r̂_in(t), r̂_in(t')^T}⟩ = σ_in δ(t − t'), where σ_in is the covariance matrix of the input modes. This relationship implies delta correlations between bath modes interacting at different times (the well-known "white noise" condition) and hence the Markovianity of the free dynamics, which we are thus assuming. Eqs. (1) and (3) can be combined to obtain an equation for the evolution of the system covariance matrix, σ̇ = Aσ + σA^T + D, where D = ΩC σ_in C^T Ω^T is known as the diffusion matrix. The condition required for Eqs. (4,3) to admit a steady-state (stable) solution is that the matrix A must be 'Hurwitz', meaning that all of the real parts of its eigenvalues are negative. If this condition is satisfied, then the steady-state solution reads σ_∞ = ∫_0^∞ e^{At} D e^{A^T t} dt.
Eqs. (1) and (3) can be combined to obtain an equation for the evolution of the system covariance matrix: where D = ΩCσ in C T Ω T is known as the diffusion matrix. The condition required for Eqs. (4,3) to admit a steady-state (stable) solution is that the matrix A must be 'Hurwitz', meaning that all of the real parts of its eigenvalues are negative. If this condition is satisfied, then the steady-state solution reads Squeezing with no control -Squeezing is the process of reducing the variance of one quadrature and correspondingly increasing the variance of its conjugate. Our figure of merit for this study is σ 11 , the element of the covariance matrix corresponding to twice the variance of thê x-quadrature. The smaller the value of σ 11 , the more squeezed the system is. We consider a single bosonic cavity mode, subject to the HamiltonianĤ =Ĥ S +Ĥ C , whereĤ S = −χ{x,p}/4 with χ > 0 is the Hamiltonian which squeezes thex quadrature. This corresponds to a Hamiltonian matrix Losses in the cavity are modeled by coupling the cavity mode to an external fields through the Hamil-tonianĤ C that allows for the exchange of excitations: This corresponds to a coupling matrix C = √ γΩ T 1 where γ is the strength of the coupling. When this is the form of system-environment coupling, the so-called input-output boundary condition relates the system modes to the input and output as follows [16]: For this example and for the remainder of this paper, we will consider the case when all input fields are in Gibbs thermal states of the free Hamiltonian, so that σ in =N ½, whereN = 2N + 1 and N ≥ 0 is the mean number of thermal excitations in the environment, which can be promptly related to temperature and frequency through the Bose law. The steady-state squeezing obtained from (5) is σ 11 =N γ/(χ + γ). The condition for stability is that |χ| < γ, which means that, if the loss rate or squeezing parameter can be tuned, and the input fields are taken to be vacua (soN = 1) the maximum steadystate squeezing that can be achieved is σ 11 = 1 2 . This is known in the literature as the 3dB limit, as 10 log 10 (2) ≈ 3.01 (this is the noise, in decibels, associated with the smallest eigenvalue of σ in units of vacuum noise).
We note that steady-state squeezing could be improved upon if the input state were squeezed, but in this study we will only consider naturally occurring, non-squeezed reservoirs (as opposed to squeezed ones, which have only been envisaged through very demanding engineering), also in view of comparing different feedback strategies under the assumption that the only squeezing source is constituted by the system Hamiltonian with a certain strength χ.
Homodyne Monitoring - In order to provide the reader with a comparison between measurement-based and coherent feedback, let us also consider the continuous monitoring of the output x̂-quadrature through a homodyne detector with efficiency ζ, which yields a relevant element of the covariance matrix whose steady-state expression is derived in [18]. This monitoring maximises the steady-state squeezing among all general-dyne detections at zero temperature, i.e., for N̄ = 1 [19], but is beneficial at finite temperature too (for N̄ > 1); we do not report the finite-temperature optimisation here, since it would require unrealistic access to purifications of the bath [20]. Notice also that, since the conditional covariance matrix (and hence the squeezing) after any general-dyne detection does not depend on the measurement outcome, mere monitoring already achieves the optimal performance allowed by this class of feasible measurements, without the need of closing the control loop.
Simple Coherent Feedback - As a preliminary piece of inquiry, let us follow the treatment of [17] and examine the performance of the simplest possible coherent feedback loop, obtained by feeding the output of one interface into the input of the other after it undergoes losses. To do this we consider a system mode coupled to two input fields, each through a Hamiltonian of the form given in (6). To avoid ambiguity, we will use the subscript e to refer to environmental white noise modes, and the subscript in when referring to the input interacting with the system through the Hamiltonian given in (6). Adding coherent feedback involves setting r̂_in,1 = r̂_e,1 and r̂_in,2(t) = Φ(r̂_out,1(t)), where Φ is the CP-map corresponding to losses. These losses can be modelled as mixing at a beam splitter with an environmental mode r̂_e,2. This means that coherent feedback can be achieved by setting r̂_in,2(t) = √η r̂_out,1(t) + √(1 − η) r̂_e,2(t), where η is the transmissivity of the lossy feedback line (1 − η quantifying the losses) and we have used the input-output relation (7). It is important to note that we are assuming instantaneous feedback, with no delays between the mode put out at interface 1 and fed back at interface 2, which preserves the Markovianity of the dynamics. Making this substitution into Eq. (6) results in the system r̂ being effectively coupled to the environment r̂_e,tot = (r̂_e,1^T, r̂_e,2^T)^T through a modified coupling matrix. Such a system requires γ(1 − √η) > χ/2 in order to be stable. The steady-state squeezing achieved in these conditions is σ_11 = 2N̄γ(1 − √η)/[χ + 2γ(1 − √η)], which is minimised by letting √η → 1 − χ/(2γ), resulting in a squeezing of σ_11 → N̄/2. Thus, at zero temperature (i.e., for N̄ = 1), coherent feedback allows the 3dB limit to be approached (but not beaten) for any choice of parameters satisfying 0 < χ/(2γ) < 1. Notice that, regardless of χ, no stable squeezing is achievable if N̄ ≥ 2.
This is a very remarkable result, showing that a coherent feedback loop is in principle capable of amplifying the strength of any squeezing Hamiltonian up to the 3dB stability limit. However, a homodyne measurement-based loop, whose steady-state squeezing is given by Eq. (8), outperforms this coherent feedback scheme when the efficiency ζ of the detector exceeds the threshold set by inequality (10) and the denominator of the RHS of (10) is positive. For χ < γ/2, either the denominator is negative or the bound is larger than 1, which proves that homodyne monitoring does not beat coherent feedback at such weak interaction strengths. For χ ≥ γ/2, there is always a detection efficiency threshold above which the coherent feedback loop we considered is outperformed by homodyne monitoring; this threshold, quite interestingly, decreases with increasing noise (although the absolute performance of monitoring at given χ still deteriorates as the noise increases). As the upper limit for stability χ = γ is approached, the efficiency threshold falls to zero, so that detection with any efficiency will be better than our coherent loop in this limit. The ultimate performance of homodyne monitoring is obtained at ζ = 1, where σ^m_11 = N̄(1 − χ/γ): hence, monitoring can in principle achieve stable squeezing (σ^m_11 < 1) for all values of N̄, although only for χ > γ(1 − 1/N̄). Notice that, in principle, arbitrarily high squeezing may be stabilised at all noises (temperatures), whereas the coherent feedback loop we studied is bounded by the value N̄/2. In order to achieve a conclusive comparison between coherent and measurement-based loops, we need to extend our treatment beyond a specific coherent feedback loop to include any possible interferometric scheme without additional sources of squeezing.
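The crossover between the two strategies can be made concrete with the closed-form values quoted above (optimal coherent feedback at N̄/2 versus ideal homodyne monitoring at N̄(1 − χ/γ)); the parameter grid below is arbitrary.

```python
import numpy as np

gamma, Nbar = 1.0, 1.0
for chi in np.array([0.1, 0.3, 0.5, 0.7, 0.9]) * gamma:
    cf = Nbar / 2                     # best coherent feedback, sqrt(eta) -> 1 - chi/(2*gamma)
    hom = Nbar * (1 - chi / gamma)    # homodyne monitoring at unit efficiency
    print(f"chi/gamma = {chi:.1f}:  coherent {cf:.2f}   homodyne {hom:.2f}")
```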
General Passive Coherent Feedback - Let us therefore consider the most general possible coherent feedback protocol which does not include any extra source of squeezing. A single system mode is coupled to l + m input modes through a coupling Hamiltonian of the same form as (6). We will give the label a to the modes interacting at the first l input-output interfaces, and the label b to the modes interacting at the remaining m interfaces. The first l input modes are environmental white noise, meaning we can write r̂_in,a = r̂_e,a = (r̂_e,1^T, …, r̂_e,l^T)^T. The corresponding output modes then undergo the most general Gaussian CP-map which does not include any form of squeezing. This is achieved by applying a passive transformation on the output modes, along with n ancillary white noise modes, before tracing out the ancillas. After the transformation, the resulting m modes are fed into the remaining m input interfaces of the system. We shall assume that the additional ancillary modes are also affected by the same thermal noise as the environment, so that we still have σ_in = N̄ 𝟙 [21]. Let us stress that, by considering the most general passive symplectic transformation, the formalism below will allow for an elegant and compact description of the most general interferometric scheme mediating the coherent feedback loop.
The passive transformation on the output and ancilla modes can be represented by an orthogonal, symplectic, 2(l + n)-dimensional, square matrix Z acting as r̂_in,b ⊕ r̂_anc,f = Z(r̂_out,a ⊕ r̂_anc,i), where r̂_in,b = (r̂_in,(l+1)^T, …, r̂_in,(l+m)^T)^T and r̂_out,a = (r̂_out,1^T, …, r̂_out,l^T)^T. The initial and final states of the ancilla modes are given by r̂_anc,i and r̂_anc,f respectively, where r̂_anc,i = (r̂_e,(l+m+1)^T, …, r̂_e,(l+m+n)^T)^T. The orthogonal symplectic matrix Z can be decomposed into block matrices Z = ((E, F), (G, H)). This representation of Z allows us to write r̂_in,b = E r̂_out,a + F r̂_anc,i. It is shown in [18] that the overall effect of this coherent feedback protocol is to couple the system mode to the white-noise environment, now given by r̂_e,a ⊕ r̂_anc,i, through the coupling matrix C_cf = (C_l − C_m E, C_m F), where C_j indicates a 2 × 2j dimensional matrix of the form √γ(Ω^T, …, Ω^T). It is also shown that adding coherent feedback modifies the system Hamiltonian matrix H_S by the addition of the matrix C_m E Γ_l + Γ_l^T E^T C_m^T, where Γ_l is a 2l × 2 matrix of the form Γ_l = √γ(𝟙_2, …, 𝟙_2)^T.
Optimal Coherent Feedback for Squeezing - Working within this general framework of coherent feedback, we will now find the optimal steady-state squeezing achievable. In particular, we will show that no coherent feedback protocol can improve upon the 3dB squeezing limit, for any choice of quadratic system Hamiltonian. This result will be outlined here, with some details left to [18]. We are after the smallest eigenvalue of σ_∞, as given by Eq. (5), with D and A modified by the coherent feedback loop as per Eqs. (12,13) (recall that A and D are in turn functions of H_S, C and σ_in); this is equivalent to the smallest value of v†σ_∞v for a normalised vector v. It may be shown [18] that, for any coherent feedback loop, the diffusion matrix is proportional to the identity and therefore has only a single eigenvalue, given by δ = N̄γ(l + m − 2ε), where ε = Σ_{jk} E^{jk}_{11} is the sum over the 11-elements of each 2 × 2 submatrix E^{jk} of E.
This simplifies our task greatly, since it is now apparent that the smallest value of v†σ_∞v is obtained by setting v = λ_1, where λ_1 is the normalised eigenvector of A corresponding to the eigenvalue λ_1 with the most negative real part: indeed, this choice minimises the positive integrand v†e^{A^T t}e^{At}v at all times t. Whence the bound (14), σ_11 ≥ δ/(2|λ_1|). How negative the eigenvalue of A can be made is limited by the stability criterion, since making one eigenvalue more negative makes the other less negative. To ensure stability, the most negative eigenvalue of A must satisfy λ_1 > γ(2ε − l − m) [18]. In deriving this bound, the system Hamiltonian was assumed to be quadratic, but otherwise completely general; this takes into account the modifications made to the system Hamiltonian by the coherent feedback, as well as any deliberate tuning. Inequality (14), along with the expressions for δ and λ_1, yields σ_11 ≥ N̄/2 ≥ 1/2. This result shows that there exists no combination of quadratic Hamiltonians and coherent feedback protocols which can beat the 3dB limit. Therefore, our comparison with the measurement-based strategy extends beyond the specific example we made above. Summarising, we have found that, in regard to the squeezing Hamiltonian −(χ/4){x̂, p̂}, coherent feedback loops are superior for χ < γ/2 (with optimal performance independent of the interaction strength), while measurement-based, homodyne feedback is better for χ ≥ γ/2 and efficiencies satisfying (10). Our comparison is definitive at zero temperature, for N̄ = 1, in the sense that both measurement-based and coherent feedback were fully optimised for vacuum input noise (homodyning is then optimal), and that at optical frequencies one has N̄ − 1 ≈ 10⁻⁶.
It is also worth noting that our result hinges on the phase-insensitive nature of the input-output coupling (6), which implies a diffusion matrix D proportional to the identity, and would not apply, for instance, to a quantum Brownian motion master equation. In this regard, our finding may be considered as an extension of the well-known 3dB squeezing limit that affects phase-insensitive amplifiers [22], which we showed to bound in-loop, stable squeezing too, under any interferometric coherent feedback scheme and any quadratic system Hamiltonian. Notice that stability is another essential ingredient in establishing the bound, as unstable coherent feedback loops would be able to achieve higher squeezing (but are typically not desirable in practice).

Conclusions and Summary - We have developed a general framework for passive coherent feedback in the Gaussian regime and shown that no protocol within this framework can beat the 3dB squeezing limit at steady state. In contrast, homodyne monitoring of output fields can stabilise arbitrarily high squeezing at low enough noise and provided that detection efficiency is high enough.
The general treatment developed here provides the groundwork for further inquiries on passive coherent feedback, which may be extended to the optimisation of entanglement in multimode systems, to more general noise models, as well as applied to the cooling of concrete systems, such as quantum optomechanics [23].
We acknowledge discussions with M. Brunelli, who made us aware of additional literature, and M. Genoni, who flagged inaccuracies in the manuscript.
HOMODYNE MONITORING AT FINITE TEMPERATURE
Continuous, general-dyne monitoring of the output field turns the diffusive equation (4) into the following Riccati equation (see, e.g., [17] for a complete treatment of the theory): where the covariance matrix σ_m parametrises the choice of measurement. We will consider the homodyne detection of the output field of a single-input single-output system as described in the section titled 'Squeezing with no control'. Homodyne detection of the x̂ quadrature with efficiency ζ is obtained by setting which leads to a diagonal quadratic equation for the monitored steady-state covariance matrix, whose diagonal elements σ_m,11 and σ_m,22 must satisfy (the off-diagonal elements vanish) with physical solutions The other solution for σ_m,11 must be discarded, since it is negative or zero, and would thus violate the strict positivity of σ_m, stemming from the uncertainty principle. The solution for σ_m,22 shows that, even under monitoring, the condition |χ| < γ is necessary for stability.
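Steady states of Riccati equations of this type are commonly obtained with an algebraic Riccati solver. The sketch below is generic: it solves Aσ + σA^T + D − σBB^Tσ = 0 for placeholder matrices chosen only for illustration (they are not the specific matrices of the monitored problem above) and checks the residual.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# placeholder matrices for a stable, monitored single-mode problem (illustrative only)
A = np.array([[-1.0, 0.2], [0.0, -0.5]])   # drift
D = np.diag([1.0, 1.0])                    # diffusion
B = np.array([[1.0], [0.0]])               # measurement/back-action channel

# SciPy solves  a^T x + x a - x b r^{-1} b^T x + q = 0;
# choosing a = A^T, q = D, r = 1 turns this into  A x + x A^T + D - x B B^T x = 0
sigma = solve_continuous_are(A.T, B, D, np.eye(1))

residual = A @ sigma + sigma @ A.T + D - sigma @ B @ B.T @ sigma
print("steady-state covariance:\n", sigma)
print("max |residual| =", np.abs(residual).max())
```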
EFFECTIVE COUPLING MATRIX FOR GENERAL PASSIVE COHERENT FEEDBACK
The Hamiltonian which couples system and input modes can be written as: where r̂_tot = (r̂^T, r̂^T_{in,a}, r̂^T_{in,b})^T. There are l input modes at a and m input modes at b, so r̂_tot is a (2 + 2l + 2m)-dimensional vector. The coupling Hamiltonian matrix takes the form: where C_j indicates a 2 × 2j matrix of the form √γ(Ω^T . . . Ω^T). This allows for an exchange of excitations between system and input field. When no coherent feedback is present, both r̂_{in,a} and r̂_{in,b} are white noise environmental modes. When coherent feedback is included, the input modes at a are still white noise. We will call these modes r̂_{e,a} to indicate this. The output modes at a undergo a passive Gaussian CP-map and then are used to replace r̂_{in,b}.
The passive Gaussian CP-map is achieved by performing a passive symplectic operation on the joint state r̂_{out,a} ⊕ r̂_{anc,i}, where r̂_{anc,i} is a 2n-dimensional vector representing the initial state of n environmental white noise modes. The resulting mode to be input at interface b, along with the final state of the ancilla mode, can be written (r̂_{in,b} ⊕ r̂_{anc,f}) = Z(r̂_{out,a} ⊕ r̂_{anc,i}). The 2(l + n)-dimensional square matrix Z is symplectic, which ensures that the linear operation is physical, and orthogonal, which ensures that the operation is passive (i.e., that it does not involve any squeezing). We can write Z in terms of block matrices: Once the ancilla modes have been traced out, the effect of the CP-map can be written as r̂_{out,a} → E r̂_{out,a} + F r̂_{anc,i}, which allows us to write (note that E and F are, respectively, 2m × 2l and 2m × 2n matrices): We now write the matrix form of the multimode input-output boundary condition in order to write r̂_{out,a} in terms of r̂_{in,a} = r̂_{e,a}: Combining the above equations, we obtain: The effect of adding coherent feedback is therefore to couple the system to a white noise environment given by (r̂^T_{e,a}, r̂^T_{anc,i})^T through a coupling Hamiltonian characterised by the matrix H^cf_C = L^T H_C L. This matrix is: which couples the system to the environment through the Hamiltonian operator Ĥ^cf_C = ½ (r̂^T, r̂^T_{e,a}, r̂^T_{anc,i}) H^cf_C (r̂, r̂_{e,a}, r̂_{anc,i}).
Notice that this results in a matrix equal to C_m E Γ_l + Γ_l^T E^T C_m^T being added to the system Hamiltonian matrix and changes the effective coupling matrix to:
PROPERTIES OF THE ORTHOGONAL SYMPLECTIC MATRIX
We have considered an orthogonal symplectic matrix of the form which transformed a vector of operators as per r̂ → Zr̂. Here, E is a (2m × 2l) matrix and F is a (2m × 2n) matrix. The condition of orthogonality means that ZZ^T = ½, which gives us the following conditions on the submatrices: In particular, we shall make use of the relation EE^T + FF^T = ½. The condition of symplecticity means that ZΩZ^T = Ω. Recall that we are using the convention that the dimension of Ω is specified by the context. In terms of the submatrices, this means that: From this we obtain the condition EΩE^T + FΩF^T = Ω, which will be key later. The vector of operators r̂ was ordered so that r̂ = (x̂_1, p̂_1, ..., x̂_n, p̂_n)^T. We can also consider an orthogonal symplectic matrix S acting on a vector of differently ordered operators: ŝ → Sŝ, where ŝ = (x̂_1, ..., x̂_n, p̂_1, ..., p̂_n)^T. In this case, the transformation matrix takes the form [17]: When we use the ordering of variables ŝ = (x̂_1, ..., x̂_n, p̂_1, ..., p̂_n)^T, the symplectic condition is SJS^T = J, where J is the symplectic form Transforming between the two representations means that we can write each 2 × 2 submatrix of Z as: where x_{jk} and y_{jk} are the elements of matrices X and Y, respectively. This fact will be used later.
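These submatrix identities can be checked numerically. The sketch below builds a random passive (orthogonal symplectic) transformation from a Haar-random unitary U = X + iY in the (x̂_1...x̂_N, p̂_1...p̂_N) ordering, permutes it to the (x̂_1, p̂_1, ...) ordering used here, and verifies EE^T + FF^T = ½ and EΩE^T + FΩF^T = Ω, together with the 2 × 2 submatrix structure; the mode numbers l, m, n are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
l, n = 2, 2          # loop modes and ancilla modes entering the CP-map
m = 1                # modes re-injected at interface b (m + n_f = l + n)
N = l + n

# Haar-random unitary U = X + iY -> passive symplectic in (x...x, p...p) ordering
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
X, Y = U.real, U.imag
S = np.block([[X, Y], [-Y, X]])

# permutation from (x...x, p...p) to (x1, p1, x2, p2, ...) ordering
P = np.zeros((2 * N, 2 * N))
for k in range(N):
    P[2 * k, k] = 1.0
    P[2 * k + 1, N + k] = 1.0
Z = P @ S @ P.T

omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
Om = lambda modes: np.kron(np.eye(modes), omega)

assert np.allclose(Z @ Z.T, np.eye(2 * N))      # orthogonality
assert np.allclose(Z @ Om(N) @ Z.T, Om(N))      # symplecticity

E, F = Z[:2 * m, :2 * l], Z[:2 * m, 2 * l:]
print("max |EE^T + FF^T - 1|        =", np.abs(E @ E.T + F @ F.T - np.eye(2 * m)).max())
print("max |E Om E^T + F Om F^T - Om| =",
      np.abs(E @ Om(l) @ E.T + F @ Om(n) @ F.T - Om(m)).max())

# each 2x2 submatrix should have the form [[x, y], [-y, x]]
blocks_ok = all(np.allclose(Z[2*j:2*j+2, 2*k:2*k+2],
                            np.array([[Z[2*j, 2*k], Z[2*j, 2*k+1]],
                                      [-Z[2*j, 2*k+1], Z[2*j, 2*k]]]))
                for j in range(N) for k in range(N))
print("2x2 submatrix structure holds:", blocks_ok)
```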
EIGENVALUES OF THE DRIFT MATRIX
The drift matrix A can be expressed in terms of the Hamiltonian and coupling matrices H_S and C as A = ΩH_S + ½ΩCΩC^T. Note that CΩC^T is a 2 × 2 antisymmetric matrix, since Ω^T = −Ω. Therefore, for a single mode, ½ΩCΩC^T is proportional to the identity. We shall set ½ΩCΩC^T = β½. Also, since H_S is a symmetric matrix, Tr[ΩH_S] = 0, meaning that the eigenvalues of ΩH_S can be written as ±h and the eigenvalues of A can be written λ = β ± h.
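A quick numerical check of this eigenvalue structure, using the drift expression assumed above (A = ΩH_S + ½ΩCΩC^T), a random symmetric single-mode H_S, and a bare coupling of the form C_k = √γ(Ω^T ... Ω^T) from the earlier section:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, k = 1.3, 3                                    # arbitrary damping rate and channel number
omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

H_S = rng.normal(size=(2, 2)); H_S = (H_S + H_S.T) / 2           # random symmetric Hamiltonian matrix
C = np.sqrt(gamma) * np.hstack([omega.T] * k)                    # C_k = sqrt(gamma)(Omega^T ... Omega^T)
Omega_k = np.kron(np.eye(k), omega)                              # 2k x 2k symplectic form

half_term = 0.5 * omega @ C @ Omega_k @ C.T                      # should equal beta * identity
beta = half_term[0, 0]
A = omega @ H_S + half_term                                      # assumed drift matrix

print("beta =", beta, " (for this bare coupling, -k*gamma/2 =", -k * gamma / 2, ")")
print("eig(A)    =", np.sort_complex(np.linalg.eigvals(A)))
print("beta +- h =", np.sort_complex(beta + np.linalg.eigvals(omega @ H_S)))
```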
We will now find the value of β in the coherent feedback framework, where Eq. (32) determines the coupling matrix. We use the notation Γ_k from earlier to indicate a 2k × 2-dimensional matrix of the form Γ_k = √γ(½ . . . ½)^T. This satisfies ΩC_k = Γ_k^T and ΩC_k^T = −Γ_k. We can write We now use the symplectic property EΩE^T + FΩF^T = Ω, derived in the previous section, to write (recall that the notation Ω refers to symplectic forms of different dimension, as appropriate for matrix multiplications to be consistent). Noting now that Γ_k^TΓ_k = kγ½, one has ΩC_cfΩC_cf^T = −(l + m)γ½ − Γ_l^TΩE^TC_m^T + Γ_m^TEΓ_l. The matrix Γ_m^TEΓ_l can be written as γΣ_{j,k}E^{jk}, while Γ_l^TΩE^TC_m^T can be calculated in the same way: Now, we use Eq. (38) to write e^{jk}_{11} = e^{jk}_{22}. This allows us to put together the above results to obtain β = (γ/2)(2ε − l − m), where ε = Σ_{j,k}e^{jk}_{11} = Σ_{j,k}e^{jk}_{22} [since e^{jk}_{11} = e^{jk}_{22}, as per Eq. (38)]. Recall that the two eigenvalues of A are λ = β ± h. In order for the system to be stable, we must have Re[λ] < 0 for both eigenvalues. This requires that β is negative. It also means that the most negative eigenvalue of A cannot be lower than 2β, since this would mean that the other eigenvalue would violate the stability criterion. We have therefore obtained the bound λ_1 > γ(2ε − l − m) on the most negative eigenvalue of A.
EIGENVALUES OF THE DIFFUSION MATRIX
The diffusion matrix D takes the form D = ΩCσ_inC^TΩ^T. Notice that since Ω is a unitary matrix, the eigenvalues of D are the same as the eigenvalues of Cσ_inC^T. The input state is taken to be a vacuum or thermal state with uniform noise, so σ_in = N̄½ with N̄ ≥ 1. This means that we can find the eigenvalues of D for coherent feedback by finding the eigenvalues of the matrix C_cfC_cf^T and multiplying them by N̄: Using the orthogonality condition EE^T + FF^T = ½ and the fact that C_kC_k^T = kγ½, we obtain: Writing in terms of the 2 × 2 submatrices of E gives: where we have used e^{jk}_{12} = −e^{jk}_{21} and e^{jk}_{11} = e^{jk}_{22}, as per Eq. (38), and ε = Σ_{jk}e^{jk}_{11} = Σ_{jk}e^{jk}_{22}. Therefore, the diffusion matrix under coherent feedback is proportional to the identity with eigenvalue δ = N̄γ(l + m − 2ε). | 7,754.6 | 2019-10-16T00:00:00.000 | [
"Physics"
] |
Rapid Evolution of the White Dwarf Pulsar AR Scorpii
Analysis of AR Sco optical light curves spanning 9 yr shows a secular change in the relative amplitudes of the beat pulse pairs generated by the two magnetic poles of its rotating white dwarf. Recent photometry now shows that the primary and secondary beat pulses have similar amplitudes, while in 2015 the primary pulse was approximately twice that of the secondary peak. The equalization in the beat pulse amplitudes is also seen in the linearly polarized flux. This rapid evolution is consistent with precession of the white dwarf spin axis. The observations imply that the pulse amplitudes cycle over a period of ≳40 yr but that the upper limit is currently poorly constrained. If precession is the mechanism driving the evolution, then over the next 10 yr the ratio of the beat pulse amplitudes will reach a maximum followed by a return to asymmetric beat pulses.
INTRODUCTION
AR Scorpii (AR Sco hereafter) is one of the most intriguing interacting binary stars known. It has been called a white dwarf "pulsar" (Buckley et al. 2017) because its bright, polarized, synchrotron flashes appear to be powered by the spin-down energy of its degenerate primary stellar component (Marsh et al. 2016; Stiller et al. 2018; Gaibor et al. 2020; Pelisoli et al. 2022). The binary consists of a low-mass red dwarf star and a rapidly spinning magnetized white dwarf (WD) orbiting over a 3.56-hour period. The WD spins (ω) with a period of 1.95 min (Marsh et al. 2016) and appears to generate two pulses per rotation, likely from two magnetic poles. In the Potter & Buckley (2018) model, the synchrotron emission is modulated by the angle between the magnetic poles of the spinning WD and the secondary star. The emission is enhanced just after a magnetic pole sweeps past the red dwarf, leading to variations at the spin frequency plus strong pulses at the beat frequency, ω − Ω, where Ω is the orbital frequency. Because pulses are generated from both poles, power is also seen at the first harmonic of the beat frequency (aka the double-beat: 2(ω − Ω)).
Light curves of AR Sco obtained near its discovery clearly showed that the beat pulse pairs alternated in strength, with the brighter pole being about twice the amplitude of the opposite pole (Marsh et al. 2016). This alternating strength was seen in the linearly polarized flux as well (Potter & Buckley 2018). However, in a Fourier component analysis, Pelisoli et al. (2022) noted that the quality of their model fit to the observed light curves was decreasing over time. They attributed this to stochastic changes in the relative strength of the pulse pairs. Takata et al. (2021) noted that the X-ray beat pulses were single-peaked in 2016 but seen as double-peaked in 2020. Here, we analyze nine years of AR Sco optical light curves to test for any systematic evolution in the beat pulse strengths.
DATA
We analyze light curves studied by Gaibor et al. (2020). In addition, we have obtained rapid-cadence light curves of AR Sco using the Sarah L. Krizmanich Telescope (SLKT) from 2020 through 2023 and light curves in 2023 from the South African Astronomical Observatory using the high-speed photomultiplier instrument (HIPPO, Potter et al. (2008)). We analyze both the total flux and polarized flux for two epochs obtained in 2016 and 2023 using the HIPPO instrument. Only light curves covering a substantial fraction of a binary orbit are included in this study. Properties of the photometric time-series datasets are listed in Table 1.
For each photometric time-series, we constructed Lomb-Scargle periodograms (L-S hereafter; Lomb 1976; Scargle 1982); examples from 2015 and 2023 are shown in Figure 1. The properties of the optical light curves appear to have changed significantly between 2015 and 2023. In particular, the strength of the 'secondary' beat pulse has increased relative to the main pulse over this period.
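A minimal sketch of this L-S step is given below, using astropy. The light curve here is synthetic (two Gaussian pulse trains per beat cycle with unequal amplitudes, standing in for the SLKT/HIPPO photometry); the spin and orbital periods are the values quoted above.

```python
import numpy as np
from astropy.timeseries import LombScargle

spin_freq = 1.0 / (1.95 * 60.0)          # WD spin frequency, Hz (P_spin = 1.95 min)
orb_freq = 1.0 / (3.56 * 3600.0)         # orbital frequency, Hz (P_orb = 3.56 hr)
beat = spin_freq - orb_freq              # beat frequency omega - Omega
beat_period = 1.0 / beat

def pulse_train(t, period, phase, amp, width):
    """Periodic Gaussian pulses of the given amplitude and width (seconds)."""
    ph = (t / period - phase) % 1.0
    ph = np.minimum(ph, 1.0 - ph)                     # distance to nearest pulse centre, in cycles
    return amp * np.exp(-0.5 * (ph * period / width) ** 2)

t = np.arange(0.0, 2.0 * 3600.0, 5.0)                 # 2 hr of 5 s cadence
flux = (pulse_train(t, beat_period, 0.0, 1.0, 12.0)   # primary pole
        + pulse_train(t, beat_period, 0.5, 0.5, 12.0) # secondary pole, half amplitude (2015-like)
        + 0.02 * np.random.default_rng(1).normal(size=t.size))

freq = np.linspace(0.5 * beat, 3.0 * beat, 5000)
power = LombScargle(t, flux).power(freq)

for f, label in [(beat, "beat"), (2 * beat, "double beat")]:
    print(f"{label:12s} power ~ {power[np.argmin(np.abs(freq - f))]:.3f}")
```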
To quantify the relative strengths of the beat pulses averaged over an orbit, we use the L-S periodogram peaks at the beat frequency (A_(ω−Ω)) and the double-beat frequency (A′_2(ω−Ω)) to define the ratio R_beat (Equation 1). When R_beat ≈ 0, only a single beat pulse is detected over a beat period. Alternatively, when R_beat approaches unity, the two beat pulses are similar in amplitude. Here, A′_2(ω−Ω) is the amplitude of the double-beat peak corrected for a contribution of the second harmonic of the beat frequency. The parameter H_2 is the ratio between the amplitudes of the beat frequency and its second harmonic. Simulated light curves show that H_2 = 0.30 accounts for this harmonic contribution, which only becomes significant when R_beat < 0.5, that is, when the secondary pulse is weak. (For Gaussian pulses in flux, Fourier analysis gives a closed form for the harmonic ratio, H_2 = exp(−6π²σ²/P²), where P is the time between pulses; however, analyzing light curves in magnitudes requires simulations to estimate H_2.)
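The closed-form harmonic ratio quoted for Gaussian pulses in flux is easy to verify numerically; the sketch below compares the FFT amplitudes of a single Gaussian pulse train against exp(−6π²σ²/P²) (the pulse width used is an arbitrary illustrative value).

```python
import numpy as np

P, sigma = 1.0, 0.08                                 # pulse period and Gaussian width (arbitrary units)
t = np.linspace(0.0, P, 4096, endpoint=False)
flux = np.exp(-0.5 * ((t - P / 2) / sigma) ** 2)     # one Gaussian pulse per period

amps = np.abs(np.fft.rfft(flux))
h2_numeric = amps[2] / amps[1]                       # second harmonic / fundamental
h2_closed = np.exp(-6 * np.pi**2 * sigma**2 / P**2)
print(f"numeric H2 = {h2_numeric:.4f}, closed form = {h2_closed:.4f}")
```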
The measured beat pulse ratios, R_beat, for 14 light curves are given in Table 1. Uncertainties on the ratio measurements were estimated by taking L-S periodograms of subsets of each light curve. The variance in R_beat was then used to calculate the error bars shown. We also estimated the beat pulse ratio in linearly polarized flux using photometric time-series obtained in 2016 and 2023, and these results are shown in Table 1.
DISCUSSION
Figure 2 displays the AR Sco beat pulse ratio over time. While there is scatter in the ratio from night to night, the trend is for an increasing ratio consistent with the pulse pairs evolving to nearly equal strength. The R_beat parameter for the linearly polarized flux was also seen to increase significantly between 2016 and 2023.
Interpreting the changes in pulse amplitude is difficult given that the source of the relativistic electrons has not been established. Electrons may be accelerated by direct interaction between the WD magnetic field and the field of the secondary star (e.g., Takata et al. 2017; Garnavich et al. 2019). Slow changes in the secondary star's magnetic field could influence the WD field lines involved in trapping the emitting electrons. Katz (2017) proposed that the spin axis of the WD could be tilted relative to the orbital angular momentum vector, leading to a precession of the WD and its magnetic field configuration. Katz predicted that WD precession might impact the phase of the brightest point in the orbital modulation seen in AR Sco. Peterson et al. (2019) did not detect a shift in the orbital modulation curve using archival photometry (see also Littlefield et al. 2017). However, the narrowness of the synchrotron beams could provide a sensitive test of precessional motion if it results in relative changes in the viewing angles of the two poles. Katz (2017) predicted precession periods between 20 and 200 years, primarily depending on the WD mass.
Precession Model
To test if WD precession could generate the observed variation in pulse amplitudes, we constructed a simple model illustrated in Figure 3. The model parameters include the binary inclination, i, the obliquity of the spin axis, ϵ, and the angle between the magnetic dipole and the WD spin axis, χ. From this geometric model, we calculate the minimum angle, θ, between the center of the synchrotron beams and the direction to the Earth and follow its variation over a precession cycle.
Based on the estimated binary mass ratio (Marsh et al. 2016) and the lack of detected eclipses, the orbital inclination is i ≈ 75° (Garnavich et al. 2019). That there are two pulses per WD spin suggests that the magnetic field axis is highly inclined to the spin axis (Geng et al. 2016); however, the symmetry of a truly perpendicular rotator (χ > 80°) would severely limit the range of pulse ratio variations over a precession cycle.
Following the beam model described in Potter & Buckley (2018), we approximate the synchrotron pulse profile by a Gaussian function with σ = 40°. Thus, the intensity of the synchrotron beam will have fallen to half its peak value when viewed at an angle of θ ≈ 47°. For simplicity, we assume that the peak intensity and profile distributions are identical for the two emitting poles.
Figure 3. The geometry of a precessing WD with synchrotron beams emitted from magnetic poles. The magnetic axis makes an angle χ relative to the spin axis (solid red lines). The binary orbital inclination is i, and the obliquity of the spin axis is ϵ. The path of spin axis precession is indicated by dashed red circles. The angle between the direction to Earth and the peak of a synchrotron beam is θ. For simplicity, the synchrotron beams are shown emanating from the WD, but they actually originate from the magnetic fields on the opposite side of the WD (see Potter & Buckley 2018).
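A sketch of this kind of geometric calculation is given below. It assumes a simple parameterization (spin axis precessing about the orbital angular momentum at obliquity ϵ, magnetic axis at angle χ from the spin axis, Gaussian beams with σ = 40°) and uses the minimum beam-to-observer angle over a spin cycle for each pole. The mapping from the two pulse amplitudes to R_beat is taken here as the simple secondary-to-primary ratio, which is an illustrative assumption rather than the paper's exact definition; parameter values follow the text.

```python
import numpy as np

i, eps, chi, sigma_beam = np.radians([75.0, 10.0, 75.0, 40.0])  # inclination, obliquity, dipole angle, beam width
n_hat = np.array([np.sin(i), 0.0, np.cos(i)])                   # direction to Earth (z = orbital angular momentum)

def pulse_amplitudes(prec_phase):
    """Relative amplitudes of the two beat pulses at a given precession phase."""
    s_hat = np.array([np.sin(eps) * np.cos(prec_phase),
                      np.sin(eps) * np.sin(prec_phase),
                      np.cos(eps)])                              # precessing spin axis
    alpha = np.arccos(np.clip(np.dot(s_hat, n_hat), -1.0, 1.0))  # angle between spin axis and observer
    # minimum beam-to-observer angle for the poles at chi and pi - chi from the spin axis,
    # reached once per spin rotation
    theta1 = np.abs(alpha - chi)
    theta2 = np.abs(alpha - (np.pi - chi))
    gauss = lambda th: np.exp(-0.5 * (th / sigma_beam) ** 2)
    return gauss(theta1), gauss(theta2)

for phase_deg in (0, 60, 120, 180):
    a1, a2 = pulse_amplitudes(np.radians(phase_deg))
    ratio = min(a1, a2) / max(a1, a2)
    print(f"precession phase {phase_deg:3d} deg:  secondary/primary pulse ratio ~ {ratio:.2f}")
```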
We find that the observed change in R_beat from 0.5 to 0.8 can be achieved by the precession model using reasonable values of the parameters (Figure 4). A WD obliquity near ϵ ≈ 10° is sufficient to reproduce the observations. Interestingly, when the obliquity exceeds the co-inclination (90° − i), the secondary pulse strength can exceed the amplitude of the primary pulse, as shown in the lower panel of Figure 4.
Besides the R_beat ratio parameter, an additional observational constraint on precession models is a direct measurement of the amplitude changes of the pulses from each pole. As displayed in Figure 4, the amplitude of the pulse from the dominant pole varies by only 10% over a quarter of the precession cycle for the first set of model parameters. In general, this small amplitude change for one pole results when the obliquity is low and χ ≈ i. For this geometry, the change in θ for one of the emitting poles remains small over a precession cycle, while the second pole creates most of the variation in the amplitude ratio.
The lower panels of Figure 1 show that the beat pulse amplitude of the dominant pole remained nearly the same between 2015 and 2023, and that most of the R_beat evolution came from strengthening of the secondary pulse. We estimate that the amplitude of the primary pulse varied by no more than 0.1 mag over the nine years of observation. We therefore infer that χ ≈ i for AR Sco.
Precession Period
Despite the rapid variation observed in R_beat over nine years, the apparent linear increase with time poorly constrains estimates of the precession period. Figure 2 displays sinusoidal functions with periods between 30 and 100 yr that have been fitted by minimizing the χ² parameter. The χ² parameter steeply increases for periods less than 40 yr, but χ² is nearly constant for models with P > 40 yr. A linear fit of the R_beat ratio shows it rising by 0.05 per year, meaning that the pulses could reach parity as early as the year 2027 (MJD ≈ 61400). Sinusoidal extrapolations indicate that R_beat will reach a maximum before the year 2029.
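The period constraint described here amounts to fitting sinusoids of fixed trial period to the R_beat time series and examining how χ² varies with period. The sketch below illustrates that procedure on synthetic placeholder measurements (a linear trend plus noise, standing in for the Table 1 values, which are not reproduced here); the specific χ² values it prints are therefore illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
# placeholder epochs (MJD) and R_beat values: a 0.05/yr rise plus scatter
mjd = np.sort(rng.uniform(57200, 60200, 14))
r_beat = 0.5 + 0.05 * (mjd - 57200) / 365.25 + rng.normal(0.0, 0.03, mjd.size)
err = np.full_like(r_beat, 0.03)

def chi2_for_period(period_yr):
    """Best-fit chi^2 for a sinusoid of fixed period, fitted linearly
    as offset + b*sin(w t) + c*cos(w t)."""
    w = 2 * np.pi / (period_yr * 365.25)
    design = np.column_stack([np.ones_like(mjd), np.sin(w * mjd), np.cos(w * mjd)]) / err[:, None]
    coef, *_ = np.linalg.lstsq(design, r_beat / err, rcond=None)
    resid = r_beat / err - design @ coef
    return np.sum(resid**2)

for p in (20, 30, 40, 60, 100):
    print(f"P = {p:3d} yr  ->  chi^2 = {chi2_for_period(p):6.1f}")
```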
CONCLUSION
The AR Sco beat pulse pairs have evolved from a strong asymmetry to become nearly equal in amplitude over a decade of observations. This evolution is supported by the changes in the X-ray beat pulse noted by Takata et al. (2021). We also find that the beat pulses have evolved in linearly polarized flux and that their amplitudes are now nearly equal. This suggests that the evolution results from physical changes or viewing angle variations in the highly polarized synchrotron beams.
The precession model suggests that in 2023 we were viewing the WD spin axis oriented so that the synchrotron beams are seen symmetrically, while in 2015, precession of the spin axis resulted in a tilt that favored our view of one of the magnetic poles. If precession is the origin of this evolution, then over the next 10 years the ratio of the beat pulse amplitudes, R_beat, will reach a maximum followed by a return to asymmetric beat pulses.
We dedicate this study to Tom Marsh. PMG thanks the Krizmanich family for their generous donations for the construction and support of the Sarah L. Krizmanich Telescope.
Figure 1. Left Column: The L-S periodogram (top) and section of the light curve (bottom) from 2015. The periodogram displays nearly equal amplitude peaks of the beat and double beat. This is seen in the light curve as the primary peak being twice the amplitude of the secondary one. The Fourier model is plotted as a solid black line and the data in faint red. Right Column: The L-S periodogram (top) and light curve (bottom) from 2023. The periodogram shows a very weak beat amplitude and a strong double beat peak. This is seen in the light curve as nearly equal amplitude beat pairs.
Figure 2. The ratio of the double-beat to beat amplitudes measured for 14 light curves (solid circles). The R_beat parameter measured from the linearly polarized flux in two epochs is displayed as open diamonds. The lines show a range of cyclical models with periods between 30 and 100 yr. The χ² parameter increases sharply for models with periods shorter than 40 years, but long periods are poorly constrained without further observations.
Figure 4. Predictions for the R_beat parameter as a function of precession phase for two sets of model parameters. Top: The relative pulse amplitudes over two precession cycles for a WD obliquity of 10°. The right panel shows that the resulting R_beat parameter varies nearly linearly between 0.5 and 0.8 over half of a precession period. Bottom: The same model except the obliquity has been increased to 20° and now exceeds the co-inclination of the orbit. In this case, the poles can switch in dominance and create sharp features in the R_beat curve. | 3,173.8 | 2023-11-08T00:00:00.000 | [
"Physics"
] |
Accurate Measurements of the Rotational Velocities of Brushless Direct-Current Motors by Using an Ultrasensitive Magnetoimpedance Sensing System
Reports on measurements of the rotational velocity by using giant magnetoimpedance (GMI) sensors are rarely seen. In this study, a rotational-velocity sensing system based on the GMI effect was established to measure rotational velocities of brushless direct-current motors. Square waves and sawtooth waves were observed due to the rotation of the shaft. We also found that the square waves gradually became sawtooth waves with increasing measurement distance and rotational velocity. The GMI-based rotational-velocity measurement results (1000–4300 r/min) were further confirmed using the Hall sensor. This GMI sensor is capable of measuring an ultrahigh rotational velocity of 84,000 r/min with a large voltage response of 5 V, even when setting a large measurement distance of 9 cm. Accordingly, the GMI sensor is very useful for sensitive measurements of high rotational velocity.
Experimental Details
The apparatuses for measuring rotational velocity are shown in Figure 1. The rotational-velocity sensing system consists of a GMI sensor, a brushless direct-current (DC) motor I (57BL55S06-230TF9, 24 V, 3.3 A, a rated rotational velocity of 3000 r/min and a rated power of 60 W, Beijing Times-Chaoqun Electronic Appliance Company, Beijing, China), a tunable DC power supply (0-20 V), a brushless DC controller (ZM-6405E, 24 V, Beijing Times-Chaoqun Electronic Appliance Company, Beijing, China), a switching mode power supply (S-100-24, 100 W, 24 V, 0-4.5 A, Beijing Times-Chaoqun Electronic Appliance Company, Beijing, China), a digital oscilloscope (MSO 5204, Tektronix, Johnston, OH, USA), and a rotational-velocity meter (5 V), as shown in Figure 2. The DC brushless controller is connected with the switching mode power supply, the rotating velocity meter and the brushless DC motor. The DC motor is placed near the GMI sensor, the rotation shaft of which is about several centimeters (2 cm and 7 cm) away from the GMI sensor. The input port and outlet port of the GMI sensor are connected with the DC power supply and oscilloscope, respectively. Significantly, the Hall sensor installed inside brushless DC motor I can also measure the rotational velocity of the shaft, which is used to verify the reliability of the GMI-based rotational-velocity measurement results. The operational principle of measuring the rotational velocity of motor I using the Hall sensor is shown in Figure 2. The extended input wire and output wire of the Hall sensor are connected with the input terminal and output terminal on the brushless DC controller, respectively. The brushless DC controller provides a drive current of A for exciting the Hall sensor. The Hall sensor can sense the presence of the magnetic field produced by the shaft. The high and low voltages induced by the presence of the magnetic poles of the shaft can also be used to determine the rotational velocity; these are transformed into digital signals and counted through digital signal processing in the brushless DC controller. Finally, the rotational velocity is output on the rotational-velocity meter. We have also used a simplified system (deleting the brushless DC controller and the rotating velocity meter) to measure the ultrahigh rotational velocity over 80,000 r/min. The simplified rotational-velocity sensing system is composed of a GMI sensor, a brushless DC motor (LEXY, KCL, rated velocity of 80,000 r/min, a rated voltage of 24 V and a rated power of 300 W, KingClean, Suzhou, China), a switching mode power supply (~15 A), a tunable DC power supply (9-25 V), and a digital oscilloscope (MSO 5204), as shown in Figure 3. The adopted GMI sensor was purchased from Aichi Corporation. This GMI sensor is composed of a GMI sensing element (a soft ferromagnetic microwire) and signal processing circuits. The sensor circuit provides high-frequency (several kHz) alternating-current (AC) pulses for exciting the soft ferromagnetic microwire.
The magneto-impedance and the AC magnetic field of the microwire are influenced by the application of an external magnetic field due to the GMI effect. Hence, due to electromagnetic induction, a potential difference is obtained in the pick-up coil wound around the microwire, and the voltage signal is output after analogue-digital signal processing is carried out through the sensor circuit. The GMI sensor has high field resolution (nT), high linearity (−40 to +40 μT), and a high field sensitivity of 1 V/μT. During testing, the GMI sensor was fixed, as shown in Figure 2, where the soft ferromagnetic microwire is perpendicular to the shaft of the motors, since the GMI microwire is sensitive to the magnetic field in the longitudinal direction. The tunable DC power supply is connected with the input port of the GMI sensor. The switching power supply is connected with the brushless DC motor. The brushless DC motor is placed near the GMI sensor, the rotational shaft of which is about several centimeters (3 cm and 9 cm) away from the GMI sensor. The outlet port of the GMI sensor is connected with the oscilloscope.
During the testing, the switching mode power supply provides a voltage of 24 V for the DC brushless controller, which actuates the motor; the rotational velocity was accurately regulated by revolving the potentiometer knob on the DC brushless controller. The tunable DC power supply provides 5 V for driving the GMI sensor. The rotational shaft of DC motor I possesses a half-cylindrical structure, as shown in Figures 1a and 2. Theoretically, the flat side and cylindrical side of the shaft can produce one negative magnetic pole and one positive magnetic pole, respectively. A strong magnetic field around a magnetic pole usually induces a large impedance response. Therefore, it is predicted that there could be two high-voltage and two low-voltage signals per revolution. The rotational shaft of DC motor II possesses a full-cylindrical structure, as shown in Figures 1d and 3. Thus, there should be two magnetic poles (positive and negative) induced by the application of the magnetic field of the electrified coils in the motor. Therefore, it is predicted that there may be one high-voltage signal and one low-voltage signal per revolution. When the ferromagnetic shaft passes by the GMI sensor, the interference magnetic field of the shaft changes the magnetic permeability of the microwire, and the impedance of the GMI sensor is changed dynamically. The impedance variation of the microwire is transformed into voltage signals through the analog-digital converter using the internal electric circuits of the sensor, then output as time-dependent waveforms on the oscilloscope after digital signal processing. On the other hand, the Hall sensor in DC motor I can also measure the rotational velocity of the shaft. The Hall signals are processed by the DC brushless controller, and the rotational velocity is output on the rotating velocity meter. During testing, different distances of 2 cm, 3 cm, 7 cm, and 9 cm are set between the GMI sensor and the shaft.
Magnetic flux densities of the shaft were measured by using a gaussmeter (GM55, Shanghai Torke Industrial Co., Ltd, Shanghai, China). The surface magnetic flux density of the shaft is about 20 G. The magnetic flux densities of the shaft are about 8 G, 5 G, and 3 G at 2 cm, 7 cm and 9 cm distance away from the motors, respectively. The relationship between the magnetic dipole moment and the magnetic flux density at the measuring point can be written as [26]: B = (μ₀M_B/4πr³)·√(1 + 3cos²θ), (1) where M_B is the magnetic dipole moment, μ₀ is the permeability of vacuum, and r is the distance between the motor and the measuring point. When the distance between the GMI sensor and the shaft is 9 cm, the value of r is 0.09 m, and the estimated magnetic dipole moment was 1 A·m². According to equation (1), the value of B is about 2.74 G when θ is 0, which is very close to the measured value of the gaussmeter.
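A quick evaluation of this dipole expression (assuming the standard point-dipole field magnitude, with the dipole moment value quoted above) reproduces the ~2.74 G figure at 9 cm:

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7        # vacuum permeability, T*m/A
M_B = 1.0                     # estimated magnetic dipole moment, A*m^2
r = 0.09                      # distance between shaft and sensor, m
theta = 0.0                   # angle from the dipole axis, rad

B = mu0 * M_B / (4 * np.pi * r**3) * np.sqrt(1 + 3 * np.cos(theta)**2)   # tesla
print(f"B = {B * 1e4:.2f} G")  # ~2.74 G, matching the gaussmeter reading at 9 cm
```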
Since the maximum rated velocity of the DC motors is 80,000 r/min, there should be 1333.33 high voltages or low voltages captured by the GMI sensor per second. Based on the sampling theorem [27], original signals can be completely recovered if the sampling frequency is at least 2 times the maximal frequency of the original signals. Here, we set the sampling frequency to 10 kHz, much larger than the maximal frequency (2 × 1333.33 voltages/s), which is enough to capture all the high-voltage and low-voltage signals. When the ferromagnetic shaft passes by the GMI sensor, the impedance of the sensor is dynamically altered, since the interference magnetic field of the shaft modifies the cylindrical magnetic permeability of the GMI microwire.
Results and Discussion
The rotational-velocity measurement results of DC motor I are shown in Figure 4. There are several positive wave crests and negative wave crests in Figure 4; obviously, the more wave crests, the faster the motor rotates. One positive wave crest means that the shaft passes by the GMI sensor from the flat surface to the spherical surface one time. This is because the shaft produces a strong spontaneous magnetic field influencing the impedance of the GMI sensor. On the contrary, one negative wave crest means that the shaft passes by the GMI sensor from the spherical surface to the flat surface one time, with a reversed strong spontaneous magnetic field. Thus, two positive wave crests or two negative wave crests represent one rotation circle. Hence, the equation for calculation of the rotational velocity of DC motor I can be written as: R = 60N/(2T), (2)
where R is the number of rotation turns in one minute, N is the number of positive peak signals or negative peak signals in a time span T (in seconds). The rotational velocity R can be easily figured out from equation (2). For example, there are 30 positive wave crests from 0 s to 0.3 s in Figure 4d, so there are 50 circles in one second and 3000 circles in one minute, which agrees well with the rotation velocity measured by the Hall sensor and output on the rotating velocity meter.
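The crest-counting step can be sketched as follows, on a synthetic waveform standing in for the oscilloscope trace (the waveform parameters are illustrative, not measured data); the conversion follows equation (2) with two positive crests per revolution.

```python
import numpy as np

fs = 10_000.0                                # sampling frequency, Hz (as in the experiment)
T = 0.3                                      # analysed time span, s
t = np.arange(0.0, T, 1.0 / fs)

rpm_true = 3000.0                            # synthetic test value
crest_freq = 2.0 * rpm_true / 60.0           # two positive crests per revolution for motor I
signal = 2.5 * np.sign(np.sin(2 * np.pi * crest_freq * t))   # idealised +-2.5 V square-wave response
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

above = signal > 1.0                         # threshold well above the noise level
N = int(above[0]) + np.count_nonzero(above[1:] & ~above[:-1])  # number of positive crests (runs above threshold)
rpm = 60.0 * N / (2.0 * T)                   # equation (2)
print(f"counted {N} positive crests in {T} s -> {rpm:.0f} r/min")
```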
We have tested the rotation velocity over the rated velocity (>3000 r/min), as shown in Figure 5. For instance, there are 43 wave crests in 0.3 s in Figure 5b; using equation (2), the rotation shaft goes 4300 rounds in one minute, which is consistent with the standardized rotational velocity measured by the Hall sensor and output on the rotating velocity meter. As we tested the rotational velocity without any loading on the motor, the unloaded rotational velocity can be measured over the rated velocity. When the rotation velocity passes over 4300 r/min, the rotational velocity becomes unstable because of the operating limit of the motor. Thus, the GMI sensor can accurately measure the unloaded rotational velocity of up to 4300 r/min of the brushless DC motor I.
A 7 cm space between the GMI sensor and the shaft was set for testing different rotational velocities, as shown in Figure 6. The number of positive or negative wave crests in Figure 6 is in accord with the previous results (2 cm), except that the waveforms exhibit a little deformation. The rotation shaft goes 4300 r/min, namely 71.667 r/s, so the motor only spends 0.014 s to rotate one circle, exhibiting a high rotational velocity. Compared with the close-range (2 cm) measurement results (Figures 4 and 5), which show a series of square waves, the long-distance (7 cm) measurement results (Figure 6) show different waveforms (sawtooth waves). This is probably because the inhomogeneous magnetic fields of the shaft become more diffuse with increasing distance. Significantly, the voltage amplitude is almost kept at 5 V even when a large space of 7 cm is set, indicating the high sensitivity of the GMI sensor.
The cylindrical rotational shaft only has a positive magnetic pole and a negative magnetic pole, which induce one high voltage and one low voltage per rotation. Hence, the number of high voltages or low voltages is actually the number of rotation circles. The equation for the calculation of the rotational velocity of DC motor II can be written as: R = 60N/T, (3) where N is the number of high-voltage (or low-voltage) signals in a time span T (in seconds). The rotational-velocity measurement results of DC motor II are shown in Figure 7. For instance, there are 24 high voltages from 0 s to 0.02 s in Figure 7c; using equation (3), there are 1200 circles in one second and 72,000 circles in one minute. We have tested the rotational velocity of the brushless DC motor over the rated velocity (>80,000 r/min) by increasing the input voltage and input current; the results are shown in Figure 8a (81,000 r/min) and Figure 8b (84,000 r/min). For instance, there are 28 high voltages in 0.02 s in Figure 8b; using equation (3), the rotational shaft goes 84,000 rounds in one minute. As we tested the rotational velocity without any loading on the motor, the unloaded rotational velocity can be measured over the rated velocity. Thus, the GMI sensor can accurately measure an unloaded rotational velocity reaching 84,000 r/min. As can be seen from Figure 8b, there are about 7 high voltages from 0.000 to 0.005 s. Since one high voltage represents one rotational circle, the motor only spends 0.000714 s to rotate one circle, demonstrating the quick response of the GMI sensor.
A 9 cm space between the GMI sensor and the rotational shaft of DC motor II was set for testing different rotational velocities; the results are shown in Figure 9. Obviously, the number of sawtooth waves is in accord with the previous results (Figures 7 and 8). Significantly, the voltage response of the GMI sensor is almost kept at 5 V even when a large space of 9 cm is set. We have made a comparison between the GMI sensor and other rotational-velocity sensors. For instance, the rotational-velocity response amplitude of the current GMI sensor is about 10 times larger than that of a giant magnetoresistance (GMR) sensor [9], about 100 times larger than that of a Hall sensor [10], and about 3 times larger than that of a coil [11]. Furthermore, the measurement distance of the GMI sensor can be as large as 9 cm while maintaining a high response amplitude of 5 V, which is about several times larger than that of the Hall sensor [10] and 20 times larger than that of the GMR sensor [9], respectively. The theoretical response velocity of the GMI sensor is 10 MHz; therefore, there is great potential for the GMI sensor in measuring higher rotational velocities. In the future, we plan to use the GMI sensor to measure rotational velocities over 100,000 r/min.
Conflicts of Interest:
The authors declare no conflict of interest.
Conclusions
The rotational velocity (1000-84,000 r/min) of brushless direct-current motors was accurately measured by using a rotational-velocity sensing system based on the GMI effect. The GMI-based rotational-velocity measurement results agree well with the Hall-based rotational-velocity measurement results. Successive square waves were found at small measurement distances and low rotational velocities, while successive sawtooth waves were found at large measurement distances and high rotational velocities. Positive wave crests and negative wave crests were found in one rotation due to the presence of positive and negative magnetic poles of the shafts. Consequently, the GMI sensor offers great potential for sensitive rotational-velocity measurement applications. | 6,955.2 | 2019-12-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
On The Impact of the Locality on Short-Circuit Characteristics: Experimental Analysis and Multiphysics Simulation of External and Local Short-Circuits Applied to Lithium-Ion Batteries
Emulating true, field-like internal short-circuits (ISCs) by experimental methods is a complex task with mostly unsatisfactory outcome. However, understanding the evolution and impact of ISCs is crucial to mitigate safety issues related to lithium-ion batteries. Local short-circuit (LSC) conditions are applied to single-layered, small-sized (i.e. < 60 mAh), and single-side coated graphite/NMC-111 pouch-type cells in a quasi-isothermal test bench using the nail/needle penetration approach. The cell's impedance, capacity, and the contact resistance at the penetration site mainly define the short-circuit current and, hence, the terminal voltage and heat generation rate associated with polarization effects and electrochemical rate limitations, which are correlated to the cell's behavior during external short-circuits (ESCs) at various short-circuit resistances. Measuring the electrical potential between the needle and the cell's negative tab allows to evaluate the polarization across the electrodes and to estimate the short-circuit intensity. LSC simulation studies are used to correlate current flux and resistance to ESC conditions. Double-layered cells are penetrated to create short-circuit conditions within either a single or both electrode stacks to study the difference between multiple LSCs (e.g. during a nail penetration test) and a single LSC (e.g. due to a particle/dendrite). Post-mortem analysis reveals copper dissolution/deposition across both electrodes.
Recent reports 1 summarizing critical incidents involving lithium-ion batteries (LIBs) revealed similar characteristics of initial overheating, followed by smoke [2][3][4] and/or spark emission, 1 and, in case of a self-accelerating heat generation, leading to explosions 5 and/or fire and flames released by the battery. 6,7 Unless the safety of LIBs can be maintained under all conditions to minimize or even exclude any harm to the environment/individuals, the current trend toward optimizing cost 8 and performance of LIBs involving higher energy densities 9,10 and/or an improved rate capability 11 may impede the market penetration for mobile, automotive, and stationary energy storage applications.
Safety issues of LIBs can be caused by a variety of internal and external triggers related to manufacturing issues, shortcomings in design, and/or operation strategy, 7 as well as mechanical, 8 electrical, and thermal abuse conditions, 12,13 which can lead to external and internal short-circuit scenarios. Hence, there is a strong need for relevant test scenarios simulating such triggers, which help understanding the underlying mechanisms in order to derive suitable means to mitigate or even rule out safety issues related to LIBs (e.g. shutdown separators, 14 integrated circuits, 15 pyrotechnical safety systems, 16 etc.) by increasing the battery's tolerance toward ESCs and ISCs.
On the one hand, ESC tests reveal a good reproducibility 17 and relevance for simulating realistic high-current and abusive short-circuit conditions applied via the terminals of a LIB. On the other hand, simulating ISCs within a LIB by experimental means is a more complex task. As an example for a typical, field-like ISC failure, metallic particle contamination, followed by dissolution and deposition including dendrite growth, can lead to a local penetration of the separator and initiate a short-circuit. [18][19][20] In order to reproduce such a field-like shorting scenario, the test must trigger the shorting only at a single site, set a low ohmic resistance, form over time/operation of the LIB, and should reveal sufficient reproducibility. Adjusting the locality of the short-circuit seems to be viable regarding the range of already existing test procedures, including a complete or partial penetration of the LIB with a nail or needle 21 or the insertion of local defects during assembly of the battery, 22 whereas controlling the shorting resistance may only be partly viable due to the variety of possible materials and contact conditions between the electrodes and current collectors 23 as well as possible changes during the shorting scenario. 24 The formation over time can hardly be recreated by experimental methods, as the aforementioned defects form over the lifetime of LIBs and exceed practical operation times of safety tests by far. As a result, existing tests such as nail penetration 21,[24][25][26][27][28][29] or the more complex modification of LIBs via insertion of local defects in the electrode stack/jelly roll 22,30 cannot satisfactorily simulate a real, field-like ISC scenario but at least approximate similar high-current scenarios with a strong local heat generation.
The insertion of local defects such as low-melting-temperature alloys 22 requires a modification of the electrode stack/jelly roll, which may alter the battery's behavior, besides time- and cost-intensive efforts to manufacture these prototype cells. In comparison, a nail penetration test can be applied more easily using similar cells as used for ESC tests. Investigating both short-circuit conditions applied to the same cells enables a comparison/correlation to understand the electrical and thermal behavior of locally applied short-circuits.
In sum, so far there is no test that can satisfactorily recreate a realistic, field-like ISC in LIBs. Based on its straightforward applicability, a nail/needle penetration technique was applied here to create not field-like ISCs but local short-circuits (LSCs) within a cell. In addition to studying the cell's LSC characteristics, ESC tests were applied in accordance with our previous work. 31 The influence of the electrical electrode design was minimized by using the same single-side-coated electrodes with a counter-tab design throughout all tests. By further studying cells with one or two electrode stacks within a quasi-isothermal calorimetric test bench, effects related to the cell's thermal design or the locality of heat generation were minimized. In this work, we investigate the influence of the locality of the shorting scenario by triggering the short-circuit in the center of the electrodes using the nail/needle penetration technique and eventually compare the cell's local short-circuit characteristics to its external short-circuit behavior. As the external resistance directly correlates to both the current flux and the heat generation rate and, hence, defines the intensity of the ESC, a low-ohmic resistance range as expected for the LSC tests was investigated, which enables a comparison/correlation of the terminal voltage and the heat generation rate in order to evaluate the intensity of the applied LSC scenario. As the locality of the shorting affects the local electrode polarization during the LSC tests, a correction of the terminal voltage based on a multidimensional simulation study must be applied in order to allow for a direct comparison of LSC and ESC test results, which eventually allows estimating the shorting resistance evoked via nail/needle penetration. Using the quasi-isothermal, calorimetric test bench, the short-circuit proceeds without triggering a high local heat generation rate, which may lead to thermal, self-accelerating processes such as a thermal runaway scenario; such runaway behavior usually applies when local particle insertion procedures or nail penetration tests are used to emulate ISC scenarios in LIBs. By applying our technique, we can mitigate the influence of these thermal effects and study the pure electrical short-circuit behavior at the beginning of the short-circuit (i.e. <1 s) and the subsequent, various electrochemical rate limitation effects (i.e. >1 s), 31,32 which are caused by either the anode or the cathode within the tested cells. To further study various LSC conditions in a stacked electrode configuration, nail/needle penetration was further applied to cells with two electrode stacks, with and without a hole within one of the electrode stacks. This allows for applying either an LSC across both electrode stacks, representing a complete penetration during a common nail penetration test, or an LSC within only one of the two electrode stacks, representing a local piercing of the separator such as occurs within the final stage of an ISC. Various needle diameters were used during the penetration, resulting in differently sized penetration sites and, consequently, different short-circuit resistances. To increase the understanding of the electrical and thermal behavior during the ESC and LSC scenarios, the characteristic current, electrical potential, and heat rate signals of all tests are analyzed toward significant plateau and transition zones 31,33 referring to the cell's polarization and the rate-limiting electrochemical processes within the electrodes.
The overdischarge observed during all tests can be correlated, using post-mortem analysis, to severe copper dissolution from the negative current collector and copper deposition throughout and across both electrodes.
Experimental
Calorimetric test bench for short-circuit tests.-The calorimetric setup for the ESC and LSC tests is schematically shown in Fig. 1. In our previous work, the test bench was used for applying ESC tests 31 and is modified in this work for applying LSC tests (i.e. nail/needle penetration tests) as well. For the potentiostatic measurements of current flux and electrical potential, a potentiostat (SP-300, Bio-Logic Science Instruments) and a source measurement unit (SMU, B2901A, Keysight Technologies) were used. For the ESC tests, a 10 A/5 V amplifier (SP-300, Bio-Logic Science Instruments) extends the current range to apply the expected current peaks of around 10 A in the very beginning of the short-circuit. Besides applying a 0 V condition at the cell's terminals via the potentiostat, an external resistance (i.e. 5, 50, and 500 mΩ) was used to vary the intensity of the ESC condition 31,33 as depicted in Fig. 1 (left), whereas the SMU measures the cell's voltage (E sc ) at the cell's terminals. The current flux at the tabs (I sc ) can only be measured in the ESC tests, not in the LSC tests. Regarding the LSC tests, only the terminal voltage is measured via the potentiostat without the amplifier. The SMU is used to measure the electrical potential (Φ sc ) at the penetration site in the center of the active electrode area (i.e. at the needle) vs the cell's negative tab (see Fig. 1, right). The stainless steel needles (2R2, Unimed) are electrically connected to measure the expected potential drop across the electrodes caused by the current flux, geometrical configuration, and contact resistance, which may be correlated to the polarization of the cell. For the calorimetric measurement, three digital multimeters (DMM, 34470A, Keysight Technologies) measure the cell's temperature at the positive current collector tab/terminal (T tab ) as well as the temperatures of the bottom (T cu,1 ) and the upper (T cu,2 ) copper bar (45 × 45 × 90 mm, CW004A), which mechanically clamp the tested cell. The clamping pressure is expected not to distort the electrical-thermal behavior of the cell. The temperature signals during the ESC and LSC tests are used to calculate the heat generation rate of the short-circuit scenario. The upper and bottom copper bars exhibit a narrow through-hole and a shallow hole for the penetration needle, which requires a new calibration similar to our previous work. 31 Pt100 sensors with an accuracy of ±0.15 °C at 0 °C (DIN/IEC Class A) centrally measure the temperature of the copper bars (installed with a thermal adhesive). To reduce the thermal contact resistance between the copper bars and the cell, ceramic foils (86/600 Softtherm, Kerafol Keramische Folien GmbH) of 0.5 mm thickness and 6 W m−1 K−1 thermal conductivity were used at the interface, as shown in Fig. 1. The measurement device is embedded in 12 cm of extruded polystyrene foam (XPS) with a thermal conductivity of 0.04 W m−1 K−1 to impede the heat exchange with the surrounding climate chamber. The whole setup is placed in a custom-built climate chamber 34 incorporating resistive heating and Peltier cooling to set the ambient temperature to 25 °C. Reference measurements with a thermometer (1524, Fluke Corporation) revealed a temperature accuracy of ±0.03 °C.
As shown in Fig. 1, the LSC is triggered via rotation of the short-circuit device (1), formed of an indexing plunger 35 with a plastic rod attachment (PEEK) that incorporates the needle, a subsequent forward movement (2) via a linear spring of x = 9.7 mm displacement at a spring rate of 7.861 N mm−1 (1 × 6 × 18 mm, Febrotec), and finally the penetration (3) of the tested cell with the needle.
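For orientation, the nominal driving force of the spring-loaded needle can be estimated from the stated spring rate and displacement; the worked example below is ours, assuming an ideal linear spring (Hooke's law) and no friction losses.

```python
# Nominal needle driving force, assuming an ideal linear spring (Hooke's law F = k * x).
spring_rate_N_per_mm = 7.861   # stated spring rate
displacement_mm = 9.7          # stated displacement when the plunger is released

force_N = spring_rate_N_per_mm * displacement_mm
print(f"Nominal driving force: {force_N:.1f} N")  # ≈ 76 N before penetration losses
```

The actual force acting on the cell will be lower once friction in the guide and the remaining spring compression after penetration are taken into account.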
In sum, the adaptation of the calorimetric test bench comprised the insertion of the short-circuit device to apply the nail penetration for the LSC tests and the adjustment of the copper bars, which required a re-calibration of the setup.
Calibration of the calorimetric test bench.-The calibration procedure covers the temperature sensors as well as the determination of the heat capacities and the losses to the environment. The calibration of the three Pt100 sensors uses a reference thermometer (1524, Fluke Corporation) equipped with a platinum resistance thermometer (5662, Fluke Corporation). 31 To determine the calorimetric constant (i.e. heat capacity and losses to the environment), a single-layered pouch-type cell (i.e. calibration cell) 31 similar to the cells of this work was equipped with two resistive heaters connected in series (1218.4 Ω, Thermo Technologies) and, using the SMU, three different heat rates (0.1, 5, and 10 W) were applied for different durations (7200, 144, and 72 s), resulting in an overall applied amount of heat of around 720 J in each case. The measured temperature increase of the two copper bars is used to determine the calorimetric constants. A more detailed description of the calibration and the processing of the measured data is given in the supplementary material of this work (stacks.iop.org/JES/167/090521/mmedia). The heat capacities (C p,i ) are calculated as 660.6 J K−1 (407.8 J kg−1 K−1) and 659.8 J K−1 (407.2 J kg−1 K−1) for the bottom and the upper copper bar, respectively. The heat capacity of the pouch-type cell (C p,c ) is iteratively determined to be approximately 900 J kg−1 K−1 (5.9 J K−1), which is well in line with comparable pouch-type cells. 36,37 The time lag amounts to 5.9 s, and the linearized heat offset is depicted in the supplementary material.
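The following snippet is a simplified sketch, not the authors' actual processing, of how a lumped heat capacity and a heat generation rate can be recovered from the copper-bar temperature signals; the linearized loss term and all function names are our assumptions, and the detailed procedure is given in the cited supplementary material.

```python
import numpy as np

def lumped_heat_capacity(q_in_J, delta_T_K):
    """Heat capacity estimate from a known calibration heat and the resulting
    temperature rise, neglecting losses to the environment."""
    return q_in_J / delta_T_K

def heat_rate(time_s, temp_K, c_p_J_per_K, k_loss_W_per_K=0.0, t_amb_K=298.15):
    """Reconstruct the heat generation rate from a temperature trace:
    Qdot(t) = C_p * dT/dt + k_loss * (T - T_amb)."""
    dT_dt = np.gradient(temp_K, time_s)
    return c_p_J_per_K * dT_dt + k_loss_W_per_K * (temp_K - t_amb_K)

# Consistency check with the reported values: both copper bars together store
# roughly 660.6 + 659.8 = 1320.4 J/K, so the ~720 J calibration heat corresponds
# to a combined temperature rise of about 0.55 K, neglecting losses.
print(720.0 / (660.6 + 659.8))
```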
To conclude, the modification of the calorimetric test bench for LSC tests reveals a shorter time lag, due to shorter maintenance intervals for the ceramic foils and slightly increased mechanical clamping, and calorimetric constants comparable to those shown in our previous work. 31
Pouch-type lithium-ion cells for short-circuit tests.-Nineteen custom-built (Custom Cells Itzehoe GmbH), pouch-type LIBs were investigated under quasi-isothermal external (4 cells) and local (15 cells) short-circuit conditions. The four different pouch-type LIB configurations (P1, P2, P3, and P4) studied within this work mainly differ in their stacking sequence of electrode and separator layers, which is schematically shown in Fig. 2. The stacking sequences of separator (SEP), graphite anode (A), and NMC-111 cathode (C) from configuration P1 to P4 are as follows:
• P1: SEP/A/SEP/C/SEP
• P2: SEP/A/SEP/C/SEP/C/SEP/A/SEP
• P3: SEP/A*/SEP/C*/SEP/C/SEP/A/SEP
• P4: SEP/C*/SEP/A*/SEP/A/SEP/C/SEP
A polyolefin separator (SEP) of 20 μm thickness electronically insulates the electrode pairs and is wrapped around the entire electrode stack to ensure its position. 1 M LiPF 6 dissolved in ethylene carbonate (EC) and dimethyl carbonate (DMC) at a weight ratio of 1:1, with 2 wt% vinylene carbonate (VC), was used as the electrolyte. Configuration P2 (double-layered) differs from P1 (single-layered) only in the total number of electrode pairs. Configuration P3 (double-layered) differs from P2 in that the upper electrode pair (see Fig. 2) includes a centered hole (⌀ 5 mm) through the anode (A*) and the cathode (C*) to enable penetration of only the bottom stack in order to initiate an LSC, which subsequently applies an ESC to the upper stack via the current collector paths. Configuration P4 (double-layered) differs from P3 only in the sequence of the layers, as the anodes face each other in the middle part, to investigate whether the sequence of electrode penetration influences the short-circuit behavior in terms of varying shorting resistances. All electrodes were single-side coated to guarantee comparability between the resulting cell polarization in the ESC and LSC tests. All tests carried out in this work are summarized in Table I, showing the initial cell voltage, state of charge (SoC), initial cell temperature/ambient temperature (T ∞ ), and the shorting condition for the ESC tests (0 V as well as 5, 50, and 500 mΩ; Power Metal Strip, Vishay Intertechnology Inc.) and the LSC tests with varying nail/needle diameters (d of 0.5, 1, and 2 mm; 2R2, Unimed), respectively.
In order to determine the balancing and analyze the expected overdischarge, 31 coin cells of the graphite anode and the NMC-111 cathode were cycled to record their open-circuit potentials; finally, an anodic delithiation and a cathodic lithiation profile were used after stable capacity retention appeared (<0.01%). In sum, LSC tests were applied to the P1-type cells to correlate their electrical-thermal characteristics to the P1-type ESCs at various external resistances. The set of experiments proposed on single-layered ("P1-LSC") and double-layered cells ("P2-LSC" and "P3/P4-coupled LSC/ESC") gives the opportunity to investigate and decouple the different phenomena occurring in a stacked, pouch-type LIB during LSC tests.
Measurement procedure for ESC and LSC tests.-Table II shows the procedure for the ESC and LSC tests, starting with "Initial cycles" using a battery cycler (CTS, Basytec GmbH) and a climate chamber at 25 °C (KT115, Binder) to exclude any influence of formation processes. Pulse measurements at 50% SoC were applied to characterize the dynamic electrical behavior at different C-rates. (Figure 2 caption: Despite the different stacking sequences chosen for configurations P3 and P4, the main difference to P2 is that the upper electrode pair comprises a ⌀ 5 mm hole to enable LSC tests based on the penetration of only one electrode stack (see "Electrode Dimensions"). The geometrical size of the test cells is depicted under "Cell Dimensions," together with the centered position of the nail penetration site used for the LSC tests.)
A single "Capacity check-up" was used to determine the initial capacity (C 0 ) at 0.5 C CC discharge from 4.2 to 3 V and in the subsequent "Preconditioning" 0.2 C CC charge with a CV period until 0.01 C prepares the cells at 4.2 V (=100% SoC). Afterwards, the cells were embedded in the calorimetric test bench (see Fig. 1), electrochemical impedance spectroscopy (EIS, see Table II) determined the cell's impedance (R i,0 ) and the setup was rested for 12 h to allow for thermal equalization.
The "Quasi-isothermal short-circuit scenarios at 25°C" initiates after a resting period and subsequently differs for the ESC and LSC tests in terms of the potensiostatic sampling (see Table II). In case of the ESC, a constant voltage phase (4.2 V) to precondition the measurement device is applied and after 5 s, a 0 V condition is set in reference to the cell's terminals. The sampling rate is subsequently reduced to sufficiently but not excessively record the dynamic behavior and limit memory allocation. The ESC is terminated, when the current measured by the potentiostat falls below 100 μA and finally a resting phase of 17 h records relaxation. In case of the LSC, the short-circuit device is triggered right after the resting period (10 s) and the sampling rates are continually reduced as well. The cells tested in the LSC tests are exposed to a longer discharge than those tested in the ESC tests as the LSC test was terminated after ≈20 h. Simultaneously to the potentiosatic measurements, the calorimetric measurement includes the temperatures of the cell and the upper/bottom copper bar. Finally, EIS was applied to determine the cell's impedance (R i,sc ) and the terminal voltage (E sc,end ) after the short-circuit event.
As the possible influences of the initial state of charge and ambient temperature have been thoroughly discussed for ESCs, 31 similar influences are expected for the studied LSCs, and the tests were consequently carried out at 100% SoC and 25 °C without exception.
Correlation of ESC and LSC tests.-Local variations in electrode polarization (i.e. along the electrodes' thicknesses, widths, and lengths) are expected between the external and the local short-circuit scenario and, hence, different spatial distributions of the current flux.
Assuming the same shorting current (i.e. the same shorting intensity) for an ESC and an LSC test applied to identical cells, a certain offset of the resulting terminal voltages can be expected simply due to the spatial distribution of the current flux. To correlate the resulting terminal voltages from the ESC and the LSC tests, the local variations of the electrode polarization should be considered. Therefore, multidimensional multiphysics simulation studies investigate exemplary ESC and LSC scenarios for the P1-type cells, corresponding to an ESC test at an external short-circuit resistance of 243.9 mΩ, which lies in the range of the cells' initial impedances. The simulative work is outlined in the supplementary material, as it exceeds the experimental focus of this work. Both short-circuit simulations reveal nearly the same shorting current over time, whilst the local polarization effects (i.e. along the electrodes' thicknesses, widths, and lengths) reveal significantly differing local current flux and potential distributions. As a result, the offset between the terminal voltages is calculated and normalized with respect to the ESC results. Extrapolation from the external short-circuit resistance applied in the ESC case reveals a high prediction accuracy of the local short-circuit resistance, with errors below 2% until 100 ms.
Regarding the measured terminal voltages of the P1-LSC cells, a simulation-derived correction factor of 0.062 at 100 ms was used in this work to account for the aforementioned local polarization effects and to enable the comparison to the P1-type ESC test results. From the corrected voltages, the external resistance that would have had to be applied in an ESC test to produce the same current flux/shorting intensity as in the LSC tests can be calculated. The calculation itself uses the electrical potential difference to the P1-type ESC results showing a higher (i.e. 50 mΩ ESC test) and a lower terminal voltage (i.e. 5 mΩ ESC test) at 100 ms. By further interpolating the calculated effective external short-circuit resistances, the short-circuit resistance of the LSC tests can be estimated.
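A minimal sketch of this correlation step is given below; the correction factor of 0.062 and the 5/50 mΩ reference cases are from the text, whereas the voltage values, the multiplicative form of the correction, and the linear interpolation are illustrative assumptions.

```python
import numpy as np

correction_factor = 0.062        # simulation-derived terminal-voltage correction at 100 ms
e_lsc_measured = 0.18            # V, hypothetical P1-LSC terminal voltage at 100 ms
e_lsc_corrected = e_lsc_measured * (1 + correction_factor)   # assumed multiplicative correction

# Hypothetical P1-type ESC reference points at 100 ms: (external resistance, terminal voltage)
r_esc_Ohm = np.array([5e-3, 50e-3])
e_esc_V = np.array([0.05, 0.30])

# Linear interpolation of the effective external resistance and the expected current flux
r_lsc_est = np.interp(e_lsc_corrected, e_esc_V, r_esc_Ohm)
i_lsc_est = e_lsc_corrected / r_lsc_est
print(f"R_LSC,est ≈ {r_lsc_est*1e3:.1f} mOhm, I_LSC,est ≈ {i_lsc_est:.1f} A")
```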
Post-mortem analysis.-Post-mortem analysis is used to qualitatively study effects such as active material degradation and/or copper dissolution/deposition occurring during the short-circuit tests. The cells were opened in an argon-filled glove box (H 2 O, M.Braun Inertgas-Systeme GmbH) for a first visual inspection, and ⌀ 14 mm samples were subsequently extracted for scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) measurements. The samples were washed with diethyl carbonate (DEC) and dried before applying SEM/EDX (JCM-600, JEOL Ltd.), where an MP-00040EDAP detector at 15 kV acceleration voltage offered magnification levels from ×150 to ×2000 of the electrodes.
Results and Discussion
Beginning with the DVA analysis, Fig. 3a shows the open-circuit potentials (OCPs) of the coin cells together with their superposition ("Graphite + NMC-111 coin cells") as a function of the full-cell SoC (i.e. P2#10). The superposition reveals marginal errors (see Fig. 3b) of around 10 mV, with increased deviations at low SoCs due to the steep rise of the anode potential at low lithiation levels. The overcharge and overdischarge zones are depicted beyond the safe operation window between 0 and 100% SoC, referring to 3 and 4.2 V. Regarding the 1st derivative in Fig. 3c, the balancing of the anode and cathode in reference to the full cell is shown with similar deviations. The ESC and LSC tests result in a considerable overdischarge of the tested cells, which most likely provokes side reactions in addition to a highly delithiated anode and a highly lithiated cathode. In this context, the differential capacities are shown in Figs. 3d and 3e, where the capacity gain from the de-/intercalation reaction during overdischarge approaches zero. As a result, the overdischarged capacity may not be related to the de-/intercalation reaction within the active materials but most likely to side reactions such as copper dissolution/deposition occurring above ≈3.2 V vs Li/Li+. 38 (Table I footnotes: a) used for DVA; b) used for post-mortem analysis.) The ESC and LSC test conditions lead to high anodic overpotentials (>1.6 V), 31 and together with the low lithiation stages in the graphite anode resulting in potentials >1.7 V vs Li/Li+ (see Fig. 3a), oxidation of the copper current collector is most likely triggered, which is indicated here via the DVA analysis and will be verified in the post-mortem section. The comparatively smaller overdischarge of the ESC cells (see Table III) is caused by the aforementioned shorter short-circuit exposure compared to the LSC tests, which is also reflected in a lower, local copper deposition across the electrodes shown in the post-mortem part (see Post-mortem analysis).
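As an illustration of the balancing/DVA step, the snippet below superposes placeholder half-cell OCP curves into a full-cell voltage and takes its first derivative; the functional forms are purely illustrative stand-ins for the measured coin-cell data.

```python
import numpy as np

soc = np.linspace(0.01, 0.99, 500)

def ocp_cathode(s):   # placeholder NMC-111-like OCP vs Li/Li+
    return 4.25 - 0.70 * (1 - s) - 0.15 * np.exp(-12 * s)

def ocp_anode(s):     # placeholder graphite-like OCP vs Li/Li+ (steep rise at low SoC)
    return 0.10 + 0.50 * np.exp(-15 * s)

full_cell_V = ocp_cathode(soc) - ocp_anode(soc)   # superposition of the half cells
dV_dSoC = np.gradient(full_cell_V, soc)           # 1st derivative used for DVA
dQ_dV = 1.0 / dV_dSoC                             # differential capacity (diverges where V is flat)
```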
Potentiostatic correlation of ESC and LSC tests.-The difference in applying the ESC conditions compared to the LSC via nail penetration raises the question of whether and to which extent the resulting electrical and thermal behavior differs, and how the intensity of the shorting scenario (i.e. hard or soft) can be compared/correlated from the resulting current flux, potential, and temperature measurements. For a first, simple correlation, the terminal voltage is used to evaluate the various ESC tests, which vary in their intensity and in the onset of electrochemical limitation mechanisms caused by the applied external condition (i.e. 0 V as well as 5, 50, and 500 mΩ), since the current flux is not measurable in the LSC tests. As shown in our previous works 31,32 investigating P1-type ESC tests, the cell's polarization correlates to the electrochemical rate limitation effects within the electrodes, which include, among others, the following zones:
• Zone II: solid-phase (i.e. restricted transport of Li-ions due to solid-phase diffusion limitation near the separator) and liquid-phase (i.e. depletion of Li-ions in the electrolyte near the current collector) mass transport limitations at the cathode surface throughout the entire electrode (stage "a"), followed by a saturation at the cathode particle (stage "b")
• Transition zone II-III: depletion of the anode particles' surfaces leads to a polarization increase with a further current and potential drop, as well as possible copper dissolution/deposition from the negative current collector
Regarding the terminal voltage in Fig. 4c, the different stages appear similarly for the P1-LSC results. The terminal voltages lie in between the 5 and 50 mΩ ESC tests, and zones I, I-II, and II can be determined as shown in Table IV. Comparing the ESC and LSC results in the very beginning of the short-circuit (i.e. zone I, <3 ms), the lowest voltage values appear for the LSC cases, which indicates a very high intensity or a so-called hard short. As soon as electrochemical rate limitation effects initiate (i.e. zone I-II), the electrical behavior subsequently (i.e. zones I-II to III) shows similar damping/attenuation characteristics as in the ESC cases. To conclude, the locality of the short-circuit defines the electrical behavior in the very beginning (i.e. <1 s, zone I), but the subsequent electrical short-circuit behavior is similar to the ESC scenarios and is mainly defined by electrochemical rate limitation effects. Regarding the comparison of the terminal voltages in Fig. 4c, one could simply estimate the intensity of the LSC tests to lie in between the 5 and 50 mΩ ESC tests. The noisy signals appearing in the LSC cases are most likely caused by marginal mechanical oscillations of the shorting device in stage I and by the measurement equipment in the following zones. Figure 4d compares the terminal voltage (E sc ) to the electrical potential measured between the cell's negative tab and the needle (Φ sc ) in order to evaluate the local polarization effects during the P1-LSC tests. The locally measured potentials show a higher ohmic drop right at the beginning and then proceed similarly to the tab potential at lower electrical potentials. Comparing the P1-LSC tests, the P1#3 test shows a higher potential offset to the terminal voltage than the P1#2 test. As the cells' initial impedances are approximately the same (see Table III), a lower ohmic resistance at the penetration site is expected, resulting in the overall lower electrical potentials and the higher spread between them.
A certain measurement uncertainty is expected, as the contact condition 23 may change at the penetration site, and due to the ohmic drop caused by the current flux through the needle and the polarization along the negative current collector. Nevertheless, the lower potentials measured at the penetration site indicate that the locality of the LSC (i.e. a short-circuit in the center of the electrode stack) complicates the correlation to ESC tests when the measured terminal voltages are compared. As a result, simply correlating ESC and LSC via the terminal voltages, as mentioned before, may incorporate a certain error due to the different polarization effects across the electrodes caused by the ESC and LSC conditions. In the following part, simulation studies help to evaluate the electrical potential fields across the electrodes for P1-type ESC and P1-LSC tests simulating the same current flux during the shorting scenario. The correlation uses the resulting terminal voltage difference, and the derived correction factor is applied to the measured terminal voltages of the P1-LSC tests in order to estimate the current flux and shorting resistance by interpolating between the P1-type ESC results.
Correction of LSC polarization effects.-As discussed before, comparing the terminal voltages may incorporate a certain error due to the expected high local polarization around the penetration site in the LSC tests, as indicated by the local potential measurements (see Fig. 4d); the measured P1-LSC terminal voltages are therefore corrected based on simulation results. To do so, a multidimensional multiphysics model [39][40][41][42][43][44] previously validated for the P1-type cells 32 is presented in the supplementary material and used to simulate an exemplary ESC and LSC case, which reveal a similar current flux either through the tabs or through the internally shorted area in the simulation model. As the same current flux is simulated, similar shorting resistances are expected, and most likely both scenarios occur at the same short-circuit condition and shorting intensity. The resulting difference in the electrical potential distribution across the electrodes results in different terminal voltages, which overall occur under similar short-circuit conditions. Using the terminal voltage difference of the P1-LSC relative to the P1-type ESC simulation, a correction factor for the terminal voltage is derived, which accounts for the local polarization effect in the P1-LSC case and enables its correlation to the P1-type ESC tests in terms of current flux and shorting resistance. The ESC shorting scenario simulates an external shorting of 243.9 mΩ, which corresponds approximately to the P1-type cells' impedance range (see Table III), and the LSC scenario corresponds to a nail penetration similar to the P1-LSC tests using the 1 mm needle. The resulting potential fields and transient voltage drops are shown in the supplementary material. Using the resulting correction factor of 0.062 from the simulation results at 100 ms, the measured terminal voltage of the P1-LSC tests is corrected, and the offsets to the 5 and 50 mΩ ESC tests (see Fig. 5b) are used for interpolation of the estimated shorting resistance (R LSC,est ) as shown in Table V. The 5 and 50 mΩ cases were used, as the corrected P1-LSC terminal voltages from zone I to III lie in between these cases, similar to the uncorrected signals in Fig. 5. The estimated shorting resistance and the corrected terminal voltage can now be used to calculate the expected current flux (I LSC,est ) at 100 ms, as shown in Table V, which lies in between the 5 and 50 mΩ ESC cases.
A more profound analysis and discussion of the modelling and simulation part will be addressed in future work, as it would exceed the scope of this work; it is used here to emphasize the local polarization differences, which make a correction of the overall measured signals, such as the terminal voltage, necessary in order to obtain a physically meaningful correlation between ESC and LSC scenarios.
To summarize, the P1-LSC test results revealed a rather hard short (see Table V and Fig. 4c) and show a very similar electrical behavior compared to the ESC tests, especially after the onset of electrochemical rate limitations (i.e. zones I-II to III). Considering the aforementioned correction for local electrode polarization, an ESC test with an appropriately chosen external short-circuit resistance (see Table V) could most likely emulate an LSC test.
Calorimetric correlation of ESC and LSC tests.-Beneficially, the actual measurement signal (i.e. temperature) used for calculating the heat rate is not directly affected by the local polarization effects, due to the expected thermal uniformity in the copper bars, as shown in our previous works. 31,32 Hence, the P1-type ESC and P1-LSC shorting scenarios can be analyzed regarding the plateau and transition zones of the heat rate, which appear similarly to those of the current flux, terminal voltage, and local electrical potential (see Figs. 4a-4c), only with a certain delay in time 31 due to the inertia of the heat transport phenomena and the calorimetric test bench. Figure 5 shows the calorimetric results of the P1-type ESC and P1-LSC tests after 1 s. The total heat ranges from 450 to 353 J for the highest- to the lowest-capacity cell (i.e. P1#7 and P1#2, see Fig. 5a). As the cell's capacity defines the total amount of heat, 31 the heat rate (see Fig. 5b) is related to the cell's capacity in order to enable a better correlation between the cells. The capacity-related heat rate vs SoC of the ESC tests is shown in Fig. 5c, which helps to estimate the onset of overdischarge, as indicated at 100% SoC. Figure 5d magnifies the spread of the heat rate, where the highest external resistance shows the lowest intensity, as expected. The maximum, capacity-related heat rates in zone I-II are observed for the 50 mΩ condition, and the 0 V as well as the 5 mΩ cases appear slightly below due to the aforementioned deviations in the cells' capacity and initial impedance (see Table III). Most interestingly, the ESC and LSC conditions result in similar characteristics, as shown in Fig. 5e, which allows for a correlation in zones I-II to III, exemplarily shown for the 0 V case (see also Fig. 4a). Looking into Fig. 5f, the P1-LSC results initially lie in between the 5 and 500 mΩ ESC cases, and the current flux may be in between as well. The initial current flux is most likely defined by the shorting resistance (i.e. either external or at the penetration site) and the cell's impedance. For the P1-LSC tests, cell P1#2 indicates a higher shorting resistance, due to a lower offset between the potential at the tabs and at the penetration site (i.e. E sc vs Φ sc ) and an overall slower terminal voltage decay compared to cell P1#3. As their impedances are approximately the same (see Table III), the expected higher shorting current for P1#3 results in the observed higher heat rates. Regarding stage II in Fig. 5e, the P1-LSC cases reveal lower heat rates compared to all ESC cases, which indicates higher mass transport limitations and/or a variation of the short-circuit resistance. The local polarization effects (i.e. E sc vs Φ sc ) coming with very high local currents around the penetration site, or the variation 23 of the contact condition, may cause the observed earlier onset of mass transport limitations, accompanied by a stronger current drop and, consequently, lower heat rates for the P1-LSC cases. Again, cell P1#2 shows a lower plateau than cell P1#3 due to the aforementioned difference in shorting resistance and resulting current flux.
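A compact sketch of the post-processing used for such comparisons is shown below, with placeholder signals; only the idea of relating the heat rate to the cell capacity and integrating it to a total heat is taken from the text.

```python
import numpy as np

time_s = np.linspace(0, 3600, 3601)
qdot_W = 0.5 * np.exp(-time_s / 600.0)     # placeholder for the reconstructed heat rate

capacity_Ah = 0.0281                        # e.g. 28.1 mAh for the highest-capacity P1 cell
qdot_per_Ah = qdot_W / capacity_Ah          # capacity-related heat rate in W/Ah

total_heat_J = np.trapz(qdot_W, time_s)     # total heat released over the test
print(f"Total heat: {total_heat_J:.0f} J")
```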
Comparing all P1-LSC tests, the initial impedances (see Table III) vary in the range from 298.7 (P1#9) to ≈360 mΩ (P1#2 and #3), as do the cells' capacities (28.1, 17.1, and 19.1 mAh). Regarding the resulting heat generation rate profiles, the higher the impedance and the lower the capacity, the lower the initial current in zone I-II, and the higher the subsequent mass transport limitations, accompanied by lower heat rates in zone II. Note that the altering contact condition during the short-circuit most likely also affects the heat generation rate profile. Unfortunately, none of the tested P1-LSC cells revealed an initial impedance similar to that of the P1-type ESC cells. Most likely, the applied pressure condition between the copper bars during the short-circuit experiments influenced the impedance of the cells, which will be investigated in the future. Regarding zones II-III (see Fig. 6d) and III, similar heat rate decays appear for the ESCs and the P1#9 LSC case, due to the lower potential, heat rate, and expected current flux plateau during zone II compared to cells P1#2 and #3. Interpreting the P1-LSC tests using the calorimetric results may yield a slightly less accurate correlation to the P1-type ESC tests (i.e. between 5 and 500 mΩ), but it helps to better understand the correlation of the cell's initial impedance, initial capacity, and the shorting resistance at the penetration site during the short-circuit scenario.
LSC applied to double-layered pouch-type cells.-To investigate LSCs occurring simultaneously in multiple electrode layers, nail penetration is applied in the P2-LSC tests to trigger short-circuits in both electrode stacks. In comparison, only a single electrode stack is penetrated with the nail in the P3/P4-coupled LSC/ESC tests, and the second one undergoes an ESC via the current collector path. Figure 7 shows the resulting terminal voltages of the P2-LSC and P3/P4-LSC/ESC tests in reference to the P1-type ESC and P1-LSC cases. After the attenuation of the mechanical oscillations of the short-circuit device, all LSC tests (P2, P3, and P4) lie in between the 50 and 500 mΩ ESC cases (i.e. from zone I to II-III referred to the 50 mΩ case) until the onset of zone III. At transition I-II, the P2-LSC shows a faster voltage decrease compared to the P3/P4-LSC/ESC tests, which show no significant difference in their electrical behavior among each other. From zone II until III, the P3/P4-LSC/ESC tests approach the 500 mΩ ESC case, which indicates a higher ohmic resistance behavior caused by lower mass transport limitations during zone II. The P2-LSC shows increased mass transport limitations during zone II and a faster voltage decay, and remains in between the aforementioned ESC cases.
Ideally, similar contact conditions in both penetration sites of the P2-LSC test should be achieved, and the terminal voltage should then resemble that of the P1-LSC test. At the very beginning (≈2 ms), the lower initial impedance and the higher capacity of P2#1 (see Table III) compared to the P1#2 cell lead to a lower onset of the terminal voltage during zone I, and a higher initial current peak is expected. During zone I, the contact condition in both electrode layers most likely forms/alters and results in a higher ohmic resistance behavior, which leads to the subsequent offset from zone I-II to III. As a result, non-ideal penetration may lead to a higher ohmic contact condition in one or both electrode stacks when two stacks are penetrated at once.
The P3- and P4-type cells' impedances differ (see Table III), which leads to the marginally lower voltage plateau of P4#1 compared to P3#2 in zone I, but overall no significant difference in the electrical behavior appears. The higher ohmic resistance behavior during transition zone I-II is probably correlated to the simultaneous ESC in the second electrode stack. As no difference between the P3 and P4 cases was observed, penetrating the anode or the cathode first has a negligible influence on the resulting shorting behavior.
The total amount of heat is shown in Fig. 8a and amounts to 874, 821, and 799 J for the P2-LSC, P3-, and P4-LSC/ESC tests, respectively. Relating the heat rates shown in Fig. 8b to the cell capacity, Fig. 8c allows an evaluation of the shorting scenarios as discussed for the P1-type cells. Besides the contact condition at the penetration site, the higher capacities and lower impedances (see Table III) most likely lead to higher initial currents in stage I-II compared to the P1-type cells. Similar mass transport limitations appear for the P2- and P4-type cells, shown in Fig. 8e, which resemble the 50 mΩ ESC case until zone II-III. The P3-type LSC test shows slightly increased mass transport limitations, resulting in a marginally lower heat rate plateau in zone II, similar to the P1-type LSC tests. Analyzing the electrical potential at the penetration site for the P3-type cell, a slightly higher offset to the terminal voltage appeared compared to the P2- and P4-type cells, which indicates an increased local polarization across the electrodes and therefore increases the limitation behavior as discussed before. This corresponds to the slightly lower terminal voltage plateau of the P3-type cell shown in zone II (see Fig. 7). As the current flux during zone II is most likely higher for all double-layered cells compared to the P1-LSC cells, a faster discharge/voltage decay appears in zone II-III (see Fig. 8f), similar to the P1-type 50 mΩ ESC case. From zone I-II to II-III, the P2-LSC cell shows higher heat rates than the P3/P4-type cells, which corresponds to the terminal voltage profiles in Fig. 7, and ends in zone III with the fastest discharge/limitation of the P2-LSC cell. To conclude, the results of the P2-LSC test revealed an unexpected offset to the P1-LSC test, which is most likely caused by non-similar contact conditions in one or both penetrated electrode stacks. Penetrating both electrode stacks (P2-LSC) or only a single electrode stack (P3/P4-coupled LSC/ESC) in a double-layered test cell revealed significant differences in the terminal voltage profile after transition zone I-II, which corresponds to the observed capacity-related heat rate profile. The actual local shorting conditions (i.e. either an LSC applied to all electrode stacks or a single LSC in one electrode stack leading to a subsequent ESC in the remaining one) must therefore be considered when the results of a nail/needle penetration test are interpreted in terms of emulating ISC scenarios in LIBs.
LSC applied to double-layered pouch-type cells using various needle diameters.-P2-type LSCs are applied with needles of 0.5, 1, and 2 mm diameter to analyze the correlation between the penetration size, the resulting contact condition, and the appearing short-circuit intensity. Figure 9 shows the capacity-related heat rates in comparison to the P1-type ESC and P1-LSC cases.
Regarding zone I-II in Fig. 9b, the 2 mm case reveals the highest heat rate (>160 W Ah−1), the 0.5 mm case the lowest (≈120 W Ah−1, yet higher than the 500 mΩ ESC as well as the P1-LSC cases), and the 1 mm case lies in between these two, resembling the 50 mΩ case as discussed before. Considering the cells' impedances and capacities (see Table III), the resulting maximum heat rates in zone I-II most likely correlate well with the diameter of the needle: the larger the diameter of the needle, the higher the heat rate and the underlying shorting intensity. Regarding zone II in Fig. 9c, increased mass transport limitations are seen for the 2 mm case, which shows the lowest heat rates compared to the 1 and 0.5 mm cases, and the lowest mass transport limitations are seen for the 0.5 mm case. Due to its highest capacity, the cell tested with the 1 mm needle shows the slowest heat rate decay in zone II-III (see Fig. 9d) compared to the 0.5 and 2 mm cases. In conclusion, the intensity of the LSC test is directly affected by the shorted area at the penetration site, which correlates well with the chosen needle diameter.
Post-mortem analysis.-Post-mortem analysis by means of visual inspection, SEM, and EDX is applied to cells used in the ESC and the LSC tests in order to evaluate the degradation of the graphite and NMC-111 electrodes. Similar results were observed for all studied cells, depicted in the supplementary material, and the results of cell P1#10 (ESC at 5 mΩ) and P1#2 (LSC with 1 mm) are presented in the following. Figure 10 shows the opened cell P1#10, revealing partial delamination of the graphite composite electrode (a) and a mechanically rather intact NMC-111 cathode (b). Magnifications (see Figs. 10c and 10d) at a factor of 1000 show SEM images of the electrode surfaces, revealing depositions on the anode and cracked or even burst NMC-111 active material particles on the cathode, as discussed in our previous work. 31 EDX measurements at these positions (e to h) indicate significant amounts of copper on both electrodes, which is not the case for the pristine materials (i to l) before the ESC. Low lithiation levels and high overpotentials in the anode most likely cause copper dissolution from the negative current collector during the ESC and subsequent deposition across the electrodes, as impurities during disassembly, handling, and preparation for the post-mortem analysis could be excluded. Most likely, the deposition of copper in the anode is caused by significant potential differences through the thickness of the graphite coating during the short-circuit, and in the cathode by its higher potential levels. The detected amount of oxygen is most likely caused by handling the samples outside the argon-filled glove box, and the carbon content is attributed to the actual active material (f, i) and the content of binder (h, l). Regarding the P1-LSC test of cell P1#2 shown in Figs. 11 and 12, the graphite anode in Fig. 11a clearly shows delamination of its composite material, and in the magnifications near the tab (b, x50) and near the bottom (h, x500), entire holes (≈⌀ 129.6 μm) or partial surficial dissolution (≈3.2 μm) appears (see Fig. 11i) where copper is completely or partly dissolved. Magnifications near the penetration site (c) reveal deep radial cracks through the thickness of the graphite composite, where EDX indicates significant amounts of copper (d and e) compared to the pristine material (f and g).
Around the penetration site, all cells showed complete dissolution of the copper foil, which indicates the highest current densities and overpotentials and thus the maximum local intensity of the shorting scenario. Similar to the ESC analysis, copper dissolution and deposition could be observed at strongly delaminated spots across the anode, where the coating came off during disassembly. Besides cracked or burst active material particles of the NMC-111 cathode shown in Fig. 12, no delamination but deep cracks were observed throughout the coating (SEM x150 and x1000, h and i) near the penetration site, which underlines the higher local intensity of the shorting scenario. The magnification in b (SEM x40) shows the penetration site itself, with clear marks of cutting and crumpling of the cathode caused by the needle penetration. A further magnification (SEM x500, c) offers a cross-sectional view through the coating thickness, as shown in Fig. 12e (SEM x1000, see d). Compared to the pristine material (f and g), significant copper contents are indicated not only on the surface of the electrode but also throughout the entire thickness of the cathode as well as near the aluminum current collector. Overall, significantly increased indications of copper dissolution and deposition can be observed throughout the coating thickness as well as across both the anode and the cathode. As the LSC tests result in a deeper discharge condition compared to the ESC tests (i.e. the ESC stops at I sc < 0.1 mA), the observed increased intensity of copper dissolution and deposition seems justified. The expected intense locality of the LSC scenario was shown by the complete dissolution of the copper current collector around the penetration site and the increased degradation signs in the NMC-111 cathode.
Conclusions
The electrical and thermal short-circuit behavior of externally and locally applied short-circuits (i.e. needle/nail penetration) was investigated on single- or double-layered graphite/NMC-111 pouch-type LIBs using a quasi-isothermal, calorimetric test bench. The quasi-isothermal short-circuit conditions enable analyzing the electrical and thermal short-circuit behavior without triggering a high local heat generation rate, which may lead to thermal, self-accelerating processes such as a thermal runaway scenario. By applying our technique, we can mitigate the influence of such local, thermal effects and analyze the pure electrical short-circuit behavior in the very beginning (i.e. zone I) until various current rate limitation effects (i.e. zones I-II to III) appear, which are caused by either the anode or the cathode within the tested cells. Comparing the P1-type ESC and LSC results in the very beginning of the short-circuit (i.e. zone I), differences in the electrical behavior were seen, but as soon as electrochemical rate limitation effects within both the anode and the cathode initiate (i.e. from zone I-II to III), the electrical behavior shows similar damping characteristics. As a result, the locality of the short-circuit defines the electrical behavior in the very beginning (i.e. <1 s, zone I), but the subsequent rate limitation effects proceed similarly for the ESC and the LSC tests. The observed hard short-circuit conditions caused by the needle penetration can thus be emulated by an ESC test with an appropriately chosen external short-circuit resistance for the very beginning (i.e. zone I), which also accounts for the discussed terminal voltage variance calculated from the presented simulation results. The measured local potential at the penetration site, taken via the electrically connected needle vs the cell's negative tab, shows the same characteristics as the measured terminal voltage, only with a significant potential offset caused by electrode polarization, the current flux over the needle, and the altering short-circuit contact condition. Most likely, the differently appearing potential offsets in the P1-LSC tests may be correlated to a higher or lower current flux around the penetration site and may indicate higher or lower polarization effects in the cell, and thus can be used to evaluate the local short-circuit intensity at the penetration site. Overall, a cell's initial impedance, initial capacity, and the electrical contact condition at the penetration site mainly determine the electrical and thermal LSC behavior, resulting in a higher or lower current rate limitation behavior. The ESC test offers higher reproducibility and practicability of the actual measurement and can emulate an LSC scenario in terms of terminal voltage and heat rate profile. Based on these results, the presented ESC test method is recommended not only to emulate external short-circuits, but also local/internal short-circuit scenarios in LIBs.
Figure 11. Post-mortem analysis of cell P1#2 after the P1-LSC test showing the entire graphite anode (a) and magnified sites near the tab (SEM x50, b), the penetration site (SEM x150, c), and at the bottom (SEM x500, h), depicting holes in the copper current collector, cracks through the electrode, or initially dissolved copper sites (SEM x2000, i), respectively. The crack shown in c) is magnified (SEM x2000, d) for EDX analysis (e), revealing significant contents of copper compared to the pristine material shown in f) and g) before the P1-LSC test.
Figure 12. Post-mortem analysis of cell P1#2 after the P1-LSC test showing the entire NMC-111 cathode (a) and magnified sites near the penetration area (SEM x150/x1000 in h/i and SEM x40 in c). The penetration site in b) reveals cut (bottom) and crumpled (top) areas (SEM x40), and EDX applied over the coating thickness (SEM x500/x1000, c/d) indicates significant copper content (e) compared to the pristine cathode (SEM x1000, f and g) before the P1-LSC test.
Applying needle penetration to both electrode stacks in double-layered cells (i.e. P2-LSC), only marginal differences were observed in the electrical behavior at the very beginning (i.e. <1 s, zone I) compared to a short-circuit applied to a single electrode stack, which triggers an external short-circuit in the second one (i.e. P3/P4-coupled LSC/ESC). However, a significant difference in the electrochemical rate limitation behavior was observed subsequently (i.e. >1 s, zones I-II to III), which indicates reduced rate limitation effects for the coupled LSC/ESC case. Increasing the shorted area, investigated via various needle diameters (i.e. 0.5, 1, and 2 mm), leads to higher heat generation rates, which correlates well with a more intensive short-circuit, or a so-called harder short. Similar to the single-layered cells, capacity, impedance, and contact condition determine the short-circuit intensity, where the latter shows a severe dependency on the used needle diameter as well as on the number of penetrated electrode stacks, as seen from the results of the double-layered cells.
Overdischarge of the cells appeared in all tests, as indicated by the initial DVA, and was finally correlated to copper dissolution/deposition across the active areas of both electrodes by analyzing the results of the SEM and EDX measurements. Regarding the LSC tests, increased local degradation around the penetration site appeared, and the deeper discharge resulting in more intense copper detection indicates the highly local polarization and the longer exposure to high-current conditions compared to the ESC tests.
Future work will focus on the statistical relevance of the presented LSC tests regarding the variance of contact conditions within the penetration site and on multidimensional multiphysics simulation studies of LSC and ESC scenarios in order to investigate the difference in local polarization effects throughout and across the electrodes.
"Materials Science",
"Engineering",
"Physics"
] |
DPY30 acts as an ASH2L-specific stabilizer to stimulate the enzyme activity of MLL family methyltransferases on different substrates
Summary Dumpy-30 (DPY30) is a conserved component of the mixed lineage leukemia (MLL) family complex and is essential for robust methyltransferase activity of MLL complexes. However, the biochemical role of DPY30 in stimulating the methyltransferase activity of MLL complexes remains elusive. Here, we demonstrate that DPY30 plays a crucial role in regulating MLL1 activity through two complementary mechanisms: a nucleosome-independent mechanism and a nucleosome-specific mechanism. DPY30 functions as an ASH2L-specific stabilizer to increase the stability of ASH2L and enhance ASH2L-mediated interactions. As a result, DPY30 promotes the compaction and stabilization of the MLL1 complex, consequently increasing the histone lysine methyltransferase (HKMT) activity of the MLL1 complex on diverse substrates. DPY30-stabilized ASH2L further acquires additional interfaces with H3 and nucleosomal DNA, thereby boosting the methyltransferase activity of the MLL1 complex on nucleosomes. These results collectively highlight the crucial and conserved roles of DPY30 in the complex assembly and activity regulation of MLL family complexes.
INTRODUCTION
H3K4 methylation is critical to the epigenetic regulation of gene transcription (Hyun et al., 2017;Shilatifard, 2008). Defects in H3K4 methylation have been closely associated with a broad spectrum of hematologic and solid malignancies (Rao and Dou, 2015;Yang and Ernst, 2017). H3K4 methylation is mainly mediated by MLL family proteins, including MLL1, MLL2, MLL3, MLL4, SET1A, and SET1B (Ansari and Mandal, 2010). Among them, MLL1 has drawn the most attention because its chromosomal translocations lead to various forms of acute lymphoid and myeloid leukemia (Krivtsov et al., 2017).
Dpy-30 was initially discovered in Caenorhabditis elegans as a regulator of X chromosome dosage compensation (Hsu and Meyer, 1994). Mammalian DPY30 can be assembled into MLL family complexes through its C-terminal 44-residue helical bundle, termed the docking and dimerization domain (DD domain), which directly interacts with the ASH2L C-terminal DPY30-binding motif (DBM) (Haddad et al., 2018; South et al., 2010). In embryonic stem cells (ESCs), knockdown of DPY30 reduces H3K4 trimethylation and impairs ESC plasticity in transcriptional reprogramming in vivo (Jiang et al., 2011). Similarly, knockdown of DPY30 by siRNA led to decreased H3K4 methylation levels and inhibited the proliferation and differentiation of hematopoietic progenitor cells (Yang et al., 2014). Complete depletion of DPY30 from conditional
In this work, we reveal that DPY30 can stimulate the HKMT activity of the MLL1 complex through a nucleosome-independent mechanism and a nucleosome-specific mechanism. By combining crosslinking mass spectrometry (CX-MS), structural prediction, molecular dynamics (MD) analyses, biological small-angle X-ray scattering (SAXS), and biochemical assays, we demonstrate that DPY30 functions as an ASH2L-specific stabilizer. DPY30-stabilized ASH2L acquires functional improvements that enhance ASH2L-mediated interactions to promote the assembly and stabilization of the MLL1 complex, which explains the activity stimulation on a wide range of substrates (H3 peptide, H3/H4 tetramer, and octamer) by DPY30. DPY30-stabilized ASH2L further gains additional interfaces with nucleosomal DNA and H3, thereby specifically boosting HKMT activity on nucleosomes.
DPY30 enhances the HKMT activity of the MLL1 complex
To examine the role of DPY30 in stimulating MLL1 activity, we first characterized the HKMT activity of the MLL1 complex (MLL1-WDR5-RBBP5-ASH2L, abbreviated as M1WRA) in the presence and absence of DPY30 on different substrates by performing a western-blot-based methyltransferase assay. We found that DPY30 remarkably enhanced the HKMT activity of the MLL1 complex on nucleosome core particles (NCPs) but had negligible effects on octamer and H3-H4 tetramer substrates (Figure 1A). To quantitatively dissect the roles of DPY30 on different substrates, we compared the reaction rates of MLL1 core complexes by using an MTase-Glo Methyltransferase Assay kit. DPY30 substantially boosted the methylation rate on NCPs by ≈18-fold but only increased the reaction rate on the octamer, H3-H4 tetramer, and H3 1-9 peptide by ≈1.6-fold (Figure 1B).
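The rate comparisons above, and the steady-state analysis in the next paragraph, rest on fitting initial-rate data; a minimal sketch of such a fit is given below, with placeholder data and a hypothetical enzyme concentration (only the general Michaelis-Menten treatment is implied by the text).

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])    # substrate concentrations (placeholder)
rate = np.array([0.7, 1.4, 2.2, 3.1, 3.9, 4.4]) * 1e-3    # initial rates (placeholder)

(vmax, km), _ = curve_fit(michaelis_menten, substrate, rate, p0=(5e-3, 0.1))
enzyme_total = 0.2                                          # total complex concentration (placeholder)
kcat = vmax / enzyme_total
print(f"kcat = {kcat:.3g} min^-1, Km = {km:.3g}, kcat/Km = {kcat / km:.3g}")
```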
To further compare the kinetic difference of the MLL1 complex in the presence or absence of DPY30 on different substrates, we performed steady-state kinetic analyses of the MLL1 complex by fixing the concentration of AdoMet and varying the substrate concentrations of NCP or H3 1-9 . In the absence of DPY30, M1WRA exhibited relatively weak activity, with a K m (NCP) of 0.16 mM and a turnover rate (k cat ) of 0.023 min−1. In the presence of DPY30, the MLL1 complex (M1WRA + D) exhibited strikingly boosted activity on NCP, with an ≈18-fold increase in k cat and a 1.5-fold decrease in K m (Figure 1C). Thus, for NCP substrates, the catalytic efficiency (k cat /K m ) of the MLL1 complex with DPY30 was 27-fold higher than that of the MLL1 complex without DPY30. In contrast, the activity-stimulating effect of DPY30 was much weaker on the H3 1-9 peptide, with a 1.2-fold increase in k cat and a 1.5-fold decrease in K m (Figure 1D). The 1.8-fold increase in the catalytic efficiency (k cat /K m ) of the MLL1 complex indicates a weak but appreciable effect of DPY30 in stimulating the activity of the MLL1 complex on H3 peptides. These data not only reveal the important role of DPY30 in ensuring optimal HKMT activity of the MLL1 complex but also suggest that DPY30 could stimulate HKMT activity through a nucleosome-independent mechanism and a nucleosome-specific mechanism.
The activity-stimulating effect of DPY30 is dependent on ASH2L
We sought to explore the underlying mechanism by which DPY30 stimulates the HKMT activity of the MLL1 complex. Previous studies have reported that DPY30 only interacts with ASH2L in the MLL1 complex by binding to the C-terminal DBM of ASH2L (Haddad et al., 2018; Patel et al., 2009). We first investigated whether the activity-stimulating role of DPY30 is dependent on its interaction with ASH2L. By utilizing an ASH2L mutant (L513E/L517E/V520E) (hereafter referred to as ASH2L 3E ) that completely abolished the interaction with DPY30 (Chen et al., 2012), as shown in the GST pull-down assay (Figure S1A), we found that DPY30 could not increase the HKMT activity of the MLL1 complex reconstituted with this DPY30-binding-deficient ASH2L mutant on NCPs or H3 peptides (Figures S1B and S1C). Moreover, the DPY30 dimerization domain (DPY30 DD , 45-99), which mediates the interaction with ASH2L (Haddad et al., 2018), was sufficient and necessary to stimulate the HKMT activity of the MLL1 complex (Figure S1B). A previously identified DPY30 mutant (L69D), which impairs DPY30 dimerization and abolishes its binding to ASH2L (Figure S1D) (Tremblay et al., 2014), could not increase the HKMT activity of the MLL1 complex (Figure S1B). These results reveal that the stimulatory effect of DPY30 relies on its interaction with ASH2L.
Next, we investigated which ASH2L domain(s) contribute to the DPY30-induced stimulation of MLL1 activity. ASH2L encompasses an N-terminal PHD-WH domain (1-177), an intrinsically disordered region (IDR) (178-229), a pre-SPRY motif (230-285), a split SPRY domain (286-499) with an SPRY-insertion (400-440), and a C-terminal DPY30-binding motif (DBM, 500-534) (Figure 2A). The essential roles of the ASH2L SPRY domain in the regulation of MLL1 complex activity have been firmly established (Li et al., 2016), but the functions of the other ASH2L domains remain elusive. We used a series of ASH2L constructs with deletions of each domain or motif to probe how these individual domains or motifs affect the DPY30-dependent activity stimulation of the MLL1 complex. All ASH2L domain-deletion mutants eluted at the expected peak positions corresponding to their molecular weights on gel-filtration chromatography and could be assembled into M1WRA complexes (Figures S2A-S2C), suggesting that these domain-deletion mutants retained the structural integrity of ASH2L. To our surprise, all ASH2L domains were required for maintaining the optimal HKMT activity of the MLL1 complex. Deleting the PHD-WH domain slightly reduced the stimulatory effect of DPY30, whereas deletion of the IDR, pre-SPRY, or SPRY-insertion severely decreased the stimulatory effects of DPY30 on NCP substrates (Figure 2B). Combined deletion of the IDR, pre-SPRY, and SPRY-insertion motifs (DLL) completely abolished the DPY30-dependent activity stimulation on NCPs (Figure 2B). Notably, deletion of pre-SPRY or SPRY-insertion, but not PHD-WH or IDR, also disrupted the stimulatory effect of DPY30 on H3 1-9 peptides (Figure 2C). These results indicate that these previously overlooked domains or motifs of ASH2L play essential roles in enhancing the HKMT activity of the MLL1 complex: the pre-SPRY and SPRY-insertion are critical for both nucleosome-independent and nucleosome-specific activity stimulation, whereas PHD-WH and IDR are only required for nucleosome-specific activity stimulation by DPY30.
DPY30 interacts with multiple regions of ASH2L
The functional interplay between DPY30 and these less-characterized motifs of ASH2L prompted us to reexamine the DPY30-mediated interactions in the MLL1 complex. We performed crosslinking mass spectrometry (CX-MS) to probe the potential differences in protein-protein interaction networks of the MLL1 complex induced by DPY30. Using a cutoff of spectral counts of more than three and an E-score value smaller than 0.02, we identified 158 crosslinked peptides in the MLL1 complex with DPY30 (M1WRAD) and 122 crosslinked peptides in the MLL1 complex without DPY30 (M1WRA) (Data S1). The majority of the crosslinked peptides were shared between the two complexes, but the addition of DPY30 induced 47 specific crosslinks identified exclusively in the M1WRAD complex ( Figure S3A and Table S1). DPY30 is extensively crosslinked to the SPRY-insertion, IDR, and PHD-WH domains of ASH2L ( Figure 2D). It should be noted that no crosslink was detected between DPY30 and the ASH2L DBM because there is no lysine residue in the ASH2L DBM region to enable crosslinking. In addition to the newly found DPY30-ASH2L intermolecular crosslinks, we also found that DPY30 induced a substantial enrichment of intramolecular crosslinks in ASH2L, especially extensive crosslinks between the SPRY-insertion and other regions of ASH2L ( Figure 2D).
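The sketch below illustrates how the stated cutoffs (spectral count greater than three, E-score below 0.02) and the identification of M1WRAD-specific crosslinks could be applied to tabulated crosslink lists. The file names and column names are hypothetical; actual crosslink-search outputs differ in format.

```python
# Sketch of the crosslink filtering criteria quoted above, applied to generic
# crosslink tables. File names and column names are hypothetical placeholders.
import pandas as pd

xlinks = pd.read_csv("crosslinks_M1WRAD.csv")        # hypothetical export
filtered = xlinks[(xlinks["spectral_count"] > 3) & (xlinks["e_score"] < 0.02)]

# Crosslinks unique to the +DPY30 complex, given a second table for M1WRA
m1wra = pd.read_csv("crosslinks_M1WRA.csv")          # hypothetical export
key_cols = ["protein_1", "residue_1", "protein_2", "residue_2"]
dpy30_specific = filtered.merge(m1wra[key_cols].drop_duplicates(),
                                on=key_cols, how="left", indicator=True)
dpy30_specific = dpy30_specific[dpy30_specific["_merge"] == "left_only"]
print(len(filtered), "filtered crosslinks;", len(dpy30_specific), "specific to M1WRAD")
```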
The widespread crosslinks between multiple regions of ASH2L and DPY30, as well as between RBBP5 and DPY30, indicate that these ASH2L or RBBP5 regions may interact directly with DPY30 or merely lie in close proximity to it.
To distinguish these two possibilities, we performed GST pull-down assays to characterize DPY30 interactions with ASH2L or RBBP5. No interaction was detected between DPY30 and RBBP5 under our assay conditions (Figure S3B), indicating that DPY30 and RBBP5 may just be in spatial proximity but do not directly interact with each other. In sharp contrast, ASH2L was readily pulled down by GST-DPY30 ( Figure 2E). Moreover, we found that deletion of PHD-WH and IDR did not affect the amount of ASH2L pulled down by GST-DPY30, but the deletion of pre-SPRY or SPRY-insertion severely decreased the ASH2L-DPY30 interactions ( Figures 2E and S3C), suggesting that pre-SPRY and SPRY-insertion of ASH2L may be directly involved in the interaction with DPY30.
The structural basis of the interaction between DPY30 and ASH2L
Next, we sought to determine how the pre-SPRY and SPRY-insertion of ASH2L contribute to DPY30 binding. After extensive but unsuccessful attempts to crystallize the ASH2L-DPY30 complex, we decided to use AlphaFold2 to predict the ASH2L-DPY30 complex structure and the apo ASH2L structure (Figures 3A and S4A) (Mirdita et al., 2022). ASH2L exhibits a two-lobe structure separated by a flexible IDR: one lobe (aa 1-177) is the PHD-WH domain, and the other lobe (aa 230-534) is composed of the pre-SPRY, SPRY, SPRY-insertion, and DBM motifs ( Figure S4A). Each lobe shows a compact fold with high pLDDT (predicted local distance difference test) values, indicating high confidence in the structural prediction, but the ASH2L IDR loop (178-229) connecting the two lobes does not have any defined structure ( Figure S4B). As a result, the PHD-WH domains show random orientations relative to ASH2L 230-534 in the five apo ASH2L models ( Figure S4C). The ASH2L-DPY30 complex also shows a two-lobe feature ( Figures S4D and S4E). Although the ASH2L IDR is still unstructured, the PHD-WH domain in the ASH2L-DPY30 complex has a fixed orientation relative to ASH2L 230-534 ( Figure S4F). The PAE (predicted aligned error) plots also confirmed that the ASH2L PHD-WH domain makes inter-domain contacts with ASH2L pre-SPRY, SPRY-insertion, DBM, and one DPY30 ( Figure S4G). This suggests that DPY30 binding restrains the rotational freedom between ASH2L 1-177 and ASH2L 230-534 . This notion was supported by the observation that DPY30 induced more intramolecular crosslinked peptides of ASH2L in the M1WRAD complex, especially a dramatic increase in crosslinked peptides between PHD-WH and SPRY-insertion ( Figure 2D).
A recent publication reported that the pre-SPRY and SPRY-insertion motifs of ASH2L are intrinsically disordered regions (IDRs), and that DPY30 induces conformational changes in the ASH2L IDRs to form ordered structures (Lee et al., 2021). However, the structural prediction of apo ASH2L indicates that the pre-SPRY and SPRY-insertion motifs have well-defined structural features with high confidence ( Figures S4A and S4B). The structures of the apo ASH2L 230-534 and ASH2L 230-534 -DPY30 complexes can be superimposed with a root-mean-square deviation (RMSD) of 0.32 Å ( Figure S4H). The pre-SPRY and SPRY-insertion motifs in both structures are almost identical. Molecular dynamics simulation analyses further indicated that the secondary structures of the pre-SPRY and SPRY-insertion motifs in both apo ASH2L and ASH2L-DPY30 remained stable during the simulations ( Figures S5A-S5D), suggesting that DPY30 may not directly induce a conformational change of ASH2L pre-SPRY and ASH2L SPRY-insertion, as proposed by the previous publication (Lee et al., 2021). In the representative complex structure model, the main DPY30-ASH2L binding interface is established by the ASH2L DBM helix docked into the hydrophobic cleft of the DPY30 dimer ( Figure 3B). The pre-SPRY and SPRY-insertion motifs of ASH2L function as two arms that clamp the ASH2L DBM helix, thus embracing the DBM helix together with the two DPY30 dimerization helices ( Figure 3B). Consistent with the structural model, deletion of pre-SPRY or SPRY-insertion decreased the interaction between ASH2L and DPY30 ( Figure 2E). In addition, the ASH2L PHD domain directly contacts ASH2L pre-SPRY and one copy of DPY30 ( Figure 3B).
The importance of the pre-SPRY and SPRY-insertion in maintaining the ASH2L-DPY30 interaction can also be inferred from comparison with homologous complexes in other species ( Figure S6A), indicating that the inclusion of the pre-SPRY and SPRY-insertion motifs ensures the conserved ASH2L-DPY30/Bre2-Sdc1 binding mode in different species. In addition, the predicted ASH2L-DPY30 structure can be superimposed onto the M1WRAD-NCP structure without any clash with other MLL complex or NCP components ( Figure S6B), suggesting that a similar ASH2L-DPY30 configuration could be maintained in the M1WRAD-NCP complex.
A complex array of electrostatic interactions stabilizes the tetrapartite interface composed of ASH2L pre-SPRY , ASH2L SPRY-insertion , ASH2L DBM , and DPY30 DD ( Figure 3C). For example, DPY30 D58 forms three salt bridges with H257 and R280 from ASH2L pre-SPRY , and R280 is further secured by electrostatic interactions with D255, which additionally coordinates two positively charged residues, K413 from ASH2L SPRY-insertion and H519 from ASH2L DBM . DPY30 R54 is linked to K413 from ASH2L SPRY-insertion through the D515 bridge from ASH2L DBM ( Figure 3C). In addition, the PHD domain in ASH2L PHD-WH interacts with pre-SPRY and DPY30 mainly through electrostatic and hydrogen-bonding interactions ( Figure 3D). In support of the important roles of these electrostatic interactions, mutations of ASH2L R280A, K413A, or D515A/Y518A/H519A, or of DPY30 R54A/D58A, which did not affect the structural integrity of ASH2L ( Figure S6C), specifically decreased the activity stimulation by DPY30 ( Figure 3E). Collectively, these results reveal previously unrecognized interaction interfaces between ASH2L and DPY30.
DPY30 stabilizes ASH2L
We then explored how the ASH2L-DPY30 interaction affects the structures and functions of ASH2L and the MLL1 complex. The structural prediction indicated that DPY30 could restrain the relative motion between the two ASH2L lobes, leading to a stabilized ASH2L with a relatively fixed conformation ( Figure S4F). We wondered whether this structural stabilization of ASH2L by DPY30 could lead to a functional improvement of ASH2L. We first compared the thermostability of ASH2L and the ASH2L-DPY30 complex by nano differential scanning fluorimetry (nanoDSF), which monitors the intrinsic tryptophan fluorescence of proteins. Because DPY30 does not contain any tryptophan, the fluorescence signals reflect the folding status of ASH2L. The unfolding curves clearly showed that ASH2L exhibited a polyphasic unfolding transition, with a first melting temperature (Tm1) at 32.3°C and a second melting temperature (Tm2) at 44.4°C, consistent with the multiple independent structural domains in ASH2L. The presence of DPY30 yielded a cooperative unfolding transition with a single melting temperature at 50.0°C ( Figure 4A), indicating that the ASH2L-DPY30 complex exhibits a compact fold with a much higher thermostability than ASH2L. Disrupting the ASH2L-DPY30 interaction by introducing ASH2L 3E or DPY30 L69D abolished the DPY30-dependent Tm increase ( Figures 4B and 4C). The DPY30 R54A/D58A mutant only slightly increased the Tm of ASH2L ( Figure 4D). Deletion of the pre-SPRY or SPRY-insertion of ASH2L, which impaired the ASH2L-DPY30 interaction, also severely destabilized the ASH2L-DPY30 complex, with a slightly increased Tm1 and unchanged Tm2 compared to ASH2L alone ( Figures 4E and 4F). These results highlight the essential role of DPY30 in improving the thermostability of ASH2L.
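As a rough illustration of how a melting temperature is read out from nanoDSF data, the sketch below takes the apparent Tm as the peak of the first derivative of the 350 nm/330 nm fluorescence ratio. The curve is synthetic and assumes a simple two-state transition; the real ASH2L data show a polyphasic transition, for which one would look for multiple derivative peaks.

```python
# Sketch: extracting an apparent Tm from nanoDSF data as the peak of the first
# derivative of the 350 nm / 330 nm fluorescence ratio. The synthetic two-state
# curve below stands in for a real export; it is not the ASH2L data.
import numpy as np

temps = np.linspace(20.0, 85.0, 651)                           # deg C
true_tm = 50.0
ratio = 0.8 + 0.4 / (1.0 + np.exp(-(temps - true_tm) / 2.0))   # synthetic F350/F330

d_ratio = np.gradient(ratio, temps)
tm_apparent = temps[np.argmax(d_ratio)]
print(f"apparent Tm = {tm_apparent:.1f} C")
```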
We then examined the ability of DPY30 to prevent ASH2L aggregation. We found that the addition of increasing amounts of DPY30 substantially reduced ASH2L aggregation at 37°C, as monitored by light scattering ( Figure 4G). A stoichiometric quantity of DPY30 (DPY30:ASH2L = 2:1) effectively inhibited aggregation, and extra DPY30 did not further suppress the aggregation of ASH2L ( Figure 4G). The temperature-dependent protein aggregation curves measured by nanoDSF further confirmed that DPY30 substantially increased the T agg (temperature of aggregation) of ASH2L from 44.8°C to 62.2°C ( Figure 4H). Collectively, DPY30 can decrease the internal structural flexibility of ASH2L to maintain a compact conformation and stabilize ASH2L, with increased thermostability and a reduced aggregation tendency.
DPY30 promotes the assembly of the MLL1 complex
We speculated that DPY30-dependent ASH2L stabilization may enhance ASH2L's interactions with other proteins and facilitate the formation of a more stable MLL1 complex with increased HKMT activity. The crosslinking-MS data partially supported this speculation. DPY30 induced a substantial enrichment of intramolecular crosslinks in ASH2L and of intermolecular crosslinks in the MLL1-ASH2L and RBBP5-ASH2L pairs ( Figure 2D), indicating an enhanced internal interaction network in M1WRA upon DPY30 binding. We next used fluorescence polarization (FP) assays to quantify the ASH2L-RBBP5 interaction. For the FP assays, we used a fluorescence-labeled RBBP5 AS-ABM (residues 330-363), the minimal RBBP5 fragment required for ASH2L binding (Li et al., 2016). Our FP assays showed that RBBP5 AS-ABM binds to free ASH2L with a dissociation constant (K d ) of 0.72 µM but binds to DPY30-bound ASH2L with a K d of 0.16 µM, a roughly 4-fold increase in binding affinity ( Figure 5A). These results suggest that DPY30 binding to ASH2L confers positive cooperativity on the ASH2L-RBBP5 interaction. DPY30-dependent enhancement of internal interactions in the MLL1 complex might result in a more compact M1WRAD complex than M1WRA. To check whether DPY30 contributes to the assembly of the MLL1 complex, we utilized small-angle X-ray scattering (SAXS) to characterize the conformation of M1WRA in the presence or absence of DPY30. The Guinier regions of the scattering curves were linear at low q (momentum transfer), indicating that the samples were not aggregated (Figures S7A and S7B). The SAXS data were used to calculate the maximum particle dimension (d max ) and the radius of gyration (R g ). Although the molecular weight of the M1WRAD complex (203.3 kDa) is 12.4% larger than that of M1WRA (180.8 kDa), the M1WRAD complex has a similar R g to M1WRA and a smaller d max (175 Å) than that of M1WRA (185 Å) ( Figure 5B and Table S2), suggesting that DPY30 promotes the compaction of the MLL1 complex.
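A minimal sketch of the kind of fit behind the quoted K d values is shown below: a one-site binding isotherm fitted to fluorescence polarization data, assuming the labeled peptide concentration is well below K d so that free protein approximately equals total protein. The concentrations and polarization values are placeholders, not the measured data.

```python
# Sketch: fitting a one-site binding isotherm to fluorescence polarization data
# to estimate Kd, as in the RBBP5 AS-ABM titrations described above.
# All numerical values are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def one_site(p_uM, fp_free, fp_bound, kd_uM):
    """FP = FP_free + (FP_bound - FP_free) * [P] / (Kd + [P])."""
    return fp_free + (fp_bound - fp_free) * p_uM / (kd_uM + p_uM)

protein_uM = np.array([0.01, 0.04, 0.16, 0.63, 2.5, 10.0, 20.0])   # hypothetical
fp_mP = np.array([62, 70, 95, 150, 210, 240, 248])                 # hypothetical

(fp_free, fp_bound, kd), _ = curve_fit(one_site, protein_uM, fp_mP,
                                       p0=[fp_mP.min(), fp_mP.max(), 1.0])
print(f"Kd ~ {kd:.2f} uM")
```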
DPY30-induced compaction of the MLL1 complex might ensure a more stable DPY30-containing MLL1 complex. Indeed, nanoDSF analyses showed that DPY30 elevated the Tm1 and Tm2 of the MLL1 complex from 40.3°C to 44.5°C and from 44.3°C to 53.5°C, respectively ( Figure 5C), but could not increase the Tm of the MLL1 complex reconstituted with the DPY30-binding-deficient ASH2L (ASH2L 3E ) ( Figure 5D). Moreover, the DPY30 L69D and R54A/D58A mutations impaired the stabilization ability of DPY30, as these DPY30 mutants only mildly increased the Tm of the MLL1 complex ( Figures 5E and 5F). These results indicate that DPY30-dependent ASH2L stabilization promotes the assembly and stability of the MLL1 complex. We reason that this DPY30-induced structural stabilization and compaction of the MLL1 complex may account for the mild nucleosome-independent activity stimulation by DPY30 on non-nucleosome substrates, including histone octamers, H3-H4 tetramers, and H3 peptides ( Figures 1B and 1D).
The DPY30-ASH2L complex provides additional anchors on nucleosomes
Although DPY30-induced compaction of the MLL1 complex could explain the nucleosome-independent activity stimulation by DPY30, an additional mechanism must exist that determines the nucleosome-specific activity boost conferred by DPY30. To probe how DPY30 affects the M1WRA-NCP interaction, we performed crosslinking mass spectrometry analyses of the M1WRAD-NCP and M1WRA-NCP complexes, aiming to identify potential M1WRAD-NCP interfaces induced by DPY30. There were 102 crosslinked peptides identified in the M1WRA-NCP sample, and 10 of them were between M1WRA and NCP histones (Figures S8A, S8B and S8D and Data S2). For comparison, 20 out of 148 crosslinked peptides were identified between M1WRAD and NCP histones (Figures S8A, S8C and S8E and Data S2). The 10 crosslinked peptides between M1WRA and NCP histones identified in M1WRA-NCP were all found in M1WRAD-NCP ( Figure 6A). In the presence of DPY30, there are 10 specific crosslinks between M1WRAD and the NCP ( Figure 6B), presumably providing additional anchors for M1WRAD on nucleosomes.
Notably, the N-terminal tail of H3 was extensively crosslinked to the SPRY-insertion and IDR exclusively in M1WRAD-NCP, indicating a potential direct interaction between ASH2L and the H3 tail in the presence of DPY30 ( Figure 6B). To provide direct evidence that DPY30 may enhance ASH2L's interaction with the H3 tail, we used a fluorescence polarization assay to characterize the ASH2L interaction with a fluorescent H3 1-36 peptide. FP assays showed that ASH2L or DPY30 alone had a negligible H3-binding ability, but the ASH2L-DPY30 complex had an appreciable interaction with the H3 tail ( Figure 6C), suggesting that DPY30-stabilized ASH2L can interact with the H3 tail.
In addition to the ASH2L-H3 interfaces, DPY30-stabilized ASH2L may acquire the ability to bind DNA, as indicated by the previous M1WRAD-NCP structures showing close proximity between ASH2L and nucleosomal DNA (Park et al., 2019; Xue et al., 2019). Because our previous studies demonstrated that ASH2L has DNA-binding activity (Chen et al., 2011), we wondered whether DPY30 might modulate ASH2L's DNA-binding activity. An electrophoretic mobility shift assay (EMSA) confirmed that apo ASH2L had DNA-binding activity but mostly formed protein-DNA aggregates that did not migrate into the gel, as judged by the density of the shifted band ( Figure 6D). Although DPY30 itself does not have any DNA-binding activity, the inclusion of DPY30 not only increased the binding affinity between ASH2L and DNA but also facilitated the formation of a soluble protein-DNA complex running as a sharp band on EMSA gels ( Figure 6D).
Previous studies have shown that the PHD-WH domain of ASH2L can bind DNA (Chen et al., 2011; Sarvan et al., 2011). We found that deletion of PHD-WH (ASH2L ΔPHD-WH ) indeed decreased the ASH2L-DNA association, but ASH2L ΔPHD-WH still responded to DPY30: DPY30 greatly increased the binding affinity between ASH2L ΔPHD-WH and DNA ( Figure 6E), consistent with the observation that deletion of PHD-WH had a marginal effect on the HKMT activity on NCP ( Figure 2B). The ASH2L IDR, pre-SPRY, and SPRY-insertion motifs are more critical for the DNA-binding activity of ASH2L. Deletion of these motifs (ASH2L ΔLL ) completely abolished the DNA-binding ability of ASH2L and severely disrupted the DPY30-dependent DNA-binding activity of ASH2L ( Figure 6F). Thus, we conclude that the presence of DPY30 makes ASH2L competent for DNA binding through the ASH2L IDR, pre-SPRY, and SPRY-insertion motifs. Taken together, these results support the notion that DPY30-stabilized ASH2L acquires additional interaction interfaces with histones and DNA, which may reduce the dynamics of the MLL1 complex on the NCP and ensure the correct priming of the H3 substrate to boost the enzymatic activity of the MLL1 complex on nucleosomes.
The conserved role of DPY30 in stimulating HKMT activity of MLL-family complexes

We assembled MLL-family complexes with or without DPY30 ( Figure 7A) and then measured the reaction rates of the different MLL family complexes by using the MTase-Glo Methyltransferase Assay kit. The MLL complexes with DPY30 possessed 10- to 25-fold higher methyltransferase activity than the corresponding complexes without DPY30 ( Figure 7B). These results demonstrate that the DPY30-dependent activity-stimulation mechanism derived from studies of the MLL1 complex can also be applied to other MLL family methyltransferases.

Figure 6, continued. (E) EMSA showed that ASH2L ΔPHD-WH had a decreased DNA-binding ability, but the ASH2L ΔPHD-WH -DPY30 complex had a similar DNA-binding ability as wild-type ASH2L-DPY30. (F) EMSA showed that ASH2L ΔLL failed to bind DNA and that the ASH2L ΔLL -DPY30 complex had severely decreased DNA-binding ability. ΔLL indicates the deletion of the ASH2L IDR, pre-SPRY, and SPRY-insertion. See also Figure S8.
DISCUSSION
DPY30 is the smallest subunit (99 amino acids) of the MLL complex and associates peripherally with the complex through its interaction with ASH2L (Patel et al., 2009). Although DPY30 plays an essential role in maintaining H3K4 methylation levels in vivo (Jiang et al., 2011; Yang et al., 2014, 2016), its biochemical role in maintaining the HKMT activity of the MLL complex has been underestimated. Here, we demonstrate that DPY30 plays a crucial role in the activity regulation of the MLL complex through two complementary mechanisms: a nucleosome-independent mechanism and a nucleosome-specific mechanism. First, DPY30 improves the stability of ASH2L by interacting with much broader interfaces of ASH2L than previously characterized. The stabilized ASH2L gains multiple functional improvements, including increased thermal stability, reduced aggregation, and enhanced interaction with RBBP5. All these DPY30-dependent properties collectively contribute to the assembly of a compact and stable MLL complex with enhanced HKMT activity ( Figure 7C). This DPY30-dependent compaction and stabilization of the MLL complex could explain the previously ignored nucleosome-independent activity stimulation by DPY30, as observed on histone octamers, H3-H4 tetramers, and H3 peptides ( Figure 1B).
Second, DPY30 significantly enhances the HKMT activity of the MLL complex on nucleosomes (Kwon et al., 2020; Lee et al., 2021) ( Figure 1C). This nucleosome-specific activity enhancement relies on the newly generated interfaces between DPY30-stabilized ASH2L and nucleosomes. The extra ASH2L-H3 and ASH2L-DNA interfaces may ensure a relatively fixed configuration of the MLL complex on nucleosomes, thereby boosting the HKMT activity of the MLL complex ( Figure 7C). It should be noted that the major role of these newly generated interfaces is not to increase the binding affinity between the MLL complex and nucleosomes: our kinetic analyses showed that DPY30 decreased the K m by only 1.5-fold ( Figure 1C). Thus, the primary role of these newly generated ASH2L-nucleosome interfaces is to prime the MLL1 complex in the correct orientation on nucleosomes and to facilitate H3 alignment into the active pocket of the SET domain, catalyzing H3K4 methylation more efficiently. This explains why DPY30 causes a much larger change in k cat than in K m ( Figure 1C).
A recent report from the Dou laboratory concluded that DPY30 enhances the activity of the MLL1 complex on nucleosomes by restricting the rotational dynamics of the MLL1 complex on the NCP (Lee et al., 2021). Their work and our present study complement each other in revealing how ASH2L and DPY30 interact to affect the conformation of the MLL complex on nucleosomes. Although we reached similar conclusions, our study reveals several new aspects of the ASH2L-DPY30 interaction. For example, the study from the Dou lab only addressed the nucleosome-specific activation mechanism (Lee et al., 2021), whereas our study also reveals the nucleosome-independent mechanism, explaining the activity stimulation by DPY30 on a broad spectrum of substrates. Moreover, our study provides direct biochemical evidence that DPY30 enhances ASH2L's interactions with the H3 tail and DNA (Figure 6), explaining why DPY30 reduces the rotational dynamics of the MLL1 complex on the NCP.
In addition, our study provides an alternative explanation for how DPY30 affects the structures and functions of ASH2L. Dou's work emphasized the importance of ASH2L IDRs and proposed that the major role of DPY30 was to induce the conformational change of ASH2L IDRs (Lee et al., 2021). Here, we show that the so-called ASH2L IDRs in the previous publication (Lee et al., 2021), especially the pre-SPRY and SPRY-insertion motifs, have well-defined structural features and are not obviously altered by DPY30 binding (Figures S4 and S5). The major role of DPY30 is not to induce the conformational change of ASH2L but to increase the stability of ASH2L and allosterically enhance ASH2L-mediated interactions (including interactions with RBBP5, H3, and DNA). DPY30-dependent enhancement of the internal interaction networks thus facilitates the formation of a compact MLL core complex and reduces the conformational flexibility of the MLL complex on nucleosomes.
The functions of DPY30 in increasing ASH2L thermostability, preventing ASH2L aggregation, and enhancing the ASH2L-dependent methyltransferase activity of the MLL1 complex are analogous to those of protein chaperones in regulating the stabilities and activities of client proteins. We propose that DPY30 functions as an ASH2L-specific chaperone to stabilize ASH2L. Whether DPY30 functions as a general chaperone remains to be determined. Nevertheless, DPY30 is a peripheral subunit of the MLL complex, and the dissociation of DPY30 does not lead to disassembly of the MLL complex (Patel et al., 2009) but rather "turns off" the HKMT activity of the MLL1 complex. In certain circumstances, the MLL complex may bind its target chromatin regions in a primed state (without DPY30, low activity) and wait for a signal to quickly switch to the activated state (with DPY30 bound, high activity) to "turn on" HKMT activity. Therefore, DPY30 may serve as a delicate on/off switch for MLL-family complexes to precisely regulate their HKMT activity.
Owing to the critical role of DPY30 in maintaining H3K4 methylation, abnormal expression of DPY30 can lead to the initiation and progression of human diseases. Extensive studies have reported that DPY30 is overexpressed in many types of cancers, accompanied by increased H3K4me3 modification in cancer cells (Dixit et al., 2022; Gu et al., 2021; He et al., 2019; Hong et al., 2020; Lee et al., 2015; Shah et al., 2019; Yang et al., 2018; Zhang et al., 2018). Overexpression of DPY30 promotes proliferation, migration, and invasion of tumor cells (Lee et al., 2015; Yang et al., 2018; Zhang et al., 2018). The DPY30-ASH2L interface might therefore be a therapeutic target for cancer treatment. As the first proof-of-concept for targeting the DPY30-ASH2L interaction, a peptide derived from the ASH2L DBM (residues 510-529) decreased global H3K4me3 and modestly inhibited the growth of MLL-rearranged leukemia (Shah et al., 2019), demonstrating the feasibility of targeting the DPY30-ASH2L interaction for cancer treatment. The newly identified DPY30-ASH2L interface and ASH2L-NCP interface revealed in the current study provide a foundation for designing or screening inhibitors with high potency and specificity to target the "DPY30-ASH2L-MLL-NCP" axis, hopefully contributing to the discovery of new therapeutic drugs for certain cancers.
Limitations of the study
Here, we used ColabFold, powered by AlphaFold2, to predict the structure of the ASH2L-DPY30 complex.
Although the predicted structural model looks plausible and has been supported by mutagenesis studies, the exact structure of the ASH2L-DPY30 complex still awaits experimental determination by X-ray crystallography or other structural methodologies, which will reveal more detailed interface information between ASH2L and DPY30. The structural information will provide a foundation for the rational design of small molecules or peptide mimics to inhibit the ASH2L-DPY30 interaction for potential therapeutic usage.
Our biochemical data suggest that multiple regions of ASH2L and DPY30 are essential for maintaining the HKMT activity of MLL complexes on the NCP. Unfortunately, currently available cryo-EM structures of MWRAD-NCP have their lowest local resolution (8-12 Å) in the ASH2L-DPY30 region, preventing us from building a reliable model of ASH2L-DPY30 on the NCP. Moreover, a conformational change of ASH2L-DPY30 upon binding to the NCP is expected, especially in the highly flexible ASH2L IDR region (aa 178-229). Therefore, a high-resolution cryo-EM structure of MWRAD-NCP is required to dissect how ASH2L-DPY30 impacts the methylation activity of the MLL complexes on nucleosomes.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:

Nano differential scanning fluorimetry (nanoDSF)

Samples were prepared in 25 mM Tris-HCl, pH 8.0, and 150 mM NaCl. Next, capillaries filled with samples were placed on the loading tray and heated from 20°C to 85°C at a heating rate of 1°C/min. The fluorescence at 330 and 350 nm and the light scattering signals were recorded. The melting temperature (Tm) and the aggregation temperature (Tagg) were determined with the PR.ThermControl software (NanoTemper Technologies, Germany).
ASH2L aggregation assay
To monitor thermally induced ASH2L aggregation, ASH2L was diluted into 1 mL of 37°C pre-warmed buffer containing 25 mM Tris-HCl, pH 8.0, and 150 mM NaCl in a cuvette, to a final concentration of 1.5 µM. Different ratios of DPY30 (ASH2L:DPY30 = 1:0, 1:1, 1:2, or 1:4) were then added and continuously mixed with a small magnetic stirring bar. ASH2L aggregation was monitored by light scattering at 360 nm using a Thermo LUMINA fluorescence spectrophotometer (Thermo Fisher Scientific, USA) with a Peltier temperature controller.
Fluorescence polarization assay
ASH2L alone or ASH2L-DPY30 complexes were diluted to a series of concentrations from 432 µM to 13.5 µM in buffer containing 25 mM Tris, pH 8.0, and 150 mM NaCl. Various concentrations of protein (30 µL) were mixed with 2.4 µL of 1.35 µM FAM-labeled H3 1-36 peptide (ARTKQTARKSTGGKAPRKQLATKAARKSAPATGGVK-FAM) (final concentration 100 nM) and incubated on ice in the dark for 30 min. Fluorescence polarization values were measured in 384-well black plates using a Synergy Neo Multi-Mode Reader (Bio-Tek, USA) at an excitation wavelength of 485 nm and an emission wavelength of 528 nm. Fluorescence was quantitated with GEN 5 software (Bio-Tek, USA), and data were analyzed with GraphPad Prism 8.0 (GraphPad, USA). Notably, K d values could not be accurately determined because the binding was not saturated even at the highest protein concentration.
To measure the binding affinity between ASH2L and RBBP5, ASH2L (or the ASH2L-DPY30 complex) was diluted to a series of concentrations from 20 µM to 0.01 µM in buffer containing 25 mM Tris, pH 8.0, and 150 mM NaCl. Fifteen microliters of each protein dilution was mixed with an equal volume of 200 nM RBBP5 330-363 -FAM and incubated on ice in the dark for 30 min. The subsequent experimental steps were the same as those described above.
Electrophoretic mobility shift assay (EMSA)
ASH2L or the ASH2L-DPY30 complex was diluted to a series of concentrations ranging from 11.52 µM to 0.18 µM using buffer containing 25 mM HEPES, pH 7.4, 150 mM NaCl, 0.5 mg/mL BSA, and 5% glycerol. 7 µL of each protein dilution was mixed with 7 µL of 80 nM 25-bp 5′-FAM-labeled dsDNA (FAM-TCTCTAGAGTCGACCTGCAGGCATG). After incubation on ice for 30 min, each reaction mixture was separated by electrophoresis on a 6% native polyacrylamide gel. The gels were then visualized on a Bio-Rad ChemiDoc MP Imaging System (Bio-Rad, USA).
Structural prediction by ColabFold
Structural prediction was carried out with ColabFold, which is powered by AlphaFold2 and features sequence alignment using MMseqs2 (Mirdita et al., 2022). The parameters were the default settings, including unrelaxed models, no template information, MMseqs2 (UniRef + Environment), pair mode (unpaired + paired), and model type (complex prediction using AlphaFold2-multimer-v2 and single-chain prediction using AlphaFold2-ptm). The results were similar to the models predicted by AlphaFold2 v2.0.0 (Jumper et al., 2021) installed on a local workstation using the non-docker installation. The five output models were aligned and inspected in PyMOL (Schrödinger, USA).
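ColabFold/AlphaFold2 write the per-residue pLDDT into the B-factor column of the output PDB files, so confidence values such as those discussed for the ASH2L lobes and IDR can be extracted directly. The sketch below assumes a hypothetical output file name.

```python
# Sketch: reading per-residue pLDDT values from a ColabFold/AlphaFold2 model,
# which store pLDDT in the PDB B-factor column. The file name is hypothetical.
from collections import OrderedDict

plddt = OrderedDict()
with open("ASH2L_DPY30_rank_001.pdb") as fh:                 # hypothetical output name
    for line in fh:
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            chain = line[21]
            resnum = int(line[22:26])
            plddt[(chain, resnum)] = float(line[60:66])      # B-factor field

values = list(plddt.values())
print(f"{len(values)} residues, mean pLDDT = {sum(values)/len(values):.1f}")
low_conf = [k for k, v in plddt.items() if v < 50.0]         # e.g., the flexible IDR
print(f"{len(low_conf)} residues with pLDDT < 50")
```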
Molecular dynamics simulations
For the MD simulations, apo ASH2L 230-534 and the ASH2L 230-534 -DPY30 DD complex were used, because the motion between ASH2L 1-177 and ASH2L 230-534 , arising from the highly flexible ASH2L IDR (residues 178-229), leads to overall dynamics in full-length ASH2L. All systems were set up using GROMACS (Kutzner et al., 2015) and the CHARMM36 force field (Huang and MacKerell, 2013). The proteins were centered in a cubic box with a buffering distance of 1.0 nm and solvated with TIP3P water molecules. Charges were neutralized by adding Na + or Cl − ions accordingly, and the final NaCl concentration was kept at 0.15 M, consistent with the experiments. After energy minimization with the steepest-descent algorithm, we gently heated the system to 300 K under NVT conditions with the position of the protein constrained by a harmonic potential. | 8,421 | 2022-08-01T00:00:00.000 | [
"Biology"
] |
Axions as Dark Matter Particles
We review the current status of axions as dark matter. Motivation, models, constraints and experimental searches are outlined. The axion remains an excellent candidate for the dark matter and future experiments, particularly the Axion Dark Matter eXperiment (ADMX), will cover a large fraction of the axion parameter space.
Introduction
Despite our knowledge of dark matter's properties, what it consists of is still a mystery. The standard model of particle physics does not contain a particle that qualifies as dark matter. Extensions to the standard model do, however, provide viable particle candidates. The axion, the pseudo-Nambu-Goldstone boson of the Peccei-Quinn solution to the strong CP problem [1,2,3,4], is a strongly motivated particle candidate. We review the strong CP problem and its resulting axion in section 2.
In the early universe, cold axion populations arise from vacuum realignment [5,6,7] and string and wall decay [8,9,10,11,12,13,14,15,16,17,18,19,20]. Which mechanisms contribute depends on whether the Peccei-Quinn symmetry breaks before or after inflation. These cold axions were never in thermal equilibrium with the rest of the universe and could provide the missing dark matter. The cosmological production of cold axions is outlined in section 3.
Current constraints on the axion parameter space, from astrophysics, cosmology and experiments, are reviewed in section 4. The signal observed in direct detection experiments depends on the phase-space distribution of dark matter axions. In section 5, we discuss possible structure for dark matter in our galactic halo and touch on the implications for detection. Experimental searches and their current status are discussed in section 6. Finally, the current status of the QCD axion is summarized in section 7.
This is a brief review, designed to overview the current status of axions as dark matter. For more advanced details, we refer the reader to the extensive literature (e.g. reviews can be found in refs. [21,22,23,24,25]).
Strong CP problem
The strong CP problem arises from the non-Abelian nature of the QCD gauge symmetry, or colour symmetry. Non-Abelian gauge potentials have disjoint sectors that cannot be transformed continuously into one another. Each of these vacuum configurations can be labelled by an integer, the topological winding number, n. Quantum tunnelling occurs between vacua. Consequently, the gauge-invariant QCD vacuum state is a superposition of these states, i.e.,
|θ⟩ = Σ_n e^(−inθ) |n⟩ .    (1)
The angle, θ, is a parameter which describes the QCD vacuum state, |θ⟩. In the massless quark limit, QCD possesses a classical chiral symmetry. However, this symmetry is not present in the full quantum theory due to the Adler-Bell-Jackiw anomaly [26,27]. In the full quantum theory, including quark masses, the physics of QCD remains unchanged under a set of transformations (Eqs. (2)-(4)) of the quark fields, q_i, the quark masses, m_i, and the vacuum parameter, θ, where the α_i are phases and γ_5 is the usual product of gamma matrices. This is not a symmetry of QCD due to the change in θ.
The transformations of Eq. (2) through Eq. (4) can be used to move phases between the quark masses and θ. However, the quantity θ̄ = θ + arg det M, where M is the quark mass matrix, is invariant and thus observable, unlike θ. The presence of θ in QCD violates the discrete symmetries P and CP. However, CP violation has not been observed in QCD. An electric dipole moment for the neutron is the most easily observed consequence of QCD, or strong, CP violation. A non-zero θ results in a neutron electric dipole moment of order 10^−16 θ e cm [21,22,23,24], where e is the electric charge. The current experimental limit is [28]
|d_n| < 6.3 × 10^−26 e cm    (7)
and thus |θ| ≲ 10^−9. There is no natural reason to expect θ to be this small. CP violation occurs in the standard model by allowing the quark masses to be complex, and thus the natural value of θ is expected to be of order one. This is the strong CP problem, i.e. the question of why the angle θ should be nearly zero, despite the presence of CP violation in the standard model. The Peccei-Quinn (PQ) solution [1,2] to this problem results in an axion [3,4]. While other solutions to the strong CP problem have been proposed, the presence of the axion in the PQ solution makes it the most interesting when searching for the dark matter of the universe. Thus, we focus on this solution only in the following.
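The order-of-magnitude arithmetic behind this bound is summarized in the sketch below; the proportionality coefficient relating d_n to θ is an assumed representative value of order 10^−16 e cm, not a number taken from this review.

```python
# Sketch of the order-of-magnitude argument above: combining the experimental
# neutron EDM limit with d_n ~ C * theta, where C ~ 1e-16 e*cm is an assumed
# representative coefficient (published estimates vary by a factor of a few),
# gives |theta| of order 1e-9 or below.
d_n_limit_e_cm = 6.3e-26      # experimental bound quoted in the text
coefficient_e_cm = 1.0e-16    # assumed order-of-magnitude d_n / theta

theta_bound = d_n_limit_e_cm / coefficient_e_cm
print(f"|theta| <~ {theta_bound:.1e}")   # ~6e-10, i.e. of order 1e-9
```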
The Peccei-Quinn solution
The axion is the pseudo-Nambu-Goldstone boson from the Peccei-Quinn solution to the strong CP problem [1,2,3,4]. In the PQ solution, θ is promoted from a parameter to a dynamical variable. This variable relaxes to the minimum of its potential and hence is small.
To implement the PQ mechanism, a global symmetry, U (1) P Q , is introduced. This symmetry possesses a colour anomaly and is spontaneously broken. The axion is the resulting Nambu-Goldstone boson and its field, a, can be redefined to absorb the parameter θ. While initially massless, non-perturbative effects, which make QCD θ dependent, also result in a potential for the axion. This potential causes the axion to acquire a mass and relax to the CP conserving minimum, solving the strong CP problem.
As there are no degrees of freedom available for the axion in the standard model, new fields must be added to realize the PQ solution. In the original, Peccei-Quinn-Weinberg-Wilczek (PQWW) axion model, an extra Higgs doublet was used. We review this model to demonstrate the PQ mechanism.
Assume that one of the two Higgs doublets in the model, φ_u, couples to up-type quarks with strengths y^u_i and the other, φ_d, couples to down-type quarks with strengths y^d_i, where i labels the variety of up- or down-type quark. We label the up- and down-type quarks u_i and d_i, respectively (rather than q_i, as in the previous section). With a total of N quarks, there are N/2 up-type quarks and N/2 down-type quarks. The leptons may acquire mass via Yukawa couplings to either of the Higgs doublets or to a third Higgs doublet. We ignore this complication here and simply examine the couplings to quarks.
The quarks acquire their masses from the expectation values of the neutral components of the Higgs doublets, φ^0_u and φ^0_d. The mass-generating couplings are given in Eq. (8), where the matrices (a_ij) and (b_ij) are real and symmetric and the sum is over the two types of Higgs fields. With this choice of potential, the full Lagrangian has a global U(1)_PQ symmetry. When the electroweak symmetry breaks, the neutral Higgs components acquire vacuum expectation values. One linear combination of the Nambu-Goldstone fields, P_u and P_d, is the longitudinal component of the Z boson. The orthogonal combination is the axion field,
a = sin β_v P_u + cos β_v P_d .    (18)
Using Eqs. (15), (16), (17) and (18) with Eq. (8), the axion couplings to quarks arise from the quark mass terms. The axion field dependence can be moved from the mass terms using the transformations of Eqs. (12), (13) and (14); direct derivative couplings between the axion and quarks then remain in the Lagrangian through the quark kinetic term, while an overall phase can be absorbed by a redefinition of the axion field.
Non-perturbative QCD effects explicitly break the PQ symmetry, but do not become important until confinement occurs. These effects give the axion field a potential and when significant, the field relaxes to the CP conserving minimum. Hence the PQ mechanism, which replaces θ with the dynamical axion field, solves the strong CP problem.
Under the PQWW scheme, the axion mass is tied to the electroweak symmetry breaking scale, resulting in a mass of the order of 100 keV. This heavy PQWW axion has been ruled out by observation, as discussed in Section 4.1. This does not, however, eliminate the possibility of solving the strong CP problem with an axion. In the following section, we discuss viable axion models.
Axion models
"Invisible" axion models, named so for their extremely weak couplings, are still possible. In an invisible axion model, the PQ symmetry is decoupled from the electroweak scale and is spontaneously broken at a much higher temperature, decreasing the axion mass and coupling strength. Two benchmark, invisible axion models exist: the Kim-Shifman-Vainshtein-Zakharov (KSVZ) [29,30] and Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) [31,32] models. In both these models, an axion with permissable mass and couplings results.
In the KSVZ model, the only Higgs doublet is that of the standard model. The axion is introduced as the phase of an additional electroweak singlet scalar field. The known quarks cannot directly couple to such a field, as this would lead to unreasonably large quark masses. Instead, the scalar is coupled to an additional heavy quark, also an electroweak singlet. The axion couplings are then induced by the interactions of the heavy quark with the other fields.
The DFSZ model has two Higgs doublets, as in the PQWW model, and an additional electroweak singlet scalar. It is the electroweak singlet which acquires a vacuum expectation value at the PQ symmetry breaking scale. The scalar does not couple directly to quarks and leptons, but via its interactions with the two Higgs doublets.
PQ symmetries also occur naturally in string theory, via string compactifications, and are always broken by some type of instanton. While this could be expected to make the axion an outstanding dark matter candidate, string models favour a value of the PQ scale that is much higher than that allowed by cosmology (see the discussion in section 3). A review of the current situation can be found in Ref. [33]. As discussed in [33], it is difficult to push the PQ scale far below 1.1 × 10^16 GeV and easier to instead increase its value.
Generically, axion couplings to other particles are inversely proportional to f_a; however, the exact strengths of these couplings are model dependent. For example, the coupling between axions and photons can be written
L_aγγ = g_aγγ a E·B ,
where E and B are the electromagnetic field components. The coupling constant is
g_aγγ = α g_γ / (π f_a) ,
where α is the electromagnetic fine structure constant, f_a is the axion decay constant, of the order of the PQ scale, with 10^9 GeV ≲ f_a ≲ 10^12 GeV, and g_γ is a constant containing the model dependence. Explicitly,
g_γ = (1/2) [ E/N − 2(4 + z)/(3(1 + z)) ] ,
where z is the ratio of the up and down quark masses, N is the axion colour anomaly and E the axion electromagnetic anomaly [34]. The term containing the ratio of light quark masses is approximately equal to 1.95. The model dependence arises through the ratio E/N. In grand-unifiable models, E and N are related and E/N = 8/3. The DFSZ axion model falls into this category and in this case g_γ = 0.36. For a KSVZ axion, E = 0 and g_γ = −0.97. It is possible for an axion to solve the strong CP problem, as shown by the existence of the KSVZ and DFSZ axion models. While significant for that alone, the axion also provides an interesting candidate for the cold dark matter of the universe.
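A quick numerical check of the quoted numbers is given below, using the standard expressions reconstructed above; these should be treated as an illustrative convention rather than the review's exact equations.

```python
# Numerical check of the couplings quoted above: g_agg = alpha * g_gamma / (pi * f_a)
# with g_gamma = (E/N - 2(4+z)/(3(1+z)))/2 and z ~ 0.56. Illustrative sketch only.
import math

ALPHA = 1.0 / 137.036
Z = 0.56                       # up/down quark mass ratio

def g_gamma(e_over_n):
    return 0.5 * (e_over_n - 2.0 * (4.0 + Z) / (3.0 * (1.0 + Z)))

def g_a_gamma_gamma(e_over_n, f_a_GeV):
    return ALPHA * g_gamma(e_over_n) / (math.pi * f_a_GeV)   # GeV^-1

for name, e_over_n in [("DFSZ", 8.0 / 3.0), ("KSVZ", 0.0)]:
    print(f"{name}: g_gamma = {g_gamma(e_over_n):+.2f}, "
          f"g_agg(f_a = 1e12 GeV) = {g_a_gamma_gamma(e_over_n, 1e12):.2e} GeV^-1")
```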
Properties of axion dark matter
Axions satisfy the two criteria necessary for cold dark matter: (1) a non-relativistic population of axions could be present in our universe in sufficient quantities to provide the required dark matter energy density and (2) they are effectively collisionless, i.e., the only significant long-range interactions are gravitational.
Despite having a very small mass [21,22,23,24], axion dark matter is non-relativistic, as cold populations are produced out of equilibrium. There are three mechanisms via which cold axions are produced: vacuum realignment [5,6,7], string decay [8,9,10,11,12,13,14,15,16,17,18] and domain wall decay [18,19,20]. In this section, we discuss the history of the axion field as the universe expands and cools to see how and when axions are produced. We also review vacuum realignment production in detail, as there will always be a contribution to the cold axion populations from this mechanism and, as discussed below, it may provide the only contribution. A complete description of the cold axion populations can be found in ref. [35].
Topological axion production
There are two important scales in dark matter axion production. The first is the temperature at which the PQ symmetry breaks, T_PQ. Which of the three mechanisms contribute significantly to the cold axion population depends on whether this temperature is greater or less than the inflationary reheating temperature, T_R. The second scale is the temperature at which the axion mass, arising from non-perturbative QCD effects, becomes significant. At high temperatures, the QCD effects are not significant and the axion mass is negligible [36]. The axion mass becomes important at a critical time, t_1, when m_a t_1 ∼ 1 [5,6,7]. The temperature of the universe at t_1 is T_1 ≃ 1 GeV. The PQ symmetry is unbroken at early times and temperatures greater than T_PQ. At T_PQ, it breaks spontaneously and the axion field, proportional to the phase of the complex scalar field acquiring a vacuum expectation value, may take any value. The phase varies continuously, changing by order one from one horizon to the next. Axion strings appear as topological defects.
If T_PQ > T_R, the axion field is homogenized over vast distances and the string density is diluted by inflation, to the point where it is extremely unlikely that our visible universe contains any axion strings. In the case T_PQ < T_R, the axion field is not homogenized and strings radiate cold, massless axions until non-perturbative QCD effects become significant at temperature T_1. Agreement has not been reached on the expected spectrum of axions from string radiation, and there are two possibilities. Either strings oscillate many times before they completely decay, so that axion production is strongly peaked around a dominant mode [8,9,11,12,13,14,15,16], or much more rapid decay occurs, producing a spectrum inversely proportional to momentum [10,37]. Rapid decay produces ∼70 times fewer axions than slow string decay, leading to different cosmological bounds on the axion mass (see section 4.2).
When the universe cools to T_1, the axion strings become the boundaries of N domain walls. For N = 1, the walls rapidly radiate cold axions and decay (domain wall decay). If N > 1, the domain wall problem occurs [38] because the vacuum is multiply degenerate and there is at least one domain wall per horizon. These walls will end up dominating the energy density and cause the universe to expand as S ∝ t^2, where S is the scale factor. Although other solutions to the domain wall problem have been proposed [18], we assume here that N = 1 or T_PQ > T_R.
Thus, if T P Q < T R , string and wall decay contribute to the axion energy density. If T R < T P Q , and the axion string density is diluted by inflation, these mechanisms do not contribute significantly to the density of cold axions. Then, only vacuum realignment will contribute a significant amount.
Vacuum realignment mechanism
Cold axions will be produced by vacuum realignment, independent of T R . Details of this method are discussed below, but the general mechanism is as follows. At T P Q , the axion field amplitude may have any value. If T P Q > T R , homogenization will occur due to inflation and the axion field will be single valued over our visible universe. Non-perturbative QCD effects cause a potential for the axion field. When these effects become significant, the axion field will begin to oscillate in its potential. These oscillations do not decay and contribute to the local energy density as nonrelativistic matter. Thus, a cold axion population results from vacuum realignment, regardless of the inflationary reheating temperature.
To illustrate vacuum realignment, consider a toy axion model with one complex scalar field, φ(x), in addition to the standard model fields. When the universe cools to T_PQ ∼ v_a, the potential for φ(x) in our toy model causes φ to acquire a vacuum expectation value, with the axion field a(x) proportional to the phase θ(x) of φ and the axion decay constant f_a set by v_a and the colour anomaly N. For the following discussion, we set N = 1. At T ∼ Λ, where Λ is the confinement scale, non-perturbative QCD effects give the axion a mass; an effective potential for the axion field is produced. The axion acquires a mass, m_a, due to the curvature of the potential at the minimum. This mass is temperature-, and thus time-, dependent due to the temperature dependence of the potential [36].
In a Friedmann-Robertson-Walker universe, the equation of motion for θ is
θ̈ + 3H(t) θ̇ − (1/S^2(t)) ∇^2 θ + m_a^2(t) sin θ = 0 ,
where S(t) is the scale factor and H(t) the Hubble constant at time t. Near θ = 0, sin θ ≃ θ.
We now restrict the discussion to the zero-momentum mode. This is the only mode with significant occupation when T_PQ > T_R, so the final energy density calculated will be for this case. When T_R > T_PQ, higher modes will also be occupied. For the zero-momentum mode, neglecting spatial derivatives, the equation of motion reduces to
θ̈ + 3H(t) θ̇ + m_a^2(t) θ = 0 ,
i.e., the field satisfies the equation for a damped harmonic oscillator with time-dependent parameters. At early times, the axion mass is insignificant and θ is approximately constant. When the universe cools to the critical temperature, T_1, which we define via
m_a(t_1) t_1 = 1 ,    (32)
the field will begin to oscillate in its potential [39]. The definition of the critical time, t_1, in Eq. (32) fixes the corresponding values of t_1 and T_1 for a given f_a [35]. The axion field can realign only as fast as causality permits, thus the momentum of a quantum of the axion field is of order 1/t_1; for f_a ≃ 10^12 GeV, which corresponds to m_a ≃ 6 µeV by Eq. (24), this population is non-relativistic, or cold. The energy density of the scalar field around its potential minimum has kinetic and potential contributions which, by the virial theorem, are equal on average. As axions are non-relativistic and decoupled, the number of axions per comoving volume is conserved, provided the axion mass varies adiabatically. The initial energy density of the coherent oscillations is set by the axion mass at t_1 and by θ_1, the initial "misalignment" angle. Using Eqs. (32) and (39), the energy density in axions today implies
Ω_a ≃ 0.15 (f_a / 10^12 GeV)^(7/6) ,
using Eqs. (24), (33) and (34).
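For illustration, the zero-mode equation of motion can be integrated numerically with a toy turn-on of the axion mass; the sketch below (arbitrary units, schematic mass turn-on law, both choices assumptions for illustration only) shows the generic behaviour: θ remains frozen at θ_1 while m_a t ≪ 1 and oscillates with a decaying envelope once m_a t ∼ 1.

```python
# Illustrative integration of the zero-momentum-mode equation of motion,
# theta'' + 3H(t) theta' + m_a(t)^2 theta = 0, with H = 1/(2t) (radiation era)
# and a toy mass turn-on m_a(t) = m0 * min(1, (t/t1)^4). Units and the turn-on
# law are schematic choices, not the paper's model.
import numpy as np
from scipy.integrate import solve_ivp

m0, t1 = 1.0, 1.0                      # late-time mass and critical time (arbitrary units)

def m_a(t):
    return m0 * min(1.0, (t / t1) ** 4)

def rhs(t, y):
    theta, dtheta = y
    H = 1.0 / (2.0 * t)
    return [dtheta, -3.0 * H * dtheta - m_a(t) ** 2 * theta]

sol = solve_ivp(rhs, (1e-3 * t1, 50.0 * t1), [1.0, 0.0],   # theta_1 = 1, initially at rest
                max_step=0.01, rtol=1e-8)

# theta stays frozen near theta_1 while m_a * t << 1, then oscillates with a
# decaying envelope once m_a * t ~ 1.
print("theta at early times:", sol.y[0][sol.t < 0.3 * t1][-1])
print("late-time oscillation amplitude ~", np.max(np.abs(sol.y[0][sol.t > 40 * t1])))
```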
As the axion couplings are very small, these coherent oscillations do not decay and make axions a good candidate for the dark matter of the universe.
Laboratory bounds
The original PQWW axion would have had a mass of order 100 keV, and possessed couplings large enough to enable the axion to have been produced and detected in conventional laboratory experiments. These included searches for axions emitted from reactors, where axions would compete with M1 gamma transitions in radioactive decay; nuclear deexcitation experiments; beam-dump experiments; and axion decay from 1^−− heavy quarkonia states, i.e. J/ψ and Υ. All results were negative, thus excluding the original PQWW axion within a decade of its prediction. As these limits are much weaker than the current astrophysical upper bounds for both the mass and the coupling to radiation, the reader is directed to earlier reviews [21,22,34] and the Particle Data Group Review of Particle Properties for discussion and annotated limits [40].
Searches for axions have also been performed exploiting the coherent mixing of axions with photons in experiments realized with lasers and large superconducting dipole magnets. These include searches for vacuum dichroism and birefringence through the production of real or virtual axions in a magnetic field [41], and photon regeneration ("shining light through walls") [42,43,44]. Axion-photon mixing will be discussed briefly further on, but all such experiments to date have set limits on the axion-photon coupling, g_aγγ ∼ 2 × 10^−7 GeV^−1 (weakening significantly for m_a > 0.5 meV), orders of magnitude weaker than current astrophysical limits and direct searches for solar axions. While there are no prospects for the polarization experiments to compete with the latter (g_aγγ ∼ 10^−10 GeV^−1), a new strategy to resonantly enhance photon regeneration may enable them to improve on these limits by up to an order of magnitude [45], and at least one such experiment is in preparation.
Torsion-balance techniques have enabled searches for axions through their coupling to the nucleon spin, and rigorous bounds have been set on their short-range interactions for masses below 1 meV. While these are impressive tour de force experiments, relating the resulting limits to expectations for the Peccei-Quinn axion is not straightforward [46].
Cosmological bounds
Whether by the vacuum realignment mechanism or by radiation from topological strings, the production of very light axions in the early universe implies an increasing energy density of the universe in axions for lower masses: Ω_a = ρ_a/ρ_c ∝ m_a^(−7/6). As the relative importance of the mechanisms is still disputed, a reliable lower bound on the axion mass is obtained where the vacuum realignment contribution becomes of order unity, Ω_a = O(1). From Eq. (42), this corresponds to m_a ∼ 6 µeV (see section 3.3), although there is uncertainty in this estimate itself, owing to lack of knowledge of the initial value of θ within our horizon; the estimate above is based on the assumption that this parameter is of order unity. An accidentally small value would drive the mass associated with closure density downwards, allowing arbitrarily small masses. On the other hand, within the concordance model, Ω_DM = 0.23, implying that the axion mass should be roughly a factor of 4 higher than cited above. Thus the ADMX microwave cavity experiment conservatively began its search campaign at m_a ∼ 2 µeV, and continues to work upwards. Recent discussions of the cosmological bound from vacuum realignment can be found in refs. [47,48].
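The factor-of-4 statement follows from the scaling Ω_a ∝ m_a^(−7/6); a minimal numerical sketch, taking the ~6 µeV closure-density benchmark quoted above at face value, gives a shift of roughly 3.5-4.

```python
# Sketch of the scaling argument above: with Omega_a ~ m_a**(-7/6) and the
# benchmark m_a ~ 6 ueV at Omega_a = O(1), requiring Omega_a = 0.23 pushes the
# realignment lower bound on the mass up by roughly a factor of 4.
m_closure_ueV = 6.0        # mass where Omega_a ~ 1 (quoted benchmark)
omega_dm = 0.23            # concordance dark matter density

m_bound_ueV = m_closure_ueV * omega_dm ** (-6.0 / 7.0)
print(f"m_a >~ {m_bound_ueV:.0f} ueV  "
      f"(factor {m_bound_ueV / m_closure_ueV:.1f} above the closure benchmark)")
```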
The above vacuum realignment bound applies when inflation has homogenized the axion field over our horizon. When the PQ symmetry breaks after inflation, cold axions are additionally produced by string and domain wall decay. These extra contributions mean that, for a given axion mass, more cold axions are produced; to avoid overclosing the universe, the lower bound on the axion mass therefore increases (see section 3.3). As the spectrum of axions from string radiation is debated, we review the possible bounds. If axion strings decay rapidly, giving a spectrum inversely proportional to momentum, the lower bound on the axion mass is ∼15 µeV [17]. If the decay is less rapid and strings go through many oscillations, an analysis based on local strings [15] gives a lower bound of 100 µeV. A similar analysis based on global strings [16] gives a smaller lower bound of 30 µeV. Given that the PQ symmetry is global, the lower bound of 30 µeV is likely the applicable one if the PQ symmetry breaks after inflation and axion strings decay slowly.
Astrophysical bounds
In general, introducing a channel for direct or free-streaming energy loss from a star's core accelerates the star's evolution. The core will contract and heat up under the influence of gravity when axions (or other exotica) compete with the production of strongly trapped photons, whose radiation pressure acts to counterbalance gravitational pressure. Furthermore, for each stellar system, axions are excluded only over a finite range of couplings. As the axion's coupling is increased in the stellar-evolution simulations, the free-streaming lower limit of the axion's excluded couplings is reached at the point where deviations from an axion-free model first become noticeable. However, as the coupling is further increased, the axions themselves eventually become strongly trapped; the upper limit corresponds to the regime where their influence on evolution diminishes below the threshold of observation. A comprehensive treatment of such constraints on the properties of axions and other exotica has been published by Raffelt [49]. The two most relevant astrophysical limits framing the region of interest where axions may be the dark matter are described below.
The most stringent constraint on the axion-photon coupling at present is due to horizontal branch (HB) stars, i.e. those in their helium-burning phase, within globular clusters. Globular clusters provide a cohort population of stars all of the same age; those seen today have masses somewhat less than that of our Sun. From the ratio of the number observed in the HB phase to those in the red giant phase, i.e. after exhaustion of core-hydrogen burning but before the helium flash, one statistically infers an average HB lifetime. The concordance (within 10%) between the calculated and inferred HB lifetimes precludes Primakoff production of axions (γ + Ze → a + Ze, i.e. axions produced by the interaction of a real plus a virtual photon) at a level corresponding to an upper bound of g_aγγ < 10^−10 GeV^−1. A definitive study of axion cooling in stars using numerical methods aims to extend this analysis [50].
The lowest-lying upper bound on the axion mass is due to SN1987A. Axions produced by nucleon-nucleon bremsstrahlung (N + N → N + N + a) during the core bounce of the protoneutron star would have competed with neutrino emission. That the duration of the neutrino pulse observed by the IMB and Kamiokande water Cherenkov detectors (19 events over 10 seconds) was in good accord with core-collapse models precludes axions in the mass range 10^−3 eV < m_a < 2 eV. This range corresponds to the free-streaming regime; as mentioned previously, for masses above this range the axions themselves are strongly trapped and thus are not effective in quenching the neutrino signal.
Phase-space structure of halo dark matter
The local velocity distribution of dark matter axions is of vital importance for direct detection. The Axion Dark Matter eXperiment (ADMX), detailed in Section 6.2 and ref. [51], in this volume, uses a microwave cavity to search directly for axions. The observed signal is the power output from the cavity due to axion conversion to photons, as a function of frequency. The frequency corresponds to the energy distribution of axions undergoing conversion. As axions are non-relativistic, the signal frequency is given by
hν = m_a c^2 (1 + v^2/2c^2) ,
where v is the axion velocity. The local velocity distribution thus determines the signal shape. The signal amplitude is determined by the density of axions of a particular energy. Thus, the phase-space distribution determines the signal observed. We expect that the dark halo of the Milky Way consists of a number of components which ADMX is capable of observing: (1) a thermalized component with a Maxwell-Boltzmann velocity distribution, (2) discrete flows, from tidal stripping of satellite halos or coherent dark matter flows crossing the halo, and (3) overdense regions that are not gravitationally bound, known as caustics.
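For orientation, the sketch below converts axion mass to the expected cavity signal frequency, ν ≈ m_a c^2/h, and estimates the fractional linewidth of the thermalized component from the halo virial velocity; the numbers are illustrative assumptions, not ADMX specifications.

```python
# Sketch relating axion mass to the expected microwave signal: nu ~ m_a c^2 / h,
# with a fractional linewidth of order (v/c)^2 ~ 1e-6 for the thermalized halo
# component. Velocity value is an assumed typical number, for illustration only.
H_PLANCK_eV_s = 4.135667696e-15       # eV*s
V_OVER_C = 1.0e-3                     # assumed galactic virial velocity / c

def axion_frequency_GHz(m_a_ueV):
    return m_a_ueV * 1e-6 / H_PLANCK_eV_s / 1e9

for m_a in (2.0, 6.0, 20.0):          # ueV
    nu = axion_frequency_GHz(m_a)
    linewidth_kHz = nu * 1e9 * 0.5 * V_OVER_C ** 2 / 1e3
    print(f"m_a = {m_a:4.1f} ueV -> nu ~ {nu:5.2f} GHz, thermal width ~ {linewidth_kHz:.1f} kHz")
```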
The ADMX search technique assumes that the rates of change of velocity, velocity dispersion and flow density are slow compared to the time scale of the experiment. The ADMX medium resolution (MR) channel searches for an isothermal component of the halo as dark matter axions. The ADMX high resolution (HR) channel searches for signals with a narrow velocity dispersion, such as discrete flows.
Numerical simulations produce large halos within which hundreds of smaller clumps, or subhalos, exist [52,53]. Tidal disruption of these subhalos leads to flows in the form of "tidal tails" or "streams." The Earth may currently be in a stream of dark matter from the Sagittarius dwarf galaxy [54,55]. This stream of subhalo debris satisfies both requirements of small velocity dispersion and a repeatable signal, and thus may be detectable by the ADMX HR channel.
Non-thermalized flows from late infall of dark matter onto the halo have also been shown to be expected [56,57]. The idea behind these flows is that dark matter that has only recently fallen into the gravitational potential of the galaxy will have had insufficient time to thermalize with the rest of the halo and will be present in the form of discrete flows. There will be one flow of particles falling onto the galaxy for the first time, one due to particles falling out of the galaxy's gravitational potential for the first time, one from particles falling into the potential for the second time, etc. Furthermore, where the gradient of the particle velocity diverges, particles "pile up" and form caustics. In the limit of zero flow velocity dispersion, caustics have infinite particle density. The velocity dispersion of cold axions at a time t prior to galaxy formation is approximately δv_a ∼ 3 × 10⁻¹⁷ (10⁻⁵ eV/m_a)(t₀/t)^(2/3) [58], where t₀ is the present age of the universe. A flow of dark matter axions will thus have a small velocity dispersion, leading to a large, but finite, density at a caustic.
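For orientation, the quoted estimate can be evaluated directly; the sketch below (in units of c, for a hypothetical m_a = 10⁻⁵ eV, naively extrapolated to the present epoch) simply plugs numbers into the formula:

```python
# Quick evaluation of the primordial velocity-dispersion estimate quoted above,
# delta v_a ~ 3e-17 * (1e-5 eV / m_a) * (t0/t)^(2/3), in units of c.
# The axion mass is a hypothetical example value.

C = 2.99792458e8  # m/s

def primordial_dispersion_c(m_a_ev: float, t_over_t0: float = 1.0) -> float:
    """Velocity dispersion of non-thermalized axion flows, in units of c."""
    return 3e-17 * (1e-5 / m_a_ev) * t_over_t0 ** (-2.0 / 3.0)

m_a = 1e-5  # eV
dv = primordial_dispersion_c(m_a)
print(f"delta v ~ {dv:.1e} c = {dv * C:.1e} m/s")  # ~3e-17 c ~ 1e-8 m/s
```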
The caustic ring model, under the assumptions of self-similarity and axial symmetry, predicts that the Earth is located near a caustic feature [59]. This model, fitted to rises in the Milky Way rotation curve and a triangular feature seen in the IRAS maps, predicts that the flows falling in and out of the halo for the fifth time contain a significant fraction of the local halo density. The predicted densities are 1.5 × 10⁻²⁴ g/cm³ and 1.5 × 10⁻²⁵ g/cm³ [60], comparable to the local dark matter density of 9.2 × 10⁻²⁵ g/cm³ predicted by Gates et al. [61]. The flow of the greatest density is known as the "Big Flow." A general treatment of the phase-space structure of dark matter halos, which does not require assumptions of self-similarity or symmetry, has recently been developed [62]. This treatment studies the statistics of dark matter caustics in the tidal debris remaining from mergers of smaller halos to form galaxies, and from the primordial coldness of dark matter. While more general than the approach of ref. [60], this treatment results only in a statistical distribution and does not give specific predictions for our galactic halo.
Additionally, numerical methods have been developed to study caustics and flows in dark matter halos. To date, most numerical simulations are too coarse-grained to resolve caustic structure, although its presence can be observed when special techniques are used [63,64,65,66,67]. The recent work of ref. [68] predicts at least 10⁵ discrete streams near our Sun, although specific predictions are not made for the stream densities and velocities.
It has also recently been shown that dark matter axions can exist in the form of a Bose-Einstein condensate (BEC) [69]. If this is the case, the formation of caustics is suppressed within the BEC. However, vortices are expected to form at the center of galactic halos, due to their net rotation. Within a vortex, axion dark matter will exist in the normal phase and flows and caustics will still be present.
The possible existence of discrete flows provides an opportunity to increase the discovery potential of ADMX. A discrete axion flow produces a narrow peak in the spectrum of microwave photons in the experiment and such a peak can be searched for with higher signal-to-noise than the signal from axions in an isothermal halo. If such a signal is found, it will provide detailed information on the structure of the Milky Way halo.
Axion-photon mixing
As pseudoscalars, axions can be produced by the interaction of two photons, one of which may be virtual, γ + γ* → a, the process being known as the Primakoff effect [70]. This implies that photons and axions may mix in the presence of an external electromagnetic field, through the Lagrangian density of Eq. (21). In all axion searches based on the Primakoff effect to date, E represents the electric field of the real photon, and B is an external magnetic field. Although the opposite combination is possible, it is vastly easier to produce and support a static magnetic field than the equivalent electric field, a 1 T magnetic field being equivalent to an electric field of about 3 MV/cm in Gaussian units. In fact, fields of order 10 T are readily achieved today with superconducting magnets. The coherent mixing of axions and photons within magnetic fields of large spatial extent enables searches of exceedingly high sensitivity, although there is as yet no experimental strategy capable of reaching the standard Peccei-Quinn axion over the entire open range.
A general formalism for axion-photon mixing in external magnetic fields, including plasma effects, is found in Ref. [71].
The microwave cavity experiment for dark-matter axions
In 1983, Sikivie proposed two independent schemes to detect the axion based on the Primakoff effect [72,73]. The first was a search for axions constituting halo dark matter by their resonant conversion to RF photons in a microwave cavity permeated by a strong magnetic field. Tuning the cavity to fulfill the resonance condition, hν = m_a c²(1 + O(β² ∼ 10⁻⁶)), and assuming axions saturate the galactic halo, the conversion power from an optimized experiment scales as P ∝ g²_aγγ (ρ_a/m_a) B² V Q, where ρ_a is the local axion density, B the strength of the magnetic field, V the cavity volume, and Q the cavity quality factor. The most sensitive microwave cavity experiment (and in fact the only one currently in operation) is ADMX at Lawrence Livermore National Laboratory. This search has excluded axions of KSVZ axion-photon coupling as the local halo dark matter over a narrow range of masses, 1.9 < m_a < 3.4 µeV.
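A toy illustration of that scaling, with hypothetical cavity parameters; the overall normalization is omitted, so only the ratio between configurations is meaningful:

```python
# Minimal sketch of the scaling quoted above: the haloscope conversion power
# grows as P ∝ g^2 (rho_a/m_a) B^2 V Q. All numbers below are hypothetical.

def relative_power(g_rel, rho_rel, m_rel, b_tesla, v_litre, q):
    """Figure of merit relative to an arbitrary reference configuration."""
    return g_rel**2 * (rho_rel / m_rel) * b_tesla**2 * v_litre * q

# Reference cavity vs. a cavity with twice the field but half the volume
ref = relative_power(1.0, 1.0, 1.0, b_tesla=8.0, v_litre=200.0, q=1e5)
alt = relative_power(1.0, 1.0, 1.0, b_tesla=16.0, v_litre=100.0, q=1e5)
print(f"power ratio alt/ref = {alt / ref:.1f}")  # B^2 wins over V: factor 2
```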
The anticipated conversion power is minuscule, even for the largest superconducting magnets feasible; for ADMX the expected signal is of order 10⁻²² watts. Furthermore, as the experiment must tune over orders of magnitude in frequency in small steps, there are limits on how long one may integrate at each frequency to improve the signal-to-noise ratio, as governed by the Dicke radiometer equation [74]: s/n = (P_s/P_n)√(Δν t). Here s/n is the signal-to-noise ratio, Δν the bandwidth of the signal, t the integration time, and P_s and P_n the signal and noise power, respectively. This puts a clear premium on reducing the total system noise temperature, which is the sum of the physical temperature and the equivalent electronic noise temperature of the amplifier, T_n = T_phys + T_elec. ADMX has recently completed an upgrade from conventional heterojunction field-effect transistors (HFETs, or HEMTs) with a noise equivalent temperature of T_elec ∼ 2 K, to SQUID amplifiers, whose equivalent noise temperatures can reach the quantum limit, T_elec ∼ 50 mK at 750 MHz, when cooled to comparable physical temperatures. This strategy will ultimately enable the experiment to reach the DFSZ model axions, as well as cover the open mass range much faster. The microwave cavity experiment is described by Carosi elsewhere in this volume [51].
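The premium on noise temperature can be made concrete with a short sketch; the signal power, bandwidth, target signal-to-noise, and system temperatures below are representative assumptions rather than published ADMX settings:

```python
# Sketch of the Dicke radiometer trade-off: s/n = (P_s/P_n) * sqrt(t * dnu),
# with P_n = k_B * T_sys * dnu. Solving for the integration time per tuning
# step shows why lowering T_sys from ~2 K (HFET) toward ~0.1 K (SQUID plus a
# comparable physical temperature) matters so much.

K_B = 1.380649e-23  # J/K

def integration_time_s(snr, p_signal_w, t_sys_k, dnu_hz):
    """Time required to reach a given signal-to-noise ratio at one frequency step."""
    p_noise = K_B * t_sys_k * dnu_hz
    return (snr * p_noise / p_signal_w) ** 2 / dnu_hz

p_s = 1e-22   # W, order of the expected conversion power quoted above
dnu = 750.0   # Hz, roughly a thermalized-axion linewidth near 750 MHz (assumed)
for t_sys in (2.0, 0.1):
    t = integration_time_s(snr=5.0, p_signal_w=p_s, t_sys_k=t_sys, dnu_hz=dnu)
    print(f"T_sys = {t_sys:>4} K -> t ~ {t:,.0f} s per step")
# Integration time scales as T_sys^2: a 20x lower noise temperature is ~400x faster.
```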
Other current axion searches
In the same report, Sikivie also outlined how to detect axions free-streaming from the Sun's nuclear-burning core [72,73]. Axion production would be dominated by the Primakoff process γ + Ze → a + Ze; for KSVZ axions, the integrated solar flux at the Earth would be F_a = 7.4 × 10¹¹ (m_a[eV])² cm⁻² s⁻¹, emitted with a thermal spectrum of mean energy ∼ 4.2 keV. For relativistic axions, the conversion probability to photons of the same energy in a uniform magnetic field is given by P(a → γ) = Π = (1/4)(g_aγγ B L)² |F(q)|², where B is the strength of the magnetic field and L its length.‡ F(q) ≡ (1/(B₀L)) ∫ dx e^(iqx) B(x) represents the form factor of the magnetic field with respect to the momentum mismatch between the massive axion and the massless photon of the same energy, q = k_γ − k_a = ω − (ω² − m_a²)^(1/2) ∼ m_a²/2ω. |F(q)| is unity in the limit qL ≪ 2π, but oscillates and falls off rapidly for qL > 2π, where the axions are no longer sufficiently relativistic to stay in phase with the photon for maximal mixing.
‡ A useful mnemonic in rate estimates for experiments is that, within a few percent, the factor (g₁₀B₁₀L₁₀)² ≈ 10⁻¹⁶, where g₁₀ ≡ 10⁻¹⁰ GeV⁻¹, B₁₀ ≡ 10 T, and L₁₀ ≡ 10 m.
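A small sketch evaluating this conversion probability in natural units; the CAST-like field and length used below (9 T, 9.26 m) are nominal illustrative values, and the mnemonic in the footnote serves as a cross-check:

```python
# Sketch evaluating the vacuum conversion probability quoted above,
# P(a->gamma) = (1/4) * (g_agamma * B * L)^2 * |F(q)|^2, in natural units
# (hbar = c = 1), taking |F(q)| = 1 (fully coherent, light axion).

T_TO_EV2 = 195.35      # 1 tesla expressed in eV^2 (natural units)
M_TO_INV_EV = 5.068e6  # 1 metre expressed in eV^-1

def conversion_probability(g_gev_inv: float, b_tesla: float, l_metre: float) -> float:
    """Coherent axion-photon conversion probability in a uniform transverse field."""
    g_ev_inv = g_gev_inv * 1e-9                                      # GeV^-1 -> eV^-1
    gbl = g_ev_inv * (b_tesla * T_TO_EV2) * (l_metre * M_TO_INV_EV)  # dimensionless
    return 0.25 * gbl**2

# Mnemonic check: g = 1e-10 GeV^-1, B = 10 T, L = 10 m gives (gBL)^2 ~ 1e-16
print(f"(gBL)^2 = {4 * conversion_probability(1e-10, 10.0, 10.0):.1e}")
# CAST-like dipole: B ~ 9 T, L ~ 9.26 m (nominal values, for illustration)
print(f"P(a->g) = {conversion_probability(1e-10, 9.0, 9.26):.1e}")
```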
Utilizing a prototype LHC dipole magnet as the basis for its axion helioscope, the CAST collaboration have recently published the best limits on solar axions, g_aγγ < 0.88 × 10⁻¹⁰ GeV⁻¹, valid for m_a < 10⁻² eV [75], slightly more stringent than those derived from horizontal branch stars. This collaboration has also pushed the sensitivity of the search upward in mass into the region of axion models, by introducing a gas (⁴He) of variable pressure into the magnet bore. In this case, the plasma frequency ω_p = (4παN_e/m_e)^(1/2) ≡ m_γ endows the x-ray photon with an effective mass; thus full coherence of the axion and photon states can be restored, and the theoretical maximum conversion probability achieved for any axion mass, by filling the magnet with a gas of the appropriate density [76]. The mass range can thereby be extended upward in scanning mode, by tuning the gas pressure in small steps to as high a value as feasible. In this manner, axions have now been excluded part-way into the Peccei-Quinn model band, g_aγγ < 2.2 × 10⁻¹⁰ GeV⁻¹ (95% c.l.), valid for m_a < 0.4 eV [77]. This phase of the experiment continues with ³He gas, which will permit probing of yet higher masses; for further details, see Zioutas elsewhere in this volume [78].
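To make the matching condition concrete, the sketch below inverts ω_p = m_γ = m_a to estimate the electron density, and the corresponding ⁴He pressure at an assumed cold-bore temperature of about 1.8 K, needed to restore coherence at a given axion mass; all numerical inputs are rough illustrative values:

```python
# Sketch of the buffer-gas idea: the photon acquires an effective mass
# m_gamma = omega_p = sqrt(4*pi*alpha*N_e/m_e); coherence is restored when
# m_gamma = m_a. Inverting this gives the required electron density.

import math

ALPHA = 1 / 137.036
M_E_EV = 5.110e5          # electron mass, eV
EV3_TO_CM3 = 1.302e14     # (1 eV)^3 expressed as a number density, cm^-3
K_B = 1.380649e-23        # J/K

def electron_density_cm3(m_a_ev: float) -> float:
    """Electron density at which the plasma frequency equals the axion mass."""
    n_e_natural = m_a_ev**2 * M_E_EV / (4 * math.pi * ALPHA)  # eV^3
    return n_e_natural * EV3_TO_CM3

m_a = 0.4                        # eV, upper end of the 4He phase quoted above
n_e = electron_density_cm3(m_a)
n_he = n_e / 2.0                 # two electrons per helium atom
p_pa = (n_he * 1e6) * K_B * 1.8  # ideal-gas pressure at an assumed ~1.8 K bore
print(f"N_e ~ {n_e:.1e} cm^-3, P(4He) ~ {p_pa / 100:.0f} mbar")  # of order 10 mbar
```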
Purely laboratory bounds on axions or generalized pseudoscalars have also been established, without relying on either astrophysical or cosmological sources. In photon regeneration ("shining light through walls"), axions are coherently produced by shining a laser beam through a transverse dipole magnet, and reconverted to real photons in a collinear dipole magnet on the other side of an optical barrier [79,80]. The probability to detect a photon per laser photon is given by P(γ → a → γ) = Π². While current limits from photon regeneration (g_aγγ < 2 × 10⁻⁷ GeV⁻¹, m_a < 0.5 × 10⁻³ eV) [42,43,44] have not so far been competitive with solar searches, the scheme may be resonantly enhanced utilizing actively locked Fabry-Perot optical cavities, potentially strengthening limits by an order of magnitude beyond those of CAST and horizontal branch stars [45].
Summary
The current state of the standard QCD axion is shown in Figure 1 [40] (caption: constraints on the PQ scale f_a and the corresponding m_a from astrophysics, cosmology and laboratory experiments; the light grey regions are most model-dependent; from ref. [40]). While the figure represents an oversimplification of the situation, insofar as the various experimental and observational limits on mass are model-dependent, the central point is that there is a substantial window for axionic dark matter, and that the upgraded ADMX will be able to cover about two of the three decades in mass. One should be open to surprise from experiments such as CAST, which are looking in regions of mass and coupling constant where axions are not expected to be the dark matter, but could find axion-like pseudoscalars associated with our first forays into beyond-Standard-Model phenomenology.
"Physics"
] |
A mutant gene for albino body color is widespread in natural populations of tanuki (Japanese raccoon dog).
Albino mutants (white coat and red eyes) of tanuki (Nyctereutes procyonoides viverrinus) have been repeatedly found in the Central Alps area of Japan. We recently reported that an albino tanuki from Iida, a city in this area, lacks the third exon of the TYR gene encoding tyrosinase, which is essential for melanin synthesis. The absence of this exon was due to a chromosomal deletion with a complex structure. In the present study, we analyzed TYR of another albino tanuki that was found in Matsusaka, a city located outside the mountainous area. In this animal, the third exon was also lost, and the loss was due to a deletion whose structure was identical to that of the Iida mutant. Our results indicate, in consideration of the complex structure of the deletion, that the two albino animals inherited a single deletion that arose in their common ancestor. Iida and Matsusaka are approximately 170 km apart. This is, to our knowledge, the first report of an albino mutant gene that is widely distributed in mammalian natural populations. As the origin of this mutation is not known, the distance covered by the mutant gene remains unclear. If we assume that the mutation occurred halfway between Iida and Matsusaka, we can predict the migration distance to be approximately 85 km; however, if the mutation occurred at any other place, a longer distance would be predicted. Natural selection against albino tanuki may be relaxed because of a recent increase in food resources and refuge in urban areas.
INTRODUCTION
Oculocutaneous albinism is the most extreme mode of albinism. It is characterized, in mammals, by a white coat and red eyes resulting from the absence of the melanin pigment in the hair and retina, respectively (Steingrímsson et al., 2006). This phenotype is considered to result in a survival disadvantage. The main putative factors for this disadvantage include a decrease in UV blocking efficiency, a higher chance of being detected by predators and of being noticed by prey, and a decrease in visual acuity. Mutant alleles, even if they are recessive to the wild-type allele, tend to be eliminated from natural populations on a long time scale when strong natural selection acts against homozygotes. However, the situation in tanuki differs from this population genetics prediction. Animals exhibiting oculocutaneous albinism have been repeatedly found in the Central Alps area of Japan. According to the records of the Iida City Zoo, the zoo received tanuki with oculocutaneous albinism, which were found in vermin control traps, in July 1979, April 1990, November 1999 and January 2017. We used the newest animal for genetic analysis in our previous study (Mae et al., 2020). In addition, one of the authors of this paper is a professional wildlife photographer who has frequently photographed albino tanuki with motion-activated cameras set up at various locations in this area (Miyazaki, 1988). One interesting photograph, taken in August 2008, shows a family consisting of a mother with wild-type pigmentation and six offspring, of which four are wild-type and two are of albino phenotype (Fig. 1). Assuming that this albinism was caused by a recessive allele, the family variation observed indicates that the mother was heterozygous for the albino and wild-type alleles, and that the father was either heterozygous or homozygous for the albino allele. In our recent study (Mae et al., 2020), we analyzed the TYR gene of an albino tanuki that is cared for in Iida City Zoo. This animal, named Ryu, was born in the wild and found in a suburb of Iida City (Nagano Prefecture), which is located in the Central Alps area (Fig. 2). We showed that the albino phenotype of Ryu is caused by a mutation in the TYR gene (Mae et al., 2020). This gene, consisting of five exons, encodes the tyrosinase enzyme that catalyzes the first two reactions in melanin biosynthesis with no alternative pathway (Körner and Pawelek, 1982). The mutation was a deletion of approximately 11 kb that resulted in the loss of the third exon, which carries codons for four amino acid residues that bind copper and are essential for tyrosinase function (Olivares et al., 2002; Goldfeder et al., 2014). An important point related to the present study is that the deletion was not a simple removal of a segment: it exhibits a complex structure that is thought to have been formed by multiple mutational changes (see Fig. 4; models are shown in Mae et al., 2020). Ryu was homozygous for this structurally complicated allele. According to entry 203100 (oculocutaneous albinism, tyrosinase-negative; https://www.omim.org/entry/203100) of OMIM (Online Mendelian Inheritance in Man), the vast majority of TYR mutations that cause oculocutaneous albinism syndromes are recessive to the wild-type TYR allele. This suggests that the structurally complicated allele carried by Ryu is recessive.
In the present study, we conducted cloning and sequencing analyses of TYR from another albino tanuki. In November 2014, an albino adult hit by a car was found on a road in Matsusaka City (Mie Prefecture). This animal was sent to the Oouchiyama Zoo, treated medically, retained there and named Pong. Matsusaka is located outside the Central Alps area (Fig. 2). As described below, our analyses revealed a deletion of the same structure in Pong as that in Ryu. Based on the results obtained, we discuss possible factors for the wide distribution of the albino mutant allele in tanuki natural populations.
MATERIALS AND METHODS
Ethics
This study involved a recombinant DNA experiment that was approved in advance by the Recombinant DNA Experiment Safety Committee of Kyoto University (approval number 190058). The study did not include any animal experiments.
Tanuki animals
Pong was the animal whose genetic material was used in the sequencing analysis of TYR in the present study. For simplicity, Ryu and Pong will be denoted hereafter by TanA1 (TanA in our previous study) and TanA2, respectively. TanW is an animal with regular tanuki coloring that was used for comparison in our previous study. TanA1 and TanA2 exhibit oculocutaneous albinism, which is easily distinguishable from the wild-type pigmentation of TanW (Fig. 1).
PCR, cloning and sequencing
To achieve an accurate comparison with our previous study, we did not make any change in the PCR primers or condition settings for PCR, cloning or sequencing (see Mae et al. (2020) for experimental details, including primer names).
RESULTS
Structure of TYR exon regions
We conducted PCR amplification of all five exon regions, using genomic DNA from TanW, TanA1 and TanA2 (Fig. 3). The results reproduced our previous results: amplification of a single fragment of the expected length in all five exon regions from TanW DNA, and lack of amplification for the exon 3 region in TanA1. TanA2 exhibited an amplification pattern that was identical to that of TanA1.
Structure of the deletion
We cloned and sequenced the PCR fragments originating from TanA2. The nucleotide sequences obtained were identical to those of TanA1 in all clones. Wave patterns at and around the deletion breakpoints are shown in the lower part of Fig. 4.
Follow-up examination of DNA samples
Because the sequence data obtained from TanA2 were identical to those from TanA1, we considered that there might have been an accidental misuse of TanA1 DNA as TanA2 DNA. To address this possibility, we performed an additional experiment to check whether the mitochondrial DNA also exhibited identical sequences. Using the primers L15926 and H00651 for the control region (Kocher et al., 1989), we obtained PCR fragments from the same DNA samples of TanA1 and TanA2, as well as TanW, that were used for the main part of our analysis, and then sequenced them. As depicted in the alignment in Fig. 5, the TanA1 and TanA2 sequences were nonidentical. This confirmed that these DNA samples were from different individuals and excluded the possibility of misuse of the samples.
Figure 3 caption (in part): Orange triangles indicate the location and orientation of PCR primers to amplify each exon region (see Mae et al. (2020) for details). The double-headed arrow shows the region that is present in Tyr + but absent in Tyr del. (B) Results of PCR amplification. Above each electrophoresis photograph, the primers used and the sizes expected based on the Tyr + sequence are shown. No PCR product was observed in the TanW lane of the P3c/P3j panel. This result is in accordance with that obtained in a previous study (Mae et al., 2020). The reason is that there is an upper size limit in PCR using fecal DNA as a template, and the expected size (12,385 bp) exceeded this limit. In the previous study, we conducted additional PCR assays using four pairs of primers that divided this long region into four overlapping segments. All of these primer pairs produced fragments of the expected sizes, and sequence assembly resulted in a single sequence of 12,385 bp.
DISCUSSION
Origin of the mutant Tyr allele
As explained in detail in our previous study, the deletion in TYR in TanA1 exhibited a complex structure: two separate regions of 3.8 kb and 7.3 kb were removed, and a segment of 0.3 kb between these two regions was retained, but in the reverse orientation (Fig. 4). This complex structure was also found in TanA2, indicating that the deletion arose in a common ancestor of TanA1 and TanA2 and was inherited by them. In addition to the deletion, exon 5 of TanA1 TYR had previously been shown to carry two nonsynonymous base substitutions (c.1530G > C and c.1535G > C, leading to p.L500F and p.R502P, respectively) (Mae et al., 2020). In the present analysis, we found that exon 5 of TanA2 TYR also harbors these base substitutions (data not shown), providing further evidence that a single mutant TYR allele was inherited by TanA1 and TanA2. The TYR allele carrying this deletion will be denoted by Tyr del, and other alleles that are dominant over Tyr del and cause melanin pigmentation will be represented by Tyr +.
Expansion in natural populations
Iida and Matsusaka are approximately 170 km apart (Fig. 2). The present work is, to our knowledge, the first report of an albino mutant gene that is widespread in mammalian natural populations. Because the origin of Tyr del is not known, the distance over which the Tyr del allele has migrated in natural populations remains unclear. If we assume that the mutation occurred halfway between Iida and Matsusaka, we can predict the migration distance to be approximately 85 km. If its source differs, then either Tyr del in TanA1 or Tyr del in TanA2 must have migrated over a longer distance. The greater the migration distance, the longer the allele is considered to have persisted in natural populations. From the viewpoint of population genetics, it needs to be taken into consideration that Tyr del causes the most extreme mode of albinism and is therefore likely to have deleterious effects on its host individuals in natural circumstances. This raises the question of what factors contributed to the survival and expansion of Tyr del.
Possibility of mediation by humans
One possibility is that an animal carrying Tyr del was transported by humans. For example, a tanuki kept as a pet may have been released at a different place than where it originally lived, or a tanuki may have wandered into a vegetable container that was shipped from farmland to market. These are quite improbable scenarios but cannot be ruled out at present.
Possibility of adaptation to urban environments
Next, we assume that transport by humans is not involved in the Tyr del migration. Adaptation of tanuki to urban environments may be an important factor in this allele migration. Urban development by humans pushes wildlife toward elimination in general, but tanuki may be an exceptional species. Competition among tanuki individuals may be weakened because of a recent relative increase in food resources and refuge in urban areas. Weakened competition may relax natural selection against individuals of the albino phenotype. Relaxed natural selection is expected to increase the relative contribution of random genetic drift to the frequency change of the Tyr del allele.
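This argument can be made concrete with a toy Wright-Fisher sketch (not part of the original study); the population size, selection coefficients, and starting frequency below are hypothetical values chosen only to illustrate how relaxing selection against albino homozygotes changes the expected fate of a recessive allele such as Tyr del:

```python
# Illustrative Wright-Fisher sketch: a recessive allele whose homozygotes have
# fitness 1 - s_hom, followed under selection plus binomial drift. All
# parameter values are hypothetical and chosen for illustration only.

import random

def simulate(q0=0.05, n_diploid=500, s_hom=0.5, generations=200, seed=1):
    """Return the final frequency q of the recessive allele after the given generations."""
    random.seed(seed)
    q = q0
    for _ in range(generations):
        p = 1.0 - q
        # Allele frequency after selection acting only on recessive homozygotes
        w_bar = p * p + 2 * p * q + q * q * (1.0 - s_hom)
        q_sel = (p * q + q * q * (1.0 - s_hom)) / w_bar
        # Random drift: sample 2N gametes for the next generation
        q = sum(random.random() < q_sel for _ in range(2 * n_diploid)) / (2 * n_diploid)
        if q in (0.0, 1.0):
            break
    return q

print("strong selection :", simulate(s_hom=0.5))
print("relaxed selection:", simulate(s_hom=0.05))
```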
Establishment of a survey method
To test this hypothesis, it is desirable to develop an experimental method to easily distinguish the Tyr +/Tyr del genotype from Tyr +/Tyr +. By applying it to wild tanuki animals, we can estimate the degree of geographical expansion of Tyr del. Fortunately, this method has already been established in a previous study (Mae et al., 2020) as well as in the present study. Tyr del yielded a 2.1-kb fragment upon PCR using the P3c and P3j primers (Fig. 3). It is also beneficial that fecal samples are a usable source of genomic DNA. Tanuki have a habit (called "tamefun" in Japanese) of defecating at a fixed place on the ground, which is shared by all family members. A survey at various sample collection sites may reveal a geographical expansion of the Tyr del allele over a wider range than that shown in the present study.
"Biology"
] |
Design and Fabrication of Strong Parts from Poly (Lactic Acid) with a Desktop 3D Printer: A Case with Interrupted Shell
The ability to form closed cavities inside the printed part is an important feature of Fused Filament Fabrication technology. A typical part consists of a dense shell bearing the primary load, filled with a low-density plastic scaffold (infill). Such a constitution of the part provides in most cases appropriate strength and low weight. However, if the printed part shape includes horizontal (orthogonal to the printer's Z axis) flat surfaces other than its top and bottom surfaces, then the shell of the part becomes interrupted, which may lead to a drastic drop in the ability of the part to withstand loads. In the current study, a representative sample part with an interrupted shell and a testing apparatus were developed. The influence of shell and base thicknesses, as well as of infill density, on the part strength is studied. Different approaches to the sample shape modification were applied and tested. The part shape optimization, made with respect to the peculiarities of Fused Filament Fabrication technology, resulted in an increase of the force required to fracture the part from 483 to 1096 N and a decrease in part mass from 36.9 to 30.2 g.
Introduction
Modern digital additive fabrication machines (3D printers) are often subdivided into "desktop" (personal), "professional," and "industrial" ones. The main criterion determining whether the particular model belongs to one or another segment is the cost of the device. Thus, the desktop machines usually cost up to US$5000 (while most of them in this segment are below US$2000) [1]. The cost of most "professional" models is denoted with five digits, and the "industrial" machine rates are of six digits. The category of desktop additive machines features devices based on resin curing [2] and even on laser sintering [3], but the absolute majority of them are working on the principle of FDM [4][5][6] or FFF [7]. The broad spread of FFF technology happened in the last decade thanks to the RepRap [8][9][10] and other open projects.
Due to the low cost of the devices and of consumables for them, desktop FFF 3D printers are becoming competitors to traditional production processes. A small business with 10 to 30 3D printers can effectively compete in the market for fast production of small batches with companies using the technology of vacuum molding of plastics or RIM. Thousands of 3D printers owned by individual users can quickly accept an order to manufacture a batch of thousands of parts using platforms such as 3D Hubs [11]. The potential of desktop (home, amateur, or personal) devices is huge.
Figure 1. Two categories of stressed layered parts, whose interlayer bonding is not (I.) or is (II.) critical. Category I. includes cases of compression along the Z axis with no buckling (a) and compression (b) and tension (c) orthogonal to the Z axis. Category II. includes cases of tension (d) and bending (e) along the Z axis and torque or shear orthogonal to the Z axis (f).
The first subcategory includes parts with the shell not being interrupted during the printing process: polymer threads forming the shell of the next layer lie completely or partially on the threads forming the shell of a previous layer. An example of such part is the "hammer" [44] or the "box spanner adapter" [45]. The strength of parts from this subcategory is determined primarily by the strength of its shell. Accordingly, a simple and efficient recipe for increasing the strength of such parts is to invest material and printing time into shell formation. Ways to increase the shell strength (layer cohesion) for PLA FFF parts are studied in References [46,47].
Finally, the second subcategory comprises parts which have a shell that is interrupted one or more times during the printing process, that is, there are times when the threads forming the shell of the next layer lie on the filaments forming the base, infill, or support. Such models can be exemplified by the "Spool holder" [48], "Plane Handle" [49], or the "Hook" [50]. These parts have flat faces parallel to the XY plane outside the upper and lower bases of the part. Examples of parts of all the categories and subcategories considered are shown in Figure 2.
Behavior of critically stressed parts with shell interruptions (II-b.) seems to be the least predictable in comparison with the others (I and II-a.). The hypothesis is that the interruption of the shell in the most stressed area of a loaded 3D printed part will result in unacceptably low mechanical performance. Thus, the current work is devoted to the study of parts with interrupted shells and to ways of increasing the strength of such parts by modifying their shape with respect to their constitution.
Sample Shapes
An item consisting of two connected coaxial cylinders of large (boss) and small (shaft) diameters was selected as a representative part with an interrupted shell (Figure 3a). As a test procedure, a radial load is applied to the shaft while the boss is rigidly fixed (Figure 3b). Obviously, the most loaded (and the weakest) area of the part is the junction between the shaft and the boss, where the shell is interrupted.
Based on the shape described above and keeping the main dimensions intact, four extra CAD models were prepared and tested in attempts to improve the part mechanical performance (Figure 4).
Shape 2 represents the traditional approach to improving loaded part geometry: adding a fillet to the critical corner. The fillet literally rounds the sharp transition, removing the apparent stress concentrator, and also adds material to the most loaded area. Adding a fillet with a radius of 6 mm significantly increases the interface area between the shaft and the boss.
Shape 3 represents an intuitive attempt to solve the interrupted-shell issue made with respect to the 3D printed part constitution (the superposition of threads forming shells, bases, and infill). When working with traditional manufacturing methods, whether molding, casting, forming, or machining, it is impossible to make the part stronger by removing some of the CAD model volume. In the case of FFF technology, removing some of the volume from the CAD model does not necessarily imply a reduction of the physical product mass. Additional open or closed cavities in the model lead to the appearance of additional shells in the slicer (and in the printed part). The external dimensions of Shape 3 are the same as those of Shape 1, but axial and radial cuts are added. As a result, the thread deposition paths are generated completely differently: the shaft shell now passes through the boss and is printed from the very printer table. In effect, the shell becomes continuous. Since the Shape 3 sample is formed exclusively from the shell and 100% infill, it can be considered a solid body and analyzed with computer-aided simulation techniques. An example of adequate numerical simulation of a loaded 3D printed PLA part can be found in References [37,51]. In the current paper, the SolidWorks Simulation extension of SolidWorks 2017 was used. Figure 5 shows the stress distribution in the loaded areas of Shape 3.
After several cycles of shape modification and stress analysis, Shape 4 was designed, in which the calculated stresses are distributed between the shaft and the base (Figure 6). At the same time, it fully fits into the volume of the basic shape. Due to the existence of a through hole along the shaft axis, there are two shells (as in Shape 3) instead of one (as in Shapes 1 and 2): an inner and an outer one. The inner shell of the Shape 4 part is continuous; the outer shell is interrupted, but it stands on a 100% infill foundation.
Finally, Shape 5 represents a combined approach, with modification of the initial part both by adding and by subtracting CAD model volume. Shape 5 was obtained by removing volume from the least loaded sections of Shape 2. The fillet (added volume) provides a gradual transition from the shaft to the boss, while the axial cut leads to the formation of an extra continuous shell.
For the CAD models with relatively large volumes (Shape 1 and Shape 2), eight different configurations were tested. Three parameters describing the 3D printed part constitution were varied at two levels each: shell thickness (1.2 and 2.4 mm), base thickness (0.6 and 1.2 mm), and infill value (20 and 60%). For the CAD models with lower volume, a single configuration was tested: shell thickness 1.8 mm, 100% infill, and no bases (base thickness 0 mm).
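A small sketch enumerating that full-factorial test matrix; the dictionary keys are generic labels chosen here for illustration, not slicer parameter names:

```python
# Enumerate the 2 x 2 x 2 full-factorial print-parameter matrix described above
# for Shapes 1 and 2 (eight configurations in total).

from itertools import product

shell_mm = (1.2, 2.4)
base_mm = (0.6, 1.2)
infill_pct = (20, 60)

configs = [
    {"shell_mm": s, "base_mm": b, "infill_pct": i}
    for s, b, i in product(shell_mm, base_mm, infill_pct)
]
for k, cfg in enumerate(configs, start=1):
    print(f"config {k}: {cfg}")
print(f"total configurations: {len(configs)}")  # 2 x 2 x 2 = 8
```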
Samples Fabrication
A desktop Ultimaker 2 (Ultimaker B.V., Geldermalsen, Netherlands) printer was used to produce all the samples. The specific machine used differs from the mass-market model in having an alternative feed mechanism of the BondTech (Bondtech AB, Värnamo, Sweden) brand, built on a stepper motor with an integrated gearbox and drive to both feed rollers, and an alternative 3D Solex (Cepta AS, Oslo, Norway) heating unit with an increased-power heating element (~50 W). The alternative heating unit, unlike the stock one, allows changing nozzles. In this series of experiments, a brass nozzle with a channel diameter of 0.6 mm was used instead of the 0.4 mm standard nozzle of the stock Ultimaker 2 hot end.
Poly(lactic acid), or PLA, was used as the material for sample fabrication. The main advantage of PLA in comparison to other polymers and blends used for FFF 3D printing is its low level of shrinkage and relatively low melting temperature. Other advantages of PLA include its biodegradability, absence of unpleasant odors when heated, and its overall environmental compatibility in all aspects of the life cycle. PLA emits ten times fewer potentially dangerous ultra-fine particles [52] than ABS and can withstand at least 25 kGy of gamma irradiation with no degradation of mechanical properties [53]. The most important disadvantages of PLA are its relatively low softening temperature (the Vicat point is 55 °C), which makes it incompatible with elevated-temperature environments, and the deterioration of mechanical properties caused by hydrolysis [54], which makes it incompatible with wet environments. A turquoise PLA filament was used, produced by REC Company (Moscow, Russia). All the material came from the same batch, produced in June 2018 according to the labels (six months before the experiment). The claimed diameter of the filament was 2.85 mm, but the actual average diameter, calculated from 60 measurements of six different spools, was 2.83 mm with a standard deviation of 0.02 mm. This specific manufacturer of filament was chosen due to the locally produced material and the desire to obtain results comparable with previous studies [46,47]. Papers [46,55] show that, all other characteristics being identical, the filament color influences the strength of the products made from it. Thus, filament of the same color was used as in the previous studies.
For each observation mentioned in the work, a lot of five samples was made and tested. The paper presents the average values for each test lot, while the standard deviation is indicated after the average value in brackets.
The sample was placed at the center of the printer bed. The G-code file was prepared using Cura 15.02.1 software (slicer). All samples were weighed before mechanical testing using digital analytical scales ViBRA LF Series (Shinko Denshi Co. LTD, Tokyo, Japan). Measurement results were rounded to one decimal digit.
Mechanical Testing
Sample strength tests were carried out on a standard universal electromechanical testing machine IR 5057-50 (OOO Tochpribor, Ivanovo, Russia) with a digital control system. The samples were fixed with a specially designed and manufactured device (Figure 7). That fixture was mounted on a movable traverse of the testing machine. The top roller from the three-point bend test kit was used to apply load on the sample shaft.
The tests were carried out at a constant speed (10 mm/min) and continued until the sample was destroyed. During the tests, displacements and loads were recorded. The reference point was the state of the machine with a load of 5 N applied to eliminate mounting clearances.
The part strength was assumed to be equal to the fracture load (load at which the first apparent crack appears). Along with absolute strength, the relative strength (fracture load related to the sample mass) was also considered.
Basic Shape
The test results for eight configurations of Shape 1 are shown in Table 1. The loading curves of the characteristic representatives of each of the lots tested are shown in Appendix A, Figure A1. In general, the strength of all the considered configurations is very modest. The shaft itself is quite durable, but it becomes easily separated from the boss (all of the samples examined were destroyed at the interface between the shaft and the boss; see Figure 8). The reason for this lies in the fact that the strong shaft stands on the loose base of the thread grid forming the boss infill. The connection between the shaft and the boss passes through the infill and along the boundary between the upper base of the boss and the shaft shell. In other words, the part becomes weak due to the shell interruption. As can be seen from Figure 9, increasing the base thickness and infill percentage has a noticeable effect on the part strength, while the shell thickness has minimal influence. Thus, the rule that works well for parts with a continuous shell (in order to increase the part strength, it is necessary first of all to invest time and material into the shell) is absolutely inapplicable to parts of the shape considered. Acceptable values of part strength can be achieved by further increasing the infill value (up to 100%), but this will obviously lead to a significant increase in the part mass. Considering relative values, one can see that increasing the infill density from 20 to 60% leads to a negligible increase in relative strength. It is more rational to modify the part shape while keeping the main (coupling) dimensions intact.
Shape Modification-The Traditional Approach (Shape 2)
The test results for eight configurations of Shape 2 are shown in Table 2, and characteristic loading curves are presented in Appendix A, Figure A2. As can be seen from the results, simply adding a fillet dramatically affects the strength of the part. The smallest recorded strength for Shape 2 samples exceeds the maximum recorded for Shape 1. Moreover, the nature of the parameter influence changes completely. Samples with thicker shells (2.4 mm) and low infill density (20%) break at the interface between the boss and the fillet. All others fail at the boundary between the fillet and the cylindrical part of the shaft (Figure 10). Accordingly, in most cases the base thickness does not have any influence, since the material forming the base does not lie in the critical zone. The difference in results between samples that vary only in base thickness is statistically insignificant. Thus, the number of configurations considered for Shape 2 samples can be reduced from eight to four (Figure 11). The infill density has a great influence on the absolute part strength for Shape 2, but for the relative strength, shell thickness becomes paramount: an increase in the infill rate from 20 to 60%, with other things remaining unchanged, leads to a decrease in the relative part strength.
Modifying the Shape with FFF Technology Specificity in Mind (Shape 3)
The characteristic test curve for the Shape 3 sample is shown in Appendix A, Figure A3. Sample destruction occurred over the shaft section, slightly submerged (1-3 mm) into the boss (Figure 12). The average absolute strength of Shape 3 samples was 426 (18) N with a mass of 27.0 (0.1) g; the relative strength of the part was accordingly 15.8 N/g. The results obtained are inferior to the best absolute records obtained for Shape 1, but exceed the best relative ones. The results are significantly inferior to those obtained for Shape 2. It is important to note that Shape 3 fits into the basic shape volume, while Shape 2 has an element (fillet) protruding beyond its dimensions.
Shape Optimization using CAE (Shape 4)
In all previous cases, the load at which the crack occurred was the largest load on the testing curve. Four of the five Shape 4 samples tested exhibited a different behavior under critical loads (Appendix A, Figure A4). After the appearance of the first crack, the load required for further deformation continues to increase. That is, the appearance and growth of a crack does not immediately lead to the separation of the shaft from the boss, and the crack appears and grows in the boss part of the sample (Figure 13, fracture A). One of the five specimens tested fractured at the point of transition of the cylindrical shaft into the conical inlet (Figure 13, fracture B). Returning to the stress distribution in Figure 6, it can be seen that the computer simulation revealed two critical zones. As the experiment showed, sample destruction is possible in each of them, with different probability.
The more likely fracture (Figure 13, fracture A) passes through the entire boss, which is formed by 100% infill. Due to the mutually orthogonal arrangement of the threads in different infill layers, the crack in the sample grows not only along the borders between the individual plastic threads, but also across the threads. It is the latter phenomenon that ensures the ductile nature of the sample destruction. If the Shape 4 part strength is defined as the load corresponding to the crack appearance, the average strength is 662 (51) N with a mass of 27.6 (0.1) g. Accordingly, the relative strength of the part was 24.0 N/g.
Combining Approaches (Shape 5)
The fracture of all Shape 5 samples occurred where the cylindrical shaft transitions into the fillet (Figure 14); the characteristic test curve is shown in Appendix A, Figure A5.
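As a quick numerical check of the relative-strength metric, the sketch below recomputes it from the load and mass values reported in this text; pairing the abstract's 483 N / 36.9 g and 1096 N / 30.2 g figures with the baseline and optimized shapes is an assumption made here for illustration:

```python
# Relative strength = fracture load / sample mass (N/g), recomputed from
# reported values. The "baseline" and "optimized" rows use the abstract's
# figures; their attribution to specific shapes is an assumption.

samples = {
    "Shape 3 (text)":       (426.0, 27.0),
    "Shape 4 (text)":       (662.0, 27.6),
    "baseline (abstract)":  (483.0, 36.9),
    "optimized (abstract)": (1096.0, 30.2),
}
for name, (load_n, mass_g) in samples.items():
    print(f"{name:22s}: {load_n / mass_g:5.1f} N/g")
# The text values reproduce the reported 15.8 N/g and 24.0 N/g.
```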
Summary
A summary of the results is shown in Figure 15. Shape 1 can be considered an example of poor design: the sharp transition from the boss to the shaft is a flaw even for traditionally manufactured parts. In the case of 3D printing, such a transition implies shell interruption and the appearance of a critical weak spot. As shown by the results for Shapes 3 and 4, redistribution of material within a given, initially flawed, shape can significantly increase the part strength, but the effect of geometry optimization taking into account 3D printing features is less significant than the effect of geometry modification performed in accordance with the basic principles of designing products for convenient manufacturing (samples of Shape 2). The maximum effect is achieved by the combined approach (Shape 5). Additional reserves for improving the strength of an optimized shape can be sought in optimization of the technological parameters of the printing process.
Conclusions
In order to increase the strength of an FFF part, its geometry can be optimized both by adding volume to the CAD model (rounding, adding fillets, smooth transitions) and by introducing cavities, thus providing extra shells in the critical sections and converting interrupted shells into continuous ones. This technique of shape optimization for the FFF production technology differs from that for traditional production methods (such as casting, forming, machining), as well as for other digital additive technologies (SLA, SLS, LOM), where extra cavities would not contribute to the absolute strength of a part.
Computer simulation methods are applicable for analyzing the behavior of FFF part models under load, at least at a qualitative level, provided the parts are printed with 100% infill density (or without invoking infill at all).
Acknowledgments
The authors are grateful to the reviewers, whose advice helped to improve the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
In the current work, the following notions are introduced and the following shorthand is used:
FDM
Fused Deposition Modeling, a technology of digital additive manufacturing based on layered deposition of melted thermoplastic;
FFF
Fused Filament Fabrication, which is the same as FDM, with the only difference being that FDM is a trademark of Stratasys Inc., while FFF is a term coined inside the RepRap community. Since the current study involves an open-source 3D printer, the term FFF is used;
Filament
Plastic in filament form used as the supply in the FFF process;
Thread
An extruded and deposited thread of plastic forming the FFF part;
Shell
A component of FFF part, reproducing the lateral surface of 3D model. The shell of a layer consists of single or multiple equidistant perimeters formed by the thread. The number of perimeters and the thread wideness are responsible for the shell thickness;
Infill
An internal component of an FFF part, formed by threads in orthogonal straight lines. The distance between the threads defines the infill density;
Base
A component of an FFF part, reproducing flat surfaces parallel to the 3D printer bed. The base may consist of multiple layers, usually printed mutually orthogonally in the XY directions, and can be disabled;
Fracture load
The load observed during mechanical testing at which the sample exhibits the first crack; in the context of this study, the fracture load is taken as the sample strength;
Relative Strength
A ratio of sample fracture load to its mass;
Flow rate (mm³/s)
The volume of plastic delivered through the nozzle per unit time;
Printing speed or Feed rate
The speed of the nozzle traveling across the XY plane while extruding the plastic.
Appendix A. Typical Loading Curves of Tested Samples
Figure A1. Typical loading curves for Shape 1 samples manufactured in different configurations. | 8,318.8 | 2019-04-30T00:00:00.000 | [
"Materials Science"
] |
Numerical Studies on Teeter Bed Separator for Particle Separation
Teeter Bed Separators (TBS) are liquid–solid fluidized beds that are widely used for the separation of coarse particles in the coal mining industry. The coal particles settle in the self-generating medium bed, resulting in separation according to density. Due to the existence of self-generating medium beds, it is difficult to study the sedimentation of particles in TBS through experiments and detection methods. In the present research, a model was built to investigate the bed expansion characteristics with water velocity based on the Euler–Euler approach, and to investigate the settling of foreign particles through the bed based on the Euler–Lagrange approach in TBS. Results show that separation of particles in TBS should be carried out at low water velocity under the condition of a stable fluidized bed. Large particles have a high slip velocity and easily flow through the bed into the light product, leading to mismatch. The importance of the self-generating bed for the separation of particles within narrow size ranges is clarified. The model provides a way to investigate the separation of particles in a liquid–solid fluidized bed and provides suggestions for the selection of operating conditions in TBS applications.
Introduction
Fluidization technology has been widely used in chemical reactions, mineral/coal separation and pneumatic conveying in the past few decades [1][2][3][4]. Teeter bed separator (TBS) is one of liquid-solid fluidized beds mainly applied to the separation of coal fines and mineral, desliming and tailings recovery [5][6][7][8]. It is one kind of gravity separators based on the principle of fluidization and hindering settling, in which particles with a wide range of densities and sizes have different settling velocities. The heavy particles have a higher settling velocity and settle through a multi-particle suspension consisting of water and intermediate mass particles, while the light particles have a lower settling velocity and rise through the suspension, thereby achieving the particle separation.
There are many investigations focusing on the application of TBS. For improving the separation efficiency, several measures were taken on the equipment, such as optimization of the feeding structure, installation of inclined plates and introducing of the pulsating water flow and bubbles and thus, a series of new separators were produced [9][10][11][12]. It has been reported that new separators were well applied in mineral sand, low grade iron ore, chromite and coal fines in industry, as well as pyrite, phosphate ore, heavy minerals, electronic waste and hematite in laboratory and pilot plants [13][14][15]. Applications separating these materials have been illustrated the improvement for separation performance of TBS. Moreover, the influence of operating variables, including bed density, water velocity, feed concentration and feed rate on classification or separation performance for coal and non-coal materials was studied [16][17][18]. Research showed that water velocity and bed density are much crucial for particles separation or classification and the significance of the parameters on cut size/density, possible particle size to perform numerical calculation. In other words, the simulation is conducted at a particle scale. The forces acting on particles are not calculated through the existing models but obtained by integrating viscous force and pressure force on the particle surface. There is no doubt that this method needs a huge amount of computational calculation. To better use this method, two methods called fictitious domain method and immersed boundary method have been developed to reduce the computational consumption, in which the domain grids will not change with flow time. Macro and micro information such as the law of particle force, particle trajectory, the velocity of particle and fluid and particle pulsation can be obtained by using these methods [45,46]. The dynamic mesh method, in which the mesh dynamically changes with flow time, is more practical for calculating the flow field due to the boundary motion. A good case is to simulate the change of flow field in a cylinder with piston motion. The method was also utilized to simulate the particle motion. Mitra et al. [47] studied the collision process between one glass bead and one water droplet. Ghatage et al. [32] predicted the settling velocity of a 6 mm foreign particle in a liquid-solid fluidized bed by the E-E approach coupled with dynamic mesh method. Presently, DNS method is still limited by the total number of particles.
As has mentioned, particles in TBS are fine in size and have wide range of densities. they move through the self-generating medium bed, resisting counter water flow and forces from fluidized particles. In the experimental investigations of Grbavčić and Vuković [28] and Van der Wielen et al. [30], they conducted the experiments by preparing special fluidized particles, using stopwatch and laying strong light behind the bed to trace the particle in a column. Even though PIV (particle image velocimetry) detected technology was utilized in the experiment of Ghatage [32], the fluidized particles made of glass were required to be larger in size for a good transparency. Based on the work of Richardson and Zaki [22], Galvin [21] proposed an empirical expression of the particle slip velocity in combination with the actual parameters of bed pressure in TBS. We can see that the experimental investigations are always difficult for high costs, limited experimental conditions and detection tools. However, the particle motion is fundamental and most important, whether it is for the evaluation of operating parameters on separation performance or a good prediction and optimization on the separation process. Numerical literature review demonstrated that the particle motion behavior in liquid fluidized bed can be simulated well. More details of particle-liquid flow were obtained which can be used to make up for the defects of the experiment effectively. It was also found that limited efforts have been put on numerical studies on the characteristics of teeter bed and particle settling behavior in TBS specifically. It is thought desirable to complete studies on two aspects for a good understanding of the separation process. Considering applicability of three numerical methods, we proposed that the pseudo-fluid method is suitable to simulate the fluidized bed composed of a large number of fine particles with less computational consumption. Discrete phase method is more preferable to track the motion of particles through fluidized bed.
Thus, an attempt is made to study the change of bed characteristics with operating water velocity and the settling behavior of particles introduced to the bed based on a combination of the Euler-Euler and Euler-Lagrange approach. The simulation results are compared with the empirical formulas from literature studies. In addition, the influence of operating water velocity on the separation is analyzed from the perspective of flow regimes in bed. The significance of this research is to establish a reasonable approach to study the fundamentals of particle-liquid flow in TBS numerically. The empirical formulas for comparison with simulated results are summarized in Table 1.
Model Description
The simulation was carried out on a commercial software, called FLUENT 15.0 based on the finite volume method. The present modeling of the fluidized bed (teeter bed) was based on a two-dimensional Euler-Euler approach where the fluidized particles are considered as continuous phase, namely pseudo-fluid. The resolve of their equation is based on the kinetic theory of granular flow. Then the Euler-Lagrange approach is combined to model the foreign particles motion in which particle is treated as discrete phase. The relevant governing equations for two approaches have been well developed and documented in the literature [34,35,52]. For a clear understanding, here, the models are briefly described below.
Governing Equation for Liquid and Solid Phases
The liquid and solid phases are considered as continuous phase, respectively, in the Euler-Euler approach, so they have the similar mass and momentum transport equations. These equations can be written as follows: The mass transport equations for liquid and solid phases are: The momentum transport equations for liquid and solid phases are: where = τ S presents stress tensor, Pa; p S presents the solid pressure, Pa. The interphase exchange coefficient (K SL ) is modeled using the Huilin-Gidaspow model which is suitable for both high-concentration system and dilute system and it is improved based on the coefficient of Wen and Yu and Eurgun [35]. The Huilin-Gidaspow model coefficient is defined as: where: For ε L > 0.8 where: For ε L < 0.8 where: If one foreign particle reaches the force balance in teeter bed, the motion equation for this individual particle is: d where g is gravitational acceleration originated from particles gravity. The term of CD 2 presents drag force. F other is other force caused by virtual mass force, pressure gradient force and Saffman lift. CD 2 is defined as follows: where a 1 , a 2 and a 3 are constants that apply over several ranges of Re (Re D ),
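The display equations of this subsection were lost in extraction. As a hedged reconstruction, the standard two-fluid (Euler–Euler) transport equations and the Lagrangian equation of motion for a discrete particle are usually written as below; the exact notation, stress closures, and the Huilin–Gidaspow blending of the Wen–Yu and Ergun drag coefficients used by the authors may differ in detail.

```latex
% Assumed standard forms (q = L for liquid, q = S for solids); not the authors' exact equations.
\begin{align}
&\frac{\partial}{\partial t}(\varepsilon_q \rho_q) + \nabla\cdot(\varepsilon_q \rho_q \mathbf{u}_q) = 0,\\
&\frac{\partial}{\partial t}(\varepsilon_L \rho_L \mathbf{u}_L) + \nabla\cdot(\varepsilon_L \rho_L \mathbf{u}_L \mathbf{u}_L)
 = -\varepsilon_L \nabla p + \nabla\cdot\overline{\overline{\tau}}_L + \varepsilon_L \rho_L \mathbf{g}
   + K_{SL}(\mathbf{u}_S - \mathbf{u}_L),\\
&\frac{\partial}{\partial t}(\varepsilon_S \rho_S \mathbf{u}_S) + \nabla\cdot(\varepsilon_S \rho_S \mathbf{u}_S \mathbf{u}_S)
 = -\varepsilon_S \nabla p - \nabla p_S + \nabla\cdot\overline{\overline{\tau}}_S + \varepsilon_S \rho_S \mathbf{g}
   + K_{SL}(\mathbf{u}_L - \mathbf{u}_S),\\
&\frac{d\mathbf{u}_D}{dt}
 = \frac{18\mu_L}{\rho_D d_D^2}\,\frac{C_D\,\mathrm{Re}_D}{24}\,(\mathbf{u}_L - \mathbf{u}_D)
   + \frac{\rho_D - \rho_L}{\rho_D}\,\mathbf{g} + \mathbf{F}_{other}.
\end{align}
```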
Simulation Conditions
A 2D geometric model with a diameter of 120 mm and height of 800 mm is utilized in the present simulation and the details for physical and geometrical parameters are given in Table 2. The mesh is divided with the ICEM CFD software contained in the software package of Ansys15.0. The mesh size is selected as 2 mm based on the literature [34,35] in which particles have the similar size and density with present work, and the Euler-Euler approach are also used to simulate the fluidized bed. In the literature study, the mesh size of 0.5 mm, 1 mm, 2 mm, 3 mm and 3.5 mm is set to explore the mesh-independent simulation. It is found that the bed voidage does not change below 2 mm size. Therefore, we choose the same mesh size of 2 mm. The literature also conducted investigations for the independence of time step and the value of 0.001 s is referred to the present study. In our paper, these setting parameters are finally compared with R-Z model in the next section of result and discussion. Choosing a right turbulence model is also crucial for the accuracy of the simulation. Although the operating water velocity is small, the strong turbulence existed in the bed is obvious. The standard k-ε is incorporated in the simulation of fluidized beds extensively and it exhibits a good description for the turbulence flow in bed. The theory about k-ε turbulence model can be referred to the publications [35]. The collision coefficient between fluidized particles is used with a default value of 0.9. The SIMPLE algorithm is employed to solve the pressure-velocity coupling and the governing equations are discretized by the second-order upwind scheme. Table 2. Simulation conditions and parameters.
Parameters Values
Width of the column 120 mm Height of the column 800 mm Diameter of fluidized particle 1 mm Density of fluidized particle 2607 kg/m 3 Diameter of foreign particle 2, 4, 6 mm Density of foreign particle 2607 kg/m 3 The whole simulation is performed in two steps. The first step is the study of the change of bed properties with water velocity based on the Euler-Euler approach. In this step, fluidized particles with size of 1 mm and a density of 2607 kg/m 3 filled the region of 120 mm width × 200 mm height of the column with an initial volume fraction of 0.55. The water flow is introduced from the bottom of the column and the boundary condition of the bottom is defined as velocity inlet. The boundary condition of the top of the column is set as pressure outlet, and two vertical walls are considered as walls with no slip boundary conditions. When all things are prepared well, the teeter bed begins to be fluidized under a wide variety of water velocities of 0.008-0.056 m/s. The judgment of the steady state of fluidization is that the calculated residuals is converged, and the particle volume fraction does not change with the simulated time. After the bed fluidization has stabilized. The DPM (discrete phase model) in FLUENT 15.0 based on Euler-Lagrange approach is started to inject foreign particles from the top of the column separately, and to track their classification velocity through the bed. The final data can be exported from FLUENT 15.0 by the selection of "export" option, and the subsequent data processing is conducted in Microsoft Excel. A schematic diagram of the bed fluidization and particle settling in the bed appears in Figure 1. According to Figure 1, the slip velocity of one foreign particle through the teeter bed can be calculated by using Equation (13): where u D is the slip velocity of foreign particle, also called relative velocity. u D presents the classification velocity which is the absolute velocity observed relative to the column. u L0 is the interstitial liquid velocity.
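Since the symbols of Equation (13) are described above only in words, the following Python sketch illustrates one plausible reading of it; the sign convention and the sample velocities (other than the water velocity) are assumptions, not values taken from the paper.

```python
# A minimal reading of Equation (13): slip (relative) velocity of a foreign particle
# with respect to the interstitial liquid. The exact sign convention is not reproduced
# in the extracted text; here the downward classification velocity and the upward
# interstitial water velocity are both taken as positive magnitudes, so they add.

def interstitial_velocity(u_superficial: float, eps_l: float) -> float:
    """Interstitial liquid velocity u_L0 from the superficial water velocity."""
    return u_superficial / eps_l

def slip_velocity(u_classification: float, u_superficial: float, eps_l: float) -> float:
    """Slip velocity of a settling foreign particle relative to the liquid (assumed convention)."""
    return u_classification + interstitial_velocity(u_superficial, eps_l)

# Example values: only the superficial velocity lies within the simulated range;
# the classification velocity and liquid fraction are hypothetical.
u_cls = 0.09   # classification velocity relative to the column, m/s (hypothetical)
u_sup = 0.016  # superficial water velocity, m/s
eps_l = 0.45   # liquid volume fraction (solids fraction ~0.55)
print(f"u_slip = {slip_velocity(u_cls, u_sup, eps_l):.3f} m/s")
```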
The Bed Expansion Characteristics
In coal processing, a large amount of narrowly sized particles are fluidized in counter-current water flow to form the teeter bed. Therefore, water velocity is a key factor affecting the bed characteristics and an important operating parameter for coal separation. In this section, mono-sized particles were assumed to form a self-generating medium bed. Figure 2 shows the change of bed expansion with fluidization velocity varied from 0.008 to 0.056 m/s. It can be seen that the bed was fluidized at all water velocities. For the lower water velocities of 0.008 m/s and 0.01 m/s, the bed height was lower than the initial value, and the corresponding volume fraction was larger than the initial value of 0.55. As the water velocity increases, the bed expansion gradually increases, and at a water velocity of 0.016 m/s the bed height exceeded the initial state. For water velocities in the range of 0.008-0.016 m/s, the consistent color in the figure revealed that the local volume fraction of particles was uniform along the whole column. When the water velocity increases further, especially at the higher value of 0.056 m/s, the uniformity of the bed deteriorates because of the strong turbulence caused by many eddies. This phenomenon can also be seen in the flow patterns of the bed in Figure 3. The liquid flow along the column was stable and very regular at the relatively low water velocities, and the particle flows were small and symmetric, which helped achieve the homogeneous regime. However, at high water velocity the regularity of the flow state was disturbed, as large circulation flows of liquid and particles increase the turbulence intensity, resulting in a heterogeneous regime in the bed. The simulated bed expansion was further examined against the Richardson-Zaki (R-Z) model [22] by comparing the model data at the present water velocities. The R-Z model has been used intensively in several publications to compare study results with model values. The predicted values of the present model were also compared with parts of the experimental and simulated data of the publications [34,35,53], as summarized in Figure 5. It can be seen from Figure 5 that the present values exhibit good consistency with the experimental data of the literature and the R-Z model; thus, Figure 5 provides a good illustration of the rationality of the present simulated values. For a clearer observation, Figure 4b-d present these comparisons, and Figure 4d shows how the pressure drop changes with water velocity. It can be observed that the bed pressure drop hardly varies with water velocity, and the simulated value was mostly around the theoretical value of 1736.06 Pa in Figure 4d, calculated by Equation (14). This phenomenon again demonstrates the validity of the simulation results.
where L presents the initial bed height. Its value was 200 mm in the present study.
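Equation (14) itself is not reproduced in the extracted text, but the quoted theoretical pressure drop of 1736.06 Pa is recovered exactly by the standard fluidized-bed relation Δp = ε_S(ρ_S − ρ_L)gL, assuming a water density of about 998.2 kg/m³ (not stated explicitly). A minimal Python check:

```python
# Hedged check of the theoretical bed pressure drop quoted in the text (1736.06 Pa).
# Assumption: Equation (14) is the standard relation dp = eps_s * (rho_s - rho_l) * g * L,
# with a water density of ~998.2 kg/m^3 (not stated explicitly in the extracted text).
eps_s = 0.55    # initial solids volume fraction (from the simulation setup)
rho_s = 2607.0  # fluidized-particle density, kg/m^3
rho_l = 998.2   # assumed water density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
L = 0.2         # initial bed height, m (200 mm)

dp = eps_s * (rho_s - rho_l) * g * L
print(f"theoretical bed pressure drop = {dp:.2f} Pa")  # ~1736.06 Pa
```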
Bed density is a reflection of particle density, solids volume fraction and liquid velocity. As shown in Figure 6, the bed density decreases as the water velocity increases, and the two show a good power-law relationship, which can be quantified by the fitting Equation (15). In real separation cases, when the bed density is reduced by an increase in water velocity, most light particles are dragged directly to the overflow by the hydraulic force without effective separation. When the water velocity was lower, the bed density increased as the solid phase became denser inside the bed. At the same time, the viscosity of the suspension also increased, as a large number of particles were crowded together. Thus, in a system with a high concentration of particles, the properties of the fluidized suspension were close to those of a heavy liquid, in which particles were separated according to density.
where ρ b is bed density, g/cm 3 ; u is water velocity, m/s. Fluidized particles move differently with water velocity during the whole bed expansion process. Figure 7 plots the particle velocity along the height of the bed at different water velocity. The overall trend was that the particle velocity increases as the water velocity increases. At lower water velocity, the bed reaches good homogeneity which can be seen from former analysis and the velocity distribution of particles was also relatively uniform. The higher water velocity brings degradation for the uniformity of the particle velocity distribution. The result was consistent with former analysis of the impact of water velocity on the volume fraction of particles in the bed.
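As a sketch of the power-law fit of bed density against water velocity described for Equation (15): the coefficients of Equation (15) are not given in the extracted text, so the data points below are hypothetical placeholders and only the fitting procedure is illustrated.

```python
# Illustrative power-law fit rho_b = a * u**b (form of Equation (15)); data are hypothetical.
import numpy as np

u = np.array([0.008, 0.016, 0.024, 0.032, 0.040, 0.056])  # water velocity, m/s (sample points)
rho_b = np.array([1.85, 1.70, 1.58, 1.49, 1.42, 1.31])    # bed density, g/cm^3 (hypothetical)

# Fit log(rho_b) = log(a) + b*log(u) by least squares.
b, log_a = np.polyfit(np.log(u), np.log(rho_b), 1)
a = np.exp(log_a)
print(f"rho_b ~ {a:.3f} * u^{b:.3f}")
```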
The above analysis shows that the movement of fluidized particles was relatively regular, and a better homogeneous flow was formed at lower water velocity, which was conducive to maintaining the bed internal stability. At higher water velocity, a large number of vortices will not only make the fluidized particles fluctuated strongly, but also promote fine particles easily to directly enter the overflow or underflow without being separated. Therefore, for coal material with a wide range of size and density, a better separation was recommended under the operating condition of a lower water velocity.
Settling Behavior of Foreign Particles in Bed
The teeter bed above provided a hindering settling environment for other particles (here, called foreign particles) which have different properties (size or density) with the fluidized particles. Foreign particles move resisting forces from the bed to the overflow or underflow and achieve the separation. In this section, several foreign particles were introduced to the teetered bed to study their motion behavior. The results are shown in Figure 8a that a 4 mm foreign particle with density of 2607 kg/m 3 was introduced from the top of the column. We can see that the particle has gone through four stages: accelerating, reaching balance in water, deceleration and reaching equilibrium in the bed. The particle velocity decreased significantly due to the resisting force from the bed; it decreased as the volume fraction of particles increased. For a clear comparison of simulated slip velocity with that of literature models, the particle slip velocity was normalized by dividing the free settling velocity. Figure 8b compares the normalized slip velocity of foreign particle of 4 mm size and density of 2607 kg/m 3 with the literature values. What needs to be explained: the free settling velocity of all particles involved in these models were the simulated data. For 2-mm, 4-mm and 6-mm glass particles, the simulated free settling velocities were 0.282 m/s, 0.445 m/s, 0.565 m/s, and for 2 mm, 4 mm, 6 mm steel particles, the values were 0.644 m/s, 0.969 m/s, 1.173 m/s. The deviation of simulated free settling velocity of particles were within 8% compared with Zigrang and Sylvester model [54]. In Figure 8b, it was observed that the classification velocity of the particle decreased with the increase of volume fraction of particles. The phenomenon is exhibited in both simulated values and literature models. It implies that the resisting force acting on foreign particles increased as the volume fraction of particles in bed increased. Moreover, the normalized slip velocity of the 4 mm particle through the bed was close to that of Joshi model. To further explore the agreement between the simulation results and the Joshi model, the settling behavior of 2 mm, 4 mm and 6 mm glass beads and 2 mm, 4 mm and 6 mm steel particle through the bed with a volume fraction of 0.55 was studied in Figure 8c,d. It can be seen that the simulation results keep good agreements with the Joshi model. The influence of a high concentration of fluidized particles on foreign particle sedimentation was significant and can decreased by about 20% of free settling velocity. We can also see that there were some differences between the simulated data and the predicted values in other models. It can also be seen from Figure 8c, the predicted slip velocity for glass beads in models of Di Felice, Van der Wielen and Grbavcic presents a good consistency. In these models, the particle velocity was greatly reduced more than half of the value, and the values in Kunii model was between Joshi model and other three models. Figure 8d shows that the predicted values of steel particles in Kunii and De Felice model were relatively close and were also close to the present simulated results. The particle velocity assessed by Van der Wielen and Grbavcic models existed some difference, but still decreased the particle velocity significantly. In principle, the motion of the foreign particle was governed by various forces from the bed. These forces generally includes drag force, the pressure gradient force, virtual mass force and the collision force, etc. 
In such a high concentration system, the resisting force caused by water drag force and collision force between particles was dominant and they can obviously decrease the settling velocity of the particles. The simulation data reflects the effect of fluid drag on particle velocity under this high concentration system and they were referenceable.
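The free settling velocities quoted above (e.g., about 0.445 m/s for a 4 mm glass-density sphere) can be roughly cross-checked with a generic terminal-velocity iteration; the sketch below uses a Schiller-Naumann/constant-Cd drag correlation rather than the Zigrang-Sylvester correlation cited in the paper, so small deviations from the quoted values are expected.

```python
# Rough cross-check of the quoted free settling velocities using a generic drag
# correlation (not the Zigrang-Sylvester correlation referenced in the paper).

def terminal_velocity(d, rho_p, rho_f=998.2, mu=1.003e-3, g=9.81, iters=200):
    """Fixed-point iteration on the sphere force balance u = sqrt(4 g d (rho_p-rho_f)/(3 Cd rho_f))."""
    u = 0.1  # initial guess, m/s
    for _ in range(iters):
        re = rho_f * u * d / mu
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000 else 0.44
        u = (4.0 * g * d * (rho_p - rho_f) / (3.0 * cd * rho_f)) ** 0.5
    return u

for d_mm in (2, 4, 6):
    print(f"{d_mm} mm glass-density sphere: u_t = {terminal_velocity(d_mm / 1000, 2607.0):.3f} m/s")
```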
It was significant that the velocity of the foreign particles was reduced due to the drag force in teeter bed comparing with that of in pure water. The higher volume fraction of particles in bed was more preferable to achieve the separation of particles according to their density because it increases bed density and further increases the difference in settling velocity among foreign particles. Further, for particles with the same density, the slip velocity increases with the increase of particle size, which makes larges particles moving through the fluidized bed into light product more easily. Therefore, there was the influence of both density and particle size on the separation process in TBS. To reduce the number of mismatches and improve the accuracy of separation in TBS, the particle size of feeding materials was more likely to be narrowly distributed.
Conclusions
The particles in TBS are fine in size and the principle of their motion is complex in TBS, which brings difficulty for experimental study. A CFD model was implemented to investigate particle separation. It was found that lower water velocity was conducive to forming a homogeneous and stable fluidization bed. Higher water velocity was likely to cause strong turbulence flow, leading to large eddies in the bed. Therefore, a lower water velocity is recommended for the practical operation of TBS. The slip velocity gradually increased with an increase of particle size and a decrease of solid volume fraction. Therefore, in TBS, large particles may flow through the self-generating medium bed into the light product. Based on the movement of particles with various size and densities, the influence of a self-generating medium bed on the separation of particles with narrow size distributions in TBS by density was clarified in the model.
Conflicts of Interest:
The authors declare no conflict of interest. Nomenclature ρ L liquid density, kg/m 3 ε L volume fraction of liquid, dimensionless u L liquid velocity, m/s ρ S fluidized particles density, kg/m 3 ε S volume fraction of fluidized particles, dimensionless u S fluidized particle velocity, m/s CD drag force coefficient, dimensionless Re S fluidized particle Reynolds number, dimensionless d S diameter of fluidized particle, m ρ D foreign particle density, kg/m 3 d D foreign particle diameter, m u D foreign particle velocity, m/s g gravity acceleration, m/s 2 u D∞ foreign particle slip velocity in indefinite medium, m/s u Dw bounded settling velocity for foreign particle, m/s ρ M mixture/pseudo-fluid density, kg/m 3 u S∞ fluidized particle velocity in indefinite medium, m/s µ M pseudo-fluid viscosity, Pa·s µ L liquid viscosity, Pa·s u D foreign particle classification velocity, m/s L initial bed height, m n D R-Z index, dimensionless ρ e f f effective density, kg/m 3 r particle size, dimensionless | 7,631.4 | 2020-04-18T00:00:00.000 | [
"Materials Science"
] |
Thermodynamic foundation of generalized variational principle
One long-standing open question remains regarding the theory of the generalized variational principle: why can the stress-strain relation still be derived from the generalized variational principle while the method of Lagrangian multipliers is applied in vain? This study shows that the generalized variational principle can only be understood and implemented correctly within the framework of thermodynamics. As long as the functional contains one of the combinations $A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij}$ or $B(\sigma_{ij}) - \sigma_{ij}\epsilon_{ij}$, its corresponding variational principle will produce the stress-strain relation without the need to introduce extra constraints by the Lagrangian multiplier method. It is proved herein that the Hu-Washizu functional $\Pi_{HW}[u_i, \epsilon_{ij}, \sigma_{ij}]$ and the Hu-Washizu variational principle comprise a real three-field functional and principle. In addition, it is confirmed that Chien's functional $\Pi_Q[u_i, \epsilon_{ij}, \sigma_{ij}, \lambda]$ is a much more general four-field functional and that the Hu-Washizu functional is its special case as $\lambda = 0$.
INTRODUCTION
Variational principles have always played an important role in both theoretical and computational mechanics . Generalized variational mechanics began in the 1950s with the breakthrough works of Reissner [2] on two-field variational principles for elasticity problems, in which the displacement u i and stress σ ij are considered independent fields. The previous literature, however, considered only displacement u i as a single independent field. Reissner introduced a functional F that is defined in terms of 12 arguments: six stresses σ ij and six strains ij : where B(σ ij ) is the elastic complementary energy density.
Reissner proved the following theorem: Among all states of stress and displacement that satisfy the boundary conditions of the prescribed surface displacement, the actually occurring state of stress and displacement is determined by the variational equation: where the symbol V indicates the volume of the elastic body and S p indicates that the surface integrals are to be taken over that part of the surface only where the appropriate surface stress is prescribed. In 1954, Hu published a paper [4] (its English version appeared in 1955 [6]) that borrowed the idea from Reissner [2] and successfully extended Reissner's two-field (displacement-stress) theory to a threefield (displacement-stress-strain) theory by introducing a functional H U given by Hu [6]) proved a theorem as follows: In 1955, Washizu [7] independently proposed the same functional and proved the same theorems as Hu [4,6].
Regarding the history of the generalized variational principle, Felippa [32] published a dedicated paper on the original publication of the generalized variational principle and showed that de Veubeke had developed a much more generalized variational principle in a report dated 1951 [3], in which four fields, namely, displacement, stress, strain, and surface force, were included. de Veubeke's four-field (u i , σ ij , ij , t i ) theory can be presented as follows [32]: δΠ = 0, where the functional The three-field standard form is obtained by setting t i = σ ij n j on S u a priori. Hence, Fellippa proposed that the canonical functional in Eq. 5 be called the de Veubeke-Hu-Washizu functional Π V HW . This proposal has been confirmed by The History of the Theory of Structures Searching for Equilibrium [33].
In 1983, Chien [23], who was Hu's supervisor and communicated Hu's paper to both the Chinese Journal of Physics [4] and Science Sinica [6], pointed out that, regarding all publications and reports of Reissner [2], Hu [4,6] and Washizu [7] did not give any information on how to construct the functional. The formulation of the generalized variational seems mystical, and thus Chien indicated that the trial-and-error method was used when Reissner, Hu, and Washizu formulated their functional [23,28].
To derive the generalized functional in a systematic way, Chien proposed to formulate the functional by using the well-known method of Lagrangian multipliers [21]. This method can be described as follows [21,23,28]: Multiply undetermined Lagrange multipliers by various constraints and add these products to the original functional. Considering these undetermined Lagrange multipliers and the original variables in these new functionals as independent variables of variation, it can be seen that the stationary conditions of these functionals give these undetermined Lagrange multipliers in terms of original variables. The substitutions of these results for Lagrange multipliers into the above functional lead to the functional of these non-conditional variational principles.
With the help of Lagrangian multipliers, in 1983 Chien [23] successfully reformulated the two-field functionals, namely $\Pi[u_i, \sigma_{ij}]$ and $\Pi[u_i, \epsilon_{ij}]$, which are called the Hellinger-Reissner functional [2] and the de Veubeke functional [3], respectively. However, Chien [23] found that the constitutive relation between stress and strain cannot be included to form a three-field functional $\Pi[u_i, \epsilon_{ij}, \sigma_{ij}]$ due to the zero crisis of the corresponding Lagrangian multipliers, as it is known to be impossible to incorporate a condition of constraint into a functional whenever the corresponding Lagrange multiplier turns out to be zero. Therefore, Chien claimed that the functional $\Pi_{HU}$ is not a three-field, but rather a two-field, functional. To address this point of view, Chien elegantly wrote a monograph on the generalized variational principle [28] and, to overcome the difficulty, he proposed a method of higher-order Lagrange multipliers, a four-field functional $\Pi_Q$ suggested to be expressed as in Eq. 6. Chien proved that for nonzero $\lambda \neq 0$, the condition $\delta\Pi_Q[u_i, \sigma_{ij}, \epsilon_{ij}, \lambda] = 0$ will produce the balance equations, strain-displacement relations, stress-strain relations, and corresponding boundary conditions. Owing to the arbitrary nature of the Lagrangian multiplier $\lambda$, there are an infinite number of such functionals. Regarding Chien's questioning [23,28], no explanation from Hu, to the best of our knowledge, has been found in the literature. Because the formulation of the generalized variational principle has been recognized as a key contribution by a Chinese scholar to mechanics worldwide, and in particular considering its importance in finite-element formulation, it is vital that Chien's question be clearly answered. Otherwise, it will continue to cause confusion for both scholars and students. The task of answering this question has become a newcomer's responsibility.
CHIEN'S QUESTION ON THREE-FIELD VARIATIONAL PRINCIPLE
To propose our understanding of the issue, a brief review of one-, two-, and three-field variational principles, as well as of Chien's question, is presented now.
Let V be the volume of an elastic body, S u the boundary surface where displacement is given, and S σ the boundary surface where external force is given. Letting S be the total boundary surface, then S = S u + S σ .
Assuming the body is subjected to the action of a distributed body force $f_i$ ($i = 1, 2, 3$), $S_p$ is the portion of the boundary surface subjected to the action of the external surface force $\bar{p}_i$ and $S_u$ is the portion of the boundary surface where the displacement $\bar{u}_i$ is given. Under static equilibrium, the stress state in the body is denoted by the stress tensor $\sigma_{ij}$. Displacement $u_i$, strain $\epsilon_{ij}$, and stress $\sigma_{ij}$ satisfy the following five conditions: $\sigma_{ij,j} + f_i = 0$, which is the balance equation, where $\sigma_{ij,j} = \partial\sigma_{ij}/\partial x_j$ and $j$ is a dummy index; $\epsilon_{ij} = \tfrac{1}{2}(u_{i,j} + u_{j,i})$, which is the strain-displacement relation; $\sigma_{ij} = \partial A(\epsilon_{ij})/\partial\epsilon_{ij}$, which are the stress-strain relations; $u_i = \bar{u}_i$ on $S_u$, which is the boundary condition for a given surface displacement; and $\sigma_{ij} n_j = \bar{p}_i$ on $S_\sigma$, which is the boundary condition for a given external surface force, where $n_j$ is the normal unit vector of the surface $S_\sigma$.
For a one-field potential functional $\Pi[u_i]$, its extremum condition $\delta\Pi[u_i] = 0$ leads to the balance equation $\sigma_{ij,j} + f_i = 0$ with the constraints $\epsilon_{ij} = \tfrac{1}{2}(u_{i,j} + u_{j,i})$ and $\sigma_{ij} = \partial A(\epsilon_{ij})/\partial\epsilon_{ij}$, the boundary condition $u_i = \bar{u}_i$ on $S_u$, and $\sigma_{ij} n_j = \bar{p}_i$ on $S_\sigma$.
If one wishes to eliminate the constraint of the strain-displacement relation $\epsilon_{ij} = \tfrac{1}{2}(u_{i,j} + u_{j,i})$, then, according to Chien [23], a symmetric tensor of Lagrangian multipliers $\lambda_{ij}$ can be introduced to form a new functional. The Lagrangian multiplier is determined by the stationarity condition as $\lambda_{ij} = \partial A(\epsilon_{ij})/\partial\epsilon_{ij}$; therefore, the two-field de Veubeke functional is obtained. If one wishes to carry on this process and eliminate the stress-strain relation $\sigma_{ij} = \partial A(\epsilon_{ij})/\partial\epsilon_{ij}$, one can do so by introducing another symmetric tensor of Lagrangian multipliers $\eta_{ij}$ to form a new functional. The Lagrangian multiplier $\eta_{ij}$ is determined by $\delta\Pi[u_i, \epsilon_{ij}, \sigma_{ij}] = 0$, which leads to $\eta_{ij} = 0$ and $\epsilon_{kl}\,\partial^2 A(\epsilon_{ij})/\partial\epsilon_{ij}\partial\epsilon_{kl} = 0$; since $\partial^2 A(\epsilon_{ij})/\partial\epsilon_{ij}\partial\epsilon_{kl} > 0$, this gives the incorrect result $\epsilon_{ij} = 0$.
These results reveal that the stress-strain relation cannot be included in the functional $\Pi[u_i, \epsilon_{ij}, \sigma_{ij}]$ by the Lagrangian multiplier method. In other words, it is impossible to remove all the constraints, simply because the related Lagrange multiplier is equal to zero at the stationary condition. This Lagrangian multiplier method crisis was discovered by Chien in 1983 [23], when he published a monograph and provided a comprehensive discussion of the issue [28].
THERMODYNAMIC FOUNDATION OF GENERALIZED VARIATIONAL PRINCIPLE
For the sake of brainstorming on the stress-strain relation, a quick brief of constitutive theory from a thermodynamics perspective is presented here.
Following the above thinking, it is easy to see that a functional includes a stress-strain relation if it contains either the term $A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij}$ or the term $B(\sigma_{ij}) - \sigma_{ij}\epsilon_{ij}$. In other words, if the functional contains $A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij}$, then, owing to the arbitrary variation $\delta\epsilon_{ij}$, one obtains the stress-strain relation $\sigma_{ij} = \partial A(\epsilon_{ij})/\partial\epsilon_{ij}$. Similarly, if the functional contains $B(\sigma_{ij}) - \sigma_{ij}\epsilon_{ij}$, it implies that the stress-strain relation $\epsilon_{ij} = \partial B(\sigma_{ij})/\partial\sigma_{ij}$ is included.
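The display equations behind this argument were lost in extraction; the following LaTeX sketch restates the variation step under the assumption that $\sigma_{ij}$ and $\epsilon_{ij}$ are varied as independent fields.

```latex
% Sketch: if the volume integrand contains A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij},
% varying \epsilon_{ij} (with \sigma_{ij} held as an independent field) gives
\delta_{\epsilon}\!\int_V \left[ A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij} \right] dV
 = \int_V \left( \frac{\partial A(\epsilon_{ij})}{\partial \epsilon_{ij}} - \sigma_{ij} \right)\delta\epsilon_{ij}\, dV = 0
 \;\Rightarrow\; \sigma_{ij} = \frac{\partial A(\epsilon_{ij})}{\partial \epsilon_{ij}},
% and, dually, a functional containing B(\sigma_{ij}) - \sigma_{ij}\epsilon_{ij} yields
\delta_{\sigma}\!\int_V \left[ B(\sigma_{ij}) - \sigma_{ij}\epsilon_{ij} \right] dV = 0
 \;\Rightarrow\; \epsilon_{ij} = \frac{\partial B(\sigma_{ij})}{\partial \sigma_{ij}}.
```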
With this understanding, an examination of the Hu-Washizu functional is necessary. The combination of the relevant terms in that functional is exactly the term $A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij}$. Therefore, the Hu-Washizu functional $\Pi_{HW}[u_i, \epsilon_{ij}, \sigma_{ij}]$ has already included the stress-strain relation $\sigma_{ij} = \partial A(\epsilon_{ij})/\partial\epsilon_{ij}$, and $\Pi_{HW}[u_i, \epsilon_{ij}, \sigma_{ij}]$ is a real three-field functional. This key point was not understood by Hu [4,6], Reissner [2], or de Veubeke [3] when they constructed their generalized functionals by the trial-and-error method. This situation is very similar to the formulation of the Schrödinger wave equation in quantum mechanics. The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time, but it does not directly say what the wave function is. For instance, Schrödinger originally viewed the electron's wave function as its charge density smeared across space, whereas Born reinterpreted the absolute square of the wave function as the electron's probability density distributed across space.
Of course, Chien's functional $\Pi_Q[u_i, \epsilon_{ij}, \sigma_{ij}, \lambda]$ in Eq. 6 is not only a three-field functional but a much more general one, since it contains all the elements $A(\epsilon_{ij})$, $B(\sigma_{ij})$, and $\sigma_{ij}\epsilon_{ij}$. The arbitrary nature of $\lambda$ provides some flexibility in constructing a generalized functional.
CONCLUSIONS
It has been shown in this study that the generalized variational principle can only be correctly understood and implemented within the framework of thermodynamics. As long as the functional contains either of the combinations $A(\epsilon_{ij}) - \sigma_{ij}\epsilon_{ij}$ or $B(\sigma_{ij}) - \sigma_{ij}\epsilon_{ij}$, its corresponding variational principle can produce the stress-strain relation without the need to introduce extra constraints by the Lagrangian multiplier method.
It has been proved that the Hu-Washizu functional $\Pi_{HW}[u_i, \epsilon_{ij}, \sigma_{ij}]$ is a real three-field functional and, therefore, that the Hu-Washizu variational principle is a three-field variational principle. In addition, it has been confirmed that Chien's functional $\Pi_Q[u_i, \epsilon_{ij}, \sigma_{ij}, \lambda]$ is a much more general four-field functional.
Owing to Chien's academic acumen, he discovered the problems and carried out meticulous research. His research inspired the author to think further on this issue. Although it was finally proved that the result of the Hu-Washizu functional was only correct in form, the current understanding has risen to a new level, leading to the resolution of the historic academic controversy on the issue of constructing a three-field functional.
ACKNOWLEDGMENTS
The author is honored to have benefited from personal connections with both Prof. Chien and Prof. Hu. Professor Chien supervised Prof. Kai-yuan Ye, the author's Ph.D. supervisor. Professor Hu was the committee chairman for the author's post-doctoral final progress report when he completed his post-doctoral research at Tsinghua University in 1991. In the great discovery of the generalized variational principle, both Prof. Qian and Prof. Hu made original contributions. Their academic thoughts are very important to our understanding of the generalized variational principle. Now that both Prof. Chien and Prof. Hu have passed away, if some valuable answer to the Qian question can be provided it can serve as the best tribute to both. Therefore, it is my privilege to dedicate this paper to the memories of Prof. Chien and Prof. Hu for their great contribution to the theory of the generalized variational principle.
Availability of data: This study does not have any data. | 3,208.8 | 2020-11-03T00:00:00.000 | [
"Physics"
] |
Antimicrobial Resistance in Bacteria Isolated from Foods in Cuba
INTRODUCTION Antimicrobial drug resistance constitutes a health risk of increasing concern worldwide. One of the most common avenues for the acquisition of clinically-relevant antimicrobial resistance can be traced back to the food supply, where resistance is acquired through the ingestion of antimicrobial-resistant microorganisms present in food. Antimicrobial resistance constitutes a health risk, leading to production losses and negative consequences for livelihood and food safety. OBJECTIVE Determine whether resistant bacteria are present in foods in Cuba. METHODS A descriptive observational study was conducted in the Microbiology Laboratory of Cuba's National Institute of Hygiene, Epidemiology and Microbiology from September 2004 through December 2018. Researchers analyzed 1178 bacterial isolates from food samples. The isolates were identified as Escherichia coli, Salmonella, Vibrio cholerae and coagulase-positive Staphylococcus. The antimicrobial susceptibility study was performed using the Bauer-Kirby disk diffusion method, following procedures outlined by the Clinical and Laboratory Standards Institute. The data were analyzed using WHONET version 5.6. RESULTS Of the total isolates, 62.1% were resistant to at least one antibiotic. Within each group, >50% of isolates showed some type of resistance. E. coli and V. cholerae exceeded 50% resistance to tetracycline and ampicillin, respectively. Staphylococcus showed the highest resistance to penicillin, and Salmonella to tetracycline, nalidixic acid and ampicillin. The highest percentages of non-susceptible microorganisms were identified in meats and meat products. CONCLUSIONS These results serve as an alert to the dangers of acquiring antibiotic-resistant bacteria from food and demonstrate the need to establish a surveillance system and
INTRODUCTION
Antimicrobial resistance (AMR) is a health risk worldwide, leading to production losses and negative effects on livelihood, food safety and the economy, [1] including in Cuba.Statistics from the national program for prevention and control of healthcare-associated infections show an increase in resistance to the most commonly used hospital antibiotics in the last few years, as well as longer hospitalizations and higher spending on these infections.[2] The public health sector is acting to promote the rational prescription and use of antimicrobials, and is conducting various susceptibility studies on clinically-obtained isolates.[3] However, there are few reports on antimicrobial-resistant foodborne bacteria.
Quantitatively, foodborne AMR is the most common route for the spread of antibiotic-resistant bacteria. The presence of these microorganisms in the food chain, the environment and water can lead to their appearance in the human intestinal microbiome, turning it into a major reservoir for resistant genes in the body. It also increases the risk of their dissemination among commensal bacteria and pathogens that cause intra- and extraintestinal infections.[4] Among the most clinically important foodborne pathogenic bacteria in AMR are strains of Salmonella and E. coli, which carry extended-spectrum beta lactamases, fluoroquinolone-resistant Campylobacter and Salmonella, and methicillin-resistant Staphylococcus aureus.[5] However, commensal bacteria also found in foods play a key role in AMR evolution and spread.
They predominate in the environment and show greater genetic diversity and host variety in nature, which makes them a potential indicator for AMR.Thus, studying these agents can provide early warning of emerging AMR.[6] WHO suggests regular, periodic surveillance to address the problem of AMR, with permanent monitoring of changes in its prevalence in humans, animals, foods and the environment.[7] Clearly, it is important to discover foodborne AMR as quickly as possible.This includes studying risks by identifying dangers: antimicrobial-resistant microorganisms, the antimicrobials to which they are resistant, and the food products in which this resistance is found.Cuba has no program dedicated to ongoing surveillance of this problem.For these reasons, this study was performed with the aim of assessing antimicrobial resistance in clinically relevant bacteria isolated from foods in Cuba.
METHODS
A descriptive observational study was conducted from September 2004 through December 2018 on 1178 isolates identified in foods (381 isolates of E. coli, 402 of Salmonella, 113 of V. cholerae and 282 of coagulase-positive Staphylococcus). The isolations were performed at the Provincial Hygiene, Epidemiology and Microbiology Centers in 13 Cuban provinces and in the Microbiology Laboratory of the National Hygiene, Epidemiology and Microbiology Institute (INHEM) in Havana, following current standards in Cuba.[8][9][10][11] The microorganisms were identified in a variety of 146 foods subject to microbiological surveillance in the study of foodborne disease outbreaks and health inspections of foods before sale. These were categorized in 14 groups, according to Cuban microbiological criteria standard NC 585, 2017.[12] The food types were:
IMPORTANCE This paper highlights the importance of antimicrobial resistance surveillance in foods commonly consumed in Cuba.
Results were interpreted following the manufacturer's criteria. E. coli ATCC 25922 strains were tested as a negative control, with ESβL Klebsiella pneumoniae ATCC 700603 strains tested as a positive control.
Results were analyzed using a database created in WHONET version 5.6, a WHO digital platform for surveillance of antimicrobial resistance and infection control.[14] The antibiogram interpretation criteria cutoff points were updated according to CLSI standards. Susceptibility was analyzed by isolate source, for which contingency tables were established, and the chi-square test was applied with a significance level of 0.05. The data were processed using the EPIDAT program (EpiData Association, Denmark) for epidemiological analysis of tabular data, version 3.0 of 2004.[15] Results of the in vitro susceptibility tests were expressed as absolute frequencies and percentages. Isolates with full growth around the antibiotic disk, or those in which growth inhibition did not reach the diameter established for the CLSI susceptibility criterion (reduced susceptibility), were considered resistant. Otherwise, they were considered sensitive to the antibiotic.
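As a rough illustration of the two steps just described (reading disk-diffusion zones against a susceptibility breakpoint, then testing resistance against isolate source with a chi-square test), the following Python sketch uses entirely hypothetical zone readings, a hypothetical breakpoint and invented counts; it is not the WHONET/EPIDAT workflow the authors used.

```python
# Illustrative sketch only: hypothetical zone readings, a hypothetical breakpoint,
# and invented counts. Real interpretation must use current CLSI tables per
# drug/organism pair; the authors' actual workflow used WHONET and EPIDAT.
from scipy.stats import chi2_contingency

def classify(zone_mm: float, susceptible_breakpoint_mm: float) -> str:
    """Apply the rule described above: resistant if the inhibition zone does not
    reach the diameter required by the CLSI susceptibility criterion."""
    return "sensitive" if zone_mm >= susceptible_breakpoint_mm else "resistant"

zones_mm = [20.0, 12.5, 16.0, 25.0, 10.0]          # hypothetical disk readings (mm)
print([classify(z, susceptible_breakpoint_mm=17.0) for z in zones_mm])

# Contingency table of (resistant, sensitive) counts by food group (invented numbers),
# tested for association at the 0.05 significance level, as in the text.
table = [
    [120, 60],   # meats and meat products
    [40, 55],    # milk and dairy products
    [25, 70],    # ready-to-eat foods
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}, associated: {p < 0.05}")
```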
Ethical considerations
No clinical assays were performed on persons or animals in this study, and the study was authorized by INHEM's scientific council. This document contains no company, institution or brand names of foods from which the isolates were obtained.
RESULTS
AMR was analyzed according to the microorganisms retrieved from different food types (Table 1). V. cholerae was isolated in fruits and vegetables, and in fish, seafood and fishery products, which had the highest percentage of resistant isolates at 69.3%.
Table 2 shows the relation between AMR in Salmonella, E. coli and Staphylococcus and their isolate sources. Salmonella was not associated with any specific food type. The highest percentage of resistant isolates was found in meats and meat products. E. coli had a higher proportion of resistant isolates compared to subgroup size in meats and meat products. Additionally, Staphylococcus had a higher proportion of resistant isolates found in meat and dairy products.
Resistance by antibiotic type was low overall, except for tetracycline in E. coli and ampicillin in V. cholerae, for which resistance was over 50% (Table 3). Of the 19 antibiotic agents analyzed (14 for Salmonella and E. coli, 12 for Staphylococcus and 6 for V. cholerae), Salmonella expressed in vitro resistance to 12, and E. coli to 14. Tetracycline, nalidixic acid and ampicillin showed the highest resistance levels. More than 75% of Staphylococcus isolates were resistant, mainly to penicillin, erythromycin and tetracycline, in decreasing order. V. cholerae was resistant to three antibiotics, namely tetracycline, ampicillin and sulfamethoxazole/trimethoprim (Table 3). A low percentage (2.8%) of ESβL enzyme was detected in the 97 E. coli isolates obtained from fresh meats. Geographical distribution of isolates (Table 4) showed that the highest percentage, 52.7% of the total, was identified in Havana Province at INHEM's laboratory. The percentage of isolates sent from provinces outside Havana was low; the highest came from Santiago de Cuba (11.0%), and the rest were each less than 10.0%.
DISCUSSION
More than half of the bacterial isolates recovered from foods were resistant to at least one of the drugs tested. The most clinically important isolates were E. coli and Salmonella, since they often cause gastrointestinal disease or extraintestinal infections requiring treatment. The least effective antibiotics tested in vitro were tetracycline, ampicillin, nalidixic acid and penicillin, as also found in international studies.[16][17][18][19][20] For WHO-classified antibiotics,[18] specifically those appropriate for only limited use in humans (including ciprofloxacin, cefotaxime, ceftriaxone and ceftazidime), resistance was low and observed more often in E. coli and Staphylococcus. The international literature reports resistance percentages higher than those in this study.[19][20][21] The foods that most often contained resistant isolates were meats and meat products; for Salmonella, this result is consistent with those of other researchers, which show that these products are among the main sources of resistant bacteria in this genus.[22,23] The 173 Salmonella isolates from meats and meat products were obtained from 31 different foods. Hamburger showed the highest number of resistant isolates. Among fresh meats, resistance was most often found in poultry, where isolates from ground turkey were predominant, followed by those from ground chicken and mechanically deboned meat. These results agree with international reports, which found that in ground meats the Salmonella detected often presents with high virulence and high levels of AMR.[24,25] Since most poultry meats in Cuba are imported,[26] this could be considered a route for spreading resistance, in addition to antibiotics found in imported meat that are not used in domestic animal production, such as cefotaxime, ceftriaxone and ceftazidime.
Resistant E. coli isolates were most often found in pork, mortadella and smoked pork loin. Three isolates carrying ESβL were found in imported poultry meat and beef, and in domestically produced pork, at a lower percentage than has been reported in other countries.[27,28] Globally, antimicrobial susceptibility of E. coli is studied in different foods depending on geographic region. In the European Union and the United States, emphasis is on meats and antibiotics such as cephalosporins and fluoroquinolones.[29,30] In Asia and Latin America, there are more studies on ready-to-eat foods.[31,32] This could be due to greater availability of industrially processed ready-to-eat foods in developed countries, while in developing nations there are more prepared foods sold by small-scale manufacturers who generally do not monitor product preparation, potentially allowing bacterial contaminants to survive and multiply. In this study, which analyzed meats and ready-to-eat foods, antibiotic resistance was frequent regardless of food type.
Currently, AMR in commensal bacteria such as E. coli is cause for growing concern because resistance genes can be transferred to bacteria that are pathogenic to humans. The scientific literature has demonstrated transfer of multidrug resistance through E. coli plasmids to other enterobacteria such as Salmonella.[33] Most antibiotic-resistant Staphylococcus isolates were identified in meats and meat products such as sausages, ground meats and hamburger. In milk and dairy products, most isolates were found in cheese, mainly artisanal cheeses. This last food group was shown to be associated with resistant isolates. Other countries report varying percentages of AMR to at least one of the antibiotics tested, among which S. aureus was the most prevalent in meats and cheeses.[21,34] It should be noted that foodborne staphylococcal intoxication does not require antibiotic treatment, and there is no evidence that consuming foods contaminated with these bacteria is associated with infection in humans.[35] However, there is now special interest in antimicrobial susceptibility studies because of the possible transfer of resistance genes between microorganisms, and thus from the environment to humans.[7] V. cholerae is a species endemic to aquatic environments, and thus may be an indicator of antibiotic resistance in bacteria found in these ecosystems. In this study, it was mainly found in fish, seafood and other fishery products. Its expressed resistance was low except to ampicillin, to which resistance was seen in >50% of isolates. No resistance was found to ciprofloxacin, azithromycin or doxycycline, which are often used as first-line treatments for infections by toxigenic agents of this species. For V. cholerae, the international literature reports AMR usually higher than that found in this study.[36,37] The highest percentage of isolates analyzed came from foods inspected at INHEM as part of the institution's responsibilities in sanitary registration, including imported products and those domestically produced by various Cuban companies. Foods that do not meet the bacterial limits in the standard[11] are not approved for sale. However, there are currently no trade regulations that address antibacterial resistance, which is why studies focusing on risk are needed to accurately determine the scope of the problem.[38] We observed an unequal distribution in both the number and geographic origin of isolates received from laboratories in other provinces participating in the study, as well as in numbers of isolates of each bacteria type received. There were low percentages of E. coli, Staphylococcus and V. cholerae, which made it impossible to analyze antibiotic resistance for each region of the country. This would be possible if a national antimicrobial resistance surveillance system were established to obtain standardized information that would allow comparisons by region and over time.
One of the study's main limitations was the unequal numbers of bacterial isolates sent from each province. The study was based on the isolates received, which did not allow nationally based analysis of a resistant bacterial load for each food. In addition, the information presented was obtained more than a year ago, which makes it invalid for immediate surveillance purposes, but does not affect its usefulness as a resource for illustrating a problem that demands surveillance and control. Despite these limitations, a broad range of antibiotics was analyzed, including most classes used in human and veterinary treatment, and the number of isolates studied for each bacterial genus was sufficient for making preliminary estimates of AMR prevalence in each case, although without claims as to their representativeness.
CONCLUSIONS
Resistant phenotypes were identified in more than half the bacteria isolated from foods, with a higher percentage found in animal products such as meat, dairy, eggs and foods made from these ingredients. Low percentages of AMR were found for antibiotics classified as critical for human use. These results may serve as an alert to the dangers of acquiring foodborne antibiotic-resistant bacteria and demonstrate the need to establish a surveillance system and institute related control in Cuba.
Table 2: Relation between antibiotic resistance of Escherichia coli, Salmonella and Staphylococcus and food type from which isolates were recovered (n = 1065). INHEM 2004-2018
a Percentage refers to total number of isolates in category. b Percentage refers to total number of foods analyzed per microorganism. AMR: antimicrobial resistance; INHEM: National Institute of Hygiene, Epidemiology and Microbiology
Table 3: Percentage of resistance by antibiotic and microorganism (Salmonella, n = 236; E. coli, n = 220). INHEM 2004-2018
INHEM: National Institute of Hygiene, Epidemiology and Microbiology
Table 4: Isolates studied, by microorganism and province where identified. INHEM 2004-2018
a Percentage refers to total number of isolates for province. b Percentage refers to total number of isolates. INHEM: National Institute of Hygiene, Epidemiology and Microbiology. * Special Municipality | 3,360 | 2020-07-01T00:00:00.000 | [
"Medicine",
"Agricultural And Food Sciences",
"Biology"
] |
Vocal Features of Song and Speech: Insights from Schoenberg's Pierrot Lunaire
Similarities and differences between speech and song are often examined. However, the perceptual definition of these two types of vocalization is challenging. Indeed, the prototypical characteristics of speech or song support top-down processes, which influence listeners' perception of acoustic information. In order to examine vocal features associated with speaking and singing, we propose an innovative approach designed to facilitate bottom-up mechanisms in perceiving vocalizations by using material situated between speech and song: Speechsong. 25 participants were asked to evaluate 20 performances of a speechsong composition by Arnold Schoenberg, “Pierrot lunaire” op. 21 from 1912, evaluating 20 features of vocal-articulatory expression. Raters provided reliable judgments concerning the vocal features used by the performers and did not show strong appeal or specific expectations in reference to Schoenberg's piece. By examining the relationship between the vocal features and the impression of song or speech, the results confirm the importance of pitch (height, contour, range), but also point to the relevance of register, timbre, tension and faucal distance. Besides highlighting vocal features associated with speech and song, this study supports the relevance of the present approach of focusing on a theoretical middle category in order to better understand vocal expression in song and speech.
INTRODUCTION
Language and music processing have been the subject of comparison for decades. Patient studies have revealed a dissociation between pitch processing in language and music (e.g., Peretz and Coltheart, 2003), but studies on music expertise and its transfer in the language domain support shared processes (e.g., Patel, 2008). While conclusions are still being discussed, researchers agree that the comparison of sung material to speech is interesting due to their shared properties. Indeed, both signals are produced with the same instrument (Titze, 1994, 2008; Sundberg, 2013) and share comparable structure (i.e., modulation of a monophonic acoustical signal over time, presence of lyrics/articulatory features, and following of syntactic rules; Koelsch, 2011). However, the comparison of speech and song requires a deep understanding of their respective characteristics and therefore a clarification of what makes a vocalization stand out as speech or song.
Production of Vocal Features in Speech vs. Song
Vocal production basically consists of oscillating movements of the vocal cords, initiated and sustained by the air stream from the lungs (Titze, 1994, 2008). Pitch control in speech and song is achieved by the combination of tension and geometry of muscles in the larynx and sub-glottal pressure (Sundberg, 2013). Also, the vocal cavity and its geometry serve the function of a filter, which produces frequency bands of enhanced power in the vocal sound (Fant, 1960). However, sound modulations have been described as being different depending on the purpose of the vocalization (i.e., singing vs. speaking).
Regarding the pitch dimension, the German linguist Sievers (1912) described the difference between music and speech as early as 1912: "Music works mainly with fixed tones of steady pitch, speech moves mainly in gliding tones that rise and fall from one pitch to the other within one and the same syllable. Speech in particular is not bound to discrete pitches and intervals of musical melodies: it knows approximate tone levels only." A hundred years later, this is the main difference described in current publications (e.g., Patel, 2008; Zatorre and Baum, 2012). When investigating the acoustical features in song and speech production, the patterns that most stand out in visual inspection of the spectrogram are the straight, horizontal lines in song when holding a tone and the ups and downs of fundamental frequency (F0) when examining speech signals (for a good visualization, see Zatorre and Baum, 2012). That means, when investigating the acoustical features of the pitch patterns (pitch contour, melodies) in music and speech, a fundamental difference remains in the gliding, continuously changing pitch in speech and the discrete pitch in music; and while musical melodies are built around a stable set of pitch intervals (according to Western tonal music), spoken prosody is not (e.g., Patel, 2008). The variations of pitch in music are rather fine grained (one or two semitones; Vos and Troost, 1989), while in speech they can be rather coarse, for example, up to more than 12 semitones. Note that music seems to reflect patterns of durational contrast between successive vowels in spoken sentences, as well as patterns of pitch interval variability in speech (Patel et al., 2006), but, in singing, vowels are typically elongated to achieve the rhythm dictated by the musical text and to convey the pitch assigned to a specific syllable.
Besides the numerous studies comparing pitch production in speech and song, few studies have investigated the differences between speech and song with regard to other features such as vocal quality (Lundy et al., 2000;Livingstone et al., 2014;Gafni and Tsur, 2015), tension or articulation. In order to describe the peculiarities in spoken and sung signals (by applying acoustical analyses to the vocalization), a crucial step consists of understanding what is relevant to listeners when perceiving vocalizations as either speech or song.
Perception and Evaluation of Vocal Features in Speech vs. Song
On a perceptual level, clear speech and song stimuli can easily be distinguished by listeners, while the classification of ambiguous stimuli into the categories of song and speech is an individually varying process (Merrill et al., 2016). The distinction and classification of the two modes of phonation may result from the development of expectations about each domain. Indeed, the functions/contexts of these two activities are clearly distinguished from an early age (McMullen and Saffran, 2004). More generally, categorization is facilitated (if not driven) by top-down cognitive processes that constrain the listener's perception and therefore judgment. The phenomenon of categorical perception has been extensively studied on several perceptual dimensions (Harnad, 1987; for a review on the phenomenon, see Goldstone and Hendrickson, 2010), such as categorization of phonemes (Liberman et al., 1957), categorization of environmental sounds (Giordano et al., 2014), prosodic contour (Sammler et al., 2015), or speaker gender (Latinus et al., 2013). The existence of distinct categories relative to speech and song is also supported by the well-known speech-to-song illusion (Deutsch, 2003; Deutsch et al., 2008, 2011; Tierney et al., 2013; Jaisin et al., 2016). By listening to a spoken phrase several times, listeners perceive the phrase as sung. This simple and compelling experiment suggests that the categorization of a performance as speech or song does not rely on the acoustical characteristics (always the same) but rather on the repetition effect leading to the reinterpretation of the material as musical (Falk et al., 2014). However, when listening to a single sound or isolated phrase (i.e., not repeated and without previous exposition to the phrase), listeners are able to detect a clear spoken or sung utterance (Merrill et al., 2016) and therefore rely on acoustical parameters of the signal itself or on perceptual impressions.
To the best of our knowledge, the vocal features underlying the perception of a vocalization as song or speech remain unclear. One reason might be the proximity/similarity of several acoustic features usually examined (Tierney et al., 2013), which could overshadow the subtle differences in other acoustical features perceived by listeners. Another reason might be the lack of adequate tools to examine listeners' perception of vocal features in both sung and spoken material. Some questionnaires are specific to singing (e.g., Henrich et al., 2008), while others are specific to speech (e.g., the Geneva Voice Perception Scale, Banziger et al., 2014), making a direct comparison of the impressions of spoken and sung vocalizations difficult. A questionnaire was designed to capture the vocal-articulatory impression of vocal expression on many levels (Bose, 2001, 2010), and can be used in different contexts, such as describing adults' as well as children's speech (Bose, 2001) and artistic and emotional speech (Wendt, 2007). Items are grouped into different major categories with regard to pitch, loudness, sound of voice and their subsequent modifications, articulation and complex descriptions of vocal expression such as mode of phonation (speaking, speechsong, singing, etc.), rhythm, or tempo. By relying exclusively on listeners' perception, this tool overcomes the limits of acoustical analyses (i.e., acoustical differences between sound signals are not necessarily relevant to listeners) and was used for the first time to examine the vocal features leading to the perception of speech and song.
Another important reason concerns the choice of material to examine. In order to focus on the perceptual impressions of listeners, particular attention must be paid to limit the top-down cognitive processes behind categorization of vocalization (Falk et al., 2014; Margulis et al., 2015; Vanden Bosch der Nederlanden et al., 2015; Jaisin et al., 2016). Schematically, human perception is influenced by the existence of preset categories and expectations based on these categories (i.e., top-down processes) as well as the physical features of the stimuli (i.e., bottom-up processes). As the same principle applies to vocalization, examining the perception of acoustic features or listeners' impressions requires the use of material which is neither typical to the language nor to the music domain. Unfortunately, speech and song are rarely melded in a single performance, except in art forms such as poetry or rap music (Wallin et al., 2000; Fritz et al., 2013) or in a musical phenomenon called "speech-song" (Stadlen, 1981). Speechsong (or "Sprechgesang" in German) can be described as an expressionist vocal technique resembling an intermediate state between speech and song. By its nature, speechsong seems to be an adequate candidate to suppress the categories "speech" and "song," and therefore allows for examination of listeners' perceptual impressions.
The Case of Speechsong
Building partly on earlier forms combining prosodic aspects of speech and music, such as the recitative and the melodrama, the form of speechsong in Western Art Music has been around for over 100 years, and its reception has differed greatly from that of any other type of vocal performance. The piece used in the current study is a representative of speechsong at the brink of modernity: Three Times Seven Poems from Albert Giraud's "Pierrot lunaire" op. 21 for speaking voice and chamber orchestra by Arnold Schoenberg, written in 1912. The music lacks a tonal center (i.e., it is atonal) and is a highly discussed piece among musicologists and performers alike. The 21 poems are composed into 21 short pieces. Instructions on how to perform Pierrot can be found in its preface and are represented in the musical score by a special musical symbol for the vocal part. The most important instruction was that the melody "should definitely not be sung" but had to be "transformed into a speech melody" (Schoenberg, 1914). Furthermore, "the difference between a sung pitch and a spoken pitch should be clear: a sung pitch is held and does not change, whereas a spoken pitch is intoned, then left by rising or falling in pitch" (Schoenberg, 1914). It is also interesting that Schoenberg did not only differentiate his notated vocal part from singing, but also from normal speaking: "The difference between normal speaking and speaking as part of a musical structure should be clear" (Schoenberg, 1914). It is also of note that he instructed that the notated rhythm be precise. Hence, the performer has the task of achieving the impression of speaking by following a notated rhythm with long notes, fixed pitch and a pitch range covering 2.5 octaves (from E flat 3 to G sharp 5). All this together constituted significant challenges for the performer, which were further addressed by the composer later: "The pitches in Pierrot depend on the range of the voice. They are to be considered 'good' but not to be 'strictly adhered to'. [...] Of course, the speaking level is not enough. The lady must just learn to speak in her 'head voice'..." (Schoenberg, 1923). His idea of the vocal part was not limited to melodic accuracy though, as the overall vocal expression was considered important, such as "to capture the light, ironical, satirical tone [...] in which this composition was originally conceived" (Schoenberg, 1940). Taken together, a degree of freedom is given with regard to the pitches, and performers regularly narrow down the compass of their voice and produce relative intervals between pitches (Cerha, 2001, p. 67). This has been evidenced by empirical investigations of Pierrot performances, focusing on melodic accuracy with regard to the musical score, i.e., counting exact and deviating pitches and correct and incorrect pitch interval relations (Heinitz, 1925), and making assumptions about the relationship between the notated melodies and linguistic and emotional prosody (Hettergott, 1993; Rapoport, 2004, 2006). Our main objective consists of clarifying the features of vocal expression associated with speaking and singing on an impressionistic level. We propose an innovative approach that aims at minimizing the top-down processes influencing listeners' perception of auditory information. Concretely, listeners are asked to evaluate various interpretations of a representative speechsong composition with an extensive questionnaire. It is expected that such material will not clearly be considered speech or song.
To examine the relationship between evaluations of the mode of phonation (i.e., speaking and singing) and a variety of features, including not only features of the pitch domain but also of vocal sound and articulation, voice experts were chosen as participants. By achieving evaluations at this detailed level, we aim at clarifying what causes a vocalization to be perceived as speech or song.
Ethics Statement
All experimental procedures were ethically approved by the Ethics Council of the Max Planck Society, and were undertaken with written informed consent of each participant.
Participants
The raters were 25 university students from the field of speech science and phonetics (20 female, mean age = 23.3 years, SD = 1.14). They were equally trained in the auditory description of voices and in using different vocal assessment scales, and, most importantly, in the questionnaire used in the current study. Their expertise was verified by taking their study program into account, which demonstrates their level of training, proven by corresponding periodical and final examinations (e.g., Bachelor's degree). The raters were all German native speakers. All of them had had singing training and/or played an instrument (range 4-18 years of training, M = 6.76), and 14 of them were still practicing music at the time of the experiment. One participant had a master's degree in music (main instrument: trumpet), one in acting, and one in classical singing; five participants were trained speech therapists. Note that participants were unfamiliar with the piece under study prior to the testing session, which ensures comparable knowledge among participants.
The Musical Piece
From the composition Pierrot lunaire, the piece No. 7 "The Sick Moon" was chosen because of the sparse accompaniment, which is the flute only, so that the voice could be heard clearly. The lyrics were in the original German. The chosen excerpt ended with the first stanza. The excerpts were between 35 and 55 s long, depending on the tempo chosen by the performer. The pitch range of the excerpt is from D 4 to E flat 5 and the dynamic range is small, ranging from pianissimo to piano with several short crescendi and a decrescendo. Table 1 shows the 20 interpretations used in the study. The recordings range from 1940 to ca. 2010, representing a wide range and great variety of interpretations. The earliest one was conducted by Schoenberg himself, followed by recordings from his friends and colleagues Josef Rufer and René Leibowitz (1949 and ca. 1954, respectively). Pierre Boulez recorded the piece three times with a different performer each time.
The Questionnaire on Vocal-Articulatory Expression
The questionnaire was developed by speech scientists (in German; Bose, 2001, 2010) as a tool for describing the vocal-articulatory expression in (German) speech. As illustrated in Table 3, the tool is set up to combine features that represent auditory impressions, not acoustic measures. Items are grouped into different major categories (N represents the number of items selected for the current study) with regard to pitch and its modifications (N = 4), loudness (N = 1), sound of voice and its modifications (N = 14), articulation (N = 3) and complex descriptions of vocal expression (N = 4), such as mode of phonation (sung or spoken) or rhythm. The modifications are described by means of the variability (e.g., for pitch, inflected or monotone), the range (wide or narrow), and the changes between tones or syllables (sudden or continuous). Items pertaining to the sound of voice are manifold; e.g., faucal distance describes a wide or constricted pharynx, and the sound of the onsets and offsets of the voice is described with hard and soft (for a detailed description of the features and more extensive background information on the questionnaire, please refer to Bose, 2001). For the current study, the profile was slightly adapted to the musical piece under study. An item about the register and blending of the voice and the flute was added, while the category of loudness was reduced to its range, since the possibilities were limited in the recordings used. Overall, the questionnaire consisted of 27 items: 20 items were specific to vocal expression (5-point scale; six items on a 3-point scale, see Table 2), and one item concerned the flute and the voice (5-point scale). Note that the use of such a questionnaire requires experience in voice diagnostics and/or familiarity with the specific vocabulary and dimensions to evaluate. Because lay people cannot reliably differentiate between such voice qualities (Kreiman and Gerratt, 1998), the present study focuses on the evaluation by voice experts.
In the present study the questionnaire was used to examine voices in an artistic/musical context; the midpoint of the rating scale, which usually describes a neutral midpoint, now represented the perception of a feature being "just about right" (JAR), to account for the listeners' expectations of the vocal part. If a feature was perceived as "not right," then the rater could choose the direction in which it deviated from JAR. In the case of the perceived "average pitch" of a voice, it could be "too high" and "much too high" or "too low" and "much too low." The rating scale was adapted from studies in consumer product testing (Popper et al., 2004; Popper and Kroll, 2005) and has been used to evaluate piano performances (Kroger and Margulis, 2017). As can be seen in Table 2, most items were bipolar (e.g., the average pitch can be high or low), while the items on noisiness and modulations of the voice were unipolar; e.g., the vibrato could be rated as "just about right" or not. These items were rated only when present in the voice.
The JAR-scale can be coded in two different ways: a coding from 1 to 5 takes the bipolar scale into account; a coding accumulating the deviations from JAR allows focusing on the "JAR-ness," i.e., a feature being either JAR (value 0) or non-JAR, with "too X" (1) and "much too X" (2).
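A minimal sketch of the two codings, assuming the 5-point items run 1 = "much too X", 2 = "too X", 3 = "just about right", 4 = "too Y", 5 = "much too Y" (the function names are ours, not part of the questionnaire):

```python
def bipolar_coding(position: int) -> int:
    """Keep the full 1-5 scale; the direction of the deviation is preserved."""
    return position

def jarness_coding(position: int) -> int:
    """Collapse to deviation from JAR: 0 = JAR, 1 = 'too ...', 2 = 'much too ...'."""
    return abs(position - 3)

ratings = [1, 2, 3, 4, 5]
print([bipolar_coding(r) for r in ratings])   # [1, 2, 3, 4, 5]
print([jarness_coding(r) for r in ratings])   # [2, 1, 0, 1, 2]
```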
Besides these items of vocal expression, two general questions were proposed at the beginning of the questionnaire: overall liking and adequateness. Liking ratings are based, for instance, on the individual musical preferences and spoken arts perception, or on liking of the voice etc. Adequateness ratings, on the other hand, were dependent on the background information provided to the participants about Pierrot (see below). The questionnaire was concluded with two additional questions: "Do you consider the interpretation as coherent (additional meaning of the German phrase might be expressed by 'harmonious' and 'cohesive')?" (yes/no) and "What is the profession of the performer?" (Singer, speaker or actress). The ratings for profession were challenging (i.e., both "speaker" and "actress" boxes were often ticked together in the present study) and might be more informative when coded as binary variables, i.e., singer and actress. Coherence is about the overall impression of the performance (including voice and flute) and reflects the listener's expectations regarding the interpretation of the piece.
Procedure
Prior to the listening task, an introduction to the composer and the piece under study was presented by the experimenter, including Schoenberg's dates of birth and death and selected historical-musicological background information, i.e., his role in shifting from tonality to atonality in music as well as the importance of Pierrot with regard to the extraordinary use of the vocal part. With regard to the musical piece, the preface to Pierrot was presented, as well as selected quotes from Schoenberg (all presented in the Introduction, "The case of speechsong"). The goal was to form a knowledge basis for the raters, who were partly not familiar with Schoenberg and not at all familiar with the piece Pierrot lunaire. Also, the musical score of the beginning of the first piece (i.e., not the chosen piece) was presented in order to illustrate the peculiar musical notation of the vocal part. The participants also listened to four very different interpretations beforehand (which were not included in the respective experimental session) to get an idea of the range of possible interpretations, and to get to know the chosen piece, "The Sick Moon." During the rating sessions, excerpts were played three times, in a pseudo-random order, counterbalanced between two groups. After the first presentation, participants rated the overall liking and the adequateness. Then, while listening to the piece two more times, they rated the items on vocal expression as well as the additional questions. Participants were informed that they did not have to fill out the items in the order presented. Participants rated all 20 recordings in two sessions (70-90 min each).
Statistical Analyses
Since both the musical piece and the extraordinary vocal performance were unfamiliar to listeners, inter-rater agreement at the level of features of vocal expression was estimated following the procedure described in Larrouy-Maestri et al. (2013). Pairwise Spearman coefficient correlations between the 25 participants (rating 20 items for each of the 20 interpretations) were computed. On the basis of the correlation matrix, the proportion of significant correlations as well as their median correlation coefficient are reported. The agreement on the more general questions was investigated following similar procedures, using Spearman coefficients for the two scales (i.e., liking and adequateness) and using the phi coefficient for the two categorical variables (i.e., cohesion and profession). Differences in agreement level depending on the general question were tested with Chi-square tests.
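The pairwise agreement estimate could be computed along the following lines; this sketch assumes a (raters x interpretations x items) array and uses random placeholder ratings rather than the study's data.

```python
# Placeholder data: 25 raters, 20 interpretations, 20 items, ratings on a 1-5 scale.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(25, 20, 20))

vectors = ratings.reshape(ratings.shape[0], -1)    # one 400-value vector per rater
pairs = list(combinations(range(vectors.shape[0]), 2))

rhos, n_pos_sig = [], 0
for i, j in pairs:
    rho, p = spearmanr(vectors[i], vectors[j])
    rhos.append(rho)
    n_pos_sig += int(rho > 0 and p < 0.05)         # positive and significant pairs

print(f"median rho = {np.median(rhos):.3f}")
print(f"positive and significant pairs: {n_pos_sig}/{len(pairs)}")
```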
Next, the relationship between the features and general questions was examined by means of linear regression analyses for liking and adequateness ratings and logistic regression analyses for coherence and profession ratings (using the JAR-ness scale 0-2). Each analysis was carried out with the 20 items as potential predictors. Spearman coefficient correlations were computed between the "influential" items (using the 1-5 scale) and the general questions. For instance, if the coefficient correlation did not reach significance, low liking was related to high scores in both directions (e.g., average pitch of the voice being "too low" or "too high"). If the coefficient correlation was significant and positive, low liking was associated with the item on the left side of the scale (see Table 2). In the case of a negative correlation coefficient, low liking was associated with the item on the right side of the scale.
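A hedged sketch of the two regression types (linear for liking/adequateness, logistic for coherence/profession) using statsmodels; the predictor matrix, outcome variables and all numbers below are simulated placeholders, not the authors' analysis script.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_obs, n_items = 500, 20
X = rng.integers(0, 3, size=(n_obs, n_items)).astype(float)   # JAR-ness scores 0-2
liking = X @ rng.normal(size=n_items) + rng.normal(scale=2.0, size=n_obs)
coherent = (rng.random(n_obs) < 0.6).astype(int)               # yes/no coded 1/0

X_const = sm.add_constant(X)

ols = sm.OLS(liking, X_const).fit()              # linear regression (liking, adequateness)
print(f"R^2 = {ols.rsquared:.3f}")

logit = sm.Logit(coherent, X_const).fit(disp=0)  # logistic regression (coherence, profession)
print(logit.params[:3])                          # intercept + first two item coefficients
```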
Finally, the relation between the mode of phonation and the other items in the questionnaire was examined with Spearman correlations using Bonferroni adjusted alpha levels of 0.003 in order to investigate features related to performances being perceived as "too (much) sung." Correlations were performed on the values 1, 2, 4, and 5, to focus on the direction of the relations. Also, a particularly relevant feature was investigated further with a Chi-square test. Here, the ratings were accumulated such that the mean was taken from each direction of the rating scale to only have one value representing song and one representing speech. Statistical analyses were performed using IBM SPSS Statistics 22. Some items were only ticked occasionally and were not further analyzed, i.e., the items on noisiness and modulations (overall N ≤ 14), as well as the item about the blending of flute and voice as almost all ratings were toward "disharmonic."
Inter-Rater Agreement
With regard to the features of vocal expression, the median correlation coefficient for the full scales (1-5) was r = 0.339 (SD = 0.072). Despite the moderate median correlation, all the combinations reached the significance level of p < 0.05. The proportion of positive and significant correlations (for the liking and adequateness questions) and phi coefficients (for the coherence and profession questions) are reported in Table 3.
In contrast to the high agreement of the participants when rating features of vocal expression (i.e., 100% of the correlations were positive and significant), the proportion of agreement regarding general liking, adequateness, coherence, and profession of the performers was much lower (see Table 3). Chi-square tests confirmed the difference in agreement level depending on the general question proposed. Agreement of participants increased significantly across the four general questions [χ 2 (4) = 312.176, p < 0.001]. Importantly, despite the relatively low agreement on the general level, it was observed that when participants provided similar ratings, the median or mean correlation coefficients were "high" (>0.5) and presented low variability. In other words, few participants shared the same opinion, but when it was the case, they strongly agreed regarding the liking, adequateness, coherence and the profession of the performer.
Table 3: Estimation of the agreement between the 25 raters (proportion of positive and significant Spearman correlation coefficients/phi coefficients, median correlation coefficient, mean correlation coefficient and standard deviation) when rating the general liking of each piece, the adequateness, the coherence of the interpretations, and the profession of the performers.
Relationship between the Features and General Questions
Liking
Regarding participants' liking, a significant regression equation was found, F (20, 433) = 19.647, p < 0.001. As can be seen in Table 4, several items contributed in explaining 47.6% of the variance of the liking ratings. According to the Spearman correlations, low liking was associated with a high average pitch, a thin and hard sound of voice, a constricted pharynx and lengthened vowels. Low liking was also associated with pitch variability and changes, the variance of the sound and mode of phonation. However, the ratings of these items were evenly dispersed on both sides of the scale, e.g., "too sung" and "too spoken" were both associated with low liking. More generally, it is of note that the performances were not particularly liked, with a median of 4 on the scale (IQR = 1-6).
Adequateness
As with the liking rating, a significant regression equation was found for the adequateness scale, F (20, 433) = 7.518, p < 0.001. Several items contributed in explaining 25.8% of the variance of adequateness ratings (see Table 4). Spearman correlations reveal that low adequateness was associated with continuous pitch changes, imprecise articulation and lengthened consonants. The items on average pitch and mode of phonation were not linearly correlated with the ratings on adequateness. Again, both directions were associated with low adequateness.
Coherence
The logistic regression model was statistically significant, χ 2 (20) = 142.65, p < 0.001. The model explained 37.3% (Nagelkerke R 2 ) of the variance in coherence ratings and correctly classified 77.4% of cases. Increasing the ratings of pitch variability and range, timbre, faucal distance and vowel duration on the JAR-ness scale increased the likelihood of perceiving the interpretation as incoherent (i.e., answer "no"). Incoherence was associated with lengthened vowels, a dark timbre and a wide pharynx, as well as with pitch variability rated as either too inflected or too monotone and a pitch range rated as either too wide or too narrow.
Profession
The logistic regression model was statistically significant, χ 2 (20) = 96.48, p < 0.001. The model explained 26.2% (Nagelkerke R 2 ) of the variance in profession ratings and correctly classified 74.0% of cases. "Too much" singing and a "too soft" sound of voice are associated with the fact that performers are perceived as singers whereas the choice of an actress (or speaker) was associated with "too much spoken" and a "too hard" vocal sound.
To conclude, the results show a significant agreement of the participants concerning specific items and a moderate agreement concerning general questions. Disregarding any specific interpretations, regression analyses highlight some particularly salient items when listening to and rating Schoenberg's Pierrot lunaire.
Description of Speechsong
The Spearman coefficient correlations highlight that singing correlates strongly with a high pitched voice (r = 0.513, p < 0.001), head voice (r = 0.313, p < 0.001) and a bright timbre (r = 0.335, p < 0.001). An inflected pitch contour (r = −0.208, p = 0.009), a wide pitch range (r = −0.188, p = 0.036), a wide faucal distance (r = −0.202, p = 0.019), and a tense phonation (r = −0.222, p = 0.008) might also be associated with singing. The feature of pitch changes showed only a tendency toward continuously changing pitch correlating with singing (r = 0.143, p = 0.067). For an illustration of the relationships, see Figure 1.
Table 4: Beta-weights and p-values of the specific items included in the two linear and the two logistic regression analyses, performed separately for each general question (i.e., liking, adequateness, coherence, and profession). Dark cells represent significant effects (p < 0.05), gray cells represent marginal effects (0.05 < p < 0.10) and white cells correspond to non-significant predictors (p > 0.10). The columns "direction" include the coefficient correlations between the vocal expression and the general question. If the significance level of p < 0.05 is reached (marked with an asterisk), the sign of the coefficient correlation indicates the direction of the relation between vocal features and the general question. Note that non-significant correlations reflect the non-specificity of direction of the vocal features influencing the general rating. For Direction: *p < 0.05.
In light of the importance of this feature in the literature, it was of interest to investigate it further in order to get a more conclusive picture of the relation between the pitch changes and mode of phonation. Firstly, a Pearson Chi-Square test confirmed that the perception of pitch changes [χ 2 (4) = 22.240, p < 0.001] significantly differs depending on the perception of mode of phonation. Secondly, the cross table (Table 5) revealed that observed counts deviated from expected counts in three of the four fields. In the row of "singing" ratings, sudden pitch changes show a count lower than expected; continuous pitch changes, on the other hand, show a count higher than expected. With respect to JAR-ratings, more continuous pitch changes are expected for JAR-phonation, pointing toward a reduced acceptance of pitch glides in speechsong performances.
DISCUSSION
This study describes a new approach designed to clarify the vocal features used to categorize speech and song. By examining the perception of vocal expression when listening to several interpretations of speechsong, which is neither typical for speech nor song, we focused on the perceptual impressions of listeners.
Characteristics of Mode of Phonation
The present approach and material allow for review and extension of existing proposals on the differences between song and speech, which until now have been mainly described with regard to pitch patterns of speech prosody and sung melody (e.g., discrete pitches and gliding pitch patterns, range and interval size). The current study allows for examination of several aspects of pitch and confirms typical behaviors in singing, such as a wider pitch range, high pitched voice and an inflected pitch contour (e.g., Patel, 2008; Zatorre and Baum, 2012; Mecke et al., 2016). Interestingly, the often mentioned difference in pitch changes was not supported. On the contrary, continuous pitch changes show a higher count than expected for singing and not speaking, suggesting that exaggerating the pitch glides leads to the impression of singing and does not enhance the impression of speaking. As a consequence, the pitch glides do not seem to be a stable indicator for speech. Depending on how pronounced the glides are produced, the perception might shift toward singing. In addition to the usual pitch related features, the present study revealed that mode of phonation is associated with more features of vocal expression. As expected, head voice was associated with singing, but most frequently, if mode of phonation was just about right, the register was likewise just about right. Chest voice was underrepresented in the current material, and the combination of "too spoken" and "too much head voice" (despite Schoenberg's comments on speaking in head voice) was almost non-existent, which might also reflect listeners' basic assumption of the impossibility of this occurrence. According to our exploratory study, other features such as a tense phonation (reflecting high muscular activity), a wide faucal distance (reflecting a wide pharynx, lift of the soft palate and a low larynx), and a bright timbre (reflecting efficient use of resonance cavities) might also be associated with singing perception. Note that the association of spectral characteristics with singing is not surprising, since they are particularly favored (if not specifically trained) in classical singing (Miller, 1986, 1996; Mitchell et al., 2003; Isherwood, 2013) and since listeners, even without formal training in music, are sensitive to such features (Larrouy-Maestri et al., 2017).
Figure 1: Features associated with song and speech. Illustration of the significant correlations between the different features (register, average pitch, pitch range, pitch variability, timbre, faucal distance, and tension; y-axis) and mode of phonation (spoken vs. sung, around "just about right," JAR; x-axis).
By asking for the assumed profession of the performer, our objective was to gain information on singing and speaking indirectly. As revealed by the logistic regression, singers and speakers were divided by soft and hard vocal onsets, respectively, but most importantly by mode of phonation (i.e., performers were perceived as singers if the performance was "too sung" and vice versa for speakers). This seems to reflect causality (i.e., attribution of a profession according to mode of phonation) but the hypothesis of an opposite relation (i.e., attribution of mode of phonation according to the profession) cannot be rejected. The fact that raters attribute the performance to a specific profession might lead to specific expectations (Falk et al., 2014;Vanden Bosch der Nederlanden et al., 2015) and thus bias the judgment toward speech or song. Future research controlling for the expected profession of the performer (e.g., by explicitly instructing the participants about the performer being an actor or a singer) would allow for clarification of the relationship observed between these two items.
Altogether, these findings extend the current knowledge on vocal features of speech and song, highlighting aspects of pitch, register, tension and timbre. Further research is encouraged to replicate the current findings by testing other "ecological" types of hybrid vocalizations, such as Rap, Jazz, or Musical Theater style singing or infant-directed speech. Phenomena such as speechsong are particularly interesting to the study of categorization processes, because they utilize our interest in ambiguous material in an artistic context, i.e., they esthetically challenge our internalized impressions of typical song and speech. By composers and performers playing with these expectations, researchers get information on the possible adjustments of certain vocal features. Finally, the identification of relevant features (i.e., associated with the perception of mode of phonation) paves the way to a better understanding of the perception and categorization processes of song and speech. The precise description of these features with acoustical analyses, and their systematic manipulations, will certainly clarify the categorization of vocal expression.
Notes on "Liking," "Adequateness," and "Coherence" Despite the frequent use of such terminology, the concepts behind are either highly subjective or difficult to grasp, as reflected in the relatively low inter-rater agreement. In the current study, adequateness was meant to describe the degree in which the performer succeeded in meeting the composer's intentions. Despite the low agreement and the percentage of explained variance (25%), relevant items noted by the composer such as mode of phonation, average pitch and pitch changes predicted the ratings' variance. Unlike liking-ratings, which showed a higher percentage of explained variance (47%), the evaluation of adequateness might rely on additional features which were not proposed in the questionnaire. Alternatively, the low percentage of explained variance regarding adequateness ratings could be due to missing information about this peculiar piece prior to testing.
With the subjective question on liking, the current study tackles the question of vocal appreciation in the context of speechsong. While the dislike of features such as a high pitched voice, a thin and hard vocal sound and a constricted pharynx might be explained in a broader context of pathological vocal sounds, other features such as lengthened vowels, pitch changes and variability might be very specific to the context of the rated piece. This points to the relevance of investigating other (and maybe more liked) material to control for the appreciation of listeners when examining the perception of speechsong.
Finally, the concept of coherence reflects the listener's impression of the performance in general (which includes the flute and the voice). It can be interpreted in terms of which features do not fit with the listeners' expectations of the interpretation of the piece. An incoherent interpretation was associated with a pitch contour rated as either too monotone or too inflected, a pitch range rated as either too wide or too narrow, and shortened vowels. Notably, these features are set by the musical score, which might mean that listeners base their expectations on what is typically prescribed by the sheet music.
Notes on the Questionnaire
The questionnaire to evaluate features of vocal-articulatory expression was adapted to describe vocal performances producing speechsong. The questionnaire is an attempt to cover vocal expression with several items to achieve detailed descriptions of listeners' impressions. This tool might not replace acoustical analyses of vocal features but provides the information required (i.e., perception of vocal expression) for further investigation of relevant acoustical features. The high reliability of the raters with regard to the features of vocal expression suggests that raters understood the items in this specific context. Also, the JAR-scale, implemented to give a midpoint that would reflect their acceptance of the features in the given situation, provides useful indications regarding listeners' judgments beyond a pure description of the material. The high agreement among judges, despite their variable formal musical backgrounds, supports that the adaptation of the questionnaire was successful and that it is adequate to describe vocal expression in speechsong. From these results one can assume that an internal validation of the questionnaire might be successful, which would require systematically manipulated stimuli including acoustical analyses as well as comparison with other questionnaires. Further evidence that the questionnaire fulfilled its purpose is the result that the ratings actually relate to the context information given by the experimenter.
The chosen questionnaire is meant to be a dynamic tool that can be adapted to different situations and listeners. Its use in the present form implies listeners' expertise in auditory description of voices (more due to the labeling of features than to perception itself) and thus limits its application to a specific group of participants. However, it could be adapted to lay listeners by providing additional instructions. Here, the features lay listeners are able to evaluate need to be investigated and are at the same time relevant for the discrimination of song and speech. The ratings by voice experts should be used as a baseline. Therefore, this tool seemed to be particularly relevant in the present context and might be used in future research on different vocal material.
CONCLUSION
By examining listeners' perception of Schoenberg's Pierrot lunaire with regard to several features of vocal-articulatory expression, the present study highlights the features influencing the impression of song and speech in ecologically valid material. Keeping in mind the limitations due to the peculiar character of the piece under study, we observed the relevance of pitch, register, tension, and timbre. Besides clarifying the vocal features leading listeners' perception of a vocalization as being speech or song, our findings support the adequacy of both the chosen ambiguous material and the proposed questionnaire in investigating speech/song categorization. Also, this approach paves the way to further studies using other hybrid material as well as acoustically controlled manipulations of sounds to precisely define the acoustical characteristics driving speech and song perception and therefore to better understand the similarities/differences between music and language perception.
AUTHOR CONTRIBUTIONS
JM conceptualized and conducted the research. JM and PL analyzed the data and wrote the article. | 9,318.4 | 2017-07-11T00:00:00.000 | [
"Psychology",
"Physics"
] |
The Canonical Ensemble via Symplectic Integrators using Nosé and Nosé-Poincaré chains.
Abstract Simulations that sample from the canonical ensemble can be generated by the addition of a single degree of freedom, provided that the system is ergodic, as described by Nosé with subsequent modifications by Hoover to allow sampling in real time. Nosé-Hoover dynamics is not ergodic for small or stiff systems and the addition of auxiliary thermostats is needed to overcome this deficiency. Nosé-Hoover dynamics, like its derivatives, does not have a Hamiltonian structure, precluding the use of symplectic integrators, which are noted for their long term stability and structure preservation. As an alternative to Nosé-Hoover, the Hamiltonian Nosé-Poincaré method was proposed by Bond, Laird and Leimkuhler [S.D. Bond, B.B. Laird and B.J. Leimkuhler, J. Comp. Phys., 151, 114, (1999)], but the straightforward addition of thermostatting chains does not sample from the canonical ensemble. In this paper a method is proposed whereby additional thermostats can be applied to a Hamiltonian system while retaining sampling from the canonical ensemble. This technique has been used to construct thermostatting chains for the Nosé and Nosé-Poincaré methods.
I. INTRODUCTION
Nosé dynamics and its derivatives [1][2][3][4][5][6][7] are popular schemes for implementing molecular simulations at constant temperature. In all such schemes, sampling from the canonical ensemble is conditional on the system being ergodic, and in small or stiff systems this is often not the case. To overcome the lack of ergodicity in these systems, further modifications to the Nosé-Hoover method were proposed by Martyna, Klein, and Tuckerman, 8 with new thermostats added to control each previous thermostat to form a thermostatting chain. This method is successful but suffers from the limitations of the Nosé-Hoover method: there is no Hamiltonian from which it is derived and hence symplectic methods are not applicable. More recently the real-time Nosé-Poincaré method was proposed by Bond, Laird, and Leimkuhler, 6 which is based on an extended Hamiltonian, but the application of thermostatting chains to the Nosé-Poincaré method in a straightforward manner does not result in sampling from the canonical ensemble. A generalized thermostatting technique has been developed by Leimkuhler and Laird; 9 in this scheme an auxiliary heat bath is coupled into the thermostatting variables (an example of an auxiliary heat bath is a box containing billiards). This requires the design of the auxiliary heat bath and can require long integration times for the correct sampling to occur if the bath is poorly chosen.
This paper introduces the idea of adding multiple thermostats to a Hamiltonian that has been modified by Nosé's method, while retaining sampling from the canonical ensemble. Here we employ a regularizing term in the Nosé or Nosé-Poincaré Hamiltonian to ensure bounded integrals over the auxiliary variables. This technique is used to introduce additional terms into the thermostatting chain in both the Nosé and Nosé-Poincaré Hamiltonians; it is then possible to prove analytically that they sample from the canonical ensemble. In addition, the fast convergence to the canonical ensemble that is characteristic of Nosé-Hoover chains is observed. Before proceeding to describe the new thermostatting technique, we introduce the Nosé-Poincaré formulation and other elements on which our method is based.
A. Nosé and Nosé-Hoover schemes
(b) Nosé's method 1 adds a thermostatting variable to the equations of motion to act as a heat bath. Given a Hamiltonian system where H(q,p) is the energy of an N-body system, with q = (q_1, q_2, ..., q_N) and p = (p_1, p_2, ..., p_N) the positions and momenta of the N bodies, Nosé proposed the extended Hamiltonian (1), where s is the new thermostatting variable, p_s its corresponding momentum, T the temperature, Q the Nosé mass, and k the Boltzmann constant. In the equations of motion for this Hamiltonian, the time is scaled by the thermostatting variable s.
Hoover developed the idea of applying a Sundman transformation, dt/dt' = s, to correct the dynamics, but this destroys the Hamiltonian structure so that symplectic methods are no longer applicable. Applying the Sundman transformation, substituting p_i' = p_i/s, t' = ∫ dt/s, p_s' = p_s/s, and then making the substitutions p_η = Q(1/s)(ds/dt') and η = ln s, the equations of motion take the form now known as the Nosé-Hoover thermostat. 2
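The behavior of these equations is easy to explore numerically. The following sketch integrates the standard Nosé-Hoover equations for a single one-dimensional harmonic oscillator with a simple explicit update; it is an illustrative toy (the variable names, the forward-Euler update, and the parameter values are assumptions for demonstration), not the symplectic integrators discussed later in the paper.

```python
import numpy as np

def nose_hoover_step(q, p, eta, p_eta, dt, k=1.0, m=1.0, Q=1.0, kT=1.0):
    """One explicit step of standard Nose-Hoover dynamics for a 1D harmonic
    oscillator (illustrative only; not a symplectic or time-reversible scheme)."""
    # Equations of motion: dq/dt = p/m, dp/dt = -k*q - (p_eta/Q)*p,
    # deta/dt = p_eta/Q, dp_eta/dt = p^2/m - kT.
    dq = p / m
    dp = -k * q - (p_eta / Q) * p
    deta = p_eta / Q
    dp_eta = p * p / m - kT
    return q + dt * dq, p + dt * dp, eta + dt * deta, p_eta + dt * dp_eta

q, p, eta, p_eta = 1.0, 0.0, 0.0, 0.0
momenta = []
for _ in range(200_000):
    q, p, eta, p_eta = nose_hoover_step(q, p, eta, p_eta, dt=0.005)
    momenta.append(p)
# For an ergodic system the sampled momenta would be Gaussian with variance
# m*kT; for the bare harmonic oscillator they visibly are not, which is the
# lack of ergodicity that thermostatting chains are intended to repair.
print(np.var(momenta))
```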
B. Nosé-Hoover chains
(c) Martyna, Klein, and Tuckerman 8 proposed a method to overcome the lack of ergodicity in small or stiff systems.
Here each thermostat is controlled by another thermostat, forming a thermostat chain. In standard Nosé-Hoover dynamics the distribution has a Gaussian dependence on the particle momenta, p, as well as on the thermostat momentum, p_η. The Gaussian fluctuations of p are driven by the thermostat, but there is nothing to drive the fluctuations of p_η unless further thermostats are added as described above. The modified dynamics for M thermostats can then be expressed as Eq. (11). These equations can be shown to produce the correct phase-space distributions.
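For orientation, the following sketch writes out the right-hand side of a standard M-thermostat Nosé-Hoover chain for a single particle, in the form popularized by Martyna, Klein, and Tuckerman; it is offered as the textbook version of a thermostatting chain, not as a transcription of Eq. (11).

```python
import numpy as np

def nhc_rhs(q, p, p_eta, Q, kT, force, m=1.0):
    """Right-hand side of Nose-Hoover-chain dynamics for one particle with
    M thermostats (textbook Martyna-Klein-Tuckerman form).
    p_eta and Q are sequences of length M; force(q) returns the physical force."""
    M = len(p_eta)
    dq = p / m
    dp = force(q) - (p_eta[0] / Q[0]) * p            # first thermostat acts on the particle
    dp_eta = np.empty(M)
    dp_eta[0] = p * p / m - kT                        # driven by the particle's kinetic energy
    if M > 1:
        dp_eta[0] -= (p_eta[1] / Q[1]) * p_eta[0]     # ...and damped by the next thermostat
        for j in range(1, M - 1):
            dp_eta[j] = p_eta[j - 1] ** 2 / Q[j - 1] - kT - (p_eta[j + 1] / Q[j + 1]) * p_eta[j]
        dp_eta[M - 1] = p_eta[M - 2] ** 2 / Q[M - 2] - kT
    deta = np.asarray(p_eta) / np.asarray(Q)
    return dq, dp, deta, dp_eta
```

Each thermostat momentum is driven by the fluctuations of the variable below it in the chain, which supplies the missing fluctuations of p_η noted above.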
C. Nosé-Poincaré method
(d) Neither the Nosé-Hoover nor the Nosé-Hoover chain methods have a corresponding Hamiltonian, which means that symplectic integrators, with their associated long-term stability and structure-preserving characteristics, are not applicable. The real-time Nosé-Poincaré method was introduced (along with a symplectic integrator) by Bond, Laird, and Leimkuhler. 6 The reformulation for a Hamiltonian system with energy H(q, p) is

H_NP = s [ H(q, p̃/s) + p_s²/(2Q) + NkT ln s − H_0 ].    (12)

Here N is the number of degrees of freedom of the real system, and H_0 is chosen such that the Nosé-Poincaré Hamiltonian, H_NP, is zero when evaluated at the initial conditions.
The Nosé-Poincaré method has been extended to NPT-ensemble simulation 10 and shown there to be an efficient method. However, it can be shown that the application of thermostatting chains to the Nosé-Poincaré method in a straightforward manner 6 does not sample from the canonical ensemble.
II. MULTIPLE THERMOSTATS
(e) It is possible to introduce additional thermostats into the Nosé and Nosé-Poincaré methods while retaining both their Hamiltonian structure and sampling from the canonical ensemble. This can be illustrated in a more general setting by rewriting the Nosé method, (1), so that the momentum of the thermostatting variable is grouped with the system momenta, p̂ = (p_1, p_2, ..., p_N, p_{N+1}).
A second thermostat can be added as follows, where f_2(s_2) is a real-valued function, g is a scalar, and the new thermostat is applied to M of the momenta, the thermostatted set being {p'_{i_1}, ..., p'_{i_M}} and the non-thermostatted set being {p'_{j_1}, ..., p'_{j_{N+1−M}}} for some integers i_1, ..., i_M, j_1, ..., j_{N+1−M}. Note that the thermostatted set may include any of the system momenta and the thermostatting-variable momenta. The partition function for this method, for energy E, is defined in Eq. (13). We can substitute p'_i = p_i/s_1 for 1 ≤ i ≤ N and p'_{N+1} = p_{N+1}; the volume element then becomes dp̂ = s_1^N dp̂', where p̂' is defined as above. There is no upper limit in momentum space, so we can change the order of integration of dp̂' and ds_1. Using the equivalence relation for δ, δ(g(x)) = δ(x − x_0)/|g'(x_0)|, where x_0 is the zero of g(x) = 0, for x = s_1, and noting that |s_1| = s_1 since ln s_1 is not defined for s_1 < 0, the integral over s_1 can be evaluated. We can then substitute p''_k = p'_k/s_2 for k ∈ {i_1, ..., i_M} and p''_k = p'_k for k ∈ {j_1, ..., j_{N+1−M}}; the volume element becomes dp̂' = s_2^M dp̂'', where p̂'' = (p''_1, p''_2, ..., p''_{N+1}). Again there is no upper limit in momentum space, so we can change the order of integration of dp̂'' and ds_2. If we arrange that g = M and that f_2 satisfies a suitable condition, integrating over both thermostat momenta, p''_{N+1} and p_{N+2}, yields a partition function of canonical form. A similar proof can be applied to the Nosé-Poincaré method. This process can be repeated to add more thermostats, with the possibility at each stage of thermostatting the previous thermostat's momenta in addition to any of the other momenta.
III. NOSÉ CHAINS AND NOSÉ-POINCARÉ CHAINS
(f) In this section the application of multiple thermostats to both the Nosé and Nosé-Poincaré extended Hamiltonians is considered, in order to generate thermostatting chains.
A. Nosé chains
(g) Thermostatting chains, consisting of M thermostats, can be added to the Nosé equation (1), with some additional terms as in Eq. (14), where the auxiliary functions {f_i(s_i)} are real valued and satisfy condition (15). This extended system produces a canonical ensemble of N + M degrees of freedom. The partition function for this ensemble, for energy E, is defined in Eq. (16). We can substitute p' = p/s_1; the volume element then becomes dp = s_1^N dp'. There is no upper limit in momentum space, so we can change the order of integration of dp' and ds_1, giving the integral over s_1. Using the equivalence relation for δ, δ(g(x)) = δ(x − x_0)/|g'(x_0)|, where x_0 is the zero of g(x) = 0, for x = s_1, and noting that |s_1| = s_1 since ln s_1 is not defined for s_1 < 0, we obtain Eq. (20). Substituting p'_{s_1} = p_{s_1}/s_2, changing the order of integration, and integrating (20) over s_2, we obtain Eq. (21), where K_2 is defined in (15). Repeating this for s_3, ..., s_M gives Eq. (22). Changing the order of integration and integrating (22) over all p'_{s_i} and p_{s_M} gives Eq. (24). This means that constant-energy dynamics of the extended Hamiltonian H_NC correspond to constant-temperature dynamics of H(q, p/s_1).
B. Nosé-Poincaré chains
(h) In a similar manner, thermostatting chains consisting of M thermostats can be added to the Nosé-Poincaré equation (12), with some additional terms, where the auxiliary functions {f_i(s_i)} are real valued and satisfy Eq. (15), and H_0 is Eq. (14) evaluated at the initial conditions.
This extended system produces a canonical ensemble of N + M degrees of freedom. The partition function for this ensemble is defined in Eq. (25). Substituting p' = p/s_1, the volume element becomes dp = s_1^N dp'. There is no upper limit in momentum space, so we can change the order of integration of dp' and ds_1, giving the integral over s_1, where H(q, p', p̂_s), p̂_s, and F_i are defined in Eqs. (17)-(19). Using the equivalence relation for δ, δ(g(x)) = δ(x − x_0)/|g'(x_0)|, where x_0 is the zero of g(x) = 0, for x = s_1, the integral over s_1 can be evaluated. The remaining thermostatting variables can be integrated out as above in Eqs. (21)-(24) to give the desired partition function.
C. Auxiliary function
(i) For the Nosé chains to work correctly, an auxiliary function f_i(s_i) must be chosen not only to satisfy Eq. (15) but also to provide a suitable modification to the thermostats. One such choice is Eq. (27), where C_i, the auxiliary-function coefficient, is a constant. The value a_i is chosen as the required average value of s_i, generally 1, since the additional term operates as a negative feedback loop that minimizes (a_i − s_i), as can be seen from the equations of motion. For a Hamiltonian of the general form considered here, the equations of motion for s_i and p_{s_i} in the equivalent Nosé-chain system show that, if C_i is sufficiently small, then when s_i increases above a_i, p_{s_i} will decrease, eventually decreasing s_i; conversely, if s_i decreases below a_i, then p_{s_i} will increase, eventually increasing s_i.
D. Estimation of the auxiliary function coefficient
(j) The value of C_i, i ≥ 2, can be estimated by considering the equation of motion for the momentum associated with one of the thermostats, s_i (Eq. (28)); the changes in s_i are then driven by the changes in p_{s_{i−1}}. The purpose of the auxiliary function is to limit the excursions of s_i, which can be achieved if ds_i/dp_{s_{i−1}} is a maximum at s_i = a_i. The negative feedback loops arising in Nosé dynamics drive ⟨ṗ_{s_i}⟩ to zero, in the above equation, over a sufficiently long integration time. For the purpose of estimating the value of C_i, we will assume that ṗ_{s_i} is small. Then, from Eq. (28), we obtain Eq. (29); differentiating with respect to s_i, differentiating again to locate the turning points, and substituting s_i = a_i gives Eq. (31). Setting d²p_{s_{i−1}}/ds_i² = 0 and solving for C_i gives Eq. (32). Evaluating d³p_{s_{i−1}}/ds_i³ at this point gives a positive value, indicating a minimum for dp_{s_{i−1}}/ds_i, i.e., a maximum for ds_i/dp_{s_{i−1}}, as required.
Experimental data from tests with the harmonic oscillator, with a_i = 1, show that Nosé chains will not work with C_i > 1/(8kT), but will work for all C_i < 1/(8kT). However, with very small values of C_i the additional thermostats become ineffective, as s_i is restricted to a value close to 1.
A. Hamiltonian splitting method
(k) The numerical methods used for the following experiments are based on a general Hamiltonian of the form H(q, p). The Nosé-chains method derived from this, with M thermostats based on the auxiliary function in Eq. (27), is then constructed, and a Poincaré time transformation of it gives a Nosé-Poincaré-chains method, where H_0 is chosen as the initial value of H_NC. In the equations of motion, s = (s_1, ..., s_M), p_s = (p_{s_1}, ..., p_{s_M}), and p = p̃/s_1. The thermostats introduce an implicit coupling into the equations of motion, but an explicit method can be formulated by splitting the Hamiltonian and the corresponding Liouville operator. For an odd number of thermostats, M, this can be reduced to three Hamiltonians, H_1, H_2, and H_3, by employing an even-odd splitting of the extended variables. Using a symmetric splitting of the Liouville operator to obtain a symplectic and time-reversible method, iL_H = {·, H} = {·, H_1} + {·, H_2} + {·, H_3} = iL_1 + iL_2 + iL_3. This splitting introduces an error of order Δt³ at each step in terms of the solution operator, giving a second-order method. The dynamics for H_1 and H_3 can be solved in a straightforward manner, as each s_i and p_{s_i} are decoupled, leaving H_2 to be solved either analytically or by using the generalized leapfrog algorithm. 6,11

B. Harmonic oscillator

(l) The harmonic oscillator is generally regarded as one of the hardest models to thermostat and as such is a good test for these methods. The Hamiltonian for the test system is the standard harmonic oscillator, giving the corresponding Nosé-Poincaré-chains method. To illustrate the importance of the correct selection of thermostatting parameters, and the improvements obtained using thermostatting chains, further experiments were carried out. When very small values are used for the C_i, the thermostat variables are forced to stay close to 1, preventing them from operating and producing distributions that would normally be expected from the standard Nosé or Nosé-Poincaré methods, without thermostatting chains, for the harmonic oscillator. With the parameters the same as in the above experiment, except Δt = 0.01, C_2 = 0.0008, C_3 = 0.0004, C_4 = 0.0002, C_5 = 0.0001, we obtained the results in Fig. 2.
As we can see from these examples, it is possible to choose thermostat masses to implement an efficient canonical sampling with the chain technique, while using symplectic integrators.
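The splitting idea itself can be illustrated generically: if each of H_1, H_2, and H_3 generates a flow that can be advanced exactly (or by the generalized leapfrog), a palindromic composition of those flows gives a time-reversible, second-order scheme. The sketch below composes three user-supplied flow maps symmetrically; the flow maps are placeholders, since the concrete sub-Hamiltonians depend on the even-odd grouping of the thermostat variables described above.

```python
def strang_step(state, dt, flow1, flow2, flow3):
    """Symmetric (Strang-type) composition of three exactly solvable sub-flows.
    Each flow_k(state, h) must advance the dynamics generated by H_k by time h.
    The local error is O(dt^3), so the overall method is second order."""
    state = flow1(state, dt / 2)
    state = flow2(state, dt / 2)
    state = flow3(state, dt)        # innermost flow takes the full step
    state = flow2(state, dt / 2)
    state = flow1(state, dt / 2)
    return state
```

Because each sub-flow is symplectic and the composition is palindromic, the combined map is symplectic and time reversible, which is the property the Nosé-Poincaré formulation is designed to preserve.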
C. Estimating thermostat masses
(m) Values for the thermostatting masses, Q_j, can be estimated by simplifying the equations of motion to decouple the thermostats; the resulting equations can then be linearized to evaluate their behavior near a point of equilibrium. When calculating the masses for Nosé-Hoover chains, 8 it is assumed that adjacent thermostats are slow in comparison to the thermostat of interest, so average values can be used for s_{j−1}, s_{j+1}, p_{s_{j−1}}, and p_{s_{j+1}}. Under these conditions, the analysis for the Nosé-Poincaré chains is similar to that provided by Nosé, 4 provided that we also consider the variation in s_1 to be slow, so that it can be replaced with its average value, for all but the first thermostat. For Nosé-Poincaré chains the equations of motion for s_j and p_{s_j} with 1 < j ≤ M are given by Eqs. (33) and (34).
Rearranging and differentiating (33), then substituting into (34), gives Eq. (35). We consider a fluctuation δs_j of s_j around an average ⟨s_j⟩, that is, s_j = ⟨s_j⟩ + δs_j (36). Linearizing (35) yields an equation (37) for δs_j. If the change in s_j is much faster than that of the rest of the system, then the change of the momentum can be ignored, as the constant temperature is maintained by s_j; Eq. (38) then follows, since a_j is chosen as ⟨s_j⟩, as discussed in Sec. III C (i). Substituting (38) into (37), expanding the left-hand side, and substituting ⟨s_j⟩ = a_j, ⟨s_{j+1}⟩ = a_{j+1}, s_1 ≈ ⟨s_1⟩, we obtain an equation of the form δs̈_j = −ω_j² δs_j (39), giving a self-oscillation frequency ω_j defined by Eq. (40). Since we normally choose a_j = 1 for j ≠ 1, (40) reduces to the more general form (41).
For the remaining thermostat variables, s_1 and p_{s_1}, the equations of motion are given by Eqs. (42) and (43). Following a similar procedure to that above gives a self-oscillation frequency, with a_2 = 1, of the form familiar from Nosé's paper. 4
D. Optimum thermostat masses
(n) To evaluate the relationship between the self-oscillation frequencies and the optimum choice of the Nosé and auxiliary masses, experiments were carried out to assess the deviation from the required distribution with varying masses. The experiments were based on the harmonic oscillator model with frequency 1.0, auxiliary-function coefficient C_2 = 0.08, a_2 = 1, and two thermostats: the original Nosé thermostat and one auxiliary thermostat. The initial conditions were chosen such that the average value of the Nosé variable s_1 was 1.0, and the results were taken after 5 000 000 steps with a step size of 0.005.
In the first experiment the auxiliary thermostat mass was chosen to have a self-oscillation frequency equal to that of the harmonic oscillator, and the Nosé thermostat mass was varied over a range of values, producing the results in Figs. 3 and 4.
Here Q_1 has been normalized so that 1.0 is the value given by Eq. (44), and Δ_D represents the mean-square difference between the actual and theoretical distributions. These results indicate that the optimum choice of Nosé mass is near, but below, the value implied by its self-oscillation frequency, which is consistent with the results obtained for the auxiliary heat bath method, 9 although good results are also obtained for smaller values.
For the second experiment the Nosé mass was fixed at half the value implied by its self-oscillation frequency, and the mass of the auxiliary thermostat was varied, giving the results in Fig. 5.
Here Q_2 has been normalized so that 1.0 is the value given by Eq. (41), and Δ_D is defined as above. From these results the optimum value for the auxiliary mass is around the value implied by its self-oscillation frequency, but good results are obtained over a large range of values.
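In practice, self-oscillation-frequency formulas of this kind are used by choosing a target frequency (typically the fastest physical frequency, here the oscillator frequency) and inverting for the mass. The sketch below uses the Martyna-Klein-Tuckerman-style prescription Q_1 ≈ N kT/ω² and Q_j ≈ kT/ω² as a stand-in; this is an assumed rule of thumb for illustration and is not necessarily identical to Eqs. (41) or (44).

```python
def thermostat_masses(omega, kT, n_dof, n_chain):
    """Starting guess for thermostat masses so that each thermostat's
    self-oscillation frequency is near the target frequency omega.
    Assumed MKT-style prescription: Q1 = n_dof*kT/omega**2, Qj = kT/omega**2."""
    q1 = n_dof * kT / omega ** 2
    return [q1] + [kT / omega ** 2] * (n_chain - 1)

# e.g. for the harmonic-oscillator tests (frequency 1.0, kT = 1, one particle):
print(thermostat_masses(omega=1.0, kT=1.0, n_dof=1, n_chain=2))  # -> [1.0, 1.0]
```

The experiments reported above suggest taking the Nosé mass somewhat below such an estimate, while the auxiliary-thermostat mass is comparatively forgiving.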
"Computer Science",
"Physics"
] |
Anisotropic power-law inflation for a conformal-violating Maxwell model
A set of power-law solutions of a conformal-violating Maxwell model with a non-standard scalar-vector coupling will be shown in this paper. In particular, we are interested in a coupling term of the form X^{2n} F^{μν} F_{μν}, with X denoting the kinetic term of the scalar field. Stability analysis indicates that the new set of anisotropic power-law solutions is unstable during the inflationary phase. The result is consistent with the cosmic no-hair conjecture. We show, however, that a set of stable slowly expanding solutions does exist for a small range of the parameters λ and n. Hence a small anisotropy can survive during the slowly expanding phase.
Introduction
Cosmic inflation [1][2][3] has been regarded as a main paradigm of modern cosmology. This follows from the success of cosmic inflation in resolving a number of cosmological problems, such as the horizon, flatness, and magnetic-monopole problems [1][2][3]. In addition, the standard inflation model, with our early universe assumed to be homogeneous and isotropic, has been important in accounting for many cosmic microwave background (CMB) observations, including those of the Wilkinson Microwave Anisotropy Probe (WMAP) [4,5] and Planck [6,7]. Some CMB anomalies were, however, found recently; for example, the hemispherical asymmetry and the cold spot have been detected by both WMAP and Planck. Anisotropic models have therefore been considered as a generalization of FLRW inflation [8]. Consequently, modifications of FLRW inflation are also necessary in order to accommodate the nature of the observed CMB anomalies. It turns out that one of the simplest modifications is to replace the FLRW metric by anisotropic but homogeneous Bianchi-type spacetimes [9,10]. Note that many predictions of anisotropic inflation were made before the mentioned anomalies were detected [11,12].
The Bianchi models have also been discussed extensively in providing evidence for the cosmic no-hair conjecture proposed by Hawking and his colleagues [13,14]. The cosmic no-hair conjecture postulates that the final-time state of our universe will be simply homogeneous and isotropic, regardless of the initial conditions and early-time states of the universe [13,14]. This conjecture has been a great challenge to physicists and cosmologists for several decades. Some partial proofs of this conjecture have been worked out. In particular, the conjecture has been proved theoretically for the Bianchi spaces by Wald using an energy-conditions approach [15].
Recently, some authors have tried to extend Wald's proof to a more general scenario, in which the spacetime of the universe is assumed to be inhomogeneous [16,17]. Note that some other interesting proofs of this cosmic no-hair conjecture can be found in Refs. [18][19][20]. Besides the theoretical approaches, a general test using Planck's data on the CMB temperature and polarization has provided observational constraints on the isotropy of the universe [21], by which we might see how precise the cosmic no-hair conjecture is.
The Kanno-Soda-Watanabe (KSW) model has been shown to admit a stable and attractive Bianchi type I inflationary solution [43,44] due to the existence of an unusual coupling term between the scalar field and the electromagnetic field, f²(φ)F^μν F_μν. More interestingly, non-canonical extensions of the KSW model, in which the scalar field takes a non-canonical form such as Dirac-Born-Infeld (DBI), supersymmetric Dirac-Born-Infeld (SDBI), and covariant Galileon forms, have also been shown to admit stable and attractive Bianchi type I inflationary solutions [49][50][51][52]. This indicates that the cosmic no-hair conjecture does not hold for the extended KSW models due to the presence of the coupling term f²(φ)F^μν F_μν.
Note that the KSW model can be regarded as a subclass of the conformal-violating Maxwell theory with an extended coupling term I (φ, R, X, . . .) F μν F μν .
Hence a close relation exists between the stable spatial anisotropy of inflationary universe and the broken conformal invariance. Indeed, the mechanism of the broken conformal invariance should induce both non-trivial magnetic fields and a stable spatial anisotropy of spacetime during an inflationary phase. One might also consider some subclasses of the conformal-violating Maxwell theory to see whether the cosmic no-hair conjecture breaks down. In particular, a model with a Ricci scalar non-minimally coupled to the electromagnetic field via a coupling Y (R)F μν F μν [91] has already been considered.
In this paper, we will focus on a possible coupling term with I = J²(X) an algebraic function of the scalar field kinetic term X ≡ −∂_μ φ ∂^μ φ/2. Note that J is considered an arbitrary function of X. As a result, we will show that a set of Bianchi type I expanding power-law solutions exists. It will also be shown that these solutions are unstable during the inflationary phase, consistent with the cosmic no-hair conjecture in this model. On the other hand, we will also show that stable solutions do exist during the slowly expanding phase.
This paper will be organized as follows: (i) A brief review and the motivation of this research have been given in Sect. 1. (ii) A complete setup of the proposed model will be presented in Sect. 2. (iii) A set of Bianchi type I power-law inflationary solutions and its stability will be solved and discussed in Sect. 3 and Sect. 4, respectively. (iv) Finally, concluding remarks will be given in Sect. 5.
Conformal-violating Maxwell model
A conformal-violating Maxwell theory can be described by the general action given in Refs. [82][83][84], with F_μν ≡ ∂_μ A_ν − ∂_ν A_μ the field strength of the vector field A_μ and I(φ, R, X, ...) a function of any field of interest. For example, models with I = I(φ) [81], I = I(R, R_μν, R_μνλκ) [79,80,[91][92][93][94][95][96]], I = I(A²) [97], I = I(G) [98,99], and I = I((k_F)_αβμν) [100,101] have been considered extensively in the literature. Note that the Planck mass M_p has been set to one for convenience. In this paper, we will study a conformal-violating model with I = J²(X). As a result, the field equations can be derived, where J_X ≡ ∂_X J denotes differentiation of J with respect to its argument X. In addition, the Einstein tensor is defined as G_μν ≡ R_μν − (1/2) R g_μν. Similar to Refs. [43,44,[47][48][49][50][51][52]], we will focus on the Bianchi type I metric, and the scalar field φ and the vector field A_μ will be chosen to be homogeneous, φ = φ(t) and A_μ = A_μ(t).
As a result, Eq. (2.3) can be integrated directly, with p_A the constant of integration [43,44]. Hence we can rewrite Eq. (2.4) accordingly. In addition, the non-vanishing components of the Einstein equation (2.5) can then be written down.
Anisotropic power-law solutions
We will try to find a set of power-law solutions for the proposed model (2.1) in this section. In particular, we would like to obtain a new set of power-law analytic solutions of the form used in Refs. [43,44,[47][48][49][50][51][52]]. Note that this set of power-law ansatz is the key to obtaining a consistent set of power-law solutions. For simplicity, we will focus on an exponential scalar potential along with a power-law kinetic function, where V_0, J_0, φ_0, and λ are constants specifying the boundary information of the potential and kinetic functions. In addition, the constants λ and n are assumed to be positive. For convenience, we will also introduce a set of new variables. As a result, we can derive a set of algebraic equations from the field equations, along with a number of constraints. The constraint (3.11) indicates that n behaves similarly to the ratio ρ/λ discussed in the KSW model in Refs. [43,44]. This is, however, the only similarity between the KSW model [43,44] and our proposed model; indeed, it turns out that the field equations in this paper do not coincide with the field equations of the KSW model. With the constraint (3.10), the variable v can be expressed in terms of the other variables. Note that both variables, u and v, are assumed to be positive. Given η as shown in Eq. (3.11), Eqs. (3.8) and (3.9) can then be solved. With η, u, and v defined above, the scalar field equation (3.6) and the Friedmann equation (3.7) can be reduced to Eqs. (3.15) and (3.16), respectively. It turns out that both Eqs. (3.15) and (3.16) lead to the non-trivial solutions ζ_± given in Eqs. (3.17) and (3.18), which involve the combination λ²(36n⁴ + 36n³ + 5n² − 2n + 1) + 32n. Since n is assumed to be positive and ζ > η, ζ₊ can be shown to be the only consistent solution (3.19). In addition, the variable η takes the form of Eq. (3.20). It is clear that η < ζ = ζ₊, as expected. Following Refs. [43,44], the anisotropy parameter Σ/H can then be written down. The positivity of v implies that η > 0, with the help of Eq. (3.9), provided that ζ > 1/3. Hence, the positivity of η leads to a constraint on n, with the help of Eq. (3.20). Note also that, for an anisotropically expanding solution, the constraints ζ + η > 0 and ζ − 2η > 0 need to be satisfied. It is apparent that the constraint ζ + η > 0 holds for positive n. In addition, the constraint ζ − 2η > 0 leads to the inequality (3.23). This inequality holds for all n ≥ (√5 − 1)/4; hence, expanding solutions exist for the canonical model with n = 1. For 0 < n < (√5 − 1)/4, the inequality will depend, however, on the value of λ. Note also that the anisotropy parameter is positive definite if the inequality (3.23) holds. Finally, with the help of Eq. (3.13), the positivity of u leads to the inequality (3.24), or an equivalent form. Consequently, a constraint on η, Eq. (3.26), can be obtained from Eq. (3.11). It appears that corresponding constraints on n follow from these inequalities; consequently, the field variables can be approximated accordingly in this regime.
Stability analysis of anisotropic solutions
In this section, we will show that the new set of power-law solutions is unstable during the inflationary phase. On the other hand, this new set of power-law solutions is stable during the slowly expanding phase for a limited λ-n domain.
Inflationary phase
In order to understand the stability of the set of power-law solutions, we will consider power-law perturbations of the field equations with δα = A_α t^m, δσ = A_σ t^m, and δφ = A_φ t^m [47][48][49][50][51][52]. Perturbing Eqs. (2.8), (2.10), and (2.11) leads to a set of algebraic equations that can be cast as a matrix equation, with coefficients a_i (i = 1-6).
We can show that a_6 < 0 with the help of the inequality (3.22). In addition, the leading term of a_1 can be worked out for n ≫ 1; hence the inequality (3.22) implies that a_1 > 0 for n ≫ 1. This implies that the anisotropic power-law solution of the J²(X)F² model is indeed unstable during the inflationary phase, in contrast to the results of the KSW model. The result is, however, consistent with the cosmic no-hair conjecture. It shows that the non-trivial coupling I(φ, X, ...)F^μν F_μν is capable of inducing a small spatial anisotropy for the evolution of our current universe.
Note that the stability of the anisotropy depends on the choice of the function I coupled to the U(1) gauge field. In particular, the anisotropy will survive inflation if I is chosen to be a canonical function of a scalar field, as shown in the KSW model [43,44,[49][50][51][52]]. On the other hand, if I is chosen as a function of the kinetic term of the scalar field, the anisotropy is unstable during the inflationary phase. As a result, the power-law solutions show that our universe acts in favor of forming an isotropic (de Sitter) space, consistent with what the cosmic no-hair conjecture predicts.
Slowly expanding phase
Even though the power-law solution is unstable during the inflationary phase, it is interesting to ask whether it is also unstable during the slowly expanding phase. Indeed, we will show that the new set of power-law solutions is stable during the slowly expanding phase for a small λ-n domain. Note that ζ + η and ζ − 2η are of order one, O(1), for the slowly expanding phase. Note again that no unstable solution will exist if all coefficients a_i listed above are positive (or negative) definite. In fact, the equation f(m) = 0 admits no positive root for a wider range of λ and n.
Indeed, we will plot a λ-n domain numerically in which all real parts of the five roots of f (m) = 0 are non-positive, i.e., Re(m i ) ≤ 0, during the slowly expanding phase. The result is shown in Fig. 1.
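Numerically, this scan reduces to a root-finding exercise: for each (λ, n) pair, build the quintic f(m), compute its roots, and keep the point if every root has a non-positive real part. A minimal sketch follows; char_poly_coeffs is a placeholder for the coefficients assembled from a_1-a_6 of the perturbation matrix, and the grid ranges are arbitrary.

```python
import numpy as np

def char_poly_coeffs(lam, n):
    """Placeholder: return the coefficients of f(m), highest degree first,
    assembled from the perturbation-matrix entries a1..a6 for given (lambda, n)."""
    raise NotImplementedError

def stable_domain(lams, ns, tol=1e-9):
    """Mark (lambda, n) grid points where all roots of f(m) = 0 satisfy Re(m) <= 0,
    i.e. the criterion used to delimit the stable slowly-expanding region."""
    stable = np.zeros((len(lams), len(ns)), dtype=bool)
    for i, lam in enumerate(lams):
        for j, n in enumerate(ns):
            roots = np.roots(char_poly_coeffs(lam, n))
            stable[i, j] = bool(np.all(roots.real <= tol))
    return stable
```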
According to the result shown in Fig. 1, this allowed domain is quite limited. In particular, the range of λ is much wider for small 0 < n ≤ 1. Here we have imposed ζ + η > 0, ζ − 2η > 0, u > 0, and v > 0 for expanding solutions. Note again that we only have unstable anisotropic inflationary solutions for large n, as proved earlier in this section.
In addition, we would like to see whether the new set of slowly expanding solutions is indeed a set of attractive solutions for the parameters λ and n chosen within the red region in Fig. 1. Following Refs. [43,44,[49][50][51][52]], we will introduce the dynamical variables defined in Eq. (4.14). As a result, a set of autonomous equations for the dynamical system can be derived from the field equations.
Note that we have used the Hamiltonian constraint equation (2.9) and Eq. (4.18) in order to derive the above equations. Note that the anisotropic fixed points are solutions of the autonomous equations dX̂/dα = dŶ/dα = dẐ/dα = 0. As a result, we can obtain, from the equations dŶ/dα = dẐ/dα = 0, a relation among the variables. The equation dX̂/dα = 0 can thus be reduced to Eq. (4.22), or an equivalent form. Consequently, the non-trivial solutions X̂_± can be found. It is apparent that X̂_− is identical to the ratio Σ/H of the anisotropic power-law solution found in the previous section. In other words, X̂_− is exactly the anisotropic fixed point equivalent to the anisotropic power-law solution we found earlier.
Guided by the allowed region of λ and n for the existence of stable expanding solutions shown above, we can, for example, examine the attractor behavior of the anisotropic fixed point for n = 1 (corresponding to the canonical kinetic term) and λ = 1.25. Note that for this choice the scale-factor exponents will be ζ ≈ 1.5 and η ≈ 0.015ζ. The result is shown in Fig. 2.
As a result, the anisotropic fixed point can be shown to be an attractor of the dynamical system, as expected. This result shows that the J²(X)F² model does admit stable and attractive anisotropic expanding solutions, even though this model does not admit any stable and attractive anisotropic inflationary solution. Nevertheless, the cosmic no-hair conjecture is indeed violated in the J²(X)F² model: a small anisotropy will still be sustained during the slowly expanding phase and leads to a small anisotropy observed today.
Conclusions
Inflation has been considered a main paradigm of current cosmology. It is successful in solving some fundamental questions in cosmology, and its predictions are consistent with the observations of the cosmic microwave background radiation. In addition, the cosmic no-hair conjecture explains the physical origin of the highly isotropic universe. This conjecture is based on the belief that vacuum-energy dominance would erase any classical hair. The conformal-violating Maxwell model [79][80][81][82][83][84][85][86][87][88][89][90] proposed in Refs. [43,44] provides a counter-example to the cosmic no-hair theorem. Hence we proposed to study a conformal-violating Maxwell model with a coupling term J²(X)F² = J_0² X^{2n} F². In contrast to the results found in [43,44,[49][50][51][52]], we found that the new model does not admit any stable and attractive Bianchi type I inflationary solution. By a careful stability analysis, we have found, however, that the proposed model does admit stable and attractive Bianchi type I expanding solutions in the slowly expanding phase. Hence this model provides another counter-example to the cosmic no-hair conjecture, with a tiny spatial hair. The results shown in this paper suggest that the correlation between spatial anisotropy of the universe and broken conformal invariance deserves more attention.
Moreover, a small anisotropy is required to explain the origin of the large-scale galactic electromagnetic fields in the present universe. It would be interesting to examine the validity of the cosmic no-hair conjecture for some other possible coupling terms in the conformal-violating theory, for example those models that have been discussed recently, e.g., I = I(R, R_μν, R_μνλκ) [79,80,[91][92][93][94][95][96]], I = I(A²) [97], I = I(G) [98,99], and I = I((k_F)_αβμν) [100,101]. The results shown in this paper, which focus on the late-time evolution, offer a new approach to the selection rules for compatible models. Hopefully, the results shown here can be helpful to the study of the cosmic evolution of our universe.
Note added: The appearance of Ref. [108] came to our attention while we were finalizing our paper. Motivated by Refs. [63,[109][110][111]], Ref. [108] proposes an extension of the KSW model with f(φ) → f(φ, X), similar to ours. In particular, Ref. [108] points out that no small-anisotropy inflationary solutions exist for f(X) ∝ X^{−n}, in contrast to our model with f(X) ∝ X^n.
"Physics"
] |
Determinants of Sexual Network Structure and Their Impact on Cumulative Network Measures
There are four major quantities that are measured in sexual behavior surveys that are thought to be especially relevant for the performance of sexual network models in terms of disease transmission. These are (i) the cumulative distribution of lifetime number of partners, (ii) the distribution of partnership durations, (iii) the distribution of gap lengths between partnerships, and (iv) the number of recent partners. Fitting a network model to these quantities as measured in sexual behavior surveys is expected to result in a good description of Chlamydia trachomatis transmission in terms of the heterogeneity of the distribution of infection in the population. Here we present a simulation model of a sexual contact network, in which we explored the role of behavioral heterogeneity of simulated individuals on the ability of the model to reproduce population-level sexual survey data from the Netherlands and UK. We find that a high level of heterogeneity in the ability of individuals to acquire and maintain (additional) partners strongly facilitates the ability of the model to accurately simulate the powerlaw-like distribution of the lifetime number of partners, and the age at which these partnerships were accumulated, as surveyed in actual sexual contact networks. Other sexual network features, such as the gap length between partnerships and the partnership duration, could–at the current level of detail of sexual survey data against which they were compared–be accurately modeled by a constant value (for transitional concurrency) and by exponential distributions (for partnership duration). Furthermore, we observe that epidemiological measures on disease prevalence in survey data can be used as a powerful tool for building accurate sexual contact networks, as these measures provide information on the level of mixing between individuals of different levels of sexual activity in the population, a parameter that is hard to acquire through surveying individuals.
Introduction
The transmission dynamics and epidemiology of sexually transmitted infections (STI) are shaped by the sexual network through which they propagate. Sexual networks are characterized by their dynamic nature and large heterogeneity that reflects the diversity of human sexual behavior. For a long time now, mathematical modelers have attempted to describe the essential features of that behavior and the resulting networks in various types of models in order to understand the infection dynamics and possible impact of STI interventions [1][2][3][4][5][6]. It has remained extremely challenging to take all aspects that determine the structure of a sexual network into account in a comprehensive way, while at the same time not cluttering models with too much detail that makes them difficult to handle. One approach has been the design of individual-based simulation models that follow certain algorithmic rules to describe the formation and dissolution of partnerships explicitly [7][8][9][10][11]. This approach has the advantage over models that assume a mass-action style of mixing between individuals that partnerships of different durations are explicitly present in the model, and thus that important aspects of Chlamydia trachomatis (Ct) transmission dynamics, such as the duration of the gap/overlap between sequential partnerships [12][13][14] and re-infection events between partners [15,16], are not ignored. Individual-based models have proven to be a flexible and useful tool in this regard.
In designing an individual-based model, decisions have to be taken on how to implement partnership formation and dissolution and intervention strategies in terms of simple rules that can be coded into a computer program. While striving for parsimony and simplicity in order to be able to understand the resulting dynamics, one also wants to capture the essential features of human sexual behavior that impact the transmission dynamics and intervention effectiveness of STI. In doing so, one usually validates the model using aggregate data for some quantities describing sexual behavior, such as the lifetime number of partners or partnership duration. One can analyze how well models are able to reflect those summary measures of sexual behavior and consequently the distribution of STI prevalence in a population [11]. However, it is also known that the macrostructure of an individual-based model does not uniquely determine the microstructure of a sexual network [17], and consequently models with similar macrostructures can lead to different results about intervention impact [18].
There are four major quantities that are routinely measured in sexual behavior surveys that are thought to be especially relevant for the performance of sexual networks in terms of disease transmission. These are (i) the cumulative distribution of lifetime number of partners [19,20], (ii) the distribution of partnership durations [14,20], (iii) the distribution of gap lengths between partnerships [14,20,21], and (iv) the distribution of the number of recent partners [11,22]. Fitting a network model to these quantities as measured in sexual behavior surveys is expected to result in a good description of Ct transmission in terms of the heterogeneity of the distribution of the infection within a population [11].
In this paper we investigate how these population-level summary measures of sexual activity relate to the underlying sexual behaviors of the population and their heterogeneity on the individual level. We use an individual-based simulation model, in which pair formation and separation are described as a dynamic process. The model is based on an earlier model of Kretzschmar et al. [7], but has been extensively restructured to accommodate more detail and heterogeneity in individual behaviour. Central to the model implementation is the function that describes the number of partnerships that an individual can simultaneously maintain during different periods of his/her life. This changing ''capacity'' of individuals controls their onset of sexual availability, as well as their propensity for acquiring concurrent partnerships. In addition, we demonstrate how including heterogeneity of sexual behavior on the individual level improved the performance of the model in describing population-level measures of sexual behavior.
Results
The sexual network model presented here consists of a heterosexual population of approximately 50,000 individuals, uniformly distributed over the ages 13 to 64. Connections in the network represent sexual partnerships between individuals. The network is dynamic: partnerships are continuously formed and dissolved. The model keeps the population size constant over the 40 years of simulation by adding young individuals to the network at age 13, as old individuals retire from the network (by no longer forming new partnerships) when reaching age 65. Individuals in the model are defined by their date of birth, gender, their current partnership(s), the maximum number of partners that they can concurrently maintain (i.e., their ''capacity''), and their Chlamydia trachomatis infection status. In this model we studied how individual heterogeneity influences population-level summary measures of the sexual contact network.
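The individual-level state just described maps naturally onto a small record type. The sketch below is an illustrative representation of one simulated individual, not the authors' code; the field and method names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Individual:
    """One member of the simulated heterosexual network (illustrative fields only)."""
    birth_date: float                 # used to derive current age (13-64 are sexually active)
    is_female: bool
    capacity: int                     # maximum number of concurrent partnerships right now
    partner_ids: List[int] = field(default_factory=list)  # currently ongoing partnerships
    ct_infected: bool = False         # Chlamydia trachomatis status

    def can_form_partnership(self) -> bool:
        # a new partnership may only form while spare capacity is available
        return len(self.partner_ids) < self.capacity
```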
Cumulative lifetime number of partners
The large heterogeneity of individuals in their sexual behaviour is perhaps most apparent in the distribution of the number of partners that individuals in a population have had partnerships with. This lifetime number of partnerships ranges from 1 to 600-1000 partners for sexually active individuals [11,19,20], and is commonly presented as a cumulative lifetime number of partnerships (CLNP) plot. One of the remarkable features of the CLNP is that the higher end of its distribution (∼20 partners and more) has a powerlaw-like distribution [19] (Fig. 1).
Whether simulation models can accurately reproduce the heterogeneity in the lifetime number of partners depends on their implementation of the ''core group'': a label introduced by Yorke et. al. in 1978 [23,24] for sexually highly active individuals that have many, possibly concurrent partnerships in a short period of time. Many models struggle with reproducing the powerlaw-like distribution of the CLNP, and typically end up with a distribution that reflects the two or three levels of sexual activity defined in these models (e.g., moderate, intermediate, and core-group individuals [7,[9][10][11]).
We concluded that behavioural heterogeneity in sexually highly active individuals is underestimated in models that distinguish a limited number of sexual activity levels, even if there is stochastic variation among individuals. Rather than using a limited number of sexual activity levels, we correlated the capacity for concurrent partnerships in core-group individuals with two pre-determined characteristics of high-risk behaviour, namely the onset and the duration of the core-group period for an individual (Table 1). The earlier the onset, and the longer the duration, the higher would be their maximum number of concurrent partnerships (see methods for details). Furthermore, we controlled the total number of partnerships within the moderate group and within the core-group separately (Table 2).
Author Summary
Although many diseases spread so easily between humans that someone could be infected by any of his or her daily social contacts, such is not the case for sexually transmitted diseases. Most of us have a very limited number of concurrently ongoing sexual partnerships, and thus the contact network over which sexually transmitted diseases spread tends to be very sparsely connected. The exact structure of these sexual networks plays an important role in how easy and fast sexually transmitted diseases spread through a population, and how effective various health care interventions will be. In this paper we use a simulation model to understand how the collective sexual behaviour of individuals relates to the summary measures of network structure (such as ''lifetime number of partners'', and ''duration of previous partnership'') that are typically used to build models of disease transmission over sexual networks. Based on our understanding of this relationship, we simulated sexual networks which have summary measures of network structure that are very similar to that of a real sexual network. Using these networks in disease transmission models will increase our ability to predict the effectiveness of health care interventions.
This allowed us to independently regulate how close the number of partnerships within each group would be to the maximum number of partnerships that that group could maintain (i.e. the sum of their capacities). For example, a high number of partnerships in the core-group, relative to the sum of their capacities, would result in a high lifetime number of partnerships per core-group member without affecting the rate at which moderate individuals acquired new partners. These adjustments gave us considerable control over the shape of the CLNP plot (Fig. 1), and resulted in a powerlaw-like distribution of the CLNP that is similar to that of real sexual contact networks [11,19,20].
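As a quick illustration of why a handful of discrete activity classes cannot reproduce the tail in Fig. 1, one can compare the complementary cumulative distribution of lifetime partner counts generated by a three-level model with one generated by a heavy-tailed spread of individual rates. The sketch below is purely illustrative; the distributions and parameter values are assumptions, not the model's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# (a) three fixed activity levels: most people low, a few percent "core group"
levels = rng.choice([2, 10, 60], size=n, p=[0.90, 0.08, 0.02])
three_level_counts = rng.poisson(levels)

# (b) heterogeneous individual rates with a heavy-tailed (lognormal) spread
heavy_tail_counts = rng.poisson(rng.lognormal(mean=0.7, sigma=1.1, size=n))

def ccdf(x):
    """Return sorted values and the fraction of individuals exceeding each value."""
    x = np.sort(x)
    return x, 1.0 - np.arange(1, len(x) + 1) / len(x)

# Plotting both CCDFs on log-log axes shows (a) producing visible steps at the
# three activity levels, while (b) decays smoothly and powerlaw-like in the tail.
```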
Number of recent partners
The current model accurately replicates the CLNP observed in real sexual contact networks. However, similar CLNP distributions can be the result of very different age distributions at which the individuals acquire most of their lifetime number of partners [17]. This was recently demonstrated in a comparison study of three models of sexual contact networks [11]. Because the age at which individuals are exposed to many sexual partners is also the age at which they are most vulnerable for contracting, as well as most efficient at transmitting STIs [11,22], it is necessary for sexual network models to accurately simulate the distribution of recent number of partners for different age groups.
In the model described by Kretzschmar et al. in 1996 [7], all individuals were available for sexual partnership from the moment they entered the population at age 15, and 5% of these individuals were labeled as core-group members, remaining so until the age of 35. The homogeneous age of sexual debut and onset of core-group behaviour of individuals results in a premature age of sexual debut (Fig. 2, gray line), and an overestimation of the mean number of recent partners in the population younger than 35 (Fig. 3, gray line). By adding heterogeneity in the age at which individuals become available for sexual partnerships, and (if applicable) in the onset and duration of their core-group period (Table 1), the current model could accurately match the observed mean number of recent partners per age group (Fig. 3, orange line). As the level of sexual activity is such an important indicator of STD risk, we further stratified the recent sexual activity of the Dutch population into the fraction of the population with 1+, 2+, 3+ or 6+ partners in the last half year, as well as by age and gender (Fig. 4). We found that the current model could match many of the features of this more detailed measure of recent sexual activity of the Dutch population. Furthermore, at this level of detail, the recent number of sexual partners appears to be sensitive to most of the model parameters. It can therefore serve as an excellent measure to validate a sexual network model on, but at the same time it is difficult to interpret how it depends on these model parameters. What follows is a discussion of how different parts of Fig. 4 relate to the underlying model mechanics.
The fit of the model for individuals with 1+ partnership is predominantly determined by the difference between the total number of partnerships that the model maintains in the moderate group of individuals (Table 2), and the maximum number of partnerships that this group can maintain, given their capacity. If the difference between the two is small, individuals will quickly find partners, and few individuals will have been without a partner in the last 6 months. A second important factor that influences the fit for individuals with 1+ as well as for those with 2+ partnerships is the duration and relative frequency of different partnership types. The model defines three types of partnerships (short-, medium-, and long-term), all of which have exponentially distributed durations, but with different average lengths of a partnership (see next section, and methods). If, for example, the relative frequency of short-term partnerships is increased, more of the moderate individuals can acquire 1, 2 or even 3+ partnerships in 6 months. Finally, transitional concurrency [25] is a third factor that influences the recent number of partners of moderate individuals. Transitional concurrency is the overlap between the end of one partnership and the beginning of the next. Transitional concurrency is implemented in the model by defining the cost of a partnership as 0 as it nears the end of its duration, thus freeing up ''capacity'' for the individual (see section on gap length, and methods). If transitional concurrency is set to become possible earlier during partnerships, it will increase the number of moderate individuals that have had 2+ recent partners, but also increases the number of moderate individuals that had no partnerships in the last 6 months.

Figure 4. Number of partners in the last 6 months, stratified by age, gender, and sexual activity. Panel A shows the number of recent partners for men, and panel B for women. Stratification of the number of partners by sexual activity gives a detailed insight into many aspects of a sexual network's structure. The percentage of people with 1+ partners is given on the right-hand axis, and the percentage for 2+, 3+ and 6+ partners is given on the left-hand axis. doi:10.1371/journal.pcbi.1002470.g004
The group of 3+ recent partnerships (Fig. 4, green lines) past age 30 is predominantly formed by a fraction of the former core-group members that continue to keep an increased capacity (~2) after their main period of core-group behaviour (Table 1). Many moderate individuals will by chance be locked into a long-term partnership at that age, and thus have few opportunities to accumulate recent partners. The group with 6+ recent partnerships is almost exclusively defined by the characteristics of the core-group, i.e. its overall size, the age at which core-group behaviour starts, its duration, and how many partnerships the model maintains within the core-group compared to the sum of the capacities of that group (Tables 1 and 2).
Partnership duration
Partnership duration has been recorded in limited detail and only for the current partner in the Dutch sexual survey (Fig. 5), and thus predominantly reflects the duration of long-term partnerships in the Netherlands. Therefore, we also use the more detailed Natsal 2000 UK survey data [11,26] to fit the model, which also includes information on the duration of previous partnerships. From the UK survey data it appears that there are three typical durations of partnerships (short-, medium- and long-term, Fig. 5 dashed black line), each of which is exponentially distributed (Table 2).
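A three-component exponential mixture of this kind is straightforward to sample. The sketch below draws partnership durations from such a mixture; the mean durations and mixture weights are placeholders, not the values in Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_duration(weights=(0.55, 0.30, 0.15), mean_days=(60, 400, 3000)):
    """Draw one partnership duration (in days) from a mixture of three exponentials
    representing short-, medium- and long-term partnerships (illustrative parameters)."""
    k = rng.choice(3, p=list(weights))
    return rng.exponential(mean_days[k])

durations = [sample_duration() for _ in range(10_000)]
print(np.mean(durations))
```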
As mentioned in the previous section, partnership duration (or, put more precisely, the relative frequency of long-term partnerships) predominantly affects the number of recent partners of the moderate population; a high relative frequency of long-term partnerships means that many moderate individuals will have had only 1 sexual contact in the last 6 months. Partnership duration has less effect on the number of recent partners of core-group individuals, because core-group members have a capacity of ≥3, and are limited to a maximum of 2 concurrent long-term partnerships (see methods). Therefore, core-group members always have capacity available for at least one short- or medium-term partnership.
In the model there are no individual sexual behaviours that directly affect the duration of partnerships: durations are determined when a partnership forms, and are independent of the further actions of the individuals involved (such as concurrency). One small exception is that individuals that stop being core-group members have to reduce their current number of concurrent partnerships to match their new maximum capacity (see methods). We conclude that partnership durations shape the sexual network relatively independently of the distribution of the number of partnerships of individuals in the population.
Gap length
The time between sequential partnerships (i.e., the gap length [21]) is an important factor related to Ct prevalence [12][13][14]: a negative gap length (i.e., an overlap in partnerships) signals the amount of concurrency in the population; in contrast to serial monogamy, concurrent partnerships provide a two-way channel for Ct to spread, and not just from a previous partner to a new partner. In contrast, a positive gap length indicates the chance that an individual has had to spontaneously clear Ct prior to engaging in a new partnership [13]. The precise definition of gap length is the ''time between the end of the second most recent partnership, and the start of the most recent partnership'', in which the order of recentness is determined by the last date that partnerships were still ongoing. Where two or more partnerships are tied for recentness, their order is randomly determined.
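This definition translates directly into code once each individual's partnership history is stored as (start, end) intervals. A minimal sketch, assuming such a history list per individual:

```python
import random

def gap_length(history, rng=random):
    """Gap between the two most recent partnerships of one individual.

    history: list of (start, end) times. Recentness is ranked by the last date a
    partnership was still ongoing (its end time); ties are broken at random, as in
    the definition above. Negative values indicate overlap (concurrency); returns
    None if fewer than two partnerships exist.
    """
    if len(history) < 2:
        return None
    ranked = sorted(history, key=lambda se: (se[1], rng.random()), reverse=True)
    most_recent, second_most_recent = ranked[0], ranked[1]
    return most_recent[0] - second_most_recent[1]

print(gap_length([(0, 100), (90, 200)]))   # -> -10, a 10-day overlap
```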
The Dutch sexual survey data has insufficient information to reconstruct gap lengths, so the current model was fitted to UK survey data on gap length [11,20] (Fig. 6), and subsequently adjusted so as to fit Dutch survey data on transitional concurrency in recently started (<6 months) steady partnerships (Fig. 7), where ''steady'' was interpreted as being either a medium-term or a long-term partnership.
About 3% of the UK population reports a negative gap length of more than 500 days between the last two partnerships (Fig. 6, black line), and up to 35% of the population has had overlap between his/her last two partnerships. These percentages indicate that a model of a sexual contact network should allow concurrency of long-term partnerships (to see overlaps of 500+ days), and should not limit the occurrence of concurrent partnerships to the small core-group of the population, as was the case in the Kretzschmar model. The current model allows transitional concurrency [25] during the last 15% of a partnership for men, and the last 7.5% for women (Fig. 6, orange line). The positive gap length in the UK is relatively short compared to the duration of an untreated Ct infection: only 15% of the population takes more than 1 year to find a new partner. In the model, the positive gap length is determined by the chance of individuals to participate in the pair-formation process for each timestep (Table 2), and by the total number of partnerships that the model tries to maintain: if either is too low, the time between partnerships (i.e. positive gap size) increases. Positive gap length is also affected by transitional concurrency, as an increase in concurrency increases the fraction of the population that is available for new partnerships, and thus increases the competition for those that are single and seeking a new partnership. The result is that the average positive gap length becomes larger with higher levels of transitional concurrency.
Ct prevalence distribution
The distribution of Ct prevalence in the population, stratified by level of sexual activity, was introduced as a new summary measure by Althaus et al. [11] that combines sexual behaviour with epidemiological data. As expected, the Ct prevalence within a stratum rises as the level of sexual activity of its individuals rises from 0 to 3 partners in the last year (Fig. 8). Remarkable, however, is that in the UK survey data the Ct prevalence in the subsequent strata (4 and 5+ partners) drops again to prevalence levels halfway between those of the strata of 2 and 3 partners per year. Neither the current model nor earlier sexual contact network models [11] is capable of reproducing this observation or of shedding light on the mechanism behind it. Two possible mechanisms are explored in Supporting Text S1, namely 1) that prolonged and frequent exposure to Ct could result in a protective immune response [27][28][29][30], and thus reduce the Ct prevalence in those population groups that are most likely to have experienced a prolonged infection, and 2) that, due to coital dilution, that is, the decrease in coital frequency when maintaining concurrent partnerships, those with concurrent partnerships will have a reduced chance of acquiring Ct [31][32][33].
A second important feature of the distribution of Ct prevalence in the population is that it provides information about the amount of mixing between moderate and core-group individuals in the population. The level of assortative mixing is an important measure for disease spread [13,24], but is difficult to measure in surveys, as the surveyed individuals need to have accurate knowledge about the life history of their (possibly short-term) partners. Therefore, models of sexual contact networks typically rely on assumptions on the level of mixing [7,9,11]. However, as Ct becomes more concentrated in individuals with a high level of sexual activity [2,13,23] (Fig. 8) in strongly assortative populations, and less so in more well-mixed populations, one can use the difference between the average Ct prevalence of the population and the Ct prevalence of, for example, the group of individuals with 1 partnership in the last year, as an observation against which to fit the amount of assortative mixing in the model.

Figure 7. Transitional concurrency in partnerships that are less than 6 months old. The model (solid lines) matches Dutch sexual survey data (dashed lines) well when transitional concurrency is possible during the last 15% of a partnership (from a man's perspective), or the last 7.5% of a partnership from a woman's perspective. The y-axis shows the percentage of individuals with a medium- or long-term partnership that is less than 6 months old, which had a period of transitional concurrency during this partnership. doi:10.1371/journal.pcbi.1002470.g007
The level of mixing by sexual activity in the model (Table 2) that describes the UK survey data well is a situation in which 87% of the partnerships of core-group members are with moderate members. If we define core-group members as those individuals with 5+ partnerships in the last 12 months, the model results are comparable to earlier investigations of mixing by sexual activity [34,35], and the amount of mixing in the model can be characterized by a moderate assortativity coefficient of 0.25 [35].
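To make the link between such a mixing pattern and an assortativity coefficient concrete, the following Python sketch applies Newman's mixing-matrix formula r = (sum_i e_ii - sum_i a_i b_i) / (1 - sum_i a_i b_i) to a two-group (moderate/core-group) mixing matrix. The matrix entries below are purely illustrative and are not the values used in the model; in the model itself the matrix would be tallied directly from the simulated partnership network.

import numpy as np

def assortativity_coefficient(e):
    """Newman's assortativity coefficient for a mixing matrix e, where
    e[i, j] is the fraction of all partnerships linking a group-i
    individual to a group-j individual."""
    e = np.asarray(e, dtype=float)
    e = e / e.sum()                      # normalise to fractions
    a = e.sum(axis=1)                    # row marginals
    b = e.sum(axis=0)                    # column marginals
    return (np.trace(e) - (a * b).sum()) / (1.0 - (a * b).sum())

# Illustrative matrix: rows/columns are (moderate, core-group).
e = [[0.80, 0.09],
     [0.09, 0.02]]
print(round(assortativity_coefficient(e), 2))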
Discussion
In this paper we investigated how four population-level summary measures of sexual activity are related to the underlying sexual behaviours of individuals, and to the heterogeneity in behaviour at the individual level. We showed that the "recent number of sexual partners" summary measure is very sensitive to many of the sexual characteristics of simulated individuals, whereas "gap length" and "partnership duration" are predominantly defined by homogeneous traits in the model (i.e. the amount of transitional concurrency, and the relative frequency and duration of short, medium and long-term partnerships). Heterogeneity of sexual behaviour in individuals, and especially the heterogeneity in sexual capacity of core-group individuals, was found to play a large role in the summary measure "cumulative lifetime number of partners", and to be instrumental in recreating the measure's power-law-like distribution [19]. We therefore conclude that extensive heterogeneity in the behaviour of individuals in terms of acquiring partners is likely to play an important role in the network structure of real sexual networks. Whether heterogeneity in individual behaviour plays a similar role for the summary measures of gap length and partnership duration remains undecided: we find that, given the level of detail with which we studied these two summary measures, we could accurately recreate them in simulation models using homogeneous descriptions of sexual behaviour. However, a more detailed study of these summary measures is necessary (for example by stratifying by age and gender, or by studying them in terms of the life history of individuals) to conclude whether heterogeneity in transitional concurrency and in partnership duration plays an important role in the structure of sexual networks.
The summary measure introduced by Althaus et al. [11] that describes the distribution of Ct prevalence in the population highlights our incomplete understanding of disease transmission through sexual networks. We currently do not know whether the lower than expected Ct prevalence in individuals with the highest levels of sexual activity [11] is a feature of the structure of real sexual networks, or whether it is related to some form of protective immunity upon frequent exposure to, but not necessarily infection with, Ct [30]. The summary measure has an additional important quality: the distribution of Ct prevalence in a population can be used to inform sexual network models on the degree of mixing between individuals of different levels of sexual activity. This parameter is known to be very important for the structure of sexual contact networks [13,35], but is unfortunately not present in current sexual behaviour survey data (for ways to set up an unbiased sexual survey of mixing patterns, see Boily et al. [36]). In this paper we presented a clear way, based on existing theory [2,13,23], to indirectly extract the mixing between individuals with different levels of sexual activity from disease prevalence distributions.
As sexual network models are now routinely used throughout the world to determine both the feasibility and cost-effectiveness of nation-wide healthcare interventions [37][38][39], it is critical that these models move towards a state where they are able to make accurate predictions. By increasing our understanding of the complex relations between population-level summary measures, and the heterogeneity of individual sexual behaviour, we were able to make large qualitative improvements in reproducing sexual network summary measures in comparison to earlier versions of sexual network models [11,18], at the price of a moderate amount of additional complexity.
In conclusion, shifting the models' perspective from a population-level description to that of the heterogeneity in individual sexual behaviour opened up new ways to fit sexual contact network models to sexual survey data. In the future, a modeling approach in which sexual network structure increasingly emerges from individuals' sexual behavior as studied in social psychology may further improve the realism of sexual network models.
Dutch and UK sexual survey data
The population-level summary measures are based on data from the Rutgers Nisso Group Dutch population survey on sexual health in 2009 [40]. This survey comprises 6428 individuals, weighted by gender, age, ethnicity, and the degree of urbanization of their hometown. Individuals that fell outside the age range of 13-64 were excluded from the data, as were those who reported homosexual partnerships or paid partnerships, or who had included prostitute visits in their answers on sexual activity. This procedure left a total of 5402 individuals (Table 3). For population-level summary measures that could not be derived from the Dutch sexual survey, we used the Natsal 2000 UK sexual survey [26] as presented in Althaus et al. [11].
All Dutch survey population measures presented in the manuscript take into account the weighted value of individuals (e.g. individuals that had a weight w > 1 associated with them because they were sampled from an underrepresented population group also contribute w times as much to the population-level summary measures as individuals with a weight of 1). The population measure on "sexual activity in the last 6 months" and the model data (Fig. 4) were smoothed to facilitate a visual comparison, using a Savitzky-Golay non-linear smoothing filter [41,42] (parameters: window size 5, polynomial order 4, left padding 0, right padding the recent mean for 1+ and 2+ partners, and 0 for 3+ and 6+ partners). The smoothing filter works similarly to a running average, but performs better at preserving the trends of the survey data.
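As an illustration of this smoothing step, a comparable Savitzky-Golay filter (window size 5, polynomial order 4) can be applied with SciPy; the series below is invented and the simple 'nearest' edge handling stands in for the per-series padding described above.

import numpy as np
from scipy.signal import savgol_filter

# Invented example series: fraction of the population reporting 1+ recent
# partners, by age class (values are illustrative only).
frac_one_plus = np.array([0.10, 0.18, 0.27, 0.35, 0.44, 0.52, 0.57,
                          0.61, 0.66, 0.68, 0.71, 0.72, 0.74, 0.75, 0.76])

# Window of 5 points and a 4th-order polynomial, comparable to the
# parameters quoted in the text.
smoothed = savgol_filter(frac_one_plus, window_length=5, polyorder=4,
                         mode='nearest')
print(np.round(smoothed, 3))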
Chlamydia trachomatis disease parameters
To measure the Ct distribution in the simulated population, we implemented the Ct infection process as described in Althaus et al. [11] (Table 4), in which the rate of (unprotected) sexual contacts drops from once every 2 days to once every 7 days after the first two weeks of a partnership. The transmission rate of Ct in our simulations was set to 2.5% per partner per sexual contact, such that the average Ct prevalence in the age group 18-44 matched the estimated UK Ct prevalence of 1.7% in the same age group [11,22]. A more in-depth study of the relationship between the decline of condom use and coital frequency during a partnership, and its (limited) effect on the distribution of Ct, is presented as Supporting Text S1.
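As a rough illustration of how these parameters translate into infection risk, the following Python sketch computes the probability that an infected partner transmits Ct within the first days of a partnership, assuming independent per-contact transmission and ignoring recovery and treatment (which the full model does include); it is a simplification, not the model's actual implementation.

def expected_contacts(days):
    """Number of (unprotected) sexual contacts in the first `days` days of a
    partnership: one every 2 days for the first 14 days, then one every 7
    days, following the schedule described above."""
    return min(days, 14) / 2.0 + max(days - 14, 0) / 7.0

def p_transmission(days, p_per_contact=0.025):
    """Approximate probability of transmission within `days` days, treating
    the expected number of contacts as the exponent."""
    n = expected_contacts(days)
    return 1.0 - (1.0 - p_per_contact) ** n

for d in (14, 90, 365):
    print(d, round(p_transmission(d), 3))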
Pair formation process
The model keeps track of the total number of partnerships between moderate individuals, between core-group members, and between core-group members and moderate individuals ( Table 2). Every timestep of the model, any shortage of partnerships in the simulated population is supplemented by randomly sampling pairs of individuals from the population and attempting to form partnerships between them. Once enough new partnerships have been formed, the model moves one timestep ahead and repeats the same process.
Because partnerships are continuously dissolved as time progresses, their total number constantly needs to be supplemented.
Not all individuals are available for sampling on a particular day. Individuals have to have the necessary free "capacity" for an additional partnership (taking into account that partnerships that are in their transitional-concurrency phase no longer take up capacity). In addition, for moderate individuals there is an age- and gender-based probability that they are available for sampling that day (Table 2). This additional probability is not applied to core-group members. All individuals that are available for pair formation can be sampled to supplement the total number of partnerships, in any of the appropriate combinations between core-group members and/or moderate individuals.
From the subset of the population that participates in the pair formation process on a given day, pairs of individuals are randomly sampled and tested for three necessary conditions for pair formation:
1. The two individuals are of opposite gender.
2. The two individuals are not already in a partnership with each other.
3. Both individuals aim for a partnership of the same type (i.e. a short-, medium-, or long-term partnership, see below).
If all three conditions are met, the probability of partnership formation is determined by the age disparity between the partners. To calculate this conditional probability, we use a folded cumulative normal distribution function (CDF) [9,43] whose mean m and variance s^2 depend on the age a of the woman. The properties of this function are as follows: when the age disparity (male age - female age) corresponds to the mean of the folded CDF, the probability is 0.5, and any deviation from the mean results in a lower probability. To avoid the unnecessary computation of rejecting at least 50% of the proposed partnerships, all probabilities generated by the folded CDF are multiplied by two. The mean age disparity m is described by

m = max{1.5, 4.25 ln(0.6a - 7) - exp(0.07a - 0.4)}    (1)

This equation is plotted in Fig. 9A (solid pink line). The variance s^2 of the age disparity is described by a similar function of a. The shape and parameters of these equations were initially fitted to the observed age disparity and variance in the Dutch sexual survey data, but as these equations represent the preferred age disparity within partnerships of the simulated population, they needed to be subsequently adjusted (by trial and error) such that the resulting age disparity in the model matched that of the sexual survey data (Fig. 9). Among the adjustments was the addition of a necessary condition based on the age of the male partner: for men below the age of 18, a partner's age should not be more than 2 years older than their own.
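To make the acceptance rule concrete, the Python sketch below implements Eq. (1) together with one plausible reading of the folded-CDF rule (probability 0.5 at the preferred disparity, falling off with the deviation, then doubled). The standard deviation is an assumed placeholder, since the model's variance equation is not reproduced above, and the sketch is illustrative rather than the model's Clojure implementation.

import math

def preferred_mean_disparity(a):
    """Mean preferred age disparity (male age - female age), Eq. (1), as a
    function of the woman's age a (defined for a above roughly 12 years)."""
    return max(1.5, 4.25 * math.log(0.6 * a - 7.0) - math.exp(0.07 * a - 0.4))

def acceptance_probability(male_age, female_age, sd=3.0):
    """Conditional probability of forming the pair given its age disparity.
    `sd` is an assumed placeholder for the age-dependent standard deviation."""
    m = preferred_mean_disparity(female_age)
    z = abs((male_age - female_age) - m) / sd
    # 2 * (1 - Phi(z)), written via the complementary error function.
    return math.erfc(z / math.sqrt(2.0))

print(round(acceptance_probability(25, 22), 2))   # near the preferred disparity
print(round(acceptance_probability(40, 22), 2))   # large disparity, low probability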
Partnership type and duration
Based on the UK survey data on the duration of the second-most recent partnership (Fig. 5), partnerships in the model are categorized into three types (short, medium, and long-term partnerships). Each type of partnership follows an exponential distribution with a minimum duration of 1 day and a mean duration of 13 1/3 days (short), 250 days (medium), or 5880 days (long) (Table 2). The preferred duration of a partnership is determined by first picking the type of partnership according to a ratio of 1/4 (short) to 1/3 (medium) to 1/2 (long), and subsequently sampling the exponential distribution associated with that type, as illustrated in the sketch below. As detailed in the previous section, a partnership will only be formed between two individuals if both select the same type of partnership.
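A minimal Python sketch of this two-step sampling, using the relative type weights and mean durations quoted above (the model itself is implemented in Clojure):

import random

# (relative weight, mean duration in days) per partnership type, as read
# from the text above.
PARTNERSHIP_TYPES = {
    "short":  (1 / 4, 13 + 1 / 3),
    "medium": (1 / 3, 250.0),
    "long":   (1 / 2, 5880.0),
}

def sample_partnership():
    """Pick a partnership type by its relative weight, then draw its
    preferred duration from the corresponding exponential distribution,
    with a minimum duration of one day."""
    types = list(PARTNERSHIP_TYPES)
    weights = [PARTNERSHIP_TYPES[t][0] for t in types]
    ptype = random.choices(types, weights=weights, k=1)[0]
    mean_duration = PARTNERSHIP_TYPES[ptype][1]
    duration = max(1.0, random.expovariate(1.0 / mean_duration))
    return ptype, duration

random.seed(1)
print([sample_partnership() for _ in range(3)])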
Capacity
The maximum number of partnerships that an individual can simultaneously maintain is described in the model by their sexual capacity: an individual with a sexual capacity of n can maintain n simultaneous partnerships. The sexual capacity of an individual reflects how much time, attention, money, etc. he/she is willing to invest in partnerships [44].
In the model, the development of the sexual capacity of an individual during his/her life is stored in a vector that is constructed at birth by sampling from gamma distributions that determine the age at which an individual starts participating in the pair formation process (and thus when its capacity goes from 0 to 1) and, if applicable, the onset and duration of a period during which its capacity is larger than 1 (Table 1). Individuals with a capacity larger than 1 are part of the so-called "core-group" [23,24] and are able to start and maintain multiple concurrent partnerships. The maximum capacity of a core-group member is a function of the onset of their core-group period and the length of that period (Table 1). Core-group members that have a capacity of 5 or higher (about 15% of the core-group) keep a higher base capacity of 2 after their core-group behaviour period.
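The sketch below shows how such a per-year capacity vector could be constructed in Python. The gamma-distribution parameters are placeholders (the actual values are listed in Table 1 and are not reproduced in the text above), and the rule linking the peak capacity to the core-group period is purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def sample_capacity_by_age(max_age=65, core_group=False):
    """Construct a per-year sexual-capacity vector for one individual."""
    capacity = np.zeros(max_age, dtype=int)

    # Age of sexual debut: capacity goes from 0 to 1 (placeholder parameters).
    debut = int(rng.gamma(shape=40.0, scale=0.4))            # mean ~16 years
    capacity[debut:] = 1

    if core_group:
        onset = debut + int(rng.gamma(shape=4.0, scale=1.0))    # start of core-group period
        length = max(1, int(rng.gamma(shape=5.0, scale=1.5)))   # its duration
        peak = 2 + length // 3       # illustrative link between length and peak capacity
        capacity[onset:onset + length] = peak
        if peak >= 5:
            # High-capacity core-group members keep a base capacity of 2 afterwards.
            capacity[onset + length:] = np.maximum(capacity[onset + length:], 2)
    return capacity

print(sample_capacity_by_age(core_group=True)[:40])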
Core-group members are not allowed to have more than two non-transitional (i.e. costly, see next section) long-term partnerships at the same time. This constraint makes it possible for some core-group individuals to accumulate up to 600 partnerships over their lifetime, as is observed in the empirical data (see Results), without becoming tied up in five long-term partnerships. When the capacity of an individual drops at the end of its core-group period, he/she will randomly break up partnerships until he/she is no longer over capacity.
Transitional concurrency
Transitional concurrency (i.e. the period prior to the end of an existing partnership during which an individual acquires a new partnership [25]) is implemented as follows: partnerships that are near their predetermined end date no longer carry a maintenance cost, and thus free up capacity for an individual. Transitional concurrency becomes a possibility during the last 15% of the duration of an existing partnership for men, and the last 10% for women. In effect, it allows individuals that have a maximum capacity of 1 to temporarily maintain two concurrent partnerships.
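A minimal Python sketch of this capacity bookkeeping, with the window fractions passed in as parameters:

def consumes_capacity(elapsed_days, total_duration, is_male,
                      male_window=0.15, female_window=0.10):
    """Return True while a partnership still takes up one unit of sexual
    capacity; in the transitional-concurrency window at the end of the
    partnership it no longer carries a maintenance cost."""
    window = male_window if is_male else female_window
    return elapsed_days < (1.0 - window) * total_duration

# A 400-day partnership viewed by the male partner:
print(consumes_capacity(350, 400, is_male=True))   # False: within the last 15%
print(consumes_capacity(300, 400, is_male=True))   # True: still costs capacity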
Model initiation
The model has a burn-in period of 60 years, during which a stable sexual contact network is built up in the model population. Ten years prior to the end of the burn-in period, Ct is introduced in the simulated population by infecting 100 core-group individuals with asymptomatic Ct. After 35 years, the average Ct prevalence distribution is measured over a period of 15 years.
Model implementation
The model is implemented in Clojure 1.2.1, a modern dialect of Lisp (http://clojure.org). The source code of the model and an example of the sexual networks that are generated by the model are included as Protocol S1 and Datasets S1 & S2.
Supporting Information
Dataset S1 Datasets S1 & S2 contain a text file split into two parts, and together form an example of a simulated sexual network over a 100-year period. Each line in the dataset represents the life history of a single individual, and records the day they were born, how their sexual capacity developed (per year) during their lifetime, and with whom and from what day to what day they had a partnership. This sexual network was initialized with no existing partnerships, and thus needs about 60 years before it has stabilized. Linefeeds in the dataset are in Linux format (LF, not CR-LF) and may need conversion on Windows. (BZ2)

Dataset S2 Datasets S1 & S2 contain a text file split into two parts, and together form an example of a simulated sexual network over a 100-year period. Each line in the dataset represents the life history of a single individual, and records the day they were born, how their sexual capacity developed (per year) during their lifetime, and with whom and from what day to what day they had a partnership. This sexual network was initialized with no existing partnerships, and thus needs about 60 years before it has stabilized. Linefeeds in the dataset are in Linux format (LF, not CR-LF) and may need conversion on Windows. (BZ2)

Figure S1 The relationship between partnership duration, condom use, and a proxy of coital frequency. The fraction of coital events during which condoms are used (black line) starts at around 80% and decreases to 16% as partnership duration increases (black circles [45]). The fraction of the population with an above-average (>7) number of coital acts per month [46] (red squares) was used as a proxy for coital frequency (red line). The first data point in this series (the 100% at day 5) was not reported by Klusmann et al. [46], but is based on the assumption that partners would have had a coital event during the first five days of their partnership, and would thus at 5 days have a coital frequency of >7 times per 4 weeks. The resulting relationship between partnership duration, the fraction of individuals not using condoms, and a proxy of coital frequency is given by the orange line.
(EPS)

Figure S2 The relationship between the number of recent partners and the estimated number of sex acts in the last year, with (left) and without (right) coital dilution. The figure shows the mean values, as well as the interquartile ranges, of a single (typical) timepoint in the sexual contact network. (EPS)

Figure S3 The relationship between the number of recent partners and the number of days without a partner. The figure shown here is a snapshot of the sexual contact network, and shows the mean values as well as the interquartile ranges. Individuals with a high number of partners in the last year tend to be core-group members involved in concurrent partnerships, and have few or no days without a sexual partner in a year, with the exception of individuals that entered or left the core-group during that year. (EPS)

Figure S4 The effect of different assumptions on the Ct prevalence distribution. Compared to the main model (orange blocks, a zoomed-in version of Fig. 8 of the main manuscript), the various scenarios tested in this supporting text had a limited effect on the Ct prevalences of those with 3, 4 or 5+ sexual partners in the last year, and did not result in a pattern where, for an overall Ct prevalence of 1.7%, the highest prevalence would be found in the group with 3 recent partners (as observed in the UK). The scenario with coital dilution includes a constant Ct transmission rate per day, and the immunity scenarios include both the constant transmission rate and coital dilution effects (of strength 0.7).
(EPS)
Protocol S1 Contains the source code of the individual-based model, together with the directory structure, documentation, and the tools necessary to download the libraries and to create standalone JAR files with which to generate sexual network data files. (BZ2)

Text S1 Contains a more in-depth study of the relationship between the decline of condom use and coital frequency during a partnership and its (limited) effect on the distribution of Ct, as well as our exploratory study of two possible mechanisms that might explain the observed Ct prevalence distribution, in which Ct prevalence, after an initial increase, appears to decrease with an increase in sexual activity [11]. (PDF)
"Biology"
] |
Application of Face Recognition Technology in Intelligent Education Management in Colleges and Universities
The application of artificial intelligence technology can help colleges and universities effectively improve the efficiency of campus management and the quality of teaching. The traditional educational management concepts and models in colleges and universities can no longer meet the growing needs of students. In this paper, the application of artificial intelligence technology to intelligent education management in colleges and universities is proposed. The face recognition technology is constructed from a convolutional neural network model and the HOG (histogram of oriented gradients) algorithm. The experimental results show that the face recognition system can meet the real-time requirements of face recognition and improve the accuracy of face recognition. In addition, in the dormitory management application experiment, the face recognition system proposed in this paper has an advantage in recognition accuracy over the traditional algorithm. The face recognition system in this paper can also identify the problems existing in class attendance and provides a useful reference for classroom management on campus.
Introduction
College education is an important part of national education. It is an important stage for cultivating students' learning and social abilities and for establishing a sound outlook on the world, on life, and on values. With the development of China's education and the popularization of higher education, not only is the number of college students growing, but the scale of university campuses is also expanding, and the problems of the traditional education management mode in higher education are becoming increasingly prominent [1]. The main problems of higher education management include (1) outdated education management concepts, which fail to optimize management from the perspective of students' personalized growth or to give full play to the educational role of higher education management and the value of students' self-management; (2) many external factors: college education management must deal not only with problems internal to the campus, but also with the impact of the social environment, students' families, individual students, and the Internet; and (3) relatively low management efficiency, for example insufficient coordination of education management and poor ordering of education management.
With the development of information technology, college students have put forward more and more service needs and suggestions for university management, while the traditional education management systems and models of colleges and universities cannot meet these requirements, and the safety of school management cannot be guaranteed. At the same time, with the expansion of the scale of colleges and universities, the management tasks to be undertaken by the original number of managers are also increasing. The traditional management mode, however, is inefficient and error-prone, which forces managers to carry out repetitive work and increases the cost in time and manpower [2]. In addition, the methods for evaluating students' performance in colleges and universities differ from those in senior high schools: the final evaluation of a student is composed of classroom attendance and examination results [3]. At present, most colleges and universities record and manage students' attendance through traditional manual methods, mainly classroom roll call and irregular spot checks [4]. This approach not only occupies classroom time and teachers' energy, but also has a high error rate and cannot detect lateness, early departure, class substitution, or absenteeism in real time. With the continuous development of Internet and information technology, the information construction of university education management has developed rapidly. Building intelligent education management systems and platforms has become the development trend of university education management, and is also an important research direction for its development [5].
Based on the requirements of college information construction, this paper puts forward research on the application of artificial intelligence technology in intelligent education management in colleges and universities. At present, the typical applications of artificial intelligence in the field of education mainly include intelligent tutors, intelligent partners, intelligent evaluation systems, feature recognition, and learning analysis, involving three scenarios (teaching, learning, and management assessment), and artificial intelligence has essentially penetrated the whole of education. The face recognition and detection technology is constructed from a convolutional neural network and an image gradient direction histogram (HOG) feature extraction algorithm, and is tested in the dormitory management and classroom attendance management modules of the college education management system. This paper is divided into three parts. The first part describes the application and research status of artificial intelligence technology in intelligent education management in colleges and universities. The second part presents the construction of the face recognition model in the university intelligent management system. The third part gives the experimental results and corresponding analysis of the face recognition model in the college intelligent education management system.
Application of Artificial Intelligence Technology in Intelligent Education Management in Colleges and Universities
Intelligent education management in colleges and universities refers to a management mode that improves the management efficiency and teaching level of colleges and universities through the integration of educational information network application technology and teaching information technology [6]. Intelligent education management in colleges and universities needs modern information technology to realize the software requirements on the basis of intelligent education management hardware, such as Internet of Things technology, big data analysis technology, and artificial intelligence technology [7]. Intelligent education is a high-end form of educational informatization which emphasizes the openness and innovation of education, so that teaching and learning can break through the limitations of time and space. Intelligent education adheres to a people-oriented concept; makes full use of high-tech means such as big data, the mobile Internet, and artificial intelligence; builds a networked, digital, and intelligent smart learning environment; meets students' personalized learning needs; cultivates students' comprehensive quality and innovation ability; and enables students to achieve all-round development. Artificial intelligence technology enables computers to analyze and sense, and to simulate responses corresponding to human thinking. It has been widely applied in many fields, for example face verification in Alipay and face screening at train stations [8]. Artificial intelligence technology is also one of the important technologies for colleges and universities to realize the Internet of Things and intelligent education management. Some colleges and universities have applied artificial intelligence technology to the library management system. In the past, library identity registration was realized through the campus all-in-one card or a password, and this method obviously has some disadvantages [9]. Conventional biometric verification has high requirements for biometric acquisition and carries certain potential safety hazards, so it is not well suited for promotion in colleges and universities [10]. The face recognition technology within artificial intelligence can complete identity recognition without contact and in the most natural state of the identified person, which is safe and reliable [11]. In addition, learning is still an important task for college students, and at present most colleges and universities require students to reach a given standard in English before graduation. However, for many students, oral English and listening are two major difficulties in English learning [12]. Therefore, scholars have built learning systems for English listening and speaking practice based on artificial intelligence technology, which judge the problems in learners' pronunciation through speech recognition so as to help learners make corresponding improvements [13]. Other scholars apply artificial intelligence technology to the detection of students' classroom behavior, recognizing students' expressions and body states through recognition technology so as to judge their classroom behavior [14].
In addition, some scholars detect students' running through Internet of Things technology and artificial intelligence recognition technology, which can improve the accuracy of students' performance assessment and help students correct wrong running postures through action recognition [15]. Beyond learning and teaching, artificial intelligence technology is also used in college safety management systems and student life management systems. With the construction and development of university monitoring systems, some scholars have introduced video recognition technology into the monitoring system, which can identify and detect the vehicles and personnel entering and leaving the university, improve the efficiency and safety of campus management, and reduce the possibility of potential safety hazards [16]. In addition, with the continuous improvement of university information construction, the role of the dormitory management system has gradually become prominent and has become a focus of university education management research at home and abroad [17]. Foreign scholars first investigated and sorted out the needs of all aspects related to dormitory management and, on this basis, developed a management mechanism with the dormitory as the core. The management mechanism essentially defines the main responsibilities of the campus, dormitory, and students and ensures the stable operation of the whole system [18]. Compared with developed countries, domestic research on dormitory management systems lags behind: not only do the management systems and concepts fail to meet the needs of university information construction, but the technical means used in the management systems are also relatively backward. Student dormitory management is a very important part of college student management. The level of student dormitory management reflects the level of student work in a school, which directly affects all aspects of the school. For major universities today, the work of student dormitories involves a large amount of information; if manual registration is adopted, it consumes a lot of administrators' time and energy and easily leads to information errors, information loss, and other problems. Other scholars have combined artificial intelligence technology with university network service platforms, so that students can obtain corresponding service information through the network platform, such as waiting times in the canteen and the usage of bathrooms [19].
At present, many domestic universities have applied artificial intelligence technology in their education management systems. Although this has improved the efficiency of university management and teaching quality, it still does not achieve the expected effect, and there are three main problems. First, the degree of digitization of the educational management process in colleges and universities is low. University management covers a wide range of contents, and each area of management involves a large amount of data. Although the introduction of artificial intelligence technology can improve data statistics and sorting in various fields, the data sources of universities are scattered and somewhat fluid, which makes the statistics of some data incomplete and results in low authenticity of the data managed by artificial intelligence [20]. Second, teaching in the process of college education management has not become precise. Artificial intelligence technology can improve the quality and level of teaching to a certain extent, but teachers do not fully grasp students' learning situation and the teaching effect, and relying too much on the evaluation of human-computer interaction also makes teachers' evaluation of students inaccurate and incomplete. Third, the idea of people-oriented education is ignored. Artificial intelligence technology can standardize the educational management of colleges and universities and make all departments act according to rules; at the same time, these standards are also a restriction on students and teachers, which is not conducive to the innovation and reform of teaching.
Construction of Face Recognition Model in University Intelligent Education Management System
Research on face recognition systems began in the 1960s and has improved with the development of computer technology and optical imaging technology since the 1980s. The real application stage began in the late 1990s, and was mainly realized with technology from the United States, Germany, and Japan. The key to the success of a face recognition system lies in whether it has a cutting-edge core algorithm that gives the recognition results a practical recognition rate and recognition speed. Face recognition is one of the important technologies for the implementation of intelligent education management in colleges and universities: it can recognize faces in a non-contact way, which is convenient and reliable. The principle of face recognition technology is shown in Figure 1. It mainly collects face-related data, extracts image features, and then stores the obtained information in a database as basic data. When the camera captures a face image in the specified area, it digitizes the image for image processing, then compares and recognizes it against the existing image eigenvalues in the database, and finally produces the result. Therefore, the key links of face recognition technology are face detection and image feature extraction and comparison. The face detection algorithm used in this paper fuses the extracted image gradient direction histogram (HOG) features with the image features extracted by a convolutional neural network based on the YOLO model; the fusion result is the final face feature.
The image gradient direction histogram (HOG) feature extraction algorithm can extract the edge contours of an image in a short time and overcome the influence of illumination and color on the detection results in the process of face recognition and detection. After the image is input, it is first converted into a uniform format with the same pixel size; the horizontal and vertical gradients of each pixel contained in the image are then calculated by convolution with the kernels [-1, 0, 1]^T and [-1, 0, 1], i.e.

g_x(x, y) = h(x + 1, y) - h(x - 1, y),  (1)
g_y(x, y) = h(x, y + 1) - h(x, y - 1),  (2)

where g_x(x, y) and g_y(x, y) are the gradients of the pixel at coordinate (x, y) in the horizontal and vertical directions, respectively, and h(x, y) is the pixel value. From the gradient values obtained from formulas (1) and (2), the magnitude and direction of the gradient at that point can be calculated as

g(x, y) = sqrt(g_x(x, y)^2 + g_y(x, y)^2),
θ(x, y) = arctan(g_y(x, y) / g_x(x, y)).

Using the calculated gradient magnitudes and corresponding directions, the picture can be transformed into gradient histograms, one per cell, and the gradient histograms of all units can be obtained. After that, the whole picture is traversed by a sliding window; in this process, the feature values contained in each block are normalized into a vector v_normed, so as to improve the robustness of the image features to shadow, illumination, and edge changes.
The abscissa index of the image gradient direction histogram is denoted m, the cell index contained in the sliding window block is denoted n, and the count value of the gradient direction histogram corresponding to these indices is denoted v_{m,n}.
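A minimal numpy sketch of the per-pixel gradient step and one common form of block normalization is given below; cell binning into orientation histograms is omitted, and the epsilon in the normalization is an assumed detail.

import numpy as np

def hog_gradients(img):
    """Per-pixel gradients: convolve with [-1, 0, 1] horizontally and its
    transpose vertically, then convert to magnitude and direction."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]     # horizontal gradient g_x(x, y)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]     # vertical gradient g_y(x, y)
    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    return magnitude, direction

def normalise_block(v, eps=1e-6):
    """L2 normalization of a block's concatenated cell histograms, one
    common choice for producing v_normed."""
    v = np.asarray(v, dtype=float)
    return v / np.sqrt(np.sum(v ** 2) + eps ** 2)

img = np.random.default_rng(0).integers(0, 256, size=(8, 8))
mag, ang = hog_gradients(img)
print(mag.shape, float(ang.min()), float(ang.max()))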
The input image is also processed by the convolutional neural network based on the YOLO model. Each convolutional layer of this network convolves the input of the corresponding layer, adds a bias, applies the activation function, and down-samples the result. The input image undergoes six convolution operations and four pooling operations, and the corresponding image feature values are extracted under the processing of the activation function. The convolutional neural network based on the YOLO model is an improvement on the standard convolutional neural network structure, as shown in Figure 2. Compared with traditional feature extraction and classification methods, feature extraction by a convolutional neural network achieves automatic network learning through layer-by-layer convolutional dimensionality reduction and multilayer non-linear mapping, so as to obtain the feature extractor and classifier required for target recognition. After the input picture is convolved with n × m convolution kernels, biased, and activated, the n feature maps of the corresponding layer are obtained; the j-th feature map of layer l can be written as

x_j^l = f( Σ_{i ∈ M_j} x_i^{l-1} * k_{ij}^l + b_j^l ),

where l is the current layer number, b_j^l is the neuron bias of the j-th feature map, k_{ij}^l is the corresponding convolution kernel, and M_j is the set of feature maps of the previous layer associated with it.
The pooling layer in the convolutional neural network samples the maximum value; after sampling, the number of output feature maps remains unchanged but their size becomes smaller, with the sampling (down-sampling) function given by formula (8). The fully connected layer enhances the non-linear mapping capability of the network; its output can be written as

y = f( Σ_{i=1}^{n} ω_i x_i + b ),

where n is the number of neurons in the previous layer, ω_i is the connection strength between the neurons of the current layer and the neurons of the previous layer, b is the bias of the neurons in the current layer, and f(·) is the activation function.
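The following Python sketch illustrates these two operations (2 x 2 max pooling and a fully connected layer); the layer sizes and the tanh activation are illustrative choices, not the network's actual configuration.

import numpy as np

def max_pool_2x2(fmap):
    """2 x 2 max pooling: the number of feature maps is unchanged while each
    map's height and width are halved."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def fully_connected(x, weights, bias, activation=np.tanh):
    """y = f(w . x + b) for one fully connected layer."""
    return activation(weights @ x + bias)

fmap = np.arange(36, dtype=float).reshape(6, 6)
pooled = max_pool_2x2(fmap)
rng = np.random.default_rng(0)
x = pooled.ravel()
y = fully_connected(x, rng.normal(size=(4, x.size)), np.zeros(4))
print(pooled.shape, y.shape)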
Face features have great complexity and fall into many categories, so they are classified by a softmax regression function in the network. If the number of samples is n and the number of categories is s, the training set is expressed as {(x^(1), y^(1)), ..., (x^(n), y^(n))}, where each sample x^(i) ∈ R^{n+1} and the corresponding class label is y^(i) ∈ {1, 2, ..., s}. The regression function gives the probability p that a sample belongs to category k,

p(y = k | x; θ) = exp(θ_k^T x) / Σ_{j=1}^{s} exp(θ_j^T x),

where the model parameters are θ_k ∈ R^{n+1}. Written in matrix form, the corresponding cost function is

J(θ) = -(1/n) Σ_{i=1}^{n} Σ_{k=1}^{s} 1{y^(i) = k} log p(y^(i) = k | x^(i); θ).
The indicator function in the formula is denoted 1{·}; it equals 1 when the statement in the braces is true and 0 otherwise.
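For reference, a minimal Python implementation of the softmax probabilities and the indicator-function cost described above (with randomly generated data standing in for the face features):

import numpy as np

def softmax_probabilities(theta, x):
    """p(y = k | x; theta): one parameter row per class in `theta`,
    a single feature vector `x`."""
    scores = theta @ x
    scores = scores - scores.max()      # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

def softmax_cost(theta, X, y):
    """Average cross-entropy cost, i.e. the negative log-likelihood written
    with the indicator function 1{y == k}."""
    total = 0.0
    for x_i, y_i in zip(X, y):
        total -= np.log(softmax_probabilities(theta, x_i)[y_i])
    return total / len(y)

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 5))          # 3 classes, 5 features
X = rng.normal(size=(10, 5))
y = rng.integers(0, 3, size=10)
print(round(softmax_cost(theta, X, y), 3))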
On the basis of the convolutional neural network, in the neural network based on the YOLO model all convolutional layers except the last one share the same non-linear activation function, while the last layer adopts a linear function. The image feature values extracted by the YOLO-based convolutional neural network are fused with the feature maps obtained by the gradient direction histogram feature extraction algorithm at the last convolutional layer, so as to obtain the final feature map. The final face features are then obtained after processing by the two fully connected layers of the network, but these alone cannot realize real face target recognition and detection; the classifier of the recognition network needs to be further set up and trained. The loss function needs to be defined before setting up and training the classifier. The error of the network is divided into a positioning error and a classification error, expressed as L_position and L_class, respectively. Whether a face is detected in grid cell i is expressed as F, the center point of the corresponding grid bounding box as j, the coordinates of that point as (x, y), the width and height of the bounding box at that point as w and h, respectively, and the category of the detection target as c_i.
To complete the detection network defined by the loss function, the parameters of the fully connected layers need to be constantly adjusted using the feedback from the training results, so as to continuously improve the confidence of the recognition network, achieve the design goal, and output the bounding box parameters of the face target. The loss function in this paper combines the two error terms defined above, and the trained network carries out detection to evaluate the face target position contained in each unit with the set confidence parameters. For the face recognition system to run stably and effectively in the intelligent education management system of colleges and universities, corresponding tests need to be carried out. In actual face recognition, the orientation, illumination, and expression of the face to be recognized differ between detection environments, so these factors also need to be considered in the training process. During training, the faces in the data set are cropped uniformly and divided into positive and negative samples according to an overlap score: those with a score of less than 0.3 are negative samples, those with a score of more than 0.65 are positive samples, and those not within this range are not used. Figure 4 shows the error comparison results obtained by fusing the HOG and YOLO model algorithms.
It can be seen from the results in the figure that the face detection algorithm adopted in this paper back-propagates only the difficult samples, which saves computing resources while maintaining a high recognition rate. In order to further test the performance of the recognition network in this paper, the algorithm is tested on the FDDB data set and compared with other face recognition algorithms. The comparison of the test results is shown in Figure 5.
The horizontal coordinate in Figure 5 is the recognition speed, and the vertical coordinate is the recognition accuracy. It can be seen from the results in the figure that the algorithm in this paper is not the best of the four algorithms in terms of either recognition speed or recognition accuracy alone, but it combines recognition speed and recognition accuracy in a way that the other three algorithms do not. This is mainly because the fusion of the YOLO model with the fast HOG extraction method shows good timeliness, and the final accuracy of the algorithm reaches 91.5%. Therefore, this algorithm can be applied to face recognition in the intelligent education management system of colleges and universities. Figure 6 shows the results of each test example of the face recognition algorithm in the university intelligent education management system. As can be seen from the figure, the main test examples include user login, exact query, fuzzy query, data and picture upload, report generation, and query statistics. The test response time for each example is within the expected range, which meets the expected requirements of the face recognition test.
Application Test Results of Face Recognition System.
In the intelligent education management system of colleges and universities, the learning management and safety management of college students are the focus. Therefore, this paper tests the application of the face recognition system in two respects: the attendance system and dormitory management. Figure 7 compares the face recognition test results in the dormitory environment for the face recognition algorithm in this paper and the traditional algorithm in the literature. It can be seen from the results in the figure that the fastest recognition time of this face recognition algorithm is slightly longer than that of the traditional algorithm, while the slowest recognition time is shorter. In terms of average recognition time, this face recognition algorithm has a small, though not obvious, advantage over the traditional face recognition algorithm. From the perspective of the error rate of face recognition, however, the recognition error rate of this face recognition algorithm is lower than that of the traditional algorithm, reduced by 3.27%. Overall, the face recognition algorithm in this paper has certain advantages in the application test: it can meet the requirements of real-time face recognition and improve the accuracy of face recognition. Figure 8 shows the test results of face recognition in classroom attendance. In the classroom attendance test, twelve college students were selected as the test subjects for face recognition. From the results in the figure, it can be seen that the classroom attendance module based on the face recognition system can identify and handle lateness, early departure, class substitution, and absenteeism in the classroom, and can meet the expected needs of classroom attendance recognition. During testing, it was found that the efficiency of face detection and recognition is relatively high when college students maintain a correct posture and the face is illuminated evenly. If the student's posture angle is too skewed or the face deflection angle is large, there is a recognition blind area and face recognition will fail, with the system judging that there is no corresponding face in the image and assigning a recognition score of zero.
To sum up, the algorithm of the face recognition system in this paper has good real-time performance: it can realize face recognition within the expected time, improves the efficiency of face recognition, reduces the error rate of face recognition, and can meet the needs of face recognition in different environments. In the application tests of the dormitory management and classroom attendance modules, the application of face recognition technology can improve the efficiency of student education and management and achieve the purpose of effectively managing students.
Conclusion
With the popularization of higher education, the number of college students and the scale of university campuses keep expanding, and the problem of campus education management is becoming more and more prominent. With the development of Internet and information technology, the construction of intelligent education management systems and platforms has become an inevitable development trend. Therefore, this paper puts forward research on the application of artificial intelligence technology in college intelligent education management and applies face recognition technology to the college intelligent education management system, the dormitory management system, and the classroom attendance system. The experimental results show that the convolutional neural network algorithm based on the YOLO model, combined with the HOG algorithm, can effectively improve the extraction of face feature information, improve recognition efficiency, and reduce the error rate of face recognition. In addition, the face recognition algorithm in this paper has an advantage in recognition accuracy over the traditional algorithm in the dormitory management experiment; it can meet the real-time requirements of face recognition and improve its accuracy. In the classroom attendance experiment, the face recognition algorithm in this paper can identify and handle problems such as lateness, early departure, absenteeism, and class substitution. However, recognition can still fail because of students' sitting posture and face angle, so this aspect needs further improvement and research.
Data Availability
The figures used to support the findings of this study are included in the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science"
] |
Measurement Accuracy in Silicon Photonic Ring Resonator Thermometers: Identifying and Mitigating Intrinsic Impairments
Silicon photonic ring resonator thermometers have been shown to provide temperature measurements with a 10 mK accuracy. In this work we identify and quantify the intrinsic on-chip impairments that may limit further improvement in temperature measurement accuracy. The impairments arise from optically induced changes in the waveguide effective index, and from back-reflections and scattering at defects and interfaces inside the ring cavity and along the path between light source and detector. These impairments are characterized for 220 x 500 nm Si waveguide rings by experimental measurement in a calibrated temperature bath and by phenomenological models of ring response. At different optical power levels both positive and negative light induced resonance shifts are observed. For a ring with L = 100 um cavity length, the self-heating induced resonance red shift can alter the temperature reading by 200 mK at 1 mW incident power, while a small blue shift is observed below 100 uW. The effect of self-heating is shown to be effectively suppressed by choosing longer ring cavities. Scattering and back-reflections often produce split and distorted resonance line shapes. Although these distortions can vary with resonance order, they are almost completely invariant with temperature for a given resonance and do not lead to measurement errors in themselves. The effect of line shape distortions can largely be mitigated by tracking only selected resonance orders with negligible shape distortion, and by measuring the resonance minimum wavelength directly, rather than attempting to fit the entire resonance line shape. The results demonstrate the temperature error due to these impairments can be limited to below the 3 mK level through appropriate design choices and measurement procedures.
Introduction
A silicon photonic thermometer can be based on a Si waveguide ring resonator (RR) or a photonic crystal resonator, where the temperature T is determined from the change in the resonance wavelengths [1][2][3][4][5][6]. The relatively large thermo-optic coefficient of the silicon waveguide core (≈2 × 10^-4 K^-1) [7] results in a resonance wavelength variation with temperature of dλres/dT = (λres/Ng)(dNeff/dT), where Ng is the waveguide group index and Neff is the effective index. In silicon waveguides this temperature dependence is approximately 80 pm/K, with the exact value depending on the specific device geometry and operating wavelength. Wavelength changes down to 1 pm or less can be routinely measured using the calibrated tunable lasers used in optical communication component testing, so a temperature resolution of better than 10 mK can be expected. Several national metrology organizations are assessing the performance of silicon photonic ring resonators for use in calibration laboratories and in high-accuracy commercial applications [4][5][6] as an alternative to platinum resistance thermometers (PRTs) and thermocouples, as well as a more precise alternative to fiber Bragg grating (FBG) optical thermometers [1,2,4]. The interest in Si photonics arises in part because the waveguide is made from an extremely pure single-crystal silicon layer and encapsulated by a stable silicon dioxide cladding. As a result, a silicon photonic thermometer is expected to be mechanically stable, shock resistant, and relatively immune to calibration changes caused by chemical contamination or changes of the constituent material properties over time and temperature cycling. Furthermore, Si RR thermometers are usually a few hundred micrometers or less in diameter, and therefore closely approximate a true point-like sensor, unlike PRTs and FBGs, which are typically several centimeters long. This removes measurement ambiguities caused by thermal gradients across a large thermometer sensing element. Previously we have demonstrated that a temperature measurement reproducibility of better than 10 mK is possible with our RR thermometer prototype using commercial optical test instruments [8]. In comparison, standard PRTs designed for demanding metrology applications are able to achieve reproducibility down to the 100 µK level [9]. This level of accuracy is the result of many decades of work on resistance measurement instrumentation, thermometer assembly, and best temperature-measurement practices. The result is the modern PRT with the Pt wire hermetically sealed with an appropriate gas mixture to suppress oxidation and surface contamination [9]. Achieving sub-mK accuracy and reproducibility with a Si photonic thermometer will require a similar assessment of measurement instrumentation, assembly, and intrinsic device properties, and the development of strategies to limit calibration drift and measurement errors. A temperature resolution of 100 µK has already been demonstrated in a ring resonator by tracking a point at the side of a resonance using a constant-power feedback loop to tune the probe laser wavelength [10], rather than measuring the resonance wavelength itself. This work demonstrates the potential of Si photonic thermometers, although the accuracy and repeatability of this interrogation method may not be suitable for a calibrated reference thermometer. In recent work, we have addressed some of the device assembly issues, for example by using a stress-free mounting for the Si thermometer chip and free space optical coupling to minimize stresses and
contamination arising from adhesives bonding the Si chip to a substrate and to input/output optical fibers [4,8]. Using these approaches, photonic thermometer probe packages suitable for immersion in a temperature calibration bath have been built and are used here for the precise assessment of Si thermometer performance.
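As a quick numerical illustration of the sensitivity quoted above, the short Python calculation below converts a measured resonance shift into a temperature change using dλres/dT = (λres/Ng)(dNeff/dT); the group-index and thermo-optic values are representative assumptions for a Si channel waveguide near 1550 nm, not measured values for the devices studied here.

lambda_res = 1550e-9      # resonance wavelength (m)
N_g = 4.3                 # group index (assumed)
dNeff_dT = 2.2e-4         # thermo-optic response of the effective index (1/K, assumed)

sensitivity = (lambda_res / N_g) * dNeff_dT            # m/K
print(f"sensitivity = {sensitivity * 1e12:.1f} pm/K")  # roughly 80 pm/K

delta_lambda = 1e-12                                   # a 1 pm measured shift
print(f"delta T = {delta_lambda / sensitivity * 1e3:.1f} mK")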
In this paper, we examine two basic physical impairment mechanisms that compromise Si photonic thermometer accuracy and reproducibility. The first arises from optically induced refractive index changes in the waveguide. Even if the incident optical power is very low, the circulating power in the ring on resonance can be orders of magnitude larger than the power in the through waveguide and produce measurable self-heating in the ring [2,11]. This is analogous to the problem of resistive self-heating encountered in standard PRT thermometers [9]. Our results also reveal that at the low optical powers used to monitor the ring device, there are additional refractive index changes related to the dynamics of carrier excitation, recombination, and trapping [11]. The second category of measurement impairments is caused by scattering and reflections at optical interfaces and defects within the chip and in the external optical path. The scattering and reflections can cause strong distortions in the ring resonance line shape, and create spurious fluctuations and ripple in the background spectrum of light transmitted through the thermometer chip, which also distort the apparent resonance line shape. These spectral distortions produce ambiguities in the measured resonance wavelength and hence temperature. Through experiment and modelling we quantify the potential measurement error that can result, and identify strategies for mitigation. These effects also contribute to detection limits and accuracy in other photonic sensor applications beyond thermometry, and so the results reported here should be of relevance to other types of silicon photonic sensors.
The paper is organized as follows. Section 2.1 gives an overview of the thermometer chip design and the thermometer probe assembly that is used for the temperature calibration bath measurements. The optical measurement and temperature measurement methods are described in Section 2.2. A phenomenological model used to calculate the resonator response to both temperature and incident power is summarized in Section 3. In Section 4, experiments and the phenomenological model are used to characterize and quantify the effect of optical self-heating and other power-dependent refractive index changes on ring performance. Section 5.1 examines the thermometer impairment arising from side-wall scattering and back-reflections within the ring cavity, which lead to line splitting and distortion, and Section 5.2 addresses the effect of scattering and reflections elsewhere in the optical path. Section 6 summarizes the results and reviews the strategies and prospects for mitigating the various impairments examined in the previous sections.
Thermometer Chip Design and Probe Assembly
The ring resonators used in this work were formed using Si channel waveguides fabricated on silicon-on-insulator (SOI) wafers with a 220 nm thick Si waveguide layer and a 2 µm buried SiO2 layer as the lower waveguide cladding. The rings and other waveguide structures were designed to guide TE polarized light at wavelengths near λ = 1550 nm. The waveguides were formed by etching completely through the 220 nm Si layer to form a 500 nm wide waveguide. The upper waveguide cladding was a 2.2 µm thick PECVD-deposited SiO2 layer. In this work, measurement and theoretical modelling were carried out for four ring thermometers with different Si ring cavity lengths: Ring A with L = 100 µm, Ring B with L = 950 µm, Ring C with L = 70 µm, and Ring D with L = 950 µm. Fig. 1 shows the ring resonator thermometer layout for Ring C, along with the measured temperature variation of the resonance center wavelength and examples of the resonance spectrum taken at different temperatures. The other rings were similar in layout and design. The ring resonators were coupled to through-waveguides by directional couplers (DC) formed by two bends and a short parallel waveguide section, as shown in Fig. 1. In the DC the two waveguides are separated by a nominal gap of 300 nm for Rings A, B, and D, and 400 nm for Ring C, with parallel straight waveguide section lengths of 0 µm (Ring A), 5 µm (Rings B and D), and 10 µm (Ring C). In the all-pass configuration shown in Fig. 1, the single through waveguide provides both the optical input and the output. The ring resonances appear as dips in the through-waveguide transmission spectrum, as shown in the inset of Fig. 1(b). Table 1 lists the selected resonance wavelength used in the thermometry experiments and the corresponding ring properties for each device. Since the DC coupling coefficients and waveguide loss are sensitive to fabrication variations, the specific values given in Table 1 were extracted from experimental measurements. Light was coupled in and out of the through waveguide to/from two optical fibers by two focussing surface grating couplers at the left side of the chip in Fig. 1(a) [12].
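For orientation, the approximate free spectral range implied by these cavity lengths can be estimated from FSR ≈ λ²/(Ng L); the Python sketch below uses an assumed group index that is only representative of this waveguide geometry.

lambda_0 = 1.550e-6    # operating wavelength (m)
N_g = 4.3              # assumed group index

for name, L_um in (("Ring A", 100), ("Ring B", 950), ("Ring C", 70), ("Ring D", 950)):
    L = L_um * 1e-6
    fsr_nm = lambda_0 ** 2 / (N_g * L) * 1e9
    print(f"{name}: L = {L_um} um, FSR ~ {fsr_nm:.2f} nm")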
Rings A and B were tested by placing the Si chips on a temperature-controlled optical test stage that is open to the laboratory environment. Rings C and D were each incorporated into a sealed probe with a schematic layout as shown in Fig. 2 [4,8]. These probes were designed to be immersed in a temperature calibration bath at the Metrology Research Centre at the National Research Council Canada. The probes consisted of a 4 cm diameter, 8 cm long aluminum chamber fixed to the end of a narrow 54 cm long stainless-steel conduit tube. The input and output optical fibers lead from the laser source and photodetector through the conduit tube to the Si thermometer chip. The silicon thermometer chip is mounted on an invar metal stage at the lower end of the chamber, using a spring clip to clamp the Si chip at a point several millimeters away from the ring thermometer. This mounting procedure allows the chip to freely expand and contract with temperature, and thereby avoids stress-induced refractive index changes in the ring waveguide due to the stress-optic effect [13]. Mounting using rigid clamps or adhesives such as epoxy creates local stresses due to thermal expansion mismatch between the stage and clamps, adhesive layer, and the Si chip. Adhesive-related stresses and creep have been observed to alter the thermometer reading by up to 170 mK over time and temperature cycling [2]. In the thermometer probes for Rings C and D, light was coupled to and from the chip using two polarization maintaining (PM) optical fibers mounted in a single 8° angle-polished fiber block. The input and output fibers are separated by a 500 μm spacing to match the on-chip grating coupler positions. The fiber block is aligned with the two grating couplers, and is permanently fixed in the probe with the fiber facets at a height of 250 μm above the chip as shown in Fig. 2. Positioning the fiber end facet well above the chip produces a diverging beam with an approximately 100 μm spot diameter at the chip surface. Although this results in a -15 dB mode mismatch loss from the fibers to the 10 μm wide grating couplers, this free-space coupling system eliminates sensitivity to optical misalignment during probe assembly and also to dynamic misalignment due to thermal expansion when in use. The need for epoxy to bond the fiber to the chip is also eliminated. The signal was still more than sufficient to make accurate resonance wavelength determinations. Furthermore, even in this configuration incident laser powers in the 1 mW range generated measurable resonance wavelength changes due to self-heating.
Optical and Temperature Measurement
The variation of resonance wavelength with temperature is monitored by repeatedly scanning the wavelength of light transmitted through the ring resonator across the selected resonance line as the chip temperature is varied. Accurate ring thermometer assessment therefore depends on the stability and accuracy of the laser wavelength and reference thermometer calibrations, as well as precise temperature control with a negligible temperature gradient between the reference thermometer and ring.
For the bench-top stage measurements on Rings A and B, the silicon chip was placed on a temperature-controlled copper block mounted on a thermoelectric cooler (TEC) with the upper chip surface exposed to the laboratory ambient environment. This bench-top stage provides more flexibility in carrying out optical testing compared with the temperature calibration bath, for which the ring and optical fibers must be permanently mounted in a sealed probe assembly. However, the stage temperature accuracy and reproducibility are limited to approximately ±0.05 K by the stability of the TEC controller, the fluctuating room environment, and the thermal gradients between the underlying stage and the exposed silicon-on-insulator chip surface [7]. As a result, the correspondence of the measured resonance wavelength to a specific temperature has a much larger uncertainty of ±0.05 K (or ±4 pm). The stage temperature was measured using a thermistor embedded in the stage, and the thermistor signal was used as feedback to the TEC controller to maintain the stage at a desired fixed temperature point. A one-dimensional heat flow model [7], which includes thermal radiation effects, predicts that the difference between the stage temperature and the Si waveguide temperature will be less than 50 mK for an ambient room temperature near 20 °C and stage temperatures between 20 °C and 50 °C. Light from an Agilent 81600B tunable laser is coupled into a polarization controller followed by a polarization maintaining (PM) single mode fiber with an 8° angle-polished facet, which directs light to the input grating coupler on the chip. A second PM fiber captures light from the output grating and directs it to the photodetector. For the stage measurements on Rings A and B, the two fibers were positioned directly above and in near contact with the grating couplers at an 8° incident angle.
The temperature control is much more precise for the Ring C and D temperature calibration bath measurements. The entire probe was immersed to a depth of 36 cm in a stirred liquid temperature calibration bath. The temperature of the bath was measured using a calibrated standard platinum resistance thermometer (SPRT) placed beside the probe, and two calibrated thermistors. The SPRT was used as the reference thermometer for the ring measurements. By varying the position of the thermistors, a bath temperature uniformity of better than 0.4 mK was measured, while the temperature stability over time at any given set point was better than 0.7 mK [14]. The ring transmission spectra for the probe-mounted Ring C and Ring D devices were measured using an HP model 81682A tunable laser and a Keysight 81624B photodetector. The laser is able to scan wavelength from 1480 nm to 1580 nm with a minimum wavelength step size of 0.1 pm. The wavelength of each data point was obtained directly from the laser readout. The laser calibration and repeatability were assessed by using the laser to measure the absorption lines of HCN and acetylene gas reference cells. The wavelength reproducibility from run to run was within ±0.1 pm, with a scatter in wavelength offsets within ±0.25 pm of the published reference gas line positions [15,16]. Given the measured resonance wavelength shifts near 77.5 pm/°C in these rings, the temperature measurement error arising from the laser wavelength uncertainty will be less than 3 mK. The details of the calibration procedures are reported in a separate publication [14].
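The quoted sensitivity of roughly 77.5 pm/°C makes it straightforward to translate wavelength-domain uncertainties into temperature reading errors. The sketch below also cross-checks that sensitivity against the silicon thermo-optic coefficient quoted later in the text, using an assumed group index of about 4.3.

```python
# Translate wavelength uncertainties into temperature errors using the measured
# sensitivity, and cross-check that sensitivity against the Si thermo-optic
# coefficient. The group index n_g ~ 4.3 is an assumption.
sensitivity_pm_per_C = 77.5     # measured resonance shift (pm/°C)
laser_repeatability_pm = 0.1    # run-to-run laser wavelength repeatability (pm)
print(laser_repeatability_pm / sensitivity_pm_per_C * 1e3, "mK")   # ~1.3 mK, i.e. < 3 mK

lam_pm = 1550e3                 # 1550 nm expressed in pm
dn_dT = 2e-4                    # Si thermo-optic coefficient (/°C), quoted later in the text
n_g = 4.3                       # assumed group index
print(lam_pm * dn_dT / n_g, "pm/°C")   # ~72 pm/°C, close to the measured 77.5 pm/°C
```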
Ring resonator model
A phenomenological ring model was developed in order to aid in understanding the potential thermometer reading error in a ring caused by optical self-heating and other optical nonlinearities, and also by the presence of back-reflections and fluctuations in incident light power. The model is based on the fundamental ring resonator equations as given by Yariv [17] and Bogaerts et al. [18]. In order to reproduce the effect of self-heating and other intensity-dependent changes, the model includes a power-dependent waveguide effective index. In the all-pass configuration of Fig. 1, the optical power P_T transmitted through the through-waveguide is given by

P_T = P_0 (a^2 - 2 a t cos ψ + t^2) / (1 - 2 a t cos ψ + (a t)^2),   (1)

where P_0 is the incident power coupled to the through waveguide, t is the field transmission coefficient of the directional coupler, and a is the ring round-trip field attenuation. The optical power circulating inside the ring is

P_circ = P_0 (1 - t^2) / (1 - 2 a t cos ψ + (a t)^2).   (2)

The total round-trip phase delay ψ of the ring resonator is a function of the wavelength λ and the waveguide effective index N_eff,

ψ = (2π/λ) N_eff L + φ_t,   (3)

which is the sum of the propagation phase delay through the ring waveguide of physical length L and any additional phase φ_t arising from propagation through the directional coupler section.
It is assumed that φ_t is only weakly dependent on wavelength and can be treated as a constant over small wavelength intervals comparable to the ring resonance linewidths. The transmission resonances occur at wavelengths where ψ is a multiple of 2π, and manifest as periodic minima in the transmission spectrum P_T. The depth of the resonance minima depends on the values of the ring round-trip loss a and the coupling coefficient t. At the critical coupling condition |t| = a, the transmitted power P_T is zero on resonance. The ring waveguide effective index N_eff is temperature dependent due to the thermo-optic effect in silicon, and hence the resonance wavelength will shift with temperature, thereby enabling the use of the device as a thermometer.
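A minimal numerical sketch of the linear (zero-power) behaviour follows; it evaluates the form of Eq. (1) above and confirms the zero-transmission condition at critical coupling. The parameter values are illustrative, not the Table 1 values.

```python
import numpy as np

def transmission(psi, t, a):
    """Normalized through-port power P_T/P_0 of an all-pass ring (Eq. 1), for
    DC field transmission t and round-trip field attenuation a."""
    return (a**2 - 2*a*t*np.cos(psi) + t**2) / (1 - 2*a*t*np.cos(psi) + (a*t)**2)

psi = np.linspace(-0.05, 0.05, 2001) + 2*np.pi        # round-trip phase near a resonance order
print(transmission(np.array([2*np.pi]), 0.98, 0.98))  # -> 0 on resonance at critical coupling |t| = a
print(transmission(psi, 0.98, 0.99).min())            # finite minimum when the ring is not critically coupled
```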
The nonlinear component of the effective index in the model, ΔN_eff, is a function of the circulating power in the ring so that N_eff = N_eff,0 + ΔN_eff(P_circ), and it follows that the transmission, phase, and circulating power in Eqs. 1-3 will all vary with the incident optical power. Given a specific functional form for ΔN_eff(P_circ), the circulating power and transmission for the ring can be calculated by finding the values of P_circ that self-consistently solve Eq. 2 for any specific incident power P_0. The resulting P_circ and ΔN_eff can then be used in Eq. 1 to obtain the ring transmission spectrum P_T. This procedure will generate up to three different solutions depending on the wavelength, nonlinearity, and incident power. Fig. 3 illustrates the qualitative resonance line shapes expected for a Si ring resonator at three different incident power levels, assuming a linear power-dependent effective index, as would occur for simple self-heating caused by linear absorption. The transmission spectrum for zero incident power P_0 = 0 (equivalent to setting P_circ = 0 or ΔN_eff = 0) is given by Eq. 1 and Eq. 3 directly. For low incident power, the light-induced change in effective index causes the observed resonance line shape to become asymmetric and the resonance minimum to move to longer wavelengths. At higher power in Fig. 3, there are three possible solutions for P_circ and the corresponding ring transmission P_T. In this region, the Si ring can exhibit bistability and self-oscillation. The nonlinear optics of Si resonators, and nonlinear mechanisms such as self-heating, free carrier generation, and the intrinsic intensity-dependent refractive index n2, have been the topic of extensive previous and ongoing research [11,19-22]. Temperature determination would use the lowest possible power that gives an adequate transmission signal with the minimum possible nonlinear effective index change and local heating. Therefore, this work focuses only on the low-power regime where the line shape is stable and repeatable, but temperature reading errors may still be caused by an optically induced resonance wavelength shift and also by the line shape asymmetry, which can skew the results of line shape fitting procedures.

Fig. 3. The qualitative shapes of a Si ring resonance expected for zero, low, and high input power, assuming a power-dependent waveguide effective index, as would be the case for linear optical self-heating. The wavelength scale is referenced to the position of the resonance at zero incident power. At high power the ring transmission exhibits three possible solutions between Δλ = 12 pm and 17 pm, where the ring transmission will exhibit bistable behaviour.
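The self-consistent procedure described above can be sketched as a simple fixed-point iteration. The example below assumes a purely linear power-dependent index (ΔN_eff proportional to the circulating power, as for simple self-heating) with illustrative parameter values; it reproduces the qualitative red shift and asymmetry of Fig. 3 in the low-power regime but makes no attempt to follow the bistable branch structure.

```python
import numpy as np

# Fixed-point solution of Eq. (2) with a power-dependent effective index,
# dN_eff = k_nl * P_circ (linear self-heating). All parameter values below are
# illustrative, not fitted to the devices in this work.
L, t, a = 100e-6, 0.995, 0.995            # ring length (m), DC transmission, round-trip attenuation
n_eff0 = 155 * 1550e-9 / L                # chosen so that resonance order m = 155 sits at 1550.000 nm
k_nl = 2e-7                               # assumed index change per mW of circulating power
P0 = 0.5                                  # incident power in the through waveguide (mW)
lam = np.linspace(1549.95e-9, 1550.10e-9, 1500)

def buildup(psi):
    return (1 - t**2) / (1 - 2*a*t*np.cos(psi) + (a*t)**2)   # P_circ / P_0, Eq. (2)

P_T = np.empty_like(lam)
P_circ = 0.0                              # warm-start the iteration from the previous wavelength
for i, lm in enumerate(lam):
    for _ in range(200):                  # damped fixed-point iteration
        n_eff = n_eff0 + k_nl * P_circ
        psi = 2*np.pi*n_eff*L/lm
        P_circ = 0.5*P_circ + 0.5*P0*buildup(psi)
    P_T[i] = P0 * (a**2 - 2*a*t*np.cos(psi) + t**2) / (1 - 2*a*t*np.cos(psi) + (a*t)**2)

shift_pm = (lam[P_T.argmin()] - 1550e-9) * 1e12
print(f"power-induced shift of the transmission minimum: {shift_pm:+.2f} pm")
```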
The resonance line shape asymmetries shown in Fig. 3, and observed in experiment, are in fact only apparent distortions that result when the data are acquired by scanning the incident wavelength. The intrinsic resonance does not change shape, provided that there are negligible optically induced changes in the ring round-trip loss a or the ring DC coupling strength. The resonance center wavelength simply adjusts relative to the instantaneous laser probe wavelength, changing at each wavelength step in response to the circulating power P_circ to produce the observed transmission. However, in the low-power regime, the resonance center wavelength of the intrinsic ring resonance coincides exactly with the measured ring transmission minimum wavelength, even though the measured resonance shape appears to be distorted. Therefore, the resonance wavelength shift determined from the measured resonance minimum accurately reflects the actual ring effective index as determined both by the underlying chip temperature, and by local self-heating and other nonlinear optical effects within the waveguide. In this power regime the measured resonance minimum wavelengths can be compared with predictions of theoretical models for the induced refractive index changes. On the other hand, if the incident power is high enough to push the ring into the bistable regime, this is no longer the case, since the observed transmission spectrum may be a time average of the fluctuations between possible resonator states. The use of ring resonators as thermometers or other types of sensors therefore requires consideration of the incident power and possible nonlinearities, particularly for very large quality factor (Q) resonators where the stored circulating power may be orders of magnitude larger than the incident beam power. In the ideal thermometer, accuracy and repeatability are independent of the instrumentation and the laser power used to interrogate the device. Since this is not possible in a real device, the goal of this work is to quantify the magnitude of the potential errors and explore means to improve accuracy.
Self-heating and Power Dependent Effective Index Changes
The transmission spectra for Rings A and B were acquired at different optical powers using the bench-top optical stage maintained at a temperature of 25 °C ± 0.05 °C using the TEC controller. Fig. 4(a) shows the measured spectrum of Ring A at incident power levels from 50 μW to 2 mW. Far from resonance and at 1 mW (0 dBm) input laser power, the received power at the photodetector is -15 dBm, including approximately -1 dB loss from the input polarization controller. Since the ring is positioned at the approximate mid-point of the optical path from laser to detector, the accumulated loss in the through waveguide up to the position of the ring is estimated to be -7 dB, or 20% of the launched laser power. The losses include a grating coupling efficiency of -5 dB from fiber to the Si waveguide, a Si waveguide loss of approximately 1.5 ± 0.5 dB/cm, and any additional system losses between laser and detector. For simplicity, the input powers given in Fig. 4 are as launched into the input fiber from the laser. The ring parameters are as listed in Table 1, but to facilitate comparison of the four graphs, the wavelength scale is shifted so that Δλ = 0 pm corresponds to the line centre at zero incident power. The calculated line shapes for zero incident power are vertically offset for clarity. The empirical power-dependent effective index derived from the Ring C measurements is used to produce the model results shown in (b) and (d).
The measured spectra of Figs. 4(a) and 4(c) show that the ring resonance wavelengths are shifted to longer wavelengths with increasing power, and the asymmetric line shape distortion is already evident in Ring A at incident powers as low as 200 μW. At the highest powers, the sharp transition at the long wavelength edge of the resonance indicates that the rings are entering the bistable regime and undergo a switching transition. This occurs near 1 mW for Ring A, and above 10 mW for the longer cavity in Ring B. The corresponding model results in Figs. 4(b) and 4(d) were generated using an empirical nonlinear index function determined by fitting to spectral measurements of the Ring C thermometer carried out in the temperature calibration bath, as described below. The model calculations for each ring used the DC through-coupling coefficients and ring waveguide losses in Table 1, which are derived from the measured -3 dB linewidths and resonance depths for each ring. Fig. 5 shows the measured and calculated power-induced resonance wavelength shifts corresponding to the spectra in Fig. 4. Both the model line shapes and wavelength shifts are in good agreement with experiment. Ring A shows a wavelength shift of 15 pm for a 1 mW input power. When used as a thermometer, this generates a reading error of almost 0.2 K based on the resonance minimum wavelength. The distortion of the line shape will cause additional ambiguity if line shape fitting is used to determine the resonance position. The optically induced wavelength shift for Ring A in Fig. 4(a) is more than ten times larger than for Ring B at any given power. This result is expected since, for rings with similar quality factors or resonance linewidths, the circulating power in a ring at resonance will scale inversely with cavity length [22], as will be discussed in more detail at the end of this section. Similar measurements of the resonance wavelength variation with optical power were carried out for Ring C and Ring D, and the results are shown in Fig. 6. Since the laser-to-chip coupling is much lower in the bath experiments, the powers given here for Rings C and D have been scaled to correspond to the incident power scale used for Rings A and B in Figs. 4 and 5 to allow a direct comparison of all the ring devices. The probes were immersed in the temperature calibration bath maintained at a constant temperature of T = 23 °C. The temperature bath stability of ±0.7 mK is much better than that of the bench-top stage used for the Ring A and B measurements, so the resonance wavelength uncertainty in Fig. 6 is dominated by the ±0.1 pm repeatability of the laser, and by the 0.1 pm wavelength step size used to acquire the spectra. Each data point in Fig. 6 is an average over approximately ten separate measured spectra, and the error bars shown in Fig. 6(b) indicate the typical standard deviation of each data set. The Ring C resonance wavelengths in Fig. 6(a) show a positive linear change with power for coupled laser powers higher than 50 μW, similar to the behaviour of Rings A and B. At very low power, Ring C displays an initial blue shift of the resonance, suggesting the presence of a second, negative optically induced index change mechanism that competes with the positive index change that dominates at higher powers. The power-induced wavelength shift for Ring D also shows a comparable small blue shift, but with a power dependence that is an order of magnitude weaker than that for Ring C.
In both devices the maximum blue shift is approximately -0.2 pm. In Ring D the measured wavelength shift never reaches the positive linear regime at the input powers available in the probe experiments. Again, this is expected because the cavity length of Ring D is almost ten times longer than in Ring C.
The power dependence of the resonance wavelength in all four devices is a result of light-induced effective index changes in the silicon waveguide. The contributions to optically induced index changes in semiconductors include self-heating due to absorption of light in the waveguide. As will be discussed below, this can occur even for photon energies lower than the band gap. There will also be direct refractive index contributions arising from optically excited free carriers, as well as band-filling and absorption saturation due to free carrier excitation and defect state filling, and the intrinsic intensity-dependent refractive index n2 [11,23-25]. An empirical model for the intensity-dependent waveguide effective index was developed to aid in comparing the experimental results for the four ring devices. The functional form of the nonlinear index ΔN_eff was chosen by using the ring resonator model for the geometry of Ring C, and selecting a function and parameter values that generate a good match to the experimental data of Fig. 6. The resulting expression (Eq. 4) has coefficients A = 1.39×10⁻⁶ mW⁻¹ and R = 3.01×10⁻⁶, with a rate coefficient of 1.14 mW⁻¹, where the in-waveguide optical power P is expressed in milliwatts. The uncertainty in these empirical parameters depends on the uncertainty in the power attenuation between the laser and the position of the ring. The form and parameter values for Eq. 4 are based on the data for Ring C in Fig. 6, but the same index function also generated excellent matches to the measured spectral data for Rings A, B, and D with no additional adjustments, as shown by the calculated curves in Figs. 5 and 6. This suggests that Eq. 4 is generally applicable to Si waveguides comparable to Ring C at the low powers used in these experiments. Although not clearly evident in the experimental data, the small negative index changes in Rings C and D at low power (i.e., the resonance blue shift) are also likely present in Rings A and B, but are not resolved because of the larger temperature fluctuations of the bench-top stage. The first term in Eq. 4 describes a positive linear power dependence that could be attributed to either self-heating by linear optical absorption processes or the intensity-dependent refractive index n2. The measurements suggest that self-heating dominates, since the value of n2 = 5.6×10⁻¹⁸ m² W⁻¹ for silicon [23] is more than an order of magnitude too small to account for the empirically derived ΔN_eff. The photon energy of the λ = 1550 nm light (~0.8 eV) used in these experiments is much lower than the Si indirect and direct band gap energies (~1.1 and 3 eV, respectively). Linear absorption and linear electron and hole carrier excitation are therefore not expected in bulk Si at wavelengths longer than λ = 1200 nm. However, carriers may still be generated via mid-gap defect and surface states. The photonic wire waveguides used in this work have a large surface-to-volume ratio, and fabrication processes such as reactive ion etching can introduce a high concentration of near-surface defect states. Ion implanted defects, as would be created by reactive ion etching, are known to produce linear absorption in silicon [26]. Several studies have found that linear absorption by surface states and near-surface defects can dominate optical absorption at low powers in Si photonic waveguide devices [11,26-30], and therefore contribute to heating and provide a linear free carrier excitation path via mid-gap states. Novarese et al.
have found that including sub-band gap energy linear absorption is essential to explain their experimental results on nonlinear ring resonator response [11].
The second term in Eq. 4 models the small negative index change responsible for the blue shift in Fig. 6, which appears to level off with increasing power. A negative index change in a semiconductor material may arise from optically excited free carriers, or possibly from defect state filling and/or band filling [11,25]. Similar behaviour to that of Ring C has been observed in previous work, and was attributed to the changing ratio of the negative and positive index changes induced by free carrier excitation and self-heating, respectively [11]. The 0.2 pm blue shift observed here corresponds to an effective index change of approximately ΔN_eff = 10⁻⁶, which would result from a very small electron and hole density change of the order of Δn, Δp ~ 10¹⁴ cm⁻³. In the analysis presented in Ref. [11], the excited carrier lifetimes are long at low carrier densities, so that the carrier density increases rapidly and the carrier-induced index change dominates at very low optical powers. As power and hence carrier density increase, the free carrier lifetime becomes shorter, causing the carrier energy to be dissipated as heat more rapidly while suppressing the growth of the carrier density. Higher order absorption processes such as two-photon absorption (TPA) may also contribute to the effective index change via heating and carrier excitation. However, the experimental results in Figs. 4 and 5 show no obvious quadratic wavelength variation with power. This result is consistent with the thermal modelling calculations of Dickmann et al. that predict the threshold for TPA heating to be near 10 mW input waveguide power, for a ring of comparable dimensions to Ring C [22]. After adjusting for our estimated coupling efficiency, this is well above the input waveguide powers in our measurements. Note that the index model of Eq. 4 is a purely phenomenological model intended to predict and compare ring thermometer behaviour under different measurement conditions, on the assumption that similar results will be obtained for comparable devices produced by commonly used Si fabrication processes. Identifying the underlying physical mechanisms responsible for the observed silicon waveguide index change can only be fully addressed using time-resolved optical experiments in combination with detailed modelling of carrier dynamics and heat flow, and is outside the scope and objectives of this study.
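For orientation, the sketch below evaluates an assumed functional form for Eq. (4), a linear red-shifting term plus a saturating negative term, using the coefficient values quoted above. The functional form itself is our assumption, chosen only to reproduce the qualitative behaviour described in the text (an initial blue shift that levels off, followed by a linear red shift); the conversion to a wavelength shift uses an assumed group index of about 4.3 and is illustrative only.

```python
import numpy as np

# Assumed functional form for the empirical power-dependent index of Eq. (4):
# a linear term plus a saturating negative term. The coefficients are the values
# quoted in the text; the functional form and the group index n_g are assumptions.
A = 1.39e-6        # linear coefficient (per mW)
R = 3.01e-6        # magnitude assigned to the saturating negative term (assumed role)
g = 1.14           # rate coefficient (per mW)

def dN_eff(P_mW):
    return A*P_mW - R*(1.0 - np.exp(-g*P_mW))

lam_pm, n_g = 1550e3, 4.3
for P in [0.01, 0.05, 0.2, 1.0, 5.0]:     # in-waveguide optical power (mW)
    dlam = lam_pm * dN_eff(P) / n_g       # resonance shift ~ lambda * dN_eff / n_g
    print(f"P = {P:4.2f} mW: dN_eff = {dN_eff(P):+.2e}, shift ~ {dlam:+.2f} pm")
```

With these values the sketch predicts a small net blue shift at low in-waveguide powers that gives way to a linear red shift at higher power, in line with the Ring C and D behaviour described above.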
The power-induced resonance changes for Rings A and C are approximately 15 pm for a typical interrogating laser power of 1 mW. Under these operating conditions, a temperature reading error of more than 100 mK would result when using a ring as a thermometer. This would preclude using Si rings for demanding measurement applications and as metrology reference standards. An obvious solution is to correct every measurement for the intensity-dependent error using a calibrated correction factor for laser power. This approach requires a calibration of the ring thermometer response to different laser powers, and precise control of the laser power and laser coupling to the Si chip. The power-dependent resonance shift can also be reduced simply by using the lowest possible optical power that gives a sufficient signal-to-noise ratio to accurately determine resonance positions. In our experiments, incident powers near 10 μW were sufficient to determine resonance positions to within ±0.1 pm. At this level the power-induced wavelength shift of Ring C is approximately 0.2 pm, corresponding to a temperature error of less than 3 mK.
It would be more attractive to introduce ring design modifications that reduce the intrinsic power dependence of the resonance positions, so that accuracy and repeatability can be decoupled from the details of the measurement instrumentation. The results for Rings B and D in Fig. 5 and Fig. 6 show that simply increasing ring length can give a significant improvement.
Rings B and D have a length L = 950 μm, which is approximately ten times longer than both Ring A (L = 100 μm) and Ring C (L = 70 μm). The corresponding power-induced resonance shift is more than ten times smaller than that for Rings A and C, while still having a comparably narrow resonance linewidth Δλ_3dB. This is in agreement with basic ring resonator theory, which predicts that the circulating power will decrease inversely with cavity length when Δλ_3dB or the Q-factor stays the same [22]. When the ring loss and the directional coupler coupling coefficient are small, the circulating power given by Eq. (2) can be approximated on resonance as P_circ ≈ P_0 · FSR / (π · Δλ_3dB), where FSR is the free spectral range of the ring. A narrow linewidth Δλ_3dB is desirable to improve the precision of resonance wavelength determination, but simply optimizing the ring design to narrow the linewidth increases the circulating power. By adjusting the ring DC coupling coefficient as the ring length L is increased, the ring linewidth can be kept narrow while reducing the circulating power and hence self-heating and other nonlinear optical effects. The maximum usable ring length will eventually be limited by the increased ring round-trip loss, which will cause linewidth broadening that can no longer be compensated by reducing the directional coupler coupling coefficient. The variation of the power-induced resonance wavelength shift with ring length is illustrated in Fig. 7, which shows the calculated nonlinear response for several model rings of different cavity lengths but all with the same power-dependent effective index of Eq. 4. The waveguide loss was assumed to be 2 dB/cm in each case, and the directional coupler coefficients were adjusted so that each ring was critically coupled. The resonance -3 dB linewidths of Δλ_3dB = 8.4 pm were approximately the same for each ring. The trends illustrated in Fig. 7 are confirmed by the experimental results presented in Figs. 5 and 6. Comparing rings with L = 100 μm and L = 2000 μm in Fig. 7(a) at input powers of 1 mW, the strong red shift from self-heating can be reduced from 20 pm to less than 0.2 pm. The small initial blue shift is more difficult to avoid since it occurs at very low powers. However, for input powers less than 100 μW, the power-induced change is suppressed to less than 0.1 pm, corresponding to temperature reading errors of less than 1 mK. For most applications this is negligible, but it may be important for demanding metrology applications. The disadvantage of extending the cavity length is that the FSR decreases with cavity length, limiting the temperature change before adjacent order resonances overlap. For example, the FSR in Rings B and D is only 0.6 nm, so an 8 °C temperature change will cause the resonances to shift by the full FSR. This leads to a temperature reading ambiguity if the ring is interrogated intermittently, since the resonance order will not be known if a temperature change of more than 8 °C occurred in the interval between measurements. On the other hand, the ambiguity can be avoided if resonances are tracked continuously in real time.
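The inverse scaling of circulating power with cavity length at a fixed linewidth can be illustrated numerically. The sketch below uses the approximate on-resonance buildup discussed above, with an assumed group index of about 4.3 and the 8.4 pm linewidth of the Fig. 7 model rings.

```python
import math

# Circulating-power buildup versus ring length at a fixed -3 dB linewidth, using
# the approximate relation P_circ ~ P0 * FSR / (pi * dlam_3dB). The group index
# n_g ~ 4.3 is an assumption; the 8.4 pm linewidth follows the Fig. 7 model rings.
lam, n_g = 1550e-9, 4.3
dlam_3dB = 8.4e-12        # fixed -3 dB linewidth (m)
P0 = 1.0                  # power in the through waveguide (mW)

for L_um in [70, 100, 500, 950, 2000]:
    fsr = lam**2 / (n_g * L_um * 1e-6)
    print(f"L = {L_um:5d} um: FSR = {fsr*1e9:5.2f} nm, P_circ ~ {P0*fsr/(math.pi*dlam_3dB):6.1f} mW")
```

Doubling the cavity length halves the circulating power, and hence the self-heating, for the same linewidth, consistent with the Ring A/C versus Ring B/D comparison above.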
Intra-ring scattering: Resonance line splitting and distortion
Stray light from unwanted reflections and scattering is present to a greater or lesser degree in all integrated optical chips, and can cause significant changes in performance for integrated photonic devices. In this section, we examine the effects of back-reflections and scattering specifically in a ring resonator thermometer, but the considerations are equally relevant to many types of resonator-based sensors. Light that is scattered out of the waveguide into the substrate or cladding simply contributes to the total waveguide loss. While higher loss will increase the resonator linewidth, loss alone will not otherwise change the ring behaviour. On the other hand, light that is scattered either once or many times into counter-propagating waveguide modes (i.e., back reflections) can cause significant changes in the device output. Within a photonic chip, every transition between different waveguide structures will produce some back reflection. Light back-scattered from sidewall roughness and defects into the waveguide also contributes to the stray light propagating through the system, and can be regarded as a distributed back-reflection.
Back scattering within the ring resonator has the most serious and obvious effect on the ring output. Resonance line splitting of up to several picometers and asymmetry in the overall line shape are commonly observed in Si waveguide micro-resonators [27,31-33] and are also present in the ring resonators used in this work. These line shape distortions create ambiguities in identifying a precise resonance wavelength, and may introduce further temperature reading errors if the distortions are wavelength or temperature dependent. Back-scattering within microresonators has already been described by many groups [27,31-35]. Here the basic phenomenology and descriptive equations are only briefly summarized, in order to better understand the impact of line shape distortions on temperature reading accuracy and repeatability.
Strong back-scattering by distributed sidewall roughness or isolated defects within the ring cavity will couple light from one ring mode into the degenerate counter-propagating ring mode of the same order. The two coupled modes combine to form two resonances separated in wavelength, and the observed resonance line shape in the ring output spectrum splits into two closely spaced resonance minima. Back scattering and reflections in the directional coupler (DC) can also excite the counter-propagating waveguide mode by directly coupling the incident beam backwards into the ring [32]. Such back coupling arises from reflections at the two ends of the DC if the through waveguide and ring junctions are not sufficiently adiabatic, and also from any sidewall roughness along the DC parallel waveguide section. Back coupling can be comparable to in-ring back-scattering and cause strong line shape asymmetries even when the underlying scattering mechanisms are wavelength independent [32]. The resulting distortions become increasingly difficult to avoid as the resonator Q increases, since the counter-propagating mode power becomes larger when the total ring loss is low compared with the back scattering strength. An additional complication is that the line shape distortion can be strongly wavelength dependent. Even for the same ring, the individual ring modes at different resonance orders may exhibit different Q-factors, mode splitting, and wavelength offsets from the nominal resonance position [27,31,32]. One resonance may have a perfect single-peak Lorentzian line shape as expected from Eq. 1, while another resonance only a few nanometers away may be split into two peaks, as is evident from the measured line shapes in Fig. 8.
Resonator mode splitting is usually treated using time-dependent coupled mode theory [32-37]. Coupled mode theory has been applied in many previous publications, but we repeat a brief outline here in order to arrive at a descriptive form for the all-pass transmission, which has not been the focus of previous work but is the transduction signal in our thermometers. The excitation and coupling of the two counter-propagating ring mode field amplitudes A+(t) and A-(t) can be described in terms of an internal ring back scattering strength γ, and the directional coupler forward and back coupling coefficients, κf and κb respectively. The coefficient γ lumps together both distributed sidewall roughness scattering and any scattering from specific defects in the ring. It is assumed that the input field Sin·exp(iωt) has an optical frequency ω, and the circulating modes therefore have a time dependence of the form A±(t)·exp(iωt). The driving frequency ω is also assumed to be close to the nominal frequency ω0 = 2πmc/(Neff·L) for the resonance of order m. Here Sin is the input waveguide mode field amplitude, Δω = ω - ω0 is the frequency offset from resonance, γ0 is the exponential attenuation coefficient due to intrinsic waveguide loss, and γc is the corresponding mode attenuation due to coupling of light from the ring into the output waveguide by the directional coupler through backwards and forward coupling. Setting the time derivatives in both Eq. 6(a) and 6(b) to zero yields the time-independent expressions (Eq. 7) for the two counter-propagating mode fields within the ring.
The circulating power in these modes will have two maxima at ω± = ω0 ± (γ² - (γ0 + γc)²)^(1/2) if the back scattering coefficient γ is larger than the total loss γc + γ0 [33]. For smaller values of γ there is no splitting and the transmission minimum occurs at the nominal resonance frequency ω = ω0, i.e., at Δω = 0. In the all-pass configuration used in the thermometer devices, the final output field in the through waveguide, Sout, is a superposition of contributions from the incident field, the desired forward-coupled A+ mode, and the unintentionally backwards-coupled counter-propagating A- mode [34]. Using the square modulus of the output field Sout, the ring power transmission can be arranged into the form of Eq. 8. The denominator in Eq. 8 has a similar form to that of |A+|² and |A-|², with minima at the split resonance frequencies ω± [33]. The first part of the numerator is symmetric in Δω, while the second part is asymmetric and depends on a coefficient that varies with the complex ratio of back- and forward-coupled field strengths, κb/κf, as introduced in Ref. [32]; the phase of this ratio appears in the asymmetric term. Equation 8 predicts different line shape behaviours depending on the values of γ and κb. In the simplest case the back scattering and DC back coupling are assumed to be wavelength and temperature independent. In the absence of any back scattering in the ring (γ = 0) or back coupling in the directional coupler (κb = 0), Eq. 8 gives the same spectral line shape predicted by Eq. 1. If back scattering in the ring is present but below the threshold for mode splitting (γ < γc + γ0), the transmitted line shape remains symmetric and retains a single minimum at ω0, and the transmitted power is zero at the critical coupling condition γc² = γ0² + γ² [26,32-34]. Back scattering will cause some line broadening, but in the weak scattering regime the resonance minimum remains at the nominal resonance frequency ω0 (or wavelength λ0), and will shift with temperature at the same rate as in an ideal silicon ring. Therefore, weak internal ring scattering does not impair the accuracy or practical viability of a ring photonic thermometer. Above the splitting threshold, the more complex functional form of the numerator in Eq. 8 means that the frequencies of the transmission minima differ slightly from the peaks of the internal modes of Eq. 7, but they will still be of equal depth and equidistant about the nominal resonance center frequency ω0. Since ω0 still changes with temperature in the same way as in an ideal ring, the average center frequency (or wavelength) of the split pair would in principle remain an accurate indicator of ring temperature. Finally, when the back-coupling in the directional coupler is included, the last term in the numerator of Eq. 8 creates an asymmetry in the spectral line shape about the center resonance frequency. If mode splitting is present this will result in two asymmetric lobes in the transmission spectrum. This is the case for the measured resonance shapes in Fig. 8(b). Even without obvious resonance splitting, an asymmetric distortion of the transmitted line shape may be present. It then becomes impossible to determine the nominal ring resonance frequency ω0 (or wavelength) and hence temperature without resorting to complex line shape fitting procedures. The concomitant complexity and ambiguity are obviously undesirable in a practical thermometer. Line splitting is often observed in our Si rings and can be up to 10 pm in wavelength separation, as in Fig. 8(b),
so the resulting uncertainty in the resonance center wavelength could potentially limit temperature reading accuracy to more than 100 mK. The discussion so far is based on the simplifying assumption that back scattering and back coupling are wavelength independent. In reality the back scattering and back coupling can exhibit a significant wavelength dependence, and the line shape distortions and resonance depths may be completely different from one mode order to the next for the same ring [31,32]. This wavelength dependence has been attributed to the statistical independence of the Fourier components of side wall roughness [31] and to interference effects when light is scattered repeatedly between different points in the ring and directional coupler [32]. This point is illustrated by the two measured resonances of Ring D shown in Fig. 8. These resonances are only 4 nm apart in wavelength, yet the resonance in Fig. 8(a) has an almost perfect Lorentzian line shape while the line in Fig. 8(b) is broadened and split into a clearly separated and asymmetric pair. The line of Fig. 8(a) appears to be ideal for photonic thermometry or any other sensor based on resonance tracking. However, since scattering and back coupling are wavelength dependent, one might expect that as the temperature, and hence the resonance wavelength, changes, the line shape itself will evolve with temperature in an unpredictable way. This would render the device unusable as a thermometer because of the ambiguity in line fitting or peak finding if the line shape constantly evolves with temperature.
Fortunately, this is in fact not the case. We have found that ring resonance line shapes are surprisingly invariant over a wide temperature range. The single-lobed Lorentzian mode of Ring D in Fig. 8(a) is shown at several temperatures from 20 °C to 80 °C. The line shapes at each temperature are indistinguishable from each other, even though the resonance center moves by more than 4 nm between T = 20 °C and 80 °C. For this resonance the scattering and back coupling effects appear to be negligible. Distortions do not appear as the temperature changes, even though the temperature-induced resonance wavelength change exceeds the 3.6 nm nominal wavelength difference between the two resonance orders at 20 °C shown in Fig. 8(a) and Fig. 8(b). It is even more surprising that the strongly distorted resonance of Fig. 8(b) is also completely invariant across the same 60 °C temperature range. The distorted line profile indicates that scattering and back coupling are obviously significant, but their relative magnitude and effect are completely unchanged by heating the ring. The explanation is that the scattered and back-coupled field propagation phases in the ring scale with the effective index of the waveguide Neff, which changes with temperature due to the thermo-optic response of silicon. The main ring resonance mode phase also scales identically with Neff. As the temperature changes and the resonance wavelength (as measured in vacuum) changes, the physical wavelength λ/Neff in the ring cavity at the resonance center remains constant, as do the relative phases of the fundamental ring mode fields, scattered fields, and back-coupled fields. Hence their relative interactions are unchanged over a wide temperature range. On the other hand, the amplitudes of the electric fields scattered from specific point defects are temperature independent, since the scattering depends on the defect size and the index difference between the Si core (n ~ 3.5) and the SiO2 cladding (n ~ 1.45). This index step is much larger than any temperature-induced index changes, given the thermo-optic coefficients of 2×10⁻⁴ °C⁻¹ for silicon and 1×10⁻⁵ °C⁻¹ for SiO2. Conceivably the line shape invariance may eventually break down over a very large temperature span because the device itself also undergoes thermal expansion. However, such deviations have not been observed over the temperature ranges used in this work. The conclusion is that if a resonance order with an acceptable line shape can be identified, that specific resonance will remain unchanged and can be used for temperature measurement over a wide temperature range. In the Si rings used in this work, distorted and split resonances do appear frequently, but the majority of resonance orders across the 1530 nm to 1580 nm working wavelength range have single-lobed line shapes.
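Before moving on, a minimal steady-state sketch of the coupled-mode picture of Eqs. 6-8 is given below. The normalization, coupling conventions, and the expression used for the through-port field follow common temporal coupled-mode treatments and are assumptions rather than the exact equations of this work; the parameter values are illustrative. The sketch reproduces the three regimes discussed above: an ideal single dip, a symmetric split doublet when the backscattering rate exceeds the total loss rate, and an asymmetric doublet when directional-coupler back-coupling is added.

```python
import numpy as np

# Steady-state coupled-mode sketch of intra-ring backscattering (rate u) and
# directional-coupler back-coupling (amplitude kb). Conventions and parameter
# values are illustrative assumptions, not fits to the devices in this work.
g0, gc = 0.5e9, 0.5e9                 # intrinsic and coupling loss rates (rad/s); critical coupling
kf = np.sqrt(2*gc)                    # forward in-coupling amplitude (standard CMT normalization)
dw = np.linspace(-8e9, 8e9, 2001)     # detuning from the nominal resonance (rad/s)

def transmission(u, kb):
    D = (g0 + gc) - 1j*dw
    # steady-state solution of the 2x2 coupled-mode system for A+ and A-
    Ap = 1j*(kf*D + 1j*u*kb) / (D**2 + u**2)
    Am = 1j*(kb*D + 1j*u*kf) / (D**2 + u**2)
    S_out = 1 + 1j*kf*Ap + 1j*kb*Am   # unit input field; forward and back-coupled contributions
    return np.abs(S_out)**2

T_ideal = transmission(u=0.0, kb=0.0)         # single symmetric dip, zero on resonance
T_split = transmission(u=3e9, kb=0.0)         # symmetric doublet (u > g0 + gc)
T_asym  = transmission(u=3e9, kb=0.2*kf)      # split and asymmetric doublet
print(T_ideal.min(), T_split.min(), T_asym.min())
```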
Extra-ring scattering and back reflection: Background intensity noise
Scattering and multiple reflections in waveguides external to the ring can also distort the ring line shape, and under certain conditions (e.g., strong coupling between the ring and reflection-induced parasitic external resonant cavities) render the ring unusable as a sensor [39,40]. In this section, we assume that the back-reflections and back-scattering are sufficiently small that only small perturbative changes in the resonance line shape occur. In a silicon photonic chip, the strongest reflections usually arise at the input and output waveguide facet couplers or surface grating couplers. Since the stray light can be back-reflected or scattered many times and interfere coherently with the main incident beam, reflections and scattering can produce both periodic ripple and random fluctuations in the intensity and phase of light propagating through even a simple straight waveguide. Significant progress has been made in designing coupling structures that simultaneously maximize fiber-to-waveguide coupling efficiency and minimize back reflections [12,38], but given the sensitivity to small dimensional variations in as-fabricated features, reflections can never be completely suppressed even using optimal designs. Nevertheless, the effects on sensor performance can be minimized through appropriate mitigation strategies.
As an example, Fig. 9(a) shows a measured spectrum for Ring C at two different temperatures, for which a background intensity ripple is generated by input/output grating coupler back reflections of about R = 0.06. These couplers create a weak on-chip Fabry-Perot (FP) cavity encompassing the ring, as shown schematically in Fig. 9(b). The cavity length between the two gratings is 2 mm. The induced intensity and phase fluctuations near the resonance can skew the apparent line shape and produce resonance wavelength measurement errors. Similar reflection effects can arise from elsewhere in the optical measurement train, such as at optical fiber facets and at the interfaces of any in-line optical components such as waveplates and polarizers. The effects of on-chip and off-chip reflections are similar, but on-chip sources of reflection cannot be eliminated once the chip has been fabricated. We use the two simple models shown in Fig. 9(b) and Fig. 9(c) to examine the effect of reflections and back-scattering on thermometer reading accuracy. The first model consists of a ring coupled to the through waveguide and positioned inside the FP cavity formed between two waveguide-to-fiber couplers with back-reflection coefficients r1 and r2. The other details of the couplers are not important in this discussion. This geometry forms a coupled FP cavity and ring resonator system in which the intensity and phase of the light in the through waveguide are modified by the ring on each FP cavity round trip. This model produces a background fringe pattern that is consistent with the measured Ring C spectrum shown in Fig. 9(a). In a typical photonic chip, the back-reflections are weak, so the total FP cavity loss is much larger than the ring DC coupling strength, and the perturbations of the transmitted ring resonance line shape will be small. The second model, of Fig. 9(c), is similar to Fig. 9(b) except that the ring is now positioned outside the FP cavity. Here the intensity fluctuations are created by multiple reflections between two interfaces located on only one side of the ring, and therefore generate a simple external background intensity modulation that is independent of the ring properties. Real devices may show more complex behaviour due to the superposition of several sources of back-reflection and scattering through the optical path, but these two models capture the basic physics and phenomenology of the two mechanisms that can distort the apparent resonance line shape, and can guide mitigation strategies to improve ring sensor measurements.
The transmission spectrum of the coupled ring-FP system in Fig. 9(b) can be calculated by including the amplitude and phase modulation of the light due to the presence of the ring resonator in the internal propagation term of the FP transmission function [41]. The complex through transmission of the ring resonator directional coupler is

B_r = (t - a e^{iψ}) / (1 - a t e^{iψ}),   (9)

where the coefficients are defined as for Eqs. 1-3 above. Inserting B_r into the FP transmission equation gives

T = T_0 |B_r|² / |1 - r_1 r_2 B_r² e^{iδ}|².   (10)

The coefficients r1 and r2 are the field reflectivities of the FP mirrors (e.g., the input and output grating couplers), and δ = 4πNeffLFP/λ is the round-trip phase accumulated by through-waveguide propagation within the FP cavity of length LFP, not including the phase modulation introduced by the ring transmission Br. The factor T0 incorporates the coupling efficiency and losses of the grating couplers. Here T0, r1, and r2 are treated as wavelength-independent constants in the immediate neighbourhood of the ring resonance. Fig. 10 shows the calculated transmission spectra for a ring with length L = 950 μm embedded in a 2 mm long FP cavity formed by two reflectors with reflectivity of R = 0.15, i.e., in the configuration shown in Fig. 9(b). This model ring is nominally identical to Ring B, using the parameters in Table 1. The spectra are shown for several ring temperatures incremented in steps of 0.2 K. An asymmetrical skewing of the line shape is particularly evident when the resonance center coincides with the steepest slope of the FP fringes. Since the fluctuations vary with wavelength, the error in the measured resonance wavelength centre is wavelength dependent, and therefore temperature dependent. The measured centre wavelength will periodically swing backwards (blue-shift) and forwards (red-shift) from the true resonance centre as the resonance moves relative to the background ripple pattern. When used as a thermometer, the temperature readings will exhibit a corresponding periodic error as the temperature changes. Fig. 11(a) shows an example simulation of the temperature reading error and resonance wavelength error for the ring-FP model used to create the spectra in Fig. 10. The resonance wavelengths plotted in Fig. 11 are extracted from the ring-FP spectra using three different measurement procedures. The first method determines the resonance wavelength using a least-square error fit of a rigid line shape, with functional form as in Eq. 1, to the ring-FP spectrum at any given temperature, over a wavelength window spanning one -3 dB resonance width Δλ3dB. The second procedure was the same, except that the fitting was carried out over a narrower window of one-half Δλ3dB, to reduce the range of the spectrum near the resonance minimum that contributes to the fit. Finally, the resonance positions were also determined simply by choosing the minimum point of the resonance. This last approach gives a periodic error that is always within 0.02 pm of the resonance wavelength at each temperature, and hence the corresponding temperature error is less than 300 μK. The larger the fitting window over which the resonance is sampled, the stronger the effect of the line distortions becomes. The 0.2 pm (3 mK) error obtained by fitting over the full -3 dB width window is much larger than that obtained using the smaller fitting window or simply taking the minimum transmission point. This behaviour arises because the ring through transmission Br of Eq. 9
attenuates the intensity of the circulating FP reflections on each pass. This attenuation is highest near resonance, essentially turning off the FP cavity, so that near the minimum point the resonance is only very weakly perturbed. The minimum point itself therefore gives the best estimate of the true resonance position. On the other hand, when fitting over a finite window, the extended line shape distortion contributes to the fit and creates a spurious measurement offset. The temperature and resonance wavelength error for the back-reflection geometry of Fig. 9(c) is shown in Fig. 11(b). Here the ring is external to the FP cavity and does not itself modify the intensity fluctuations produced by the multiple back-reflections. The apparent resonance is skewed by the background intensity modulation, producing a resonance wavelength error that can be positive or negative depending on the relative alignment of the background modulation pattern and the resonance. The errors are smaller than in Fig. 11(a) because the ring and FP are not coupled and the phase distortion by the ring is not present. As in the coupled FP-ring geometry, the error is minimized by fitting to a narrow region around the resonance minimum or by simply choosing the minimum point. The most accurate result for the scenarios of Fig. 9(b) and Fig. 9(c) would be expected for a critically coupled ring with a transmission zero on resonance, combined with simply tracking the wavelength of the zero-transmission point. Multiple internal reflections forming a parasitic FP cavity will be completely attenuated on resonance, and the transmission zero will occur at the true resonance wavelength. The wavelength of a transmission zero also cannot be changed by any external intensity wavelength dependence or temporal modulation. These considerations suggest that second-derivative-based minimum detection methods combined with laser locking may be useful in tracking thermometer resonances.
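The coupled ring-FP picture of Fig. 9(b) and the resonance-extraction comparison can be sketched numerically as below, using the expressions for B_r and the coupled ring-FP transmission given above. The parameter values are illustrative, and the quadratic fit is only a simple stand-in for the rigid line-shape fit described above; wider fitting windows are more exposed to the ripple-distorted wings of the resonance than the minimum point is.

```python
import numpy as np

# Ring embedded in a weak on-chip Fabry-Perot cavity (Fig. 9(b) geometry):
# apparent resonance wavelength extracted from the rippled spectrum. All
# parameter values are illustrative.
L_ring, t, a = 950e-6, 0.99, 0.99
n_eff = 1472 * 1550e-9 / L_ring            # chosen so a resonance order sits exactly at 1550.000 nm
L_fp, r1, r2 = 2e-3, np.sqrt(0.15), np.sqrt(0.15)
lam = np.linspace(1549.8e-9, 1550.2e-9, 40001)

psi = 2*np.pi*n_eff*L_ring/lam
B_r = (t - a*np.exp(1j*psi)) / (1 - a*t*np.exp(1j*psi))       # complex ring transmission, Eq. (9)
delta = 4*np.pi*n_eff*L_fp/lam                                # FP round-trip phase
T = np.abs(B_r)**2 / np.abs(1 - r1*r2*B_r**2*np.exp(1j*delta))**2   # Eq. (10) with T0 = 1

true_res_pm = 1550.0e-9 * 1e12
i_min = T.argmin()
print(f"minimum-point error: {lam[i_min]*1e12 - true_res_pm:+.4f} pm")

# quadratic fit over windows of increasing width around the minimum
for win_pm in [1.5, 3.5, 7.0]:
    sel = np.abs(lam - lam[i_min]) < win_pm*1e-12
    x = (lam[sel] - lam[i_min])*1e12       # pm, relative to the minimum sample (well conditioned)
    c = np.polyfit(x, T[sel], 2)
    fit_res_pm = lam[i_min]*1e12 - c[1]/(2*c[0])
    print(f"fit window +/-{win_pm} pm: error = {fit_res_pm - true_res_pm:+.4f} pm")
```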
While the two configurations of Fig. 9(b) and 9(c) are very specific geometries, a complex spectral distortion can be considered a superposition of back-reflection-induced modulation patterns similar to these two examples. The results of Fig. 11 show that in either case the most accurate resonance measurement is obtained by measuring very near the resonance minimum, and preferably designing the ring to be critically coupled so that the resonance minimum is near zero. In practice, line shape fitting over a finite spectral window is often used to determine the center wavelength of a spectral line in order to reduce the uncertainty due to spectral noise fluctuations and detector noise [5]. The results here show that in the case of ring resonators in the all-pass configuration, this can be counter-productive. For example, the small resonance blue shifts measured for Rings C and D shown in Fig. 6 were only resolved by measuring the minimum point of each resonance. In our original attempt at data analysis using line shape fitting to determine resonance wavelengths, the wavelength scatter was several times larger than that seen in Fig. 6. The above discussion assumes that only intrinsic, time-independent signal distortions are present. The measured resonance spectrum may also be changed by time-varying external system noise from the photodetector, light source, and mechanical vibrations, and also by shot noise. Fitting may obviously improve measurement accuracy when such time-varying external fluctuations are the dominant error source. In each measurement situation there will be a compromise to be considered before deciding on the optimal sensor data analysis strategy. Fitting may be helpful to mitigate time-dependent noise, but should be carried out over a narrow spectral window when background spectral intensity variations and ripple are present. Averaging repeated spectral scans over the resonance position, combined with identifying the resonance minimum directly, may be the best strategy to eliminate time-dependent noise.
The numerical simulations used to generate the error curves in Fig. 11 assumed that the ring alone changed temperature, while the through waveguide and grating couplers that contribute to the background intensity fluctuations did not change temperature. In the case where the entire chip is at a uniform temperature, and all the waveguides have exactly the same thermo-optic coefficient, the ripple pattern in Fig. 10 will move rigidly in lockstep with the ring resonance as the chip temperature changes. This is similar to the invariance of the back-reflection-induced line splitting observed in Fig. 8. The resonance wavelength displacement in this case will simply be a constant offset, and can be included in the calibration if it is possible to regard the entire chip as the effective thermometer element. This approach would not work when there are temperature-dependent on-chip thermal gradients, or when the back-reflection sources are not on the chip but elsewhere in the optical measurement path, well away from the measurement point. Fig. 9(a) shows the resonance for Ring C at T = 30 °C and T = 80 °C, and it confirms that the grating coupler induced FP ripple does move almost in lockstep with the resonance line. There is a small 4 pm lag in the ring resonance shift relative to the ripple, which may be attributed to the small difference in the temperature-dependent effective index of the curved ring waveguides and directional coupler, compared with the straight through waveguide that forms the FP cavity. Although this small lag would have an unmeasurable effect on the measured resonance minimum in these experiments, it would be compensated in the thermometer calibration procedure.
Summary
In this work we have examined some of the intrinsic sources of temperature measurement error in silicon photonic ring resonator thermometers. These impairments may arise from optically induced changes in the silicon waveguide, the primary example being optical absorption leading to local heating of the waveguide. Spectral distortions may also arise due to unintended reflections from defects, side-wall roughness, and discontinuous boundaries between different elements along the optical path, such as fiber facets and in-line polarizers or splitters, and also from grating couplers or waveguide facets at the chip edge.
Although the incident photon energy is well below the silicon band gap, refractive index changes may still arise from defect and surface state mediated absorption [26-30]. For ring cavities of 100 μm or less, self-heating of the silicon waveguide is found to cause the ring waveguide to be several hundred mK hotter than the ambient environment, even when using probe powers near 1 mW as is common in routine photonic device testing. At very low incident powers (< 20 μW) where self-heating is small, there remains a small blue shift of the ring resonance in all the Si devices tested, which corresponds to temperature reading errors near 2 mK. Such a small offset is not of concern in most applications, but may be important for precision measurement and metrology standards work requiring sub-millikelvin resolution and accuracy.
The obvious mitigation strategy to reduce the power-dependent effective index change is to minimize the optical probe power, but the power can only be reduced to the level where the signal-to-noise ratio becomes compromised by detector noise and shot noise. An alternative method is to use a broadband light source and measure the ring spectrum using an external spectrometer. Since the power is distributed over a wide bandwidth, the circulating power at the resonance wavelength can be much lower than for a single-line laser source, and self-heating will be correspondingly lower. Achieving accuracy comparable to our tunable lasers would, however, require spectrometers with a resolution of better than 1 pm, which to the best of our knowledge are not commercially available. The comparison of self-heating in short (Rings A and C) and long (Rings B and D) rings shows that simply increasing the ring cavity length is a much simpler solution and does not require any change in measurement procedure. Measurements on Ring D show that the observed intensity-dependent wavelength change can be less than 0.2 pm (i.e., < 3 mK temperature error) at probe powers that give more than adequate signal to noise. As previously noted, the presence of defect and surface states is likely the most significant cause of absorption-related heating and index change in these waveguides. It is therefore expected that self-heating will be fabrication dependent, so the selection and optimization of fabrication processes to minimize near-surface defects and improve Si-cladding interface properties is also a path for temperature sensor improvement [28].
Back-reflections and back-scattering within the ring cause resonance splitting and line shape distortion. This is particularly common in high-Q, narrow linewidth resonators. Observed line shape distortions can vary randomly from one resonance order to the next in the same ring, but in this work we have found that the distorted line shapes are completely temperature independent, due to the identical temperature scaling of the scattered field and primary ring mode propagation phases. Any such distorted resonance therefore remains suitable for tracking temperature as long as the fitting or peak finding method can reliably track the resonance position. In the ring devices used in this work, severe line distortions and splitting are evident at some orders, but symmetric single-lobed Lorentzian resonances are still common; such resonances remain unchanged with temperature and can serve as a reliable marker for temperature readout. Eliminating line splitting altogether requires reducing side-wall roughness and designing the transitions between the directional coupler, waveguide bends, and straight waveguides to be sufficiently adiabatic that back-reflections are negligible. Alternatively, the ring-to-through-waveguide coupling can be increased to raise the ring round-trip loss relative to the back-scattering, but this would sacrifice wavelength/temperature resolution since the resonance linewidth increases with coupling strength.
The input and output surface grating couplers are particularly problematic, since gratings designed for a near-normal coupling angle have a grating periodicity near the Bragg condition for back-reflection and therefore naturally form a weak FP cavity. Again, grating couplers can be designed with local structure factors that suppress such back-reflection [12,38]. Another simple design strategy to reduce ripple distortion is to shorten the distance between the input and output grating couplers. This increases the period of the FP ripple relative to the ring linewidth, so the slope of the background intensity is smaller and there is less distortion of the ring line shape. The improvement in background intensity modulation is illustrated by comparing the reflection-induced ripple for a 2 mm coupler separation in Fig. 9(a) (Ring C) with that in Fig. 8 (Ring D), for which the coupler separation was reduced to approximately 900 µm.
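The design rule can be quantified with the free-spectral-range estimate Δλ_FSR ≈ λ²/(2 n_g L) for the coupler-to-coupler cavity; in the sketch below the group index is an assumed typical value for a Si strip waveguide, not a measured one.

```python
wavelength_nm = 1550.0
n_group = 4.2                      # assumed group index (illustrative, not measured)

def fp_ripple_period_pm(separation_um):
    """Free spectral range of the weak FP cavity formed by the two grating couplers."""
    length_nm = separation_um * 1e3
    return 1e3 * wavelength_nm ** 2 / (2.0 * n_group * length_nm)   # result in pm

for sep_um in (2000.0, 900.0):     # the two coupler separations discussed above
    print(f"{sep_um:.0f} um separation -> ripple period ~ {fp_ripple_period_pm(sep_um):.0f} pm")
```

A larger ripple period relative to the ~10 pm ring linewidth means a flatter local background across the resonance, which is the improvement described in the text.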
Back-reflections and the resulting spectral distortion may also arise from off-chip optical elements, so similar care must be taken in building the measurement system and packaging the silicon photonic sensor chip. The sealed thermometer probe configuration described in Section 2 above, in which the fiber is several hundred micrometers away from the chip, is particularly effective in eliminating reflections from the input and output fiber facets and the silicon surface. Since the free-space beam is strongly diverging, these reflections are only very weakly coupled back into the fiber or the silicon chip, at a level below −15 dB, and so have little effect on the measured ring spectrum. In Fig. 9(a) the only visible spectral distortion is the fringe pattern of the on-chip FP cavity formed by the input and output couplers, whereas measurements on similar devices in the direct fiber-coupled test bench configuration often exhibit more random background intensity variations as the wavelength is scanned.
The resonance line shape very near the minimum-wavelength point is relatively unperturbed by back-reflection induced spectral distortion, both when the ring is coupled to the reflection-induced FP cavity and when there is a simple intensity modulation. In the case of a critically coupled ring, the resonance minimum is zero and its wavelength is immune to spectral intensity distortions. Therefore, attempts at line shape fitting over a wide wavelength window may give poorer accuracy than simply determining the resonance wavelength from the minimum point of the line. The experimental results in Fig. 4(b) demonstrate that accuracy down to 0.1 pm can be achieved by measuring the minimum point, even though the 3 dB width of the resonance is of the order of 10 pm.
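A minimal sketch of sub-sample minimum finding, locating the resonance minimum by a three-point parabolic interpolation around the deepest sample; the sampling step and line shape here are arbitrary choices for illustration, not the procedure used with the commercial test equipment.

```python
import numpy as np

def resonance_minimum(wavelength_pm, transmission):
    """Locate the resonance minimum to sub-sample precision by fitting a parabola
    through the deepest sample and its two neighbours."""
    i = int(np.argmin(transmission))
    i = min(max(i, 1), len(transmission) - 2)          # keep the 3-point stencil in range
    y0, y1, y2 = transmission[i - 1], transmission[i], transmission[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    step = wavelength_pm[i + 1] - wavelength_pm[i]
    return wavelength_pm[i] + offset * step

# Example: a 10 pm-wide Lorentzian dip sampled every 0.5 pm, minimum placed at +1.3 pm
lam = np.arange(-50.0, 50.0, 0.5)
trans = 1.0 - 0.8 / (1.0 + ((lam - 1.3) / 5.0) ** 2)
print(resonance_minimum(lam, trans))   # recovers ~1.3 pm despite the coarse sampling
```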
In this work we have shown that silicon ring resonance wavelengths can be accurately measured and monitored down to the 0.2 pm level using commercial photonic test equipment and applying the mitigation strategies described above. In the application of such rings to photonic thermometers, this corresponds to a temperature accuracy of approximately 3 mK. In principle the mitigation methods suggested here may be extended to give a measurement resolution in the sub-millikelvin regime. Nevertheless, the role of near-surface and interface states in optical absorption and carrier excitation implies that the precise nonlinear optical behaviour of each Si device may depend on the details of its fabrication history. Achieving accuracy and reproducibility at the 100 µK level or better will require individual device calibration, but this is also the current practice in the case of SPRTs. Finally, it is important to consider the limits of the state of the art in photonic measurement and device fabrication. Obtaining ring resonance linewidths of less than a few picometers requires a level of control of the directional coupler dimensions and waveguide losses that is difficult to achieve with commonly used standard Si fabrication processes. For such very high-Q resonators, line splitting and shape distortion will become more common, as discussed in Section 6. Furthermore, the sub-picometer spectral resolution required to carry out the measurements in this paper is at the limits of commercial photonic test equipment (e.g., a tunable laser or spectrometer). Entering the microkelvin measurement regime using simple single-ring devices will require spectral resolution, stability, and accuracy in the femtometer (fm) wavelength range, and hence new and customized optical measurement systems. Alternatively, it may be possible to adopt more complex photonic chip architectures that amplify the temperature response through, for example, waveguide group index engineering [42]. Another approach that has only begun to be explored is to use the evolution of complex spectral patterns generated by two or more rings with different thermo-optic coefficients [43,44] to extract temperature.
Disclosures. The authors declare no conflicts of interest.
Fig. 1 .
Fig. 1. (a) The layout of the ring resonator thermometer for Ring C. Input and output grating couplers are at left. The ring cavity length is L = 70 µm. (b) The measured resonance wavelength variation for Ring C with temperature. The inset shows the resonance line at temperatures T = 23, 30, and 40 °C.
Fig. 2 .
Fig. 2. The sealed thermometer probe assembly showing (a) a detailed view of the invar mounting stage and two-fiber block assembly, and (b) a schematic side-view cross-section of the entire probe (not to scale). The output fiber is parallel to the input fiber as shown in (a), but only the input fiber is shown in this cross-sectional view. The polished fiber block face is positioned 250 µm above the Si chip surface.
Fig. 4 .
Fig. 4. (a) The measured and (b) calculated transmission resonance spectra for Ring A at input laser powers from 0 µW to 2000 µW, showing resonance wavelength shifts and line shape distortion resulting from optical power induced effective index changes. The measured and calculated resonance spectra for Ring B at laser powers up to 15 mW are shown in plots (c) and (d), respectively. The chip temperature is held at T = 25 °C ± 0.05 °C. The nominal resonance wavelengths are given in Table 1, but to facilitate comparison of the four graphs, the wavelength scale is shifted so that Δλ = 0 pm corresponds to the line centre at zero incident power. The calculated line shapes for 0 µW power are vertically offset for clarity. The empirical power-dependent effective index derived from the Ring C measurements is used to produce the model results shown in (b) and (d).
Fig 5 .
Fig. 5. The measured and calculated resonance wavelength shift with incident power for (a) Ring A and (b) Ring B. Resonance wavelengths are determined from the minimum points of the resonance lines shown in Fig. 4. The chip temperature is held at T = 25 °C. The stage temperature stability of ±0.05 °C generates a measured resonance wavelength uncertainty of approximately ±3 pm due to temperature induced resonance drifts.
Fig. 6 .
Fig. 6. (a) The measured and calculated resonance wavelength variation with incident power for Ring C (L = 100 µm) and Ring D (L = 950 µm). The rings are maintained at T = 23 °C with less than ±0.001 °C variation during the measurements. (b) An expanded view of the low-power range. The wavelength axis origin (Δλ = 0 pm) corresponds to resonance wavelengths of λ = 1535.807 nm and λ = 1557.981 nm for Rings C and D, respectively.
Fig. 7 .
Fig. 7. (a) Comparison of power-induced wavelength shifts for Si ring resonators of various lengths from 100 µm to 2000 µm, using the effective index model of Eq. 4. (b) An expanded view showing behaviour at low power. In these calculations the waveguide loss was fixed at 2 dB/cm, and the ring directional coupler was adjusted to produce critical coupling at each ring length.
$$\dot{a}_{+}(t) = -(i\Delta\omega + \gamma_{0} + \gamma_{c})\,a_{+}(t) + i\beta\,a_{-}(t) \qquad \text{(6a)}$$
$$\dot{a}_{-}(t) = -(i\Delta\omega + \gamma_{0} + \gamma_{c})\,a_{-}(t) + i\beta\,a_{+}(t) \qquad \text{(6b)}$$
Fig. 8(a) shows an example of a line shape typical of the low back-scattering regime, measured for Ring D. When the internal ring back-scattering exceeds the threshold for mode splitting (β > γc + γ0), the internal power circulating in the ring has maxima at the two frequencies ω0 ± β, and the transmitted spectrum splits into two minima of equal depth.
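As a numerical check on the splitting condition, the sketch below evaluates the steady-state transmission implied by Eq. 6 for an all-pass ring, counting the transmission minima below and above the threshold; the input-output convention (s_out = −s_in + √(2γc) a+) and the numerical rates are assumptions chosen for illustration only.

```python
import numpy as np

def transmission(delta_omega, gamma0, gammac, beta):
    """Steady-state |s_out/s_in|^2 for an all-pass ring with backscattering rate beta
    coupling the two counter-propagating modes (one assumed input-output convention)."""
    D = 1j * delta_omega + gamma0 + gammac
    a_plus = np.sqrt(2 * gammac) * D / (D ** 2 + beta ** 2)   # per unit input amplitude
    s_out = -1.0 + np.sqrt(2 * gammac) * a_plus
    return np.abs(s_out) ** 2

gamma0 = 1.0                    # illustrative rates (arbitrary units)
gammac = 1.0                    # critical coupling
detune = np.linspace(-10, 10, 2001)

for beta in (0.5, 3.0):         # below and above the threshold beta > gamma0 + gammac
    T = transmission(detune, gamma0, gammac, beta)
    n_minima = int(np.sum((T[1:-1] < T[:-2]) & (T[1:-1] < T[2:])))
    print(f"beta = {beta}: {n_minima} transmission minimum/minima")
```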
Fig. 8 .
Fig. 8. (a) The measured transmission spectrum of a single-lobed resonance for Ring D at four temperatures. (b) The split resonance spectrum at a different resonance order in the same ring. The resonances have been shifted to align the center wavelengths to Δλ = 0 pm at each temperature and also offset vertically in order to facilitate a comparison of line shapes. The resonances are at λ = 1558.3 nm and λ = 1554.7 nm at T = 20 °C, for (a) and (b) respectively.
Fig. 9 .
Fig. 9. (a) The measured transmission spectrum of Ring C at 30 °C and 80 °C, showing a single resonance and the background intensity ripple due to back-reflections of approximately R = 6% from each of the input and output surface grating couplers. (b) Schematic diagram of the back-reflection model with a ring resonator within the Fabry-Pérot cavity formed by two reflecting structures (e.g., two surface grating couplers as in Fig. 1), and (c) with the ring external to the Fabry-Pérot cavity.
Fig. 10 .
Fig. 10. The calculated transmission spectra for a silicon ring of length L = 950 µm embedded in a 2 mm long FP cavity formed by two reflectors with R = 0.15, at several different ring temperature increments ΔT, with ΔT = 0 corresponding to room temperature.
Fig. 11 .
Fig. 11. (a) The error in resonance wavelength and temperature over a 2 K temperature range, for a ring of length L = 950 µm embedded in a 2 mm long FP cavity formed by two R = 0.15 back-reflectors as in Fig. 8. The resonance wavelengths are determined either by fitting the resonance to an ideal line shape over windows of width equal to the 3 dB linewidth Δλ3dB or one-half Δλ3dB, or simply by taking the minimum point of the resonance line shape. (b) Wavelength and temperature error as in (a) but with the ring located external to the FP cavity.
Table 1. Ring Thermometer Properties
* Experimental values determined by fitting the ring transmission function to the measured ring spectrum. | 17,536.4 | 2023-06-27T00:00:00.000 | [
"Physics",
"Engineering"
] |
Discerning the Ancestry of European Americans in Genetic Association Studies
European Americans are often treated as a homogeneous group, but in fact form a structured population due to historical immigration of diverse source populations. Discerning the ancestry of European Americans genotyped in association studies is important in order to prevent false-positive or false-negative associations due to population stratification and to identify genetic variants whose contribution to disease risk differs across European ancestries. Here, we investigate empirical patterns of population structure in European Americans, analyzing 4,198 samples from four genome-wide association studies to show that components roughly corresponding to northwest European, southeast European, and Ashkenazi Jewish ancestry are the main sources of European American population structure. Building on this insight, we constructed a panel of 300 validated markers that are highly informative for distinguishing these ancestries. We demonstrate that this panel of markers can be used to correct for stratification in association studies that do not generate dense genotype data.
Introduction
European Americans are the most populous single ethnic group in the United States according to U.S. census categories, and are often sampled in genetic association studies. European Americans are usually treated as a single population (as are other groups such as African Americans, Latinos, and East Asians), and the use of labels such as ''white'' or ''Caucasian'' can propagate the illusion of genetic homogeneity. However, European Americans in fact form a structured population, due to historical immigration from diverse source populations. This can lead to population stratification (allele frequency differences between cases and controls due to systematic ancestry differences) and to ancestry-specific disease risks [1][2][3][4][5].
Previous studies have carefully analyzed the population structure of Europe [6][7][8], but here our focus is on European Americans, who constitute a non-random sampling of European ancestry that reflects the historical immigration patterns of the United States. To understand European American population structure as it pertains to association studies, we used dense genotype data from four real genome-wide association studies, analyzing European American population samples from multiple locations in the U.S. We found that in these samples, the most important sources of population structure are (i) the distinction between northwest European and either southeast European or Ashkenazi Jewish ancestry (similar to the main genetic gradient within Europe [6][7][8]) and (ii) the distinction between southeast European and Ashkenazi Jewish ancestry (which is more readily detectable in our European American data than in previous studies involving Europeans [6][7][8]). These ancestries can be effectively discerned using dense genotype data, making it possible to correct for population stratification and to identify ancestry-specific risk loci in genome-wide association studies [9].
Although genome-wide association studies that generate dense genotype data are becoming increasingly practical, targeted association studies, such as candidate gene studies or replication studies following up genome-wide scans, will continue to play a major role in human genetics. These studies typically analyze a much smaller number of markers than genome-wide scans, making it far more difficult to infer ancestry in order to correct for stratification and identify ancestry-specific risk loci. To address this, a possible strategy is to infer ancestry by genotyping a small panel of ancestry-informative markers [10], and this is the approach we take in the current paper. Using the insights from analyses of dense genotype data in multiple European American sample sets, we set out to identify markers informative for the ancestries most relevant to European Americans. Important work has already shown that northwest and southeast Europeans can be distinguished using as few as 800-1,200 ancestry-informative markers mined from datasets of 6,000-10,000 markers [7,8]. Here we mine much larger datasets (more markers and more samples) to identify a panel of 300 highly ancestry-informative markers which accurately distinguish not just northwest and southeast European, but also Ashkenazi Jewish, ancestry. This panel of markers is likely to be useful in targeted disease studies involving European Americans. In particular, the panel is effective in inferring ancestry and correcting a spurious association in a published example of population stratification in European Americans [1].
Analysis of Data from Genome-Wide Association Studies
To investigate whether we could identify consistent patterns of European American population structure, we analyzed four European American datasets involving a total of 4,198 samples. These samples were genotyped on the Affymetrix GeneChip 500K or Illumina HumanHap300 marker sets in the context of genome-wide association studies for multiple sclerosis (MS), bipolar disorder (BD), Parkinson's disease (PD) and inflammatory bowel disease (IBD) (see Methods). For each dataset, we used the EIGENSOFT package to identify the principal components describing the most variation in the data [11]. The top two principal components for each dataset are displayed in Figure 1. Strikingly, the results are very similar for each dataset, and are similar to our previous results on a smaller dataset involving the Affymetrix GeneChip 100K marker set [9], suggesting that the main sources of population structure are roughly consistent across European American sample sets.
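To make the principal components step concrete, the following sketch applies the usual EIGENSTRAT-style normalization (center each SNP by twice its allele frequency and scale by the binomial standard deviation) and an SVD to a genotype matrix; the random genotypes are stand-ins for real data, and missing-data handling is omitted.

```python
import numpy as np

def genotype_pcs(G, n_components=2):
    """G: (n_samples, n_snps) matrix of genotypes coded 0/1/2.
    Returns the top principal-component coordinates of the samples."""
    G = np.asarray(G, dtype=float)
    p = G.mean(axis=0) / 2.0                       # allele frequency per SNP
    keep = (p > 0) & (p < 1)                       # drop monomorphic SNPs
    X = (G[:, keep] - 2.0 * p[keep]) / np.sqrt(2.0 * p[keep] * (1.0 - p[keep]))
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_components] * S[:n_components]  # sample coordinates on the top axes

# Stand-in data: 200 samples x 5,000 SNPs drawn from two slightly diverged groups
rng = np.random.default_rng(0)
freq_a = rng.uniform(0.1, 0.9, 5000)
freq_b = np.clip(freq_a + rng.normal(0, 0.05, 5000), 0.01, 0.99)
G = np.vstack([rng.binomial(2, freq_a, (100, 5000)),
               rng.binomial(2, freq_b, (100, 5000))])
pcs = genotype_pcs(G)
print(pcs.shape)   # (200, 2); the two groups separate along the first axis
```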
We were able to characterize the main ancestry components in the IBD dataset, because a subset of these individuals self-reported their ancestry as northwest European, southeast European or Ashkenazi Jewish (see Methods). (We use the term ''ancestry'' for ease of presentation, but caution that cultural or geographic identifiers do not necessarily correspond to genetic ancestry.) We conclude that the top two principal components of genetic ancestry in the IBD dataset roughly correspond to a continuous cline from northwest to southeast European ancestry and an orthogonal discrete separation between Ashkenazi Jewish and southeast European ancestry (Figure 1E). [We note that the northwest-southeast axis corresponds approximately to the top principal component (x-axis in Figure 1), but this correspondence is not exact, as principal components are mathematically defined to extract the most variance from the data without regard to geographic interpretation. Thus, top principal components will often represent a linear combination of ancestry effects in the data.] Our results are consistent with a previous study in which Ashkenazi Jewish and southeast European samples occupied similar positions on the northwest-southeast axis, although there was insufficient data in that study to separate these two populations [7]. A historical interpretation of this finding is that both Ashkenazi Jewish and southeast European ancestries are derived from migrations/expansions from the Middle East and subsequent admixture with existing European populations [12,13].
To determine whether the visually similar patterns observed in these four datasets each represent the same underlying components of ancestry, we constructed a combined dataset of MS, BD, PD and IBD samples using markers present in all datasets. The top two principal components of the combined dataset, displayed in Figure 2, are similar to the plots in Figure 1 and show the same rough correspondence to self-reported ancestry labels from the IBD study.
To simplify the assessment of ancestries represented in each dataset, we discretely assigned each sample to cluster 1 (mostly northwest European), cluster 2 (mostly southeast European), or cluster 3 (which contains the great majority of self-reported Ashkenazi Jewish samples) based on proximities to the center of each cluster in Figure 2 (see Methods). We emphasize that this discrete approximation does not fully capture the continuous northwest-southeast cline described by the data, and that we are classifying genetic ancestry rather than cultural or geographic identifiers; for example, not all self-reported Ashkenazi Jewish samples lie in cluster 3. Proportions of individuals assigned to each cluster are listed in Table 1. Results are generally consistent with demographic data indicating that 6% of the U.S. population self-reports Italian ancestry and 2% of the U.S. population self-reports as Ashkenazi Jewish, with higher representation of these groups in urban areas [14,15]. We note that although the self-reported ancestry of samples in the IBD dataset is generally fairly consistent with the cluster assignments, Figure 2 indicates that inferred genetic ancestry is more nuanced and informative than self-reported ancestry with regard to genetic similarity, particularly for individuals who may descend from multiple ancestral populations. By coloring each plot in Figure 1 with cluster assignments inferred from the combined dataset, we verify that the most important ancestry effects in each individual dataset correspond to these clusters (Figure S1).
Author Summary
Genetic association studies analyze both phenotypes (such as disease status) and genotypes (at sites of DNA variation) of a given set of individuals. The goal of association studies is to identify DNA variants that affect disease risk or other traits of interest. However, association studies can be confounded by differences in ancestry. For example, misleading results can arise if individuals selected as disease cases have different ancestry, on average, than healthy controls. Although geographic ancestry explains only a small fraction of human genetic variation, there exist genetic variants that are much more frequent in populations with particular ancestries, and such variants would falsely appear to be related to disease. In an effort to avoid these spurious results, association studies often restrict their focus to a single continental group. European Americans are one such group that is commonly studied in the United States. Here, we analyze multiple large European American datasets to show that important differences in ancestry exist even within European Americans, and that components roughly corresponding to northwest European, southeast European, and Ashkenazi Jewish ancestry are the major, consistent sources of variation. We provide an approach that is able to account for these ancestry differences in association studies even if only a small number of genes is studied.
We computed F ST statistics between clusters 1 (mostly NW), 2 (mostly SE) and 3 (mostly AJ), restricting our analysis to individuals unambiguously located in the center of each cluster (Figure 2). We obtained F ST (1,2) = 0.005, F ST (2,3) = 0.004 and F ST (1,3) = 0.009. The additivity of these variances (0.005 + 0.004 = 0.009) would be consistent with the drift distinguishing clusters 1 and 2 having occurred independently of the drift distinguishing clusters 2 and 3, as might be expected under a hypothesis of drift specific to Ashkenazi Jews due to founder effects [13,16]. However, more extensive investigation will be required to draw definitive conclusions about the demographic histories of these populations.
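For readers who want to reproduce this kind of calculation, the sketch below evaluates a between-cluster F ST from diploid genotype matrices using Hudson's ratio-of-averages estimator; this is one standard estimator and not necessarily the exact formulation implemented in the EIGENSOFT software, and the simulated genotypes are placeholders for real data.

```python
import numpy as np

def hudson_fst(G1, G2):
    """Hudson's Fst averaged over SNPs (ratio-of-averages form).
    G1, G2: genotype matrices (samples x SNPs) coded 0/1/2 for two clusters."""
    n1, n2 = G1.shape[0], G2.shape[0]
    p1, p2 = G1.mean(axis=0) / 2.0, G2.mean(axis=0) / 2.0
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (2 * n1 - 1) - p2 * (1 - p2) / (2 * n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    ok = den > 0
    return num[ok].sum() / den[ok].sum()

# Example with simulated clusters diverged by a small amount of genetic drift
rng = np.random.default_rng(1)
anc = rng.uniform(0.1, 0.9, 20000)
pA = np.clip(anc + rng.normal(0, 0.04, anc.size), 0.01, 0.99)
pB = np.clip(anc + rng.normal(0, 0.04, anc.size), 0.01, 0.99)
GA = rng.binomial(2, pA, (150, anc.size))
GB = rng.binomial(2, pB, (150, anc.size))
print(round(hudson_fst(GA, GB), 4))   # on the order of the small within-Europe values above
```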
Impact of European American Population Structure on Genetic Association Studies
To assess the extent to which ancestry differences across sample sets could lead to population stratification in real genetic association studies, we computed association test statistics across the genome, assigning differently ascertained European American sample sets as cases and controls. We first compared the two Affymetrix 500K datasets, treating MS samples as cases and BD samples as controls. (We did not compare the two 300K datasets, which would lead to severe stratification because the IBD dataset was specifically ascertained to include roughly equal numbers of Jewish and non-Jewish samples.) To minimize the effects of assay artifacts [17] on our computations, we applied very stringent data quality filters (see Methods). We computed values of λ, a metric describing genome-wide inflation in association statistics [18], both before and after correcting for stratification using the EIGENSTRAT method [9]. We used the combined dataset to infer population structure, ensuring that the top two eigenvectors correspond to northwest European, southeast European and Ashkenazi Jewish ancestry (Figure 2). Values of λ after correcting along 0, 1, 2 or 10 eigenvectors are listed in Table 2, and demonstrate that the top two eigenvectors correct nearly all of the stratification that can be corrected using 10 eigenvectors, with all of the correction coming from the first eigenvector; the second eigenvector has no effect because the ratio of cluster 2 (SE) to cluster 3 (AJ) samples is the same in the MS and BD datasets (Table 1). Residual stratification beyond the top 10 eigenvectors is likely to be due to extremely subtle assay artifacts that EIGENSTRAT cannot detect; indeed, with less stringent data quality filters (see Methods) the value of λ after correcting for the top 10 eigenvectors increases to 1.090, instead of 1.035.
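As an aside on how the inflation factor is obtained, the sketch below estimates λ from a set of 1-d.f. association chi-squared statistics as the ratio of their median to the null median (≈0.455); the simulated statistics are a stand-in for real genome-wide results and are not data from this study.

```python
import numpy as np
from scipy.stats import chi2

def genomic_inflation(chisq_stats):
    """Lambda: median observed 1-d.f. chi-squared over the null median (~0.455)."""
    return np.median(chisq_stats) / chi2.ppf(0.5, df=1)

# Stand-in: null statistics mildly inflated by stratification
rng = np.random.default_rng(2)
stats = 1.08 * rng.chisquare(df=1, size=200_000)
print(round(genomic_inflation(stats), 3))    # ~1.08
```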
The BD dataset contains two distinct subsamples (one collected from Pittsburgh and one collected from throughout the U.S.). Thus, we repeated the above experiment using Pittsburgh samples as cases and other U.S. samples as controls and assessed the level of stratification. According to the discrete classification described above, proportions of clusters 1/2/3 ancestry were 91%/8%/2% for Pittsburgh samples vs. 95%/2%/3% for other U.S. samples; thus we would expect differences along the second axis of variation, which distinguishes clusters 2 and 3, to contribute to stratification. Indeed, results in Table 3 show that correcting along the second eigenvector has an important effect in this analysis, and that the top two eigenvectors correct for most of the stratification that can be corrected using 10 eigenvectors.
These results suggest that discerning clusters 1, 2 and 3, which roughly correspond to northwest European, southeast European and Ashkenazi Jewish ancestry, is sufficient to correct for most population stratification in genetic association studies in European Americans. However, this does not imply that these ancestries account for most of the population structure throughout Europe, as there are many European populations, such as Russians and other eastern Europeans, that are not heavily represented in the United States [14]. On the contrary, these results, along with the results that follow, are entirely specific to European Americans.
Validation of a Panel of Ancestry-Informative Markers for European Americans
To develop a small panel of markers sufficient to distinguish clusters 1, 2 and 3 in targeted association studies in European Americans, we used several criteria to select 583 unlinked SNPs as potentially informative markers for within-Europe ancestry (see Methods). These criteria included: (i) subpopulation differentiation between clusters 1 and 2, as inferred from European American genome-wide data; (ii) subpopulation differentiation between clusters 2 and 3, as inferred from European American genome-wide data; and (iii) signals of recent positive selection in samples of European ancestry, which can lead to intra-European variation in allele frequency [19,20]. As we describe below, from these markers we identified a subset of 300 validated markers that effectively discern clusters 1, 2 and 3.
To assess the informativeness of the initial 583 markers for within-Europe ancestry, we genotyped each marker in up to 667 samples from 7 countries: 180 Swedish, 82 UK, 60 Polish, 60 Spanish, 124 Italian, 80 Greek and 81 U.S. Ashkenazi Jewish samples (see Methods). We applied principal components analysis to this dataset using the EIGENSOFT package [11]. Results are displayed in Figure 3A, which clearly separates the same three clusters, roughly corresponding to northwest European, southeast European and Ashkenazi Jewish ancestry, as in our analysis of genome-wide datasets (Figure 2). We note that Spain occupies an intermediate position between northwest and southeast Europe, while Poland lies close to Sweden and the UK, supporting a recent suggestion that the northwest-southeast axis could alternatively be interpreted as a north-southeast axis [8].
Defining clusters 1, 2 and 3 based on membership in the underlying populations, we computed F ST (1,2) and F ST (2,3) for each marker passing quality control filters, and selected 100 markers with high F ST (1,2) and 200 markers with high F ST (2,3) to construct a panel of 300 validated markers (see Methods and Web Resources). We reran principal components analysis on the 667 samples using only these 300 markers, and obtained results similar to before (Figure 3B). The 300 markers have an average F ST (1,2) of 0.07 for the 100 cluster 1 vs. 2 markers and an average F ST (2,3) of 0.04 for the 200 cluster 2 vs. 3 markers. These F ST values are biased upward since they were computed using the same samples that we used to select the 300 markers from the initial set of 583 markers. However, unbiased computations indicate an average F ST (1,2) of 0.06 for the 100 cluster 1 vs. 2 markers and an average F ST (2,3) of 0.03 for the 200 cluster 2 vs. 3 markers, indicating that the upward bias is modest (see Methods).
Recent work in theoretical statistics implies that the squared correlation between an axis of variation inferred with a limited number of markers and a true axis of variation (e.g., as inferred using genome-wide data) is approximately equal to x/(1+x), where x equals F ST times the number of markers (see Text S1) [21,11]. Thus, correlations will be on the order of 90% for clusters 1 vs. 2 and 90% for clusters 2 vs. 3, corresponding to a clear separation between the clusters (Figure 3B). Because F ST is typically above 0.10 for different continental populations, it also follows that these 300 markers (which were not ascertained to be informative for continental ancestry) will be sufficient to easily distinguish different continental populations, as we verified using HapMap [22] samples (Figure S2). Thus, it will also be possible to use these markers to remove genetic outliers of different continental ancestry.
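The approximation above is simple to evaluate; a minimal sketch using the panel's average F ST values and marker counts quoted in the previous paragraph:

```python
def expected_r2(fst, n_markers):
    """Approximate squared correlation between an inferred axis and the true axis: x/(1+x)."""
    x = fst * n_markers
    return x / (1.0 + x)

print(round(expected_r2(0.07, 100), 2))   # clusters 1 vs. 2: ~0.88
print(round(expected_r2(0.04, 200), 2))   # clusters 2 vs. 3: ~0.89
```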
Correcting for Population Stratification in an Empirical Targeted Association Study
To empirically test how effectively the panel of 300 markers corrects for stratification in real case-control studies, we genotyped the panel in 368 European American samples discordant for height, in which we recently demonstrated stratification [1]. In that study, we observed a strong association (P-value < 10⁻⁶) in 2,189 samples between height and a candidate marker in the lactase (LCT) gene; this association would be statistically significant even after correcting for the hundreds of markers typically genotyped in a targeted association study (or in Bayesian terms, incorporating an appropriate prior probability of association). We concluded based on several lines of evidence that the association was due to stratification; in particular, both LCT genotype and height track with northwest versus southeast European ancestry. We focused our attention on a subset of 368 samples and observed that after genotyping 178 additional markers on these samples, stratification could not be detected or corrected using standard methods [1].
Encouragingly, the panel of 300 markers detects and corrects for stratification in these 368 height samples. We applied the EIGENSTRAT program [9] with default parameters to this dataset, together with ancestral European samples, using the 299 markers unlinked to the candidate LCT locus to infer ancestry and correct for stratification (see Methods). We note that it is important to exclude markers linked to the candidate locus when inferring ancestry using a small number of markers, to avoid a loss in power when correcting for stratification [9]. A plot of the top two axes of variation is displayed in Figure 4, with height samples labeled by self-reported grandparental origin (NW Europe, SE Europe, or four USA-born grandparents) as described in the height study [1]. Unsurprisingly, nearly all Height-NWreport samples lie in cluster 1, which corresponds to northwest European ancestry. More interestingly, nearly all Height-USAreport samples also lie in cluster 1; because clusters 2 and 3 do not seem to be represented in the ancestry of USA-born grandparents of living European Americans, the contribution of these clusters to the ancestry of living European Americans may largely descend from foreign-born grandparents, implying relatively recent immigration. Finally, Height-SEreport samples lie in clusters 1, 2 and 3, indicating that self-reported ancestry does not closely track the genetic ancestry of these samples.
We detected stratification between tall and short samples, with the top two axes of variation explaining 5.1% of the variance in height (P-value = 9 × 10⁻⁵). Furthermore, the top two axes of variation explain 22% of the variance of the candidate LCT marker (P-value = 3 × 10⁻¹⁸), indicating that the association of the candidate marker to height is affected by stratification. Indeed, the observed association is no longer significant after correcting for stratification (Table 3). The residual trend towards association (P-value = 0.12) could be due to chance, to other axes of variation (besides those corresponding to clusters 1, 2 and 3) which the panel of 300 markers does not capture, or to a very modest true association between LCT and height. Our results on genome-wide datasets and on the height dataset suggest that other axes of variation are much less likely to contribute to stratification in European Americans than the main axes we have described. However, the possibility remains that other axes, which are not captured by this panel of 300 markers, could contribute to stratification in some studies.
A recent study reported a successful correction for stratification in the height study using data from the 178 markers that were originally genotyped, using a ''stratification score'' method [23]. We investigated why the stratification score method succeeded while methods such as STRAT and EIGENSTRAT are unable to correct for stratification using the same data [24,9,1]. The stratification score method computes regression coefficients which describe how genotypes of non-candidate markers predict disease status, uses those regression coefficients to estimate the odds of disease of each sample conditional on genotypes of non-candidate markers, and stratifies the association between candidate marker and disease status using the odds of disease (which ostensibly varies due to ancestry). Importantly, the disease status of each sample is included in the calculation of the regression coefficients that are subsequently used to estimate the odds of disease of that sample. If the number of samples is comparable to the number of markers, then each sample's disease status will substantially influence the set of regression coefficients used to compute the odds of disease of that sample, so that the odds of disease will simply overfit the actual disease status, leading to a large loss in power, even if there is no correlation between disease status and ancestry (see Text S1 and Tables S2 and S3). Thus, we believe that informative marker sets are still needed to allow a fully powered correction for stratification in targeted studies such as the height study. It is important to point out that the panel of 300 markers provides a better correction for stratification than self-reported ancestry, even for a study in which the ancestry information is more extensive than is typically available. Although the association between the LCT candidate marker and height is reduced in the 368 samples when self-reported grandparental origin is taken into account, it is not eliminated (P-value = 0.03). This is a consequence of the fact that grandparental origin explains only 3.2% of the variance in height and 17% of the variance of the candidate marker, both substantially less than is explained by ancestry inferred from the panel of 300 markers. These results provide further evidence that genetically inferred ancestry can provide useful information above and beyond self-reported ancestry [25].
We wondered whether using only the 100 markers chosen to be informative for NW vs. SE ancestry would be sufficient to correct for stratification in the height data. The top axis of variation inferred from these markers explains 19% of the variance of the candidate marker, but only 3.6% of the variance in height. Because this axis captures most of the variation attributable to ancestry at the candidate marker, stratification correction is almost as effective as before (P-value = 0.08). However, this axis is not fully effective in capturing variation attributable to ancestry in height, because it does not separate clusters 2 and 3; we observed that samples in cluster 2 are strongly biased towards shorter height, but samples in cluster 3 show no bias in height in this dataset (data not shown). Thus, although the 100 NW vs. SE markers may be sufficient to correct for stratification in some instances, associations in European American sample sets between other candidate loci and height could be affected by stratification unless the full panel of 300 markers is used. More generally, the complete panel of 300 markers should enable effective correction for stratification in most targeted association studies involving European Americans.
Discussion
We have analyzed four different genome-wide datasets involving European American samples, and demonstrated that the same two major axes of variation are consistently present in each dataset. The first major axis roughly corresponds to a geographic axis of northwest-southeast European ancestry, with Ashkenazi Jewish samples tending to cluster with southeastern European ancestry; the second major axis largely distinguishes Ashkenazi Jewish ancestry from southeastern European ancestry. We identified and validated a small panel of 300 informative markers that can reliably discern these axes, permitting correction for the major axes of ancestry variation in European Americans even when genome-wide data is not available. We note that while we have corrected for stratification using our EIGENSTRAT method, the panel of markers is not specific to this method, and the STRAT method [24] or other structured association approaches could similarly take advantage of this resource.
Our success in building a panel of markers informative for within-Europe ancestry relied on multiple complementary strategies for ascertaining markers. All strategies were successful in identifying informative markers. We particularly emphasize the success of applying principal components analysis to genome-wide data from European American samples and selecting markers highly differentiated along top axes of variation. This strategy was the source of most of our markers, and will become even more effective as datasets with larger numbers of samples become available, enabling further improvements to the panel and ascertainment of markers to address stratification in other populations.
The panel of 300 markers informative for within-Europe ancestry is practical for genotyping in a small-scale study, and permits correction for population stratification in European Americans at a very small fraction of the cost of a genome-wide scan. We envision three applications: 1. The panel can be used to evaluate study design prior to a genome-wide association study. By randomly choosing a few hundred prospective cases and controls and genotyping them on this panel, one can statistically determine whether or not cases and controls are well matched for ancestry in the overall study. If they are poorly matched, then properly matched cases and controls for the study can be ascertained by genotyping all cases and all controls using this panel (see Text S1).
2. The panel can be genotyped in a targeted association study, such as a candidate gene study or a replication study following up a genome-wide association study, in which variants are targeted in large numbers of samples that have not been densely genotyped. The data from markers in the panel can be used to correct for stratification using methods such as EIGENSTRAT [9], to ensure that observed associations are not spurious. This will also make it possible to search for loci whose disease risk is ancestry-specific [26], without relying on self-reported ancestry.
3. The panel can be used to remove genetic outliers and assess the genotyping quality of samples in a targeted association study. Although the panel was not ascertained for evaluating continental ancestry, it is sufficiently informative to identify samples with different continental ancestry (Figure S2). It can also be used to identify duplicate or cryptically related samples.
Though we have focused here on the importance of inferring ancestry in association studies, the panel of markers may prove useful in a broad range of medical and forensic applications.
Materials and Methods
Analysis of data from genome-wide association studies. The MS dataset consists of 1,018 European American parents of individuals with MS that were genotyped at Affymetrix GeneChip 500K markers as part of a trio-design genome-wide scan for multiple sclerosis; most of the individuals (>85%) were sampled from San Francisco. The BD dataset consists of 1,727 European American controls that were genotyped at Affymetrix GeneChip 500K markers as part of a genome-wide scan for bipolar disorder; 1,229 individuals were sampled from throughout the U.S. and 498 were sampled from Pittsburgh. The PD dataset consists of 541 European Americans (270 cases and 271 controls) that were genotyped at Illumina HumanHap300 markers as part of a genome-wide scan for Parkinson's disease [27,28]; individuals were sampled from unspecified locations. The IBD dataset consists of 912 European American controls from the New York Health Project and U.S. Inflammatory Bowel Disease Consortium that were genotyped at Illumina HumanHap300 markers as part of a genome-wide scan for inflammatory bowel disease. [A subset of these samples self-reported their ancestry by indicating one or more of the following: ''Scandinavian'', ''Northern European'', ''Central European'', ''Eastern European'', ''Southern European'', ''East Mediterranean'', or ''Ashkenazi Jewish''; we simplified this classification as follows: individuals indicating one or more of ''Scandinavian'', ''Northern European'', or ''Central European'' with no other ancestries were reclassified as ''IBD-NWreport'', individuals indicating one or more of ''Eastern European'', ''Southern European'', or ''East Mediterranean'' with no other ancestries were reclassified as ''IBD-SEreport'', individuals indicating ''Ashkenazi Jewish'' were reclassified as ''IBD-AJreport'' regardless of other ancestries, and remaining individuals (either unknown or mixed European ancestry and not self-reporting as Ashkenazi Jewish) were reclassified as ''IBD-noreport''.] In each of the four datasets, we removed markers with >5% missing genotypes, markers in regions of extended linkage disequilibrium detected as principal components [29], and outlier samples identified by principal components analysis [9]. Analysis of the combined dataset was restricted to ∼50,000 markers present in all datasets after applying these constraints.
Assignment of samples to three discrete clusters in combined genome-wide dataset. Although the discrete approximation does not fully capture the continuous northwest-southeast cline described by the data, to simplify our analysis we assigned samples to three discrete clusters so as to minimize distances to the centers of the clusters, defined as (0.01, 0.01) for cluster 1, (−0.02, −0.06) for cluster 2 and (−0.04, 0.01) for cluster 3 (Figure 2). Impact of European American population structure on genetic association studies. In association analyses involving the MS and BD datasets, we excluded markers that had >1% missing genotypes, or failed Hardy-Weinberg equilibrium (P-value < 0.001), or had a low minor allele frequency (<5%), in either the MS or BD datasets. Roughly 200,000 of the Affymetrix 500K markers remained after imposing these strict constraints. We also repeated our computations with less stringent data quality filters (<5% missing data, instead of <1%).
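A minimal sketch of this nearest-center assignment, using the cluster centers quoted above; the example coordinates are illustrative, not actual sample eigenvector values.

```python
import numpy as np

centers = np.array([[ 0.01,  0.01],    # cluster 1 (mostly NW European)
                    [-0.02, -0.06],    # cluster 2 (mostly SE European)
                    [-0.04,  0.01]])   # cluster 3 (mostly Ashkenazi Jewish)

def assign_clusters(pc_coords):
    """pc_coords: (n_samples, 2) coordinates on the top two axes of variation.
    Returns cluster labels 1, 2 or 3 by nearest Euclidean distance to a center."""
    d = np.linalg.norm(pc_coords[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1) + 1

# Example with a few illustrative coordinates
print(assign_clusters(np.array([[0.02, 0.0], [-0.03, -0.05], [-0.05, 0.02]])))  # [1 2 3]
```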
Genome-wide datasets used to ascertain ancestry-informative markers. Genome-wide genotype data used to ascertain markers included the MS, BD, PD and IBD datasets described above, plus three additional European American datasets: a previously described dataset of 488 samples with rheumatoid arthritis (RA) genotyped at Affymetrix GeneChip 100K markers [9], a dataset of 305 unrelated controls from the Framingham Heart Study (FHS) genotyped at Affymetrix GeneChip 100K markers, and a dataset of 297 samples with lung cancer (LC) genotyped at Affymetrix Sty 250K markers. Data from HapMap [22] was also used.
Ascertainment of putatively ancestry-informative markers. Markers were ascertained using multiple methods: (i) 185 markers highly differentiated along the top axis of variation in genome-wide datasets. Differentiation was defined as the correlation between genotype and coefficient along the top axis of variation, with a correction for sample size. (ii) 300 markers highly differentiated between individuals discretely assigned to cluster 2 (SE) or cluster 3 (AJ) in genome-wide datasets. Differentiation was measured using F ST, with a correction for sample size. (iii) 112 markers from regions of high p excess [30] between Europeans and non-Europeans in HapMap [22] data. Regions of high p excess were identified as windows of consecutive markers with average p excess values above 0.4, 0.5 or 0.6, comparing allele frequencies in the CEU sample with the pooled YRI+HCB+JPT sample. Within each of the longest such windows, a marker was selected with the highest F ST and a higher derived allele frequency in CEU than in the other populations. (iv) 30 markers that were both in the top 1% of the genome for iHH (integrated haplotype homozygosity, a test of recent natural selection) as reported in [19], and also at least 3 standard deviations above the mean in differentiation between European and Asian samples from HapMap. (v) 30 markers that were both in the top 1% of the genome for iHH as reported in [19], and also part of a large stretch of the genome with high iHH (at least two adjacent 100 kb regions, as reported in [19]). (vi) 31 markers from our published African American admixture map [30] which, in unpublished genotyping results, were highly differentiated between European populations from Baltimore, Chicago, Utah, Italy, Norway and Poland, based on the top two axes of variation. (vii) 10 markers highly differentiated between Spanish and European American populations [31]. (viii) 12 markers from the LCT gene and the MATP, OCA2, TYRP1, SLC24A5 and MYO5A pigmentation genes [30,32,33,7]. Markers which failed primer design or genotyping assay were excluded, yielding a list of 583 putatively ancestry-informative markers.
Dataset of 667 European samples from seven countries. The sample collection was assembled on two plates. The first plate included 60 samples from Sweden [34], 60 UK samples from the European Collection of Cell Cultures (ECACC), 60 Polish samples collected by Genomics Collaborative [1], 60 samples from southern Spain and 43 samples from southern Italy [35]. The second plate included 120 additional samples from Sweden, 22 additional UK samples, 81 additional samples from southern Italy, 80 samples from Greece [36] and 81 Ashkenazi Jewish samples from Israel reporting four Jewish grandparents, each born in central or eastern Europe. Genotyping was performed using both the homogeneous MassEXTEND (hME) and iPLEX assays for the Sequenom MassARRAY platform [37]. We genotyped an initial set of 50 markers using the hME assay; for greater efficiency, we used the iPLEX assay to genotype the remaining markers (http://www.sequenom.com/seq_genotyping.html). We designed primers using the Sequenom software, MassARRAY Assay Design. We restricted subsequent analysis to markers that genotyped successfully in >85% of samples and had 1 or fewer discrepancies between replicate samples. We excluded markers with Hardy-Weinberg P-values less than 0.01 in more than one ethnic group.
Dataset of 368 European American samples. The samples were collected by Genomics Collaborative/SeraCare, as described previously [1]. Genotyping and quality control were performed as described above.
Panel of 300 validated markers. Defining clusters 1, 2 and 3 as described (see Results), we computed F ST (1,2) and F ST (2,3) for each marker passing quality control filters. Due to the limited representation of clusters 2 and 3 on the first plate, and to minimize differential bias and differences in quality control filters between plates, only the second plate of European samples was used. We first selected 100 markers with the highest F ST (1,2) and subsequently selected 200 markers with the highest F ST (2,3), and required that each marker be located at least 1 Mb from each previously selected marker. The number of markers from each ascertainment source that were included in the final panel of 300 markers is reported in Table S1.
Estimates of F ST that account for upward bias. F ST values for the panel of 300 markers are biased upward since they were computed using the same samples that we used to select the 300 markers. We computed unbiased estimates of the value of F ST for these markers by dividing the samples into four quartiles, with the same distribution of ancestries in each quartile. For each quartile, we selected 300 markers as described above using only samples from the remaining quartiles, then used samples from that quartile to compute unbiased F ST values, and averaged the results across quartiles. F ST computations were performed using the EIGENSOFT software, which fully accounts for differences in sample size [11].
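The cross-validation scheme can be sketched as below; the per-marker estimator and the simulated genotypes are stand-ins for the EIGENSOFT computation and the real data, so this illustrates the fold structure rather than reproducing the published values.

```python
import numpy as np

def per_marker_fst(G1, G2):
    """Per-marker Fst (Hudson form) for genotypes coded 0/1/2; a stand-in estimator."""
    n1, n2 = G1.shape[0], G2.shape[0]
    p1, p2 = G1.mean(axis=0) / 2.0, G2.mean(axis=0) / 2.0
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (2 * n1 - 1) - p2 * (1 - p2) / (2 * n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(den > 0, num / den, 0.0)

def unbiased_mean_fst(G1, G2, n_select=100, n_folds=4, seed=0):
    """Select top markers on three quartiles of samples, evaluate Fst on the held-out
    quartile, and average over folds, mimicking the correction described above."""
    rng = np.random.default_rng(seed)
    folds1 = np.array_split(rng.permutation(G1.shape[0]), n_folds)
    folds2 = np.array_split(rng.permutation(G2.shape[0]), n_folds)
    estimates = []
    for k in range(n_folds):
        train1 = np.concatenate([f for j, f in enumerate(folds1) if j != k])
        train2 = np.concatenate([f for j, f in enumerate(folds2) if j != k])
        chosen = np.argsort(per_marker_fst(G1[train1], G2[train2]))[-n_select:]
        held = per_marker_fst(G1[folds1[k]][:, chosen], G2[folds2[k]][:, chosen])
        estimates.append(held.mean())
    return float(np.mean(estimates))

# Simulated example: the held-out estimate is lower than the in-sample value for the
# selected markers, illustrating the upward bias discussed above.
rng = np.random.default_rng(3)
p = rng.uniform(0.2, 0.8, 2000)
GA = rng.binomial(2, np.clip(p + rng.normal(0, 0.03, p.size), 0.01, 0.99), (120, p.size))
GB = rng.binomial(2, np.clip(p + rng.normal(0, 0.03, p.size), 0.01, 0.99), (120, p.size))
print(round(unbiased_mean_fst(GA, GB), 4))
```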
Stratification correction of height samples. We applied the EIGENSTRAT program [9] with default parameters to infer axes of variation of a combined dataset of height samples and the second plate of European samples (see above), using those axes of variation to correct for stratification in the height samples.
Editor:
Jonathan K. Pritchard, University of Chicago, United States of America. Received July 16, 2007; Accepted November 16, 2007; Published January 18, 2008. A previous version of this article appeared as an Early Online Release on November 19, 2007 (doi:10.1371/journal.pgen.0030236.eor). Copyright: © 2008 Price et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Figure 2 .
Figure 2. The Top Two Axes of Variation of the Combined Dataset (MS, BD, PD, and IBD). Samples from the IBD dataset are labeled according to self-reported ancestry, as in Figure 1E. doi:10.1371/journal.pgen.0030236.g002
Figure 3 .
Figure 3. The Top Two Axes of Variation of a Dataset of Diverse European Samples. Results are based on (A) the 583 putatively ancestry-informative markers, and (B) the 300 validated markers. doi:10.1371/journal.pgen.0030236.g003
Figure 4 .
Figure 4. The Top Two Axes of Variation of the Height Samples Together with European Samples. Results are based on the 299 markers from our marker panel that are unlinked to the LCT locus. Height samples are labeled according to self-reported grandparental origin: northwest European (Height-NWreport), southeast European (Height-SEreport) or four USA-born grandparents (Height-USAreport). doi:10.1371/journal.pgen.0030236.g004
Table 3 .
Association Statistics between LCT Candidate Marker and Height in 368 European American Samples, before and after Stratification Correction Using Our Panel of 300 Markers | 8,281 | 2007-11-19T00:00:00.000 | [
"Biology"
] |
A New Study on a Type Iax Stellar Remnant and its Probable Association with SN 1181
We report observations and modeling of the stellar remnant and presumed double-degenerate merger of the Type Iax supernova Pa 30, which is the probable remnant of SN 1181 AD. It is the only known bound stellar SN remnant and the only star with Wolf-Rayet features that is neither a planetary nebula central star nor a massive Population I progenitor. We model the unique emission-line spectrum, with broad, strong O VI and O VIII lines, as a fast stellar wind and shocked, hot gas. Non-LTE wind modeling indicates a mass-loss rate of ∼10⁻⁶ M⊙ yr⁻¹ and a terminal velocity of ∼15,000 km s⁻¹, consistent with earlier results. O VIII lines indicate shocked gas temperatures of T ≈ 4 MK. We derive a magnetic field upper limit of B < 2.5 MG, below earlier suggestions. The luminosity indicates a remnant mass of 1.0-1.65 M⊙ with an ejecta mass of 0.15 ± 0.05 M⊙. Archival photometry suggests the stellar remnant has dimmed by ∼0.5 mag over 100 yr. A low Ne/O < 0.15 argues against an O-Ne white dwarf in the merger. A cold dust shell is only the second detection of dust in an SN Iax and the first of cold dust. Our ejecta mass and kinetic energy estimates of the remnant are consistent with Type Iax extragalactic sources.
INTRODUCTION
A new class of thermonuclear supernovae (SNe), produced in low-mass (i.e., M* ≤ 8 M⊙) binary systems, has been identified in the last twenty years: the Type Iax (e.g., Foley et al. 2013), also known as SN 2002cx-like. These are the least-understood supernovae. Two scenarios are proposed for their formation. In the single-degenerate scenario (e.g., Jordan et al. 2012; Kromer et al. 2015), a carbon-oxygen (CO) white dwarf (WD) that accreted material from a helium donor fails to detonate. In the double-degenerate scenario (e.g., Kashyap et al. 2018), the SN is caused by deflagration of the accretion disk that forms after the merger of a CO WD with an oxygen-neon (ONe) WD companion. Type Iax SNe are sub-luminous (as faint as M = −13 to −14 mag) with expansion velocities from 2000 to 9000 km s⁻¹, and may leave a stellar remnant behind. The currently known Type Iax SNe population numbers only a few tens of objects (Foley et al. 2013; Jha 2017), so the full range of observed properties is weakly constrained. All are extragalactic save two in our own Galaxy. One is SN Sgr A East near the Galactic Center, as suggested by Zhou et al. (2021). The other is associated with the nebula Pa 30, identified as the most probable remnant of the historical SN 1181 AD (Ritter et al. 2021, hereafter Paper I).
Earlier studies show that the nebula, Pa 30 in Kronberger et al. (2014) and IRAS 00500+6713 in Gvaramadze et al. (2019), emits strongly in X-rays (Oskinova et al. 2020). The nebula appears circular, approximately 3′-4′ across depending on the wavelength of the imagery. Faint, diffuse [O iii] nebular emission is seen via deep narrow-band imaging (Kronberger et al. 2014; Oskinova et al. 2020; Ritter et al. 2021). An [S ii] shell is expanding at 1000 ± 100 km s⁻¹ (Paper I). The stellar remnant, reported by Gvaramadze et al. (2019) as J005311, is associated with the infrared source 2MASS J00531123+6730023. It is located at the center of Pa 30 and has been argued to be the result of a double-degenerate CO + ONe WD merger (Gvaramadze et al. 2019; Oskinova et al. 2020).
In this paper we analyze the stellar wind and nebula in order to constrain the progenitor system. In the following section, we describe our own observations as well as the archival data used in this work. Stellar parameters are derived through a comparison of the stellar spectra against NLTE and photoionization models (Sect. 3). In Section 4 we explore the remnant's potential variability, followed by a new analysis of the nebular properties in Sect. 5. In the penultimate section we discuss the implications of our results for the progenitor and its relation to other Type Iax sources, while our conclusions can be found in the last section.
OBSERVATIONS AND DATA REDUCTION
The stellar remnant 2MASS J00531123+6730023 has a Gaia J2000 position of RA 00h 53m 11.205s, DEC +67° 30′ 02.381″. The Gaia DR3 parallax is 0.4065 ± 0.0259 mas, and the distance derived from the inverse parallax is 2460 ± 157 pc. However, this transformation is nonlinear and the measured parallax may have large uncertainties for distant and faint objects. In this case, the fractional parallax error is approximately 6.4%. Therefore in this work, as in Paper I, we adopt the distance measurement of Bailer-Jones et al. (2021) of 2297 +122/−114 pc, which was derived from a probabilistic approach applied to Gaia data. The Galactic coordinates (l, b) = (123.1°, 4.6°) place the stellar remnant at 180 pc from the plane, so it is likely part of the old stellar disk.
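For reference, the inverse-parallax distance and fractional parallax error quoted above follow directly from the Gaia values; a minimal sketch (the Bailer-Jones distance involves a prior-based probabilistic treatment that is not reproduced here):

```python
parallax_mas = 0.4065
parallax_err_mas = 0.0259

distance_pc = 1000.0 / parallax_mas                   # inverse parallax, mas -> pc
frac_err = parallax_err_mas / parallax_mas            # fractional parallax error
print(round(distance_pc), f"{100 * frac_err:.1f}%")   # ~2460 pc, ~6.4%
```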
OSIRIS/GTC spectroscopy
High signal-to-noise spectroscopic data of both star and nebula were obtained with the OSIRIS instrument on the 10-m Gran Telescopio Canarias (GTC) telescope in late 2016 as part of our planetary nebula (PN) follow-up program. The faint nebula spectra from SN 1181 were presented in Paper I, with only the [S ii] 6716 and 6731 Å doublet and [Ar iii] 7136 Å emission lines detectable. Here, we focus on the spectra we obtained for the stellar core.
Figure 1. Stellar spectra from SparsePak/WIYN, OSIRIS/GTC, and a small amateur telescope (P. Le Dû), smoothed and scaled to the same flux level. The dereddened spectra are also shown for comparison (using the interstellar extinction from Paper I). The major discontinuity bluewards of 4000 Å is seen in the OSIRIS and amateur spectra, but the SparsePak spectrum did not go below 4200 Å. The OSIRIS spectrum had the slit slightly off-center to avoid a nearby bright star; to correct for the resulting slight flux underestimation after flux calibration, the spectrum was scaled to match the reliable SparsePak/WIYN flux level.
SparsePak/WIYN fibre spectroscopy
We analyzed previously unextracted spectra of the stellar core from the SparsePak fibre-based (fiber diameter 4.7″) pseudo-integral field unit on the 3.5-m WIYN telescope, taken in late 2014. The spectral observations of the surrounding nebula Pa 30 are presented in Paper I.
Kermerrien Observatory spectroscopy
Low resolution (R ∼ 540) optical spectroscopy of Pa 30's central star by our amateur collaborators with a 20-cm F/D ∼ 5 telescope at Kermerrien observatory in Porspoder, France, brought its remarkable spectrum and its strong O vi 3811/34 emission line to our attention in October 2018.The low signal-to-noise spectra were not properly flux-calibrated and were scaled and smoothed to be compared against our OSIRIS/GTC spectrum in Figure 1.
X-ray observations
Faint X-ray emission was first recorded by the ROSAT All-Sky Survey (Voges et al. 2000) within 12″ of the central star, at a PSPC count rate in the 0.1-2.4 keV band of 17 ± 7 counts ks−1. Most counts (>70%) have energies greater than 0.5 keV. A Swift XRT (Evans et al. 2014) 1.94 ks observation (ID 00358336000, 26 July 2009) serendipitously registered this source. Analysis of these observations confirms a point-source detection at the star's position, with a Swift XRT count rate in the 0.2-5 keV band of 3.6 ± 1.4 counts ks−1, together with possible diffuse emission.
Recently, Oskinova et al. (2020) presented deep, sensitive XMM-Newton EPIC X-ray observations that detect a central point source while confirming diffuse nebular emission. Their spectral analysis of the diffuse emission describes an optically thin, thermal plasma composed of carbon, oxygen, neon, and magnesium (see their Table 1). These abundances are attributed to carbon-burning ashes, suggesting Pa 30 is a supernova remnant. The total mass of the X-ray-emitting nebula, estimated as 0.1 M⊙, strengthens this conclusion.
The point source XMM-Newton EPIC spectra are described by a hot plasma at temperatures 1-100 MK.A non-thermal component, that would reduce the plasma temperature to ∼1-20 MK, cannot be excluded.The EPIC spectra are consistent with the Swift and ROSAT data, but their much higher quality allow a detailed spectral fit (Oskinova et al. 2020).The un-absorbed X-ray luminosity in the 0.2-12 keV band implied by Oskinova et al. ( 2020) is reduced to 6.6 × 10 32 erg s −1 at our adopted distance of 2.3 ± 0.1 kpc.
To provide the high-energy SED points presented in Section 5.2, we re-processed the XMM-Newton EPIC MOS1, MOS2, and pn data corresponding to Obs. ID 0841640101 and 0841640201 using SAS and the relevant calibration files. MOS1, MOS2, and pn background-subtracted spectra of the central star of Pa 30 were extracted using a circular aperture (radius = 20″) with suitable background regions, and calibration matrices were computed using the corresponding SAS tasks. The 0.2-7.0 keV extended emission image reveals a shell with a diameter of ≈4′ and a filled morphology.
Infrared imaging
We mined the IRSA/IPAC archive for available infrared imaging around Pa 30. Apart from the WISE data presented previously (Gvaramadze et al. 2019; Oskinova et al. 2020; Paper I), usable mid- and far-infrared images exist from AKARI (Doi et al. 2015) and IRAS (Neugebauer et al. 1984). Only the central star is visible in the near-infrared, while the nebula is prominent beyond 10 µm. The far-infrared maps (i.e., AKARI and IRAS) do not have sufficient resolution to resolve the central star from the nebula, but they resolve Pa 30, which retains its apparent circular shape. The 12 µm WISE image shows a round nebula with a radius of 83″ ± 4″, corresponding to a physical radius of 0.93 ± 0.04 pc, identical to the size of the [O iii] shell (Paper I). The extended X-ray shell seen by XMM-Newton is roughly 120″ in radius. No nebular emission has been detected in submillimeter Planck maps.
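The angular-to-physical conversions used here follow directly from the adopted distance; the minimal sketch below (our own check, using the 83″ and 120″ radii quoted above) makes the small-angle conversion explicit.

```python
def physical_radius_pc(theta_arcsec, distance_pc):
    """Small-angle conversion from an angular radius to a physical radius."""
    return theta_arcsec / 206265.0 * distance_pc

d_pc = 2297.0  # adopted Bailer-Jones et al. (2021) distance
for label, theta in [("WISE 12 micron shell", 83.0), ("X-ray shell", 120.0)]:
    print(label, round(physical_radius_pc(theta, d_pc), 2), "pc")
```

For 83″ this gives 0.92 pc, consistent with the 0.93 ± 0.04 pc quoted above, and roughly 1.3 pc for the 120″ X-ray shell.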
The infrared photometry from these datasets assume unresolved sources although the nebular component can exceed the size of the apertures.We re-evaluate the photometric measurements taking into account the observed nebular size.In Section 5.2, we present estimates of the nebula's surface brightness.
Photometry
The central star of Pa 30 is relatively bright, at G ≈ 15.4 mag.It has been covered in many sky surveys from which we retrieved all available temporal optical and photometric data together, in order to investigate temporal variability (cf.Sect.4).
Spectroscopy
In the following sections, we directly compare our OSIRIS/GTC and SparsePak/WIYN stellar spectra with Gvaramadze et al. (2019) and Garnavich et al. (2020).These archival long-slit spectra were very kindly provided by these authors.
We also used reduced archival ultraviolet spectra from the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST), obtained under program GO 15864 (P.I. G. Graefener; epoch 4 November 2020). The spectral coverage was from 1150 to 3150 Å, with a slit width of 0.2″. These spectra provided a comparison against the cmfgen models in Sect. 3.2.
Finally, we complemented our observations with very recent Gaia DR3 BP/RP low-resolution spectra of R = 30-100 (Montegriffo et al. 2022) that extend beyond OSIRIS/GTC, i.e., from 3200 Å to 1 µm, and cover the O vi 3811/34 emission line. The Gaia telescope is outside the atmosphere, so telluric lines are not present. The spectra were obtained from the UK Gaia Data Mining Platform (https://www.gaia.ac.uk/data/uk-gaia-data-mining-platform) and are encoded as Hermite functions.
The spectra show instrumental distortions around the O vi 3811/34 line.Between 4000 and 4500 Å, the feature wavelengths differ by 30 Å from our ground-based spectra, possibly due to the response of the Hermite functions to the sharp edge of the O vi 3811/34 line.The wavelength calibration appears correct for λ > 5000 Å.
Extinction
Paper I adopted A_V = 2.4 ± 0.1 mag, but field stars now imply this may be an underestimate. Figure 2 shows parallax and A_0 from Gaia DR3 field stars within a 5 arcmin radius of the central star of Pa 30 (black points). Red points are stars within 2 arcmin. The curve is the model extinction plot from Vergely et al. (2022), taken from the EXPLORE platform at a spatial resolution of 25 pc (30 arcmin at the distance of Pa 30).
At the parallax of the central star (dashed line), the nominal extinction is A 0 = 2.9 mag, albeit with a large scatter between individual stars.The extinction is primarily due to clouds at around 400 pc and at 800 pc.The Vergely et al. (2022) curve ends at a distance of 3 kpc.Gaia DR3 finds some higher extinctions beyond 2.5 kpc but this is not included in the curve.
The data indicate an extinction A_0 = 2.8 ± 0.4 mag. The central value is the same as listed for our star in Gaia DR3, but that is fortuitous in view of the complexity of the spectrum. In this paper we will argue that the extinction for the central star is at the higher end of this range.
The stellar spectrum
The central star of Pa 30 has a unique spectrum (Figure 1), with very broad, very high excitation emission lines and a striking discontinuity bluewards of 4000 Å. There is a complete absence of hydrogen, helium and nitrogen lines. The broad lines are mostly identified with very high ionization stages of oxygen (Figure 3). The sharp and significant emission line that cuts on around 4000 Å is identified as O vi 3811/34 Å. This feature turns over near the atmospheric cut-off, as confirmed in the recent spectrum of Garnavich et al. (2020). This doublet is the defining criterion for the WO spectral type in both Pop I stars (e.g., Crowther et al. 1998; Gvaramadze et al. 2019) and old low-mass central stars of planetary nebulae (designated as [WO]) before they enter the white dwarf phase (e.g., Smith & Aller 1969; Sanduleak 1971). The broadened emission at 5270/91 Å is identified as O vi. If the double-degenerate merger and accretion-fed Type Iax SN interpretation is correct (Gvaramadze et al. 2019; Oskinova et al. 2020), the central star of Pa 30 is unique in arising from neither of these established pathways.
In addition to these strong, broad features, a number of weaker, individual lines are seen. Their profiles can be accurately fitted with Gaussians with a FWHM of 0.015c. The wings extend to velocities as high as 15,000 km s−1 (≈0.05c). Several lines coincide with O viii and are tentatively identified as such. They are well centered on the systemic velocity.

The stellar wind cmfgen models

The broad, prominent spectral features are indicative of a strong stellar wind. We modeled it using the non-LTE line-blanketed radiative transfer code cmfgen adapted to WC and WO stars (Hillier & Miller 1998; Hillier 2012; Hillier & Dessart 2012). Input parameters were our OSIRIS/GTC spectrum, which includes the O vi doublet at 3811/34 Å, the interstellar extinction derived in Paper I, and a wind velocity of 15,000 km s−1. A classic velocity law is used,

v(r) = v∞ (1 − R*/r)^β,    (1)

where v∞ is the wind terminal velocity measured from the spectra and R* is the stellar radius. There is a consensus that stellar winds are clumped. This is taken into account in the cmfgen modeling via a volume filling factor, adopted here as f = 0.1. For the spectral modeling the ratio Ṁ/√f, where Ṁ is the mass-loss rate, can be considered an invariant.
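A minimal sketch of Eq. (1) and the clumping invariant is given below. The exponent β = 1 and the illustrative mass-loss rate are our assumptions (the exponent is not specified in the text; the ∼10−6 M⊙ yr−1 value is only quoted later in the conclusions), so the block is a sketch rather than the authors' setup.

```python
import numpy as np

def beta_law(r, v_inf, r_star, beta=1.0):
    """Classic beta-type velocity law v(r) = v_inf * (1 - R*/r)**beta."""
    return v_inf * (1.0 - r_star / r) ** beta

v_inf = 15000.0                      # km/s, terminal velocity measured from the spectra
r = np.linspace(1.01, 50.0, 200)     # radius in units of R*
v = beta_law(r, v_inf, r_star=1.0)   # wind velocity profile (beta = 1 assumed)

# Clumping: the spectra constrain the combination Mdot / sqrt(f), not Mdot itself.
f_vol = 0.1                          # adopted volume filling factor
mdot = 1e-6                          # Msun/yr, illustrative smooth-wind value
mdot_over_sqrt_f = mdot / np.sqrt(f_vol)
```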
We explored a range of stellar luminosity and temperature, mass-loss rate, stellar radius, velocity law and chemical abundances, summarized in Table 1.Given the limited spectral coverage of our OSIRIS/GTC spectrum (and indeed of all available optical spectra-except that from Gaia, see above), we could not find a unique model that completely reproduces the observations (unlike Gvaramadze et al. 2019, although they also stated large uncertainties, e.g., Table 1).Nevertheless, many models could reproduce most spectral features.The results shown are best approximations and/or upper limits to the individual parameters.Most models tested could reproduce the visual and near-infrared photometry.Determining errors on the fits is tricky and we refrain from giving error bars as the errors are not easily transformed into a standard deviation and strong coupling exists between some parameters.Potentially four species can have mass fractions >0.1, so changing the mass fraction of one species necessitates a corresponding change of at least one other species.Therefore, below, we provide viable ranges for the stellar parameters and indicate the properties used to constrain them.
The three primary modeling constraints come from matching the mean flux level, the strength of the O vi 3811/34 Å doublet, and the O v 5572-5604 Å line strength. Only models with luminosities between 25,000 and 60,000 L⊙ provide reasonable fits. Below the lower luminosity limit either the O vi doublet is too weak and/or O v is too strong, while at the upper limit either O vi is too strong and/or O v is too weak. At very high luminosities the O vi doublet strength is suppressed (but other O vi lines remain strong), since the upper levels preferentially decay directly to the ground state. We hence set an upper limit on the temperature of ≤260,000 K.
The peak near 5300 Å, due to O vi, is offset to the red relative to the observations, perhaps because the overlapping O v line is too strong in the model (Figure 3). The model matches bumps seen at ∼4500 Å and ∼4659 Å, but fails to predict the feature near 4250 Å, which is assigned to O viii. An alternative suggestion for this emission is Ne viii, which has approximately the same wavelength as the O viii line. In a model with a slightly lower mass loss, the Ne viii line is clearly visible in the predicted spectra, though the overall fit to the spectrum is worse. As apparent from Figure 3, the broad emission lines blend together, making it impossible to see the true continuum.
We provide some upper limits for the mass fractions of key elements (He, C, O and Ne), assuming Solar abundance ratios for the other elements, that is Na, Mg, Al, Si, S, Ar, Cl, Ca, Fe, and Ni. In these hot models helium has little influence on the spectra. From the apparent absence of the He ii 4686 Å line we place an upper limit of 0.4 on X_He. The O vi features at 3811/34 Å and 5270/91 Å are clear indicators that the stellar atmosphere is oxygen-rich. Therefore we used an oxygen-rich model atmosphere with an oxygen mass fraction < 0.647. This reproduces the O vi doublets, as well as the O v lines. All the strongest features can be reproduced, but not some weaker individual ones. The lines at 4500 and 4658 Å sit on a broader O vi feature whose strength depends on the continuum choice, which is uncertain given the density of overlapping lines.
Carbon-free and carbon-enhanced models were tested.The C iv line can contribute to the 4568 Å line but otherwise there is no easily distinguishable carbon line contribution.We set an upper limit for X C ≤ 0.261.We explored a range of neon abundances (Table 1) and find an enhanced neon abundance could contribute to the spectral features at the 3600-4000 Å plateau (Ne vii), at 4341 Å and at 6070 Å (Ne viii).Inclusion of neon does not affect our ability to get a good fit to the oxygen lines.On the other hand, the 4341 Å line can also be identified with O viii (see below).
The fit shown in Figure 3 provides a 'maximum' model where as many features as possible are reproduced.Oxygen is well fitted, but carbon may be less certain and neon is likely significantly over predicted.The plotted model has X Ne = 0.436 and may be considered as the upper limit.A more carbon-rich composition cannot reproduce the spectrum.The ranges of chemical abundance of carbon, oxygen and neon in Table 1 agree with those derived by Oskinova et al. (2020) from the central star's X-ray spectrum.
The broad wind profile (≈15,000 km s−1) provides a slightly better fit to the broad spectral profiles of both the O vi doublets and the O v feature at 5900 Å (Figure 3). This value is close to that measured by Gvaramadze et al. (2019, 16,000 km s−1). Since O v is present, and this emission emerges from the outer wind regions, a higher terminal velocity is needed to fit both oxygen features simultaneously. As found by Gvaramadze et al. (2019), radiation pressure appears insufficient to drive the wind. Though the OSIRIS/GTC spectra do not extend below 3600 Å, the cmfgen models predict an O vi line at 3435 Å, similar to Gvaramadze et al. (2019), which has been observed in the spectra of Garnavich et al. (2020).
After finalizing the model, we compared it to HST ultraviolet (UV) spectra of the stellar remnant (Sect. 2.4.4). To fit the UV data we needed to adjust the reddening to E(B−V) = 1.05. Although we eventually updated our reddening, our initial models started off assuming the reddening from Paper I. With the new reddening the fluxes no longer matched, so we recomputed the model with scaled parameters that yield the same spectral shape, and this model is shown in Figure 3. Given the strength of the O vi 3811/34 emission line, the estimated reddening should be more accurate than that obtained using optical fluxes over a limited wavelength range.
No further model modifications were made and there is excellent agreement between the STIS/HST fluxes and the OSIRIS/GTC spectrum.Figure 3 shows the cmfgen model correctly predicts the O v and Ne viii blend at ∼2800 Å and the strong O v emission at ∼1370 Å.The model also predicts strong O vi resonance emission (1032, 1038 Å).Absorption shortward of 1500 Å is from the C vi resonance transition (1548, 1551 Å) being too strong, possibly indicating a lower carbon abundance than in the model.The strong 2200 Å band and its Galactic variation make it difficult to judge agreement between model and observation for features in the band.
Models with the same transformed radius produce similar spectra, differing only in flux (Schmutz et al. 1989). The parameters derived for one distance can therefore be scaled to a new distance using the standard scaling relations. We computed additional models to better constrain the parameters and potentially improve the fit, but this did not lead to a change in the adopted parameters. One possible exception is the neon abundance: a model with the abundance reduced by a factor of 2 might give a better fit to the UV spectrum. However, a better understanding of the interstellar 2200 Å band is required to draw firm conclusions.
In a recent study of two WO stars in the Large Magellanic Cloud, Aadland et al. (2022) found that X(C)/X(O) was greater than 5 despite the presence of strong O vi features at 3811/34 Å. However, these stars are descended from massive O stars and have a very different evolutionary history than the central star of Pa 30. Furthermore, the spectra are quite distinct: in the WO stars, for example, the C iv spectrum is still very prevalent, and the neon lines are relatively weak and can be matched by a neon abundance similar to that expected from the processing of CNO into 14N in the CNO cycle and the subsequent processing into 22Ne. The high temperatures used in our models will weaken the C iv lines, but this alone cannot explain the weakness of the carbon spectrum and the strength of the oxygen spectrum.
Shocked gas: cloudy models
The X-ray spectrum indicates an additional high temperature gas component (Oskinova et al. 2020) that is strong compared to what is typical for OB stars.The unabsorbed X-ray to bolometric luminosity ratio, log(L X /L bol ), is in the range of −4.8 to −6.1, one to two orders of magnitude larger than OB stars, i.e., log(L X /L bol ) = −6.912± 0.153 (Sana et al. 2006).Oskinova et al. (2020) attribute the bulk of the remnant's thermal X-ray emission to the wind's outer regions.
The wind model describes well the shape and many features of the optical spectrum.However, some weaker lines are not fitted and the 4340 Å feature requires Ne viii at a high neon abundance.This line coincides with a high-excitation O viii line.We therefore explore a spectral contribution from the high-temperature gas component which is not included in the wind model.
As a hydrogenic ion, O viii has a restricted number of lines. The 4340 Å line coincides with the O viii 8-9 transition and the 6069 Å line with the O viii 9-10 transition. These transitions have a wavelength ratio of 5/7, and the two line profiles overplot accurately when the wavelengths are scaled by this ratio. The observed intensity ratio of the two lines (after subtraction of an estimated continuum) is reproduced by these O viii lines for an extinction in agreement with the foreground extinction. This strengthens the identification as O viii, although other lines may contribute to these features. The temperature required for O viii is >1 MK, far above the photospheric or wind temperature. We ran exploratory photo-ionisation models using the cloudy code (ver. 17.03; Ferland et al. 2017). In the 'coronal mode', the gas temperature is pre-defined and all lines are assumed to be collisionally excited. There is no input radiation field. A single-shell model was used, with a constant density and temperature and without considering optical depth effects. Models were run for a range of temperatures and abundances. The hydrogen density was set to log n = 7.5. All lines were assumed to have a Gaussian profile with a sigma of v/c = ∆λ/λ = 0.0092, as found from line fitting. The cloudy line output list was convolved with this Gaussian profile to obtain model spectra. The model spectra were multiplied by the extinction curve of Cardelli et al. (1988) and scaled to the peak intensity of the 4340 Å line. The ratio of the integrated line strengths of the 4337 Å and 6064 Å lines was used to derive the extinction A_V, where all contributions to these features were taken into account.
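The post-processing step described above (convolving the cloudy line list with the fitted Gaussian width and attenuating it before scaling to the 4340 Å peak) can be sketched as follows. The line list, wavelength grid, and the simple attenuation law are placeholders of ours; a proper treatment would use the actual cloudy output and the Cardelli et al. extinction curve.

```python
import numpy as np

# Hypothetical line list: (wavelength in Angstrom, intrinsic intensity).
lines = [(4337.0, 1.00), (6064.0, 0.45), (4658.0, 0.30)]

wave = np.arange(3500.0, 7000.0, 1.0)   # model wavelength grid
sigma_frac = 0.0092                      # sigma(v)/c from line fitting

def attenuation(wl, a_v):
    """Crude placeholder attenuation, A(lambda) ~ A_V * (5500/lambda);
    stands in for the Cardelli et al. curve used in the paper."""
    return 10.0 ** (-0.4 * a_v * (5500.0 / wl))

def model_spectrum(a_v):
    spec = np.zeros_like(wave)
    for wl0, inten in lines:
        sigma = sigma_frac * wl0         # Gaussian width scales with wavelength
        spec += inten * np.exp(-0.5 * ((wave - wl0) / sigma) ** 2)
    spec *= attenuation(wave, a_v)
    peak = spec[np.argmin(np.abs(wave - 4340.0))]
    return spec / peak                   # scale to the 4340 Angstrom peak

spectrum = model_spectrum(a_v=3.0)
```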
The broad wind features at 3800 Å and 5200 Å are not reproduced in this model: the temperatures are too high for significant O vi emission.However, the weaker lines absent from the wind model could be reproduced with this hot gas.
The model fit is shown in Figure 4 along with the observed spectra for a gas temperature of T = 4 MK.The best fit gave abundances (number ratios) of He/C/N/O/Ne = 0.73/0.07/0.02/0.16/0.01,where helium must be considered as an upper limit.These abundances are within the wind model range.The brightest predicted lines between 3000 Å and 7000 Å are listed in Table 2.
An estimated constant-value continuum was subtracted from the observed spectrum across the full wavelength range, with no attempt to correct for any continuum slope. The line strengths depend on the continuum choice. This introduces an uncertainty, as the dominant broad wind features obliterate most of the continuum. The subtraction of a constant level from the WIYN spectrum brought both the blue and red ends to zero. For the GTC spectrum, a constant subtraction left an offset between the ends. We used the WIYN spectrum for the fit. The procedure indicated an extinction of A_V = 2.9 ± 0.2 mag from the WIYN data, in excellent agreement with that found in Sect. 2.5 and with the UV extinction indicated by the wind model. We adopted 3.0 mag for the final model. The model is shown with the continuum-subtracted WIYN spectrum in Figure 4. The cloudy model fits the 4337, 4655, and 6064 Å lines with the O viii 8-9, 10-12, and 9-10 transitions, respectively, though the observed 4655 Å line is stronger than predicted. Another O viii line at 5790 Å coincides with a peak inside the broad wind band. This identification is not as secure because of the unknown optical depth in this wind band. The 4800 Å feature is reproduced with Ne ix. The He ii line at 4686 Å contributes to the observed line at this wavelength together with the O viii line. A good fit requires a helium abundance ratio of He/O = 4 (by number). However, there are also a C vi and an Ne x line at this wavelength which could contribute. Hence, this helium abundance is considered an upper limit. The 4940 Å feature is not reproduced. A weaker model line predicted at 5290 Å is due to C vi, but it is likely obscured by the broad wind feature.
Our cloudy hot-gas model is able to reproduce most lines not present in the wind model, using abundances equivalent to those of the wind model, for a gas temperature of T = 3.5-4 MK. Importantly, all lines which are predicted to be detectable are seen in the spectrum, with the sole exception of the Ne x line at 6900 Å, which is affected by a telluric feature.
The carbon abundance is poorly constrained because of a lack of bright lines, though several fainter carbon lines do coincide with features in the spectra. Figure 5 shows a model with the carbon abundance (by number) increased to C/O = 4.5, and with a lower helium abundance. This produces a good fit to many of the lines but adds a strong component to the 5290 Å line, which is already well fitted in the wind model without this component. There is no other evidence for such a high carbon abundance, and the wind model needs oxygen-rich gas. We therefore do not adopt these abundances.

Figure 5. As in Figure 4, but with a much higher carbon abundance and a lower helium abundance.
Figure 6 shows the same fit, but now the observed spectra are shown with the wind model subtracted.Only wavelength regions free of the broad wind features are shown.The wind model accounts for almost all of the continuum present at these wavelengths.A small excess was subtracted shortward of 5000 Å.This shows that the combination of the wind and hot gas models can explain much of the observed spectra.
The abundances in the cloudy model were adopted from the wind model.The nitrogen abundance is not constrained in this model because of a lack of bright lines.There are potential C vi lines at 6195 Å and at 4683 Å, which can fit lines at these locations, however these lines need a C/O ratio well above unity to fit the corresponding features, while the wind model does not support C/O > 1.In addition, the 5290 Å C vi line becomes too strong at higher carbon abundance.The temperature is mainly constrained by O viii lines which affect the 6064 Å line at temperatures below 3 MK, and the Ne x line at 6894 Å which becomes too strong for T > 4 MK.Because of the model's simplicity these constraints should be viewed with caution, along with the assumption of uniform temperature and density, and the fact that other wind lines may be present.
The line widths and temperature indicate that the hot gas is closely related to the stellar wind. The line widths are similar to, or slightly smaller than, that of the wind. The assumed Gaussian profile is not identical to the wind profile of Eq. (1). The symmetric line profiles favor a physical location embedded in the outer wind. The gas can be heated by shocks as regions with different speeds collide. This is a common situation in OB stellar winds. A smaller line width for the hot gas could argue for a location further inside the wind, but this remains to be investigated.
For Wolf-Rayet winds (hydrogen poor), the shocks can be considered approximately isothermal (Hillier et al. 1993). We use the relation (Hillier et al. 1993)

T_x ≈ (3/16) µ m_H v_s^2 / [(1 + γ) k_B],

where v_s is the differential velocity between wind and shock front, µ is the mean ionic weight, and γ the ratio of electrons to ions. Putting in O viii as the dominant ion, the second term is approximately unity. For T_x ∼ 5 MK, v_s ∼ 500 km s−1. This is much less than the wind velocity, but plausible as an internal differential velocity.
Neon
Figure 7 shows the Gaia BP/RP spectrum. It covers a much larger wavelength range to the red. The sensitivity is less than that of our other spectra, and there are calibration issues as mentioned before. The top panel shows the wind model overplotted with the Gaia spectrum, and shows that the wind feature at 7500-8000 Å is well reproduced. We use this 'above atmosphere' spectrum to discuss the neon abundance. Table 2 lists a number of expected neon lines. The chosen neon abundance was based on lines in our optical spectra. The predicted Ne x line at 6894 Å is too strong, but this coincides with a strong telluric feature that was removed from the ground-based spectra.

Figure 7. The Gaia BP/RP spectrum (black). Upper panel: the wind model (red). Lower panel: wind model (dashed, red), the hot-gas model (red), and the hot-gas model with three times higher neon abundance (dashed, blue). In the lower panel, for clarity, a level of 1.0 has been subtracted from the Gaia spectrum and wind model.
The bottom panel shows the Gaia spectrum with the wind model (red, dashed; shifted down for clarity).The lower red line shows the hot-gas model.The blue-dashed line shows the same model but with three times higher neon abundance.The neon lines now disagree with the observed spectrum.From this we deduce an upper limit of Ne/O < 0.15 by number.
The wind spectrum is seen to overpredict the line at 8200 Å.In the wind model this is a Ne vii complex, while the hot-gas model has an O viii line at this location.The model that is shown has a high neon abundance.Again this points at a lower neon abundance for the remnant.
Magnetic fields
The presence of strong magnetic fields has been proposed by Gvaramadze et al. (2019).This is based on predictions of WD merger models which yield B ∼ 200 MG (Ji et al. 2013), and on the expectation that the stellar wind is magnetically driven from a very rapidly rotating star.For the SN 1181 stellar remnant, Kashiyama et al. (2019) carried out magneto-hydrodynamic simulations that argue for a WD remnant with a strong magnetic field (20 -50 MG) but still an order of magnitude lower than Gvaramadze et al. (2019).
Magnetic fields have been proposed for hot (T > 60 kK), hydrogen-poor (DO) white dwarfs with O viii lines and other ultra-high excitation (UHE) lines in absorption. Reindl et al. (2019) attribute these coronal lines to gas trapped in an equatorial, magnetically-confined, shock-heated magnetosphere. The confinement requires

η = B^2 R*^2 / (Ṁ v_w) > 1,

where B is the magnetic field strength, R* the stellar radius, v_w the wind speed, and Ṁ the mass-loss rate.
For this star, the mass-loss rate from the cmfgen models requires B > 0.1 MG for η > 1, a modest magnetic field for such stars.
The hydrogenic O viii lines can provide strong constraints on magnetic field strength due to Zeeman splitting of the line into π and σ components.For moderate fields, the central π component remains unchanged in wavelength, whilst the two σ components shift to either side.The relative strength of the π component depends on the magnetic field's orientation angle: it is absent when the magnetic field points at the observer and it is equal in intensity to the sum of the σ lines if the magnetic field is perpendicular to the line of sight.Each of the three Zeeman components can be considered as a Gaussian of width derived from the temperature.The observed line is the sum of these three Gaussians.For small shifts much less than the width of each line, this sum is itself a Gaussian widened by the shift.For larger shifts the line profile deviates.
The shift ∆λ between the two σ lines can be parametrized as ∆λ = αB, where α is a constant for each ion and transition. The value of α is tabulated by Blom & Jupén (2002, their Table 1). For the transitions considered here, that is O viii 8-9, O viii 9-10, and C vi 7-8, and for field strengths in Gauss, the α parameter is 1.8 × 10−5, 3.4 × 10−5, and 2.6 × 10−5, respectively. The 6064 Å O viii line has a FWHM of 93 Å. Since a separation between the σ lines equal to the FWHM would have strongly distorted the line shape, we constrain B < 2.7 MG. The 4340 Å line, with a FWHM of 69 Å, gives a conservative upper limit of 3.8 MG.
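The field limits quoted above follow directly from B < FWHM/α; the short check below (our own arithmetic, using the tabulated α values and the measured line widths) reproduces them.

```python
# Delta_lambda = alpha * B; take the sigma-component separation equal to the
# observed FWHM as the distortion threshold, so B_limit = FWHM / alpha.
alpha = {                                 # Angstrom per Gauss (Blom & Jupen 2002)
    "O VIII 8-9 (4340 A)": 1.8e-5,
    "O VIII 9-10 (6064 A)": 3.4e-5,
}
fwhm = {                                  # measured widths, Angstrom
    "O VIII 8-9 (4340 A)": 69.0,
    "O VIII 9-10 (6064 A)": 93.0,
}

for line, width in fwhm.items():
    b_limit_gauss = width / alpha[line]
    print(f"{line}: B < {b_limit_gauss / 1e6:.1f} MG")
```

This returns 3.8 MG for the 4340 Å line and 2.7 MG for the 6064 Å line, as stated above.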
We ran models adding the Zeeman components, assuming all three Zeeman lines have equal intensity. This ratio corresponds to a magnetic field orientation of 55 degrees to the line of sight. We then fit the profile of the 6064 Å line, where the intrinsic width is a fitting parameter. For B = 1.2 MG an acceptable fit to the line shape is found. For B = 1.5 MG the deviation of the line profile was clearly visible at 6064 Å, which developed a flat peak. Running the same test on the 4337 Å line shows that it is indeed somewhat less sensitive. We also ran the test with the central π line twice as strong as each accompanying σ line, corresponding to a magnetic field oriented along the line of sight. Here the line remains centrally peaked, which makes it easier to fit the profile. A notable deviation from the observed profile of the 6064 Å line was found for a field of 2.5 MG. At this field strength, half the line width is due to Zeeman splitting.
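A minimal sketch of the three-component profile used in these tests is given below. It is our own illustration of the method, not the authors' code: the profile is the sum of a π component and two σ components shifted by ±αB/2, with the relative π weight set as described in the text.

```python
import numpy as np

def zeeman_profile(wl, wl0, fwhm, b_gauss, alpha, pi_weight=1.0):
    """Sum of the pi component and two sigma components shifted by +/- alpha*B/2.
    pi_weight=1.0 gives three equal-intensity components (the 55-degree case
    described in the text); pi_weight=2.0 makes the pi line twice as strong as
    each sigma line."""
    sigma = fwhm / 2.3548                       # Gaussian sigma from FWHM
    shift = alpha * b_gauss / 2.0               # each sigma component shifts by half the splitting
    def gauss(center):
        return np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    total = pi_weight * gauss(wl0) + gauss(wl0 - shift) + gauss(wl0 + shift)
    return total / total.max()

wl = np.linspace(5900.0, 6230.0, 1000)
profile = zeeman_profile(wl, 6064.0, fwhm=93.0, b_gauss=1.2e6, alpha=3.4e-5)
```

For B = 1.2 MG the shift (about 20 Å) is well below the intrinsic width, so the summed profile stays close to a single Gaussian, consistent with the acceptable fit reported above.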
The O viii line shapes give an upper magnetic field limit of B < 2.5 MG, well within the range required for the magnetically-confined toroidal structure proposed by Reindl et al. (2019). Based on the available data, we cannot rule out such a structure as an alternative.
This new field strength limit is far below that proposed by Gvaramadze et al. (2019, 200 MG) and below that of typical low-mass, magnetic WDs (∼10 MG). Some caution is needed here. If the hot gas is located in the wind, then the surface magnetic field may be underestimated, because the field strength of a magnetic dipole falls off steeply (as r−3) with distance r. Also, the star has a very high luminosity and temperature, and hence a larger radius than a typical WD. If it evolves into a WD it will contract significantly, and the field would strengthen. Hence, our limits are not inconsistent with known field strengths of magnetic WDs, which are of order 10 MG, but they do not support values as high as the 200 MG previously proposed.
Photometry
The stellar remnant (Gaia DR3 526287189172936320) has been observed by various terrestrial and space-based all-sky photometric surveys over many decades.Some offer short cadences to detect transient events (e.g., TESS 2-min and 30-min cadence), while others cover decades of sparse photometric sampling with long exposures (e.g, DASCH).Many have variable depth and sensitivity and large effective detector pixel sizes and large photometric apertures.This can result in blending and nearby source contamination.
A faint, foreground star (Gaia DR3 526287189166341504) lies 2.3″ eastwards of the stellar remnant (Figure 8). Though this star is clearly distinguished in short-exposure IPHAS (Drew et al. 2005) images, it is ∼4.5 mag fainter in the Gaia filters and well below the limits of earlier photographic imagery, so it should not affect the recorded optical photometry. A very bright star (Gaia DR3 526287257892412928) lies ∼21″ directly west. Because recent all-sky surveys use large photometric apertures (e.g., the ASAS-SN PSF is ∼15″, and the single-pixel aperture is ∼17″ in OMC), the bright star (G ∼ 11 mag) can affect adjacent pixels and influence the photometric results. However, it is well separated from the stellar remnant in the scanned data from all older photographic plates recorded in DASCH, so it is unlikely that any variability of the stellar remnant is affected by the brighter star in 20th century observations.
Here, we show B-band results from various surveys, as this filter has the best temporal coverage. The data include photographic photometry from DASCH, the Ukrainian FON astrographic catalogue (Andruk et al. 2016), and the Palomar Observatory Sky Surveys (POSS-I and POSS-II); CCD data from the AAVSO Photometric All-Sky Survey (APASS DR10, Henden et al. 2018), the INT Galactic Plane Survey (IGAPS/UVEX; g, r; Monguió et al. 2020), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS1, g, r, i; Tonry et al. 2012), the Asteroid Terrestrial-impact Last Alert System survey (ATLAS, o, c; Tonry et al. 2018), the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015), and the Near-Earth Object Wide-field Infrared Survey Explorer Reactivation Mission (NEOWISE, 3.6, 4.5 µm; Mainzer et al. 2014). The single-epoch IGAPS/UVEX g, r photometry and the Pan-STARRS1 g, r, i photometry were converted to B using the transformations of Tonry et al. (2012).

The stellar remnant is faint and the optical light curve is sparsely populated over much of its recorded history (Figure 9), but evidence of dimming is seen. The photographic B_j magnitude estimates provided by DASCH for measurements between 1924.8 and 1950.7, calibrated via the GSC 2.3.2 (Lasker et al. 2008), show a mean B_j magnitude of 15.6 ± 0.2 mag and a maximum magnitude variation of 0.85 from 37 data points, with a dimming slope of 0.02 magnitudes/year and an R² = 0.38 (which indicates a weak though positive correlation). Exposures where the recorded plate limit was within 0.3 mag of the star's value are omitted as unreliable. Seven nearby stars, both fainter and brighter (including the bright star to the west), have much lower R² values (average of 0.047 with an rms of 0.064) and are flat (average slope of 0.0007 mag/yr with an rms of 0.007). See the top panel in Figure 9. Overall, the temporal coverage is rather sparse apart from the ATLAS data, which have a pixel size of ∼2″ in two bands: the "cyan" (c) band (4200-6500 Å) and the "orange" (o) band (5600-8200 Å). For the ATLAS dataset (before binning), no significant peaks were found in Lomb-Scargle periodograms, so the source appears stable on short timescales in the optical bands.
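The slope and R² quoted for the DASCH light curve come from a simple least-squares linear fit; the sketch below illustrates the calculation on synthetic data with the same mean level, scatter, and trend (the random points are placeholders, not the actual DASCH measurements).

```python
import numpy as np

def dimming_trend(years, mags):
    """Least-squares linear fit to a sparse light curve; returns the slope
    (mag/yr) and the coefficient of determination R^2."""
    slope, intercept = np.polyfit(years, mags, 1)
    model = slope * years + intercept
    ss_res = np.sum((mags - model) ** 2)
    ss_tot = np.sum((mags - np.mean(mags)) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Toy data standing in for the 37 DASCH B_j points between 1924.8 and 1950.7.
rng = np.random.default_rng(0)
years = np.sort(rng.uniform(1924.8, 1950.7, 37))
mags = 15.6 + 0.02 * (years - years.mean()) + rng.normal(0.0, 0.2, 37)

slope, r2 = dimming_trend(years, mags)
print(f"slope ~ {slope:.3f} mag/yr, R^2 ~ {r2:.2f}")
```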
We extracted archival photometry from the Wide-field Infrared Survey Explorer (WISE) 3-band Cryo data release (Wright et al. 2010), and the NEOWISE latest data release.Figure 10 shows the averaged photometry per epoch in bands W 1 (3.4µm) and W 2 (4.6µm).Each epoch covers about six months.Frame inspection showed there may be bright star contamination within the standard photometric aperture of NEOWISE (radius 8.25 ).There is no nebula contamination as this is not visible at those wavelengths.The W 1 and W 2 light curves show a modest decline of ∼0.01 mag/year over 11 years, half the rate seen at earlier epochs in the optical B-band.
The wind model indicates some weak but no strong emission features in the WISE W1 and W2 filters: the emission is dominated by continuum. Since the Pa 30 nebula is not visible at these wavelengths, we can exclude nebular lines such as [Fe ii] and [Ar vi] or the 3.3 µm PAH band. Therefore any decline would likely be due to the continuum. Kashiyama et al. (2019) propose that the central star (WD J005311 in their paper) is a highly-magnetic WD spinning at an angular frequency of 0.2 to 0.5 s−1. This would imply very short light-curve variability at a period of 12.6 to 31.4 seconds. The current data do not trace this time scale.
Wind variability
In Section 3.1 we hinted at variability in the stellar wind. The SparsePak/WIYN data from 2014, our 2016 OSIRIS/GTC spectra, the 2017 spectrum by Gvaramadze et al. (2019), and the averaged 2020 spectra by Garnavich et al. (2020) are examined for long-term spectral variability (Figure 11). The SparsePak/WIYN fibre is 4.7″ and should include all stellar flux, assuming the fibre is well placed. Comparison of the synthetic photometry from this spectrum (using the Python package pyphot) to the photometric measurements provided by the individual surveys justifies this assumption. Hence, all slit spectra are scaled to match the SparsePak/WIYN spectrum. To improve the signal-to-noise ratio, the SparsePak/WIYN and Garnavich et al. (2020) spectra have been smoothed with mean filters of length 7 and 9 pixels, respectively.
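The synthetic-photometry cross-check mentioned above amounts to weighting the observed spectrum with a filter transmission curve and comparing the result to the catalogued magnitudes. The sketch below is a generic, self-contained version of that operation; the spectrum, filter curve, and zero-point handling are placeholders rather than the pyphot call actually used.

```python
import numpy as np

def synthetic_mean_flux(wave, flux, filt_wave, filt_throughput):
    """Filter-weighted mean flux of a spectrum (zero points omitted).
    This is the basic operation behind the synthetic-photometry comparison."""
    t = np.interp(wave, filt_wave, filt_throughput, left=0.0, right=0.0)
    w = t * wave                     # simple photon-counting weight: throughput * lambda
    return np.sum(flux * w) / np.sum(w)

# Placeholder arrays standing in for the SparsePak/WIYN spectrum and a B-like band.
wave = np.linspace(3800.0, 7000.0, 3200)          # Angstrom
flux = np.ones_like(wave)                         # flat test spectrum
filt_wave = np.array([3900.0, 4400.0, 4900.0])
filt_throughput = np.array([0.0, 1.0, 0.0])

print(synthetic_mean_flux(wave, flux, filt_wave, filt_throughput))
```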
The most striking differences are intensity changes in the O vi 3811/34 emission line, as seen in panels (a) and (b) of Figure 11. The discontinuity in the Gvaramadze et al. (2019) spectrum at 3850 Å and the different spectral shape further to the blue are to be taken with caution, since flux calibration is less reliable in the far blue. Garnavich et al. (2020) found that the O vi 3811/34 line shows short-term (< 2 hour) variability, which was attributed to clumpiness in the stellar wind. Calibration differences among the different instruments do not allow a direct comparison of wind variability within the O vi 3811/34 line, but the overall shape of the OSIRIS/GTC spectra appears consistent with the averaged spectra of Garnavich et al. (2020). A decrease in the peak emission of the O vi 5300 Å line between 2014 and 2017 (panel (e) in Figure 11) is also seen, perhaps associated with a change in the stellar wind. Small changes in the continuum slope beyond 5500 Å can be due to instrumental and calibration effects between epochs, as the photometry appears to have remained constant throughout (cf. Sect. 4). Future photometric monitoring observations should be obtained at cadences of 10 seconds or less in the visual and at angular resolutions sufficient to allow separation of the target from the adjacent bright star (e.g., ≤ 5″).

Figure 11. Spectral differences between four epochs: our SparsePak/WIYN (2014) and OSIRIS/GTC (2016) spectra against those of Gvaramadze et al. (2019; epoch 2017) and Garnavich et al. (2020; epoch 2020). In panel (a) all spectra are scaled to the SparsePak/WIYN (blue) levels to accommodate slit losses. In all remaining panels spectra have been scaled to a mean value of 1.0 to achieve the best overlap and make differences in the shape of individual lines more obvious. See main text (Sect. 4.2) for more details.
Hydrogen emission
Although no hydrogen lines were detected in the nebular spectrum from large-aperture telescopes (Paper I), the outer Pa 30 nebula is tentatively detected in deep, ≈1 Rayleigh sensitivity Hα imaging from the low angular resolution (pixel size 1.6′) Virginia Tech "Hα" Spectral-line Survey (VTSS; Dennison et al. 1998), which revealed a single enhanced pixel at Pa 30's position. The 5-σ excursion within a 15′ × 15′ region indicates a low-surface-brightness hydrogen (Hα) shell of similar size to the [O iii] shell found in Paper I. The VTSS survey limit corresponds to 5.661 × 10−18 erg s−1 cm−2 arcsec−2 at Hα. The continuum-corrected surface brightness within a 1 pc radius of Pa 30 is ≈ 5.6 Rayleigh ≈ 8.8 × 10−13 erg s−1 cm−2. Adopting the extinction law of Howarth (1983) for Hα, c_Hα = 0.99 E(B − V), the dereddened Hα brightness becomes 7.4 × 10−12 erg s−1 cm−2 for A_V = 2.9, which translates to L_Hα ≤ 1.2 L⊙. Deeper narrow-band Hα imagery is needed to confirm this tentative detection.
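The dereddening and luminosity numbers above can be reproduced with a few lines of arithmetic. The sketch below is our own check; it assumes R_V = 3.1 to convert A_V into E(B−V), which is not stated explicitly in the text.

```python
import math

f_obs = 8.8e-13                 # observed Halpha flux within 1 pc radius, erg/s/cm^2
a_v = 2.9
e_bv = a_v / 3.1                # assuming R_V = 3.1
c_halpha = 0.99 * e_bv          # logarithmic extinction at Halpha (Howarth 1983)

f_dered = f_obs * 10.0 ** c_halpha

d_cm = 2297.0 * 3.086e18        # adopted distance in cm
l_halpha = 4.0 * math.pi * d_cm ** 2 * f_dered
l_sun = 3.828e33

print(f"dereddened flux ~ {f_dered:.1e} erg/s/cm^2")
print(f"L_Halpha ~ {l_halpha / l_sun:.1f} L_sun")
```

This returns a dereddened flux of about 7.4 × 10−12 erg s−1 cm−2 and L_Hα ≈ 1.2 L⊙, as quoted above.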
Infrared emission
Pa 30 is visible at mid- and far-infrared wavelengths, though it is not detected in the IRAS 12 µm maps. In the far-infrared, the shell is round, as indicated by the AKARI 65 to 140 µm maps, extending from 88″ up to 128″ (±5″) from the central star, corresponding to a radius of about 0.98-1.42 pc at the adopted distance. Foreground interstellar emission confuses the images beyond 100 µm. We attribute the far-infrared emission to dust and the mid-infrared emission to lines.
Given the infrared shell sizes and the uncertainties in the archival point-source photometry, we have re-estimated surface brightnesses for all IRAS and AKARI images. Images were downloaded from IRSA/IPAC at the same angular resolution (15″/pixel): HIRES maps for IRAS and Far-Infrared Surveyor maps for AKARI. A 100″ radius aperture was selected for all maps. We use the conversion factor of Ueta et al. (2019) to derive flux densities. The final photometry beyond 100 µm was corrected for interstellar contamination (Table 3). The IRAS 12 µm and AKARI 160 µm flux densities are upper limits (non-detections). The new measurements are given in Figure 12, along with archival photometry from Pan-STARRS1, Gaia, IPHAS, 2MASS, and WISE. The IRAS and AKARI fluxes integrated over the nebula indicate that the shell flux peaks at λ ∼ 90 µm, while the foreground emission is colder. Using the aperture photometry results for the nebula and the VOSA tool, we derive a shell blackbody temperature of 60 K (cf. Figure 12).
We ran a cloudy model for the shell assuming it is ionized gas with normal ISM abundances. We assume a blackbody star with L* = 3 × 10^4 L⊙ and T* = 2.3 × 10^5 K, surrounded by a shell with inner and outer radii of 0.8 and 1.2 pc and a density n = 17 cm−3. Standard ISM abundances and dust content were used as defined in cloudy, with a range of grain sizes and a dust-to-gas ratio of 3.8 × 10−3. This gives a total gas mass of ∼2.1 M⊙ and a dust mass of 8 × 10−3 M⊙. The model predicts the broadband fluxes listed in Table 3, in good agreement with the infrared measurements. At shorter wavelengths, the model flux is dominated by a few strong emission lines: [Ne vi] at 7.6 µm, [Ne v] at 14.7 µm and 25 µm, and [O iv] at 25.9 µm. The longer wavelength emission cannot be explained by emission lines and is from dust. The predicted Hα flux is 2.2 × 10−11 erg s−1 cm−2, which, after accounting for extinction, corresponds to 13 Rayleighs, 2.3× the VTSS Hα surface brightness. The model explains the infrared photometry but overpredicts the apparent Hα surface brightness.
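The gas and dust masses quoted for this shell follow from its geometry, density, and dust-to-gas ratio. The rough consistency check below is ours; it counts hydrogen only (helium is neglected), which already reproduces the quoted figures.

```python
import math

pc = 3.086e18        # cm
m_h = 1.67e-24       # g
m_sun = 1.989e33     # g

r_in, r_out = 0.8 * pc, 1.2 * pc
n_h = 17.0           # cm^-3, density adopted in the cloudy shell model

volume = 4.0 / 3.0 * math.pi * (r_out ** 3 - r_in ** 3)
gas_mass = n_h * volume * m_h / m_sun      # hydrogen mass only
dust_mass = gas_mass * 3.8e-3              # adopted dust-to-gas ratio

print(f"gas ~ {gas_mass:.1f} Msun, dust ~ {dust_mass:.0e} Msun")
```

This gives roughly 2.1 M⊙ of gas and 8 × 10−3 M⊙ of dust, matching the model values above.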
Alternatively, it can be assumed that the Hα emission comes from the accelerated gas seen in the [S ii] emission reported in Paper I. The average density of this component, derived from the [S ii] lines, is N_e = 120 cm−3. We assume a temperature T_e = 10^4 K and use the ionized-mass relation of Pottasch (1984), where F(Hβ) is in units of 10−11 erg cm−2 s−1, the distance d is in kpc, the electron temperature T_e is in K, and the electron density N_e is in cm−3. The intrinsic Hβ flux, derived from the Hα flux assuming recombination Case B with a theoretical Hα to Hβ ratio of 2.85, is F(Hβ) ≈ 7.2 × 10−12 erg cm−2 s−1. Hence, an ionized mass of ≈0.3 M⊙ is derived.
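Rather than reproducing the exact Pottasch (1984) expression (the formula itself is not reproduced here), the sketch below recomputes the ionized mass from first principles under Case B; the effective Hβ recombination coefficient is a standard value that we assume, and helium is neglected.

```python
import math

f_hbeta = 7.2e-12                 # erg/s/cm^2, intrinsic Hbeta flux derived above
d_cm = 2297.0 * 3.086e18          # adopted distance
n_e = 120.0                       # cm^-3, from the [S ii] lines
alpha_eff_hbeta = 3.03e-14        # cm^3/s, Case B effective Hbeta coefficient at 1e4 K (assumed)
e_hbeta = 6.626e-27 * 2.998e10 / 4861e-8   # energy of one Hbeta photon, erg
m_h = 1.67e-24
m_sun = 1.989e33

l_hbeta = 4.0 * math.pi * d_cm ** 2 * f_hbeta     # Hbeta luminosity, erg/s
n_photons = l_hbeta / e_hbeta                     # Hbeta photons emitted per second
ionized_mass = n_photons / (alpha_eff_hbeta * n_e) * m_h / m_sun

print(f"ionized mass ~ {ionized_mass:.2f} Msun")
```

This yields roughly 0.3 M⊙, consistent with the value derived in the text.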
The mass is therefore dominated by the outer shell, rather than by the accelerated gas seen in [S ii]. The overprediction of Hα for the outer shell may indicate that this gas is hydrogen-poor, in which case the mass is less than the derived 2.1 M⊙.
The presence of dust in the circumstellar shell is strongly supported.The mass is dependent on composition and grain size.The dust origin remains open: it can be supernova dust, interstellar dust heated by the star, or dust from previous mass loss.The circumstellar shell could represent swept-up ISM or matter ejected in a mass-loss episode before the supernova explosion, such as a relic planetary nebula.
DISCUSSION

SN shell mass

Oskinova et al. (2020) estimate the amount of hydrogen-free, X-ray-emitting nebular gas to be 0.1 M⊙. Our ionized mass estimate from the tentative Hα detection is an additional 0.3 to 2.1 M⊙ (Sect. 5.2). However, the relation of this gas to the supernova is unclear (see the next subsection) and its kinematics are not known.
Consider the kinetic energy of the H-poor ejecta, E_kin = (1/2) M_ej v_exp^2, where v_exp is the ejecta expansion velocity. In Paper I we find that the shocked emission has a radial velocity of 1100 km s−1. If the extended halo-like feature in the AKARI 90 µm data (radius 128 ± 5 arcsec) is the farthest extent of the nebula, then the ejecta velocity would be ∼1700 km s−1, not much different from the shocked emission. Both values are consistent with the general decline of ejecta velocities more than 100 days post-eruption in Type Iax SNe (e.g., Kawabata et al. 2018; Srivastav et al. 2021).
Using the above ejecta mass and velocities, the kinetic energy of the ejecta E kin ranges from 1×10 48 to 3×10 48 erg.These are conservative estimates since this is a lower limit for ejected mass, while the expansion velocity at the time of eruption could have been higher.
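The quoted energy range follows directly from E_kin = (1/2) M_ej v², as the short check below (our own arithmetic, using the 0.1 M⊙ X-ray ejecta mass and the two velocities quoted above) shows.

```python
m_sun = 1.989e33   # g

def e_kin_erg(m_ej_msun, v_kms):
    """Kinetic energy E = 0.5 * M * v^2 in erg."""
    return 0.5 * m_ej_msun * m_sun * (v_kms * 1e5) ** 2

# 0.1 Msun is the X-ray-emitting ejecta mass; 1100 and 1700 km/s bracket the
# shocked [S ii] velocity and the AKARI halo-based estimate.
for v in (1100.0, 1700.0):
    print(f"v = {v:.0f} km/s -> E_kin ~ {e_kin_erg(0.1, v):.1e} erg")
```

This gives roughly 1 × 10^48 to 3 × 10^48 erg, matching the range quoted above.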
We compare these tentative estimates against known extragalactic Type Iax SNe in Figure 13. These include the theoretical models by Lach et al. (2022, open diamonds), the proposed range of values for Type Iax by Foley et al. (2013, black dashed bar), and a few examples of extragalactic sources: 2008ha and 2010ae (gray; Stritzinger et al. 2014), 2007gd (blue; McClelland et al. 2010), 2014ck (Tomasella et al. 2016), 2019gsc (Srivastav et al. 2020), and 2020kg (green; Srivastav et al. 2021). Both the low E_kin and the ejecta mass estimate are consistent with the observed ranges for Type Iax sources. However, they do not agree with the model predictions for pure deflagrations of Chandrasekhar-mass CO WDs by Lach et al. (2022).
Merger and ejected mass
The stellar luminosity and temperature place the star at a location in the HR diagram similar to post-AGB stars. In these stars, the stellar luminosity comes from residual hydrogen burning and follows the core mass-luminosity relation, originally proposed by Paczyński (1970). Using the star's luminosity determined as L* = (40 ± 10) × 10^3 L⊙, this relation gives a core mass (which is essentially the mass of the remnant star) of M* = 1.20 ± 0.17 M⊙. The relation from Vassiliadis & Wood (1994) gives 0.1 M⊙ less.
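A minimal sketch of this conversion is given below, assuming the commonly quoted linear form of the Paczyński (1970) relation; the exact relation used by the authors may differ slightly, so this is only a consistency check.

```python
def core_mass_paczynski(l_lsun):
    """Commonly quoted form of the Paczynski (1970) core mass-luminosity
    relation, L/Lsun ~ 59250 * (Mc/Msun - 0.522); treat as an approximation."""
    return l_lsun / 59250.0 + 0.522

for l in (30e3, 40e3, 50e3):
    print(f"L = {l:.0f} Lsun -> Mc ~ {core_mass_paczynski(l):.2f} Msun")

# A ~30% luminosity offset of the merger models maps the same observed
# luminosity onto a higher core mass (cf. the 1.45 Msun value quoted below).
print(f"with L/0.7: Mc ~ {core_mass_paczynski(40e3 / 0.7):.2f} Msun")
```

For L = (40 ± 10) × 10^3 L⊙ this returns M_c ≈ 1.20 ± 0.17 M⊙, in line with the value quoted above.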
The application of these models to post-merger evolution can be queried.In the absence of detectable hydrogen, helium burning may be expected.The models of Vassiliadis & Wood (1994) find a similar luminosity for stable helium burning, however this helium very quickly runs out in post-AGB stars and the luminosity subsequently declines.
Merger models published by Schwab et al. (2016) and Schwab (2021) reach a similar location in the HR diagram as carbon burning stars, but take 10 to 20 kyr to reach this.They reach the post-AGB phases at 30% lower luminosity than predicted by the Vassiliadis & Wood (1994) for their mass.Lower mass merger models by Schwab & Bauer (2021) take 170 kyr to reach the post-AGB phase and similar models from Wu et al. (2022) reach the right luminosity but do so as cool red giant stars.All these merger models spend a long time as red giants.The models assume a red giant mass loss recipe.The evolution could conceivably be much accelerated if an instantaneous mass loss is assumed, either due to the explosion or akin to common envelope systems, thereby skipping the red giant phase and the early post-AGB phase.Accretion may also affect the evolution in the post-AGB region of the HR diagram.
Allowing for this 30% change in luminosity, the mass of the star would become 1.45 ± 0.2 M⊙. This mass range does not require the star to have a super-Chandrasekhar mass, as proposed by Gvaramadze et al. (2019), although this is not ruled out either.
The range of possible masses fits with WD merger products.The CO WD mass distribution peaks at approximately 0.6 M , while helium WDs are approximately 0.45 M .Two merging CO WDs or a CO+He WD can therefore reach the minimum remnant mass.O-Ne WDs are more massive and a merger involving one (Gvaramadze et al. 2019) would lead to a more massive remnant.The moderate neon abundance may not support involvement of such a WD.
The mass ejected in the merger should be included in the progenitor masses. The minimum ejecta mass is that derived from the X-ray emission (Oskinova et al. 2020) of 0.1 M⊙. The outer shell may contain further mass ejected during the merger. The hydrogen mass in the shell cannot come from the merger, as WDs have very thin hydrogen layers, and it should not be counted in the ejecta mass. The hydrogen is seen only in the VTSS detection and needs confirmation. If the outer shell is hydrogen-free, then it may add another 0.1 M⊙. The merger ejecta are therefore estimated at M_ej = 0.15 ± 0.05 M⊙.
This leaves the question of where the outer shell originates.If there is hydrogen in the outer shell, it can be swept-up ISM or from mass lost by the WD progenitor.If so the faint, outer shell is a relic planetary nebula, re-ionized by the luminous, hot post-merger star.
Dust
Pa 30 is only the second detection of dust emission in Type Iax SNe and the only known case of cold dust.The other example is the recent Type Iax SN 2014dt in M 61.Spitzer observations by Fox et al. (2016) found excess mid-infrared emission (3.6 and 4.5µm) one year posteruption in 2014dt, that could be a pre-existing dusty circumstellar matter or newly formed dust (∼ 10 −5 M of carbonaceous material).Coincidentally, 2014dt has been suggested as a possibly bound remnant (Kawabata et al. 2018).We note that cold dust, as in Pa 30, would not be detectable in extragalactic SNe.
The lack of a kicked stellar remnant
Theoretical models of Type Iax explosions predict a kicked stellar remnant (e.g., Jordan et al. 2012; Kashyap et al. 2018). However, the radial velocity of Pa 30's stellar remnant, as well as the mean radial velocity of its surrounding nebula, is nearly zero, while the tangential velocity is low (∼30 km s−1). The Gaia DR3 proper motion indicates the central star could have moved by at most ∼2″ since 1181 AD. It clearly remains at the center of Pa 30. These data are therefore already a challenge to existing models. The nearby faint star (Figure 8) cannot be a kicked companion, as this star is in the foreground (1.1 +0.4/−0.3 kpc; Bailer-Jones et al. 2021) with a much higher proper motion (21.285 mas yr−1). Lach et al. (2022) estimate kick velocities between 6.9 km s−1 and 369.8 km s−1 for bound remnants in Type Iax explosions. These could be in the right ballpark for Pa 30.
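The "at most ∼2″ since 1181 AD" statement follows from the low tangential velocity at the adopted distance; the short check below is ours, using the ∼30 km s−1 and ∼2.3 kpc values quoted above.

```python
# Angular displacement accumulated since 1181 AD for a transverse velocity of
# ~30 km/s at a distance of ~2.3 kpc.
v_t_kms = 30.0
d_kpc = 2.3

mu_mas_per_yr = v_t_kms / (4.74 * d_kpc)      # proper motion in mas/yr
years = 2022 - 1181
displacement_arcsec = mu_mas_per_yr * years / 1000.0

print(f"mu ~ {mu_mas_per_yr:.1f} mas/yr -> ~{displacement_arcsec:.1f} arcsec since 1181 AD")
```

This gives a proper motion of about 2.8 mas yr−1 and a total displacement of roughly 2″, consistent with the remnant remaining at the center of Pa 30.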
The (only) CO+ONe WD merger model is by Kashyap et al. (2018) with a predicted kick velocity of ∼ 90 km s −1 .However, an ONe WD may not be required here because of the remnant mass and the moderate neon abundance.
CONCLUSIONS
The association of Pa 30 with the supernova of 1181 AD (Paper I) makes this system one of only two Galactic Type Iax supernovae known, the other being Sgr A East (Zhou et al. 2021). Pa 30 is the only Galactic SN with a visible, bound stellar remnant. Its Wolf-Rayet-type optical spectrum is the only example known that belongs neither to the central star of a planetary nebula nor to the product of a high-mass Population I progenitor. This system, unique in several respects, is the nearest and youngest Type Iax remnant known and is amenable to detailed study. It is also the only example where a Type Iax stellar and nebular remnant can be related to an eruption observed almost a millennium earlier.
Most features of the optical spectrum can be reproduced with a stellar wind model with v_w = 15,000 km s−1 and a mass-loss rate Ṁ ∼ 10−6 M⊙ yr−1. The wind appears H-free, is strongly deficient in helium relative to C, N, O, and Ne, and has C/O ∼ 0.25 by number. The wind cannot be driven by radiation pressure alone. Some broad O viii lines not reproduced by the wind model can be well fitted with hot gas at T ≈ 4 MK, either shocked gas in the outer or inner wind, where the opacity of the wind favours the former. This gas, with the same abundances as the wind, is seen in lines of O viii and possibly C vi. The hot gas shows a low neon abundance, Ne/O < 0.15 by number.
The infrared spectrum can be fitted with a circumstellar shell where shorter wavelengths (10-30µm) show line emission and longer wavelengths show emission from cold dust, with M dust ∼ 8 × 10 −3 M .The shell is also tentatively detected in wide-field Hα imaging.This is only the second detection of dust in a Type Iax SN and the only detection of cold dust.The origin of this dust is open: possibilities are supernova dust, heated interstellar dust, or dust from previous mass loss from the system.
On timescales of the order of a century, the visual light curves of the central star show evidence of ∼ 0.5 magnitude dimming between 1925 and 1950 around a value of B j ∼15.8 mag.Recent mid-infrared data from WISE may also show a slight fading over 11 years of about 0.1 magnitudes for W 1 and W 2.
The abundances, ejecta mass, explosion luminosity, and energetics are all consistent with a Type Iax supernova. There are two scenarios for this type of supernova: a pure deflagration of a Chandrasekhar-mass CO WD, and a double-degenerate merger. The observed parameters and the presence of a highly evolved post-eruption star indicate that the stellar remnant is an example of the second scenario: a merger of two WDs (also proposed by Gvaramadze et al. 2019; Oskinova et al. 2020). The luminosity derived from the stellar wind modeling, L = (40 ± 10) × 10^3 L⊙ for the assumed distance and reddening, indicates a remnant mass of 1.2 ± 0.2 M⊙ to 1.45 ± 0.2 M⊙, depending on the stellar models used for the post-AGB region of the HR diagram. This allows for the possibility of masses below the Chandrasekhar mass, in which case the object might not go supernova again (Gvaramadze et al. 2019). The mass is consistent with a merger between two CO WDs or a high-mass CO+He WD pair. The low neon abundance may argue for the former. The ejecta indicate that 0.15 ± 0.05 M⊙ was lost during the merger. The outer shell, if indeed H-rich, cannot be explained purely as merger ejecta and could be swept-up ISM or a fossil planetary nebula from pre-merger mass loss.
Merger products are expected to be fast rotators. Our observations cannot establish a short rotational period and this remains to be tested. Merger products are also assumed to be the progenitors of highly magnetic WDs. The absence of detectable Zeeman splitting places an upper limit of B < 2.5 MG, well below the values previously predicted by Gvaramadze et al. (2019, 200 MG), although the field may increase during future contraction of the remnant.
The masses of the merger progenitor stars, the ejecta and circumstellar mass, the non-detection of a significant magnetic field, the fact that the remnant has remained at the center of its bound nebula post-eruption, and the possible modest photometric dimming of the stellar remnant are key observed properties that somewhat deviate from the typical (inferred or assumed) properties of Type Iax SNe, and therefore merit detailed investigation to improve our understanding of Type Iax evolutionary models.
Figure 1.Stellar spectra from SparsePak/WIYN, OSIRIS/GTC, and a small amateur telescope (P.Le Dû) smoothed and scaled to the same flux level.The dereddened spectra are also shown for comparison (using interstellar extinction from Paper I).The major discontinuity bluewards of 4000 Å is seen in the OSIRIS and amateur spectra but the SparsePak spectrum did not go below 4200 Å.The OSIRIS spectrum had the slit slightly off-center to avoid a nearby bright star.To correct for the resulting slight flux underestimation after flux calibration the spectrum was scaled to match the reliable SparsePak/WIYN flux level.
Figure 2. A0 (550 nm) extinction values for field stars taken from the Gaia DR3 catalogue. Black points are for stars within a radius of 5 arcmin of Pa 30, and red points for stars within 2 arcmin. The blue line shows the extinction curve for this direction from Vergely et al. (2022), and the black dashed line marks the parallax of Pa 30.
Figure 4. A cloudy model for the hot gas emission at a gas temperature of 4 MK (red line) and AV = 3.0 mag, compared to the central star's SparsePak/WIYN spectrum (black). The spectrum has an estimated constant continuum level of 1.45 (on the scale of the plot) subtracted. For further description see Sect. 3.3.
Figure 6. Hot-gas cloudy model (red) against the residual SparsePak/WIYN (black) and OSIRIS/GTC (blue) spectra after subtraction of the wind model. A residual continuum of 0.1 for the WIYN spectrum and 0.2 for the GTC spectrum was subtracted shortward of 5000 Å. Vertical bars indicate the wavelengths of diffuse interstellar bands which are visible in the spectra. The cloudy model was scaled to the 4363 Å line with an extinction AV = 3.0.
Figure 8. IPHAS r-band image of Pa 30 identifying the faint and bright stars mentioned in Sect. 4. The green circle (radius 22″) is centered on the stellar remnant and indicates the distance to the brightest star in the field. Gaia DR3 sources within a 25″ radius of the remnant are marked as blue circles.
Figure 9. Extended light curve of the stellar remnant from 1920 until today. Top: DASCH Bj GSC-calibrated photometry from 1920 to 1950 with the best-fit linear trend line plotted. Middle: limited photographic (pre-2000) and CCD (post-2010) optical photometry in the B-band; the respective surveys are indicated in the legend. Bottom: ATLAS binned photometry in the "cyan" (c) lower band (4200-6500 Å) and the "orange" (o) upper band (5600-8200 Å).
Figure 10.WISE W 1 (blue; 3.4µm) and W 2 (red; 4.6µm) light curves binned per epoch.The errors are the standard deviations of the binned measurements.The abscissa is given in years and Mean Julian Date.
Figure12.Infrared spectral energy distribution of Pa 30 including the new photometry in Sect.5.2 (60 K blackbody; blue) and the OSIRIS/GTC spectrum (red).There is no IR excess below 10µm.The photometry below 3µm and spectrum have been dereddened using extinction from Paper I.
Figure 13.Comparison of the ejecta mass against kinetic energy released by Type Iax SNe.Pa 30/SN 1181 is marked in red.Lach et al. (2022) models are also shown (open diamonds), while the range of expected values for Iax by Foley et al. (2013) is shown in black (dashed).A few examples of known extragalactic Iax are also shown: 2008ha and 2010ae data by Stritzinger et al. (2014), 2007gd by McClelland et al. (2010), 2014ck by Tomasella et al. (2016), 2019gsc by Srivastav et al. (2020), and 2020kg by Srivastav et al. (2021).
Table 1. Parameters for the cmfgen models. The last two columns show the values from Gvaramadze et al. (2019) and from the hot gas model in Sect. 3.3.
required for O viii is >1 MK, far above the photospheric or wind temperature.
Table 2. Predicted brightest ultra-high excitation (UHE) lines from the cloudy models. The line intensities, I0, are relative to the 4337 Å O viii 8-9 transition and are not corrected for extinction.
Large Rapidity Gap Method to Select Soft Diffraction Dissociation at the LHC
In proton-proton (pp) collisions, any process that involves exchange of the vacuum quantum numbers is known as a diffractive process. A diffractive process without a large momentum transfer Q2 is called a soft diffractive process. Diffractive processes are important for understanding nonperturbative QCD effects, and they also constitute a significant fraction of the total pp cross section. Diffractive events are typically characterized by a region of the detector without particles, known as a rapidity gap. In order to observe diffractive events in this way, we consider the pseudorapidity acceptance in the forward region of the ATLAS and CMS detectors at the Large Hadron Collider (LHC) and discuss methods to select soft diffractive dissociation in pp collisions at √s = 7 TeV. It is shown that, within the limited detector rapidity acceptance, it is possible to select diffractive dissociation events by requiring a rapidity gap in the event; however, without using forward detectors, it does not seem possible to fully separate single and double diffractive dissociation events. The Zero Degree Calorimeters can be used to distinguish the type of diffractive process to a certain extent.
Introduction
The measurement of the main characteristics of diffractive interactions is essential to improve our understanding of pp collisions. However, the modelling of diffraction is still mainly generator-dependent, and there is no unique, agreed-upon experimental definition of diffraction [1,2].
While the physics of diffractive dissociation at the LHC is very important, the detector capabilities in the forward region are limited. In this paper, using the rapidity gap technique and considering the forward rapidity coverage of the LHC experiments ATLAS [3] and CMS [4], a number of studies are carried out to select soft diffractive dissociation. In addition, the potential of the Zero Degree Calorimeter for the selection of diffractive events is discussed.
Event Classification
In pp (or more generally hadron-hadron) scattering, interactions can be classified as either elastic or inelastic by the characteristic signatures of the final states. Furthermore, it is conventional to divide inelastic processes into diffractive and nondiffractive parts. In the theoretical picture, hadronic diffractive dissociation is explained as being mediated by the exchange of the Pomeron, which carries the quantum numbers of the vacuum; thus, the initial and final states in the scattering process have the same quantum numbers. If the Pomeron exchange is additionally associated with a hard scattering (such as the production of jets, a b-quark, or a W boson), the process is known as hard diffractive; otherwise, it is soft diffractive dissociation. Introductory reviews on this can be found in [5,6]. Diffractive events at hadron colliders can be classified into the following categories: single and double diffractive dissociation, and central diffraction (a.k.a. "Double Pomeron Exchange"), with higher order "multi-Pomeron" processes [7]. Thus, the total pp cross section can be written as the following series:

σ_tot = σ_EL + σ_ND + σ_SDD + σ_DDD + σ_DPE + σ_MPE,

where EL denotes elastic scattering, ND is the nondiffractive process, SDD (DDD) is single (double) diffractive dissociation, DPE corresponds to Double Pomeron Exchange, and MPE refers to Multi-Pomeron Exchange. Precise measurements of the cross sections of each process separately are important, as they provide input to phenomenological models and are needed for tuning the parameters of Monte Carlo event generators. At 7 TeV, diffractive processes together with elastic scattering represent about 50% of the total pp cross section [8,9].

Figure 1: Illustration of a single (top) and double (bottom) diffractive dissociative event in which a Pomeron (P) is exchanged in a pp collision. M_X and M_Y are the invariant masses of the dissociation systems X and Y, respectively. In single diffractive dissociation, M_Y = m_p, where m_p is the mass of the intact proton. Δη refers to the size of the large rapidity gap.
In single diffractive dissociation, one of the incoming protons dissociates into a "low-mass" system (a system of particles with low invariant mass with respect to the centre of mass energy of the collision) while in double diffractive dissociation both of the incoming protons dissociate into "low-mass" systems as represented in Figure 1.
Diffractive events are characterized by a large gap in the pseudorapidity distribution of final state particles. (The pseudorapidity of a particle is defined as η = -ln tan(θ/2), where θ is the polar angle with respect to the beam direction (z-axis), and the rapidity is y = (1/2) ln[(E + p_z)/(E - p_z)], where E is the energy and p_z the longitudinal momentum of the particle. Pseudorapidity and rapidity are equal for massless particles.) The large rapidity gap can be defined as the difference between the rapidity of the diffractively scattered proton and that of the particle closest to it in (pseudo)rapidity. However, the existing ATLAS and CMS detectors are not well suited for measuring the forward rapidity gaps. Therefore, from the experimental point of view, rapidity gaps should be defined by a total absence of particles in a particular interval of pseudorapidity. The large rapidity gap, Δη, is the largest of the rapidity gaps in a final state and determines the type of the diffractive process.
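As a concrete illustration of these definitions, the following sketch computes pseudorapidity and the largest rapidity gap of an event. It is not taken from the paper's analysis code; the function names, the particle list layout, and the use of the 5.2 acceptance edge as gap boundaries are assumptions made for illustration.

```python
# Illustrative sketch: pseudorapidity and the largest rapidity gap of an event.
import math

def pseudorapidity(px, py, pz):
    """eta = -ln tan(theta/2), with theta the polar angle w.r.t. the beam (z) axis."""
    p = math.sqrt(px * px + py * py + pz * pz)
    theta = math.acos(pz / p)
    return -math.log(math.tan(theta / 2.0))

def largest_gap(etas, eta_max=5.2):
    """Size of the largest pseudorapidity gap between adjacent particles
    (and the acceptance edges) inside |eta| < eta_max."""
    inside = sorted(e for e in etas if abs(e) < eta_max)
    if not inside:
        return 2.0 * eta_max                    # empty acceptance: one big gap
    edges = [-eta_max] + inside + [eta_max]
    return max(b - a for a, b in zip(edges, edges[1:]))

# Example: particles clustered at negative eta leave a forward (edge) gap.
etas = [pseudorapidity(0.2, 0.1, -8.0), -2.1, -3.5]
print(round(largest_gap(etas), 2))
```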
Fractional Longitudinal Momentum Loss.
In single diffractive collisions, one of the two incident protons emits a Pomeron and remains intact, losing a few percent of its initial longitudinal momentum. The fractional longitudinal momentum loss of the intact proton is related to the momentum fraction taken by the Pomeron:

ξ = (p_z^initial - p_z^final) / p_z^initial,

where p_z^final is the final and p_z^initial is the initial longitudinal momentum of the proton. The Pomeron scatters with the other beam proton, and that proton dissociates into a system X of particles with low invariant mass, M_X. DDD processes are described by the invariant masses M_X and M_Y of the dissociation systems X and Y, respectively, as shown in Figure 1. The fractional longitudinal momentum loss of the proton can be determined by measuring the invariant mass of the dissociation system(s):

ξ_X = M_X^2 / s,   ξ_Y = M_Y^2 / s,

where √s is the centre-of-mass energy of the pp collision. In the following, the convention M_X > M_Y is adopted and ξ_X is referred to as ξ.
Diffractive Mass.
The mass of the diffractive system, M_X, can be measured experimentally by summing the four-momenta of all final state particles in the dissociation system, M_X^2 = (Σ E_i)^2 - |Σ p_i|^2. However, it is not possible to make a precise measurement throughout the whole pseudorapidity range due to the lack of detector coverage in the very forward regions [10,11]. Therefore, one can expect some differences between the measured mass of the diffractive system and the actual mass. Without their forward detectors, the nominal rapidity coverage of the ATLAS and CMS experiments is |η| < 5.2. The difference between the diffractive masses reconstructed from particles in |η| < 5.2 and in the full phase space is shown in Figure 2 for single diffractive events simulated by PYTHIA 8.142 [12]. It is not possible to reconstruct the actual mass of the diffractive system within the limited rapidity range due to the particles that scatter into the very forward rapidities and escape detection. It is clear that the wider the range of rapidity covered, the more accurately the diffractive mass can be determined.
Large Rapidity Gap.
The gap signature in diffractive dissociation has been observed in previous hadron-hadron collision experiments [13,14]. The type of diffractive process can be determined by looking at the number of large rapidity gaps and at their position in rapidity space. Single diffractive dissociation processes are characterized by an edge (forward) gap at only one side of the detector, while double diffractive dissociation processes are characterized by a central gap in the central pseudorapidity region of the detector.
The large rapidity gap in an event and the variable ξ are closely related to each other. In the SDD case, the pseudorapidity difference between the intact proton and the X system is given as Δη ≈ -ln ξ. If the size of the large rapidity gap or the invariant mass of the dissociation system is measured, the fractional longitudinal momentum loss of the proton can be determined.
Measuring Diffractive Events
The measurements of diffractive processes can be based on the determination of the size of the large rapidity gap, Δη, and the correlation between Δη and ξ can be used. However, due to the forward acceptance limitations, it is not possible to measure the gaps in the very forward rapidities or, in some cases, the whole size of the actual gap. It is therefore important to study the kinematical variables of the diffractive processes within the detector limits where the experimental measurements will be performed. Figure 3 shows the relation between the size of the large rapidity gap Δη and log10 ξ for single diffractive dissociation events. As can be seen from the figure, after applying the detector acceptance cuts, the correlation between Δη and log10 ξ is still present.
In this paper, the relation ξ = M_X^2 / s is used for the calculation of ξ. First, the largest gap in an event is reconstructed in the full phase space. All the particles with pseudorapidity less than (or equal to) the lower boundary of the gap are considered as one system and the rest as the other. Then, the four-vectors of all particles in the given system are summed to obtain the invariant mass and thus ξ. The RIVET [15] analysis toolkit was used throughout the analyses in this study.
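The sketch below illustrates this procedure: split the event at the largest rapidity gap, sum the four-vectors of each system, and form ξ = M_X^2/s. It is an illustrative reimplementation, not the RIVET analysis used in the paper; the data layout, taking the heavier side as X, and √s = 7 TeV are assumptions.

```python
# Sketch of the xi calculation from the gap-based event splitting.
import math

def invariant_mass(particles):
    """particles: iterable of (E, px, py, pz) four-vectors."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

def xi_from_gap(particles, etas, sqrt_s=7000.0):
    """Split the event at the largest rapidity gap, take the heavier of the two
    systems as X (convention M_X > M_Y) and return xi = M_X^2 / s."""
    if len(particles) < 2:
        return 0.0
    order = sorted(range(len(etas)), key=lambda i: etas[i])
    gaps = [etas[order[i + 1]] - etas[order[i]] for i in range(len(order) - 1)]
    k = max(range(len(gaps)), key=lambda i: gaps[i])      # largest gap position
    low  = [particles[i] for i in order[:k + 1]]          # system below the gap
    high = [particles[i] for i in order[k + 1:]]          # system above the gap
    m_x = max(invariant_mass(low), invariant_mass(high))
    return m_x ** 2 / sqrt_s ** 2
```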
Detector Rapidity Acceptance.
The edge-gap differential cross section, dσ/dΔη, for single and double diffractive dissociation events in the different detector rapidity acceptances |η| < 5.2 and |η| < 8.1 is given in Figure 4. The distributions are normalized by the cross sections of the processes obtained from PYTHIA 8. These cross sections for the different processes are given in Table 1. The minimum bias event class corresponds to total inelastic collisions. As can be seen in Figure 4, the large rapidity gap distribution for SDD and DDD events differs slightly between the two detector rapidity acceptances. A clear distinction between SDD and DDD processes is possible within the larger detector acceptance |η| < 8.1, but not in the limited acceptance. For the rest of this analysis, |η| < 5.2 is used.
Low-p_T Threshold.
It is important to make a precise measurement of the size of the large rapidity gap, since it is directly related to the mass of the dissociation system and the longitudinal momentum loss of the proton. Several factors, such as radiation from multiple parton-parton interactions and accelerator-related radiation, can affect the measurement. Also, limitations of detector response and resolution and the electronic noise will not allow the measurement of very low transverse momentum (p_T) particles. All these factors should be considered when using the large rapidity gap method for the measurement of diffractive dissociation events. In Figure 5, edge-gap distributions are given for minimum bias events with various low-p_T thresholds on the final state particles. When the threshold is increased, soft particles (e.g., pions) with p_T below the threshold are no longer counted; therefore, the measured gap size becomes larger.
In addition, the distributions of dσ/dΔη and dσ/d log10 ξ are given for different p_T cuts in Figure 6 for edge gaps and in Figure 7 for central gaps. As clearly shown in the figures, the gap size and the ND contribution in the minimum bias event content become larger with increasing p_T cut. A cut of p_T > 500 MeV for all final state particles enhances the size of the gap for ND events. On the other hand, each experiment must determine a reasonable p_T threshold given the capabilities of its detector, and a low p_T cut, such as 100 MeV, might not be suitable for measurements with data due to the detector noise at this level. Therefore, a cut of p_T > 200 MeV for all final state particles seems to be an ideal cut for performing the measurements experimentally.
The Δη > 3 cut is for edge gaps; for central gaps, Δη > 4 appears to be a better cut. When the Δη > 3 cut is applied together with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2, it seems possible to suppress a large fraction of ND events and select the diffractive dissociation events in minimum bias data. These cuts will also allow the measurements to be performed experimentally.
Distinguishing SDD and DDD Events.
From a phenomenological point of view, looking at the number and position of the large rapidity gaps in rapidity space, one can differentiate the type of the diffractive process.However, considering the limited rapidity coverage of the detectors, this might not be possible since the gap sizes in the very forward rapidities are not measured precisely.
Edge Gaps.
The distinguishability of SDD and DDD events in |η| < 5.2 is studied by requiring an edge gap in the events. The visible cross sections of events that pass the various Δη cuts with p_T > 200 MeV for all final state particles are given in Table 2. Similarly, the visible cross sections for different cuts on the p_T of the final state particles for Δη > 3 are given in Table 3. The cross section for ND events is higher for high p_T cuts. On the other hand, ND events are suppressed with increasing Δη cut. If a cut of p_T > 200 MeV is applied for all final state particles in |η| < 5.2 and the events with an edge gap Δη > 3 are selected, 98.8% of the minimum bias events will be diffractive dissociation events. However, with these cuts, one cannot distinguish SDD and DDD events, since 42.1% of the diffractive dissociation events will be DDD events. These results are presented as a histogram in Figure 8.
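A minimal sketch of the bookkeeping behind such purity numbers is shown below. The event records, class labels ("SDD", "DDD", "ND") and field names are assumptions made for illustration, not the paper's actual code.

```python
# Count, per generator-level class, how many minimum bias events pass the
# edge-gap selection; fractions such as "98.8% diffractive" follow from these counts.
from collections import Counter

def select_edge_gap(events, gap_cut=3.0, pt_cut=0.2, eta_max=5.2):
    """events: iterable of dicts with keys 'label' and 'particles'
    (each particle a dict with 'pt' in GeV and 'eta')."""
    passed = Counter()
    for ev in events:
        etas = sorted(p["eta"] for p in ev["particles"]
                      if p["pt"] > pt_cut and abs(p["eta"]) < eta_max)
        if not etas:
            continue
        edge_gap = max(etas[0] - (-eta_max), eta_max - etas[-1])
        if edge_gap > gap_cut:
            passed[ev["label"]] += 1
    return passed

# e.g. DDD fraction of the selected sample: passed["DDD"] / sum(passed.values())
```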
Central Gaps.
A similar event selection is performed by requiring a central gap in the events. These events should be dominated by DDD events, in which both diffraction systems are produced. The visible cross sections of events that pass the various Δη cuts with p_T > 200 MeV for all final state particles are given in Table 4. The visible cross sections for different cuts on the transverse momentum of the final state particles for Δη > 4 are presented in Table 5. As indicated in the tables, the visible cross section decreases with increasing Δη. The ND contribution in minimum bias data is dominant for the larger p_T cuts, and with increasing p_T cut the visible cross section for SDD events increases while it decreases for DDD events. If a cut of p_T > 200 MeV is applied for all final state particles in |η| < 5.2 and the events with a central gap Δη > 4 are selected, 95.3% of the minimum bias events will be diffractive dissociation events. SDD events in this diffractive dissociation event content will be almost completely suppressed. These results are summarized by a histogram in Figure 9.
Although central gaps look like an ideal way to separate SDD and DDD events, only a small fraction of the DDD cross section has a central gap; DDD events mostly have an edge gap. In a class of DDD events with a low diffractive mass on one side, the particles are beyond the acceptance of the detector and are not detected, so these events look like SDD events in the limited detector rapidity acceptance. The fraction of DDD events which can be tagged as SDD events in the limited rapidity range, |η| < 5.2, is calculated. Results for different cuts on the size of the edge gap Δη are given in Table 6. As indicated in the table, for Δη > 3, 44.1% of the DDD events can be tagged as SDD events in the limited detector rapidity coverage. This event fraction is smaller for larger gap sizes.
Multiplicity and Total Energy Deposition.
In order to distinguish SDD events from DDD events with a low diffractive mass, the distributions of Σ(E ± p_z), the total energy deposition, and the particle multiplicity were investigated. The sum in Σ(E ± p_z) runs over all final state particles in |η| < 5.2. For momenta much larger than the particle mass, the longitudinal momentum is calculated as p_z ≈ E cos θ, where E is the energy of the particle and θ is the angle between the particle momentum p and the beam axis. The distributions for the events which have an edge gap Δη > 3 with a cut of p_T > 200 MeV for all final state particles are given in Figure 10. As shown in the figure, the shapes of the distributions for the different event classes look very similar; therefore, it seems not possible to separate SDD from DDD events with these cuts by using edge gaps. The distributions for the ND event class are not shown since very few ND events pass the event selection. The same distributions were also studied for central gaps. The distributions for the events which have a central gap Δη > 4 with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2 are given in Figure 11.
The distributions for events with an edge gap do not distinguish SDD and DDD (Figure 10), and, as regards those for central gaps (Figure 11), there are some differences, but the SDD contribution is anyway very suppressed in these events.
Events Tagging at Very Forward Rapidities.
As discussed in the previous sections, a class of DDD events with a low diffractive mass on one side can be tagged as SDD events in the limited rapidity acceptance of the detector. Since the particles in such events dissociate into the forward rapidities, looking at the particle activity in the very forward detectors can provide more accurate information about the type of the process. The ATLAS and CMS Zero Degree Calorimeters (ZDC) are located ~140 meters away from the interaction point (IP) on both sides, at the end of the straight LHC beam-line section [16,17]. The ZDCs cover the pseudorapidity region |η| > 8.1 and are able to detect very forward neutral particles (n, γ, π0) at a 0° polar angle.
The total energy deposition and the multiplicity of the neutral particles in the ZDC acceptance are studied in order to investigate whether there is a way to separate SDD and low-mass DDD events. An edge gap was required in the events, with a gap size Δη > 3 and with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2. Additionally, a non-zero energy deposition was required on the side opposite the gap (at either η < 0 or η > 0, depending on the gap position). In Figure 12, the total energy deposition and the multiplicity of the neutral particles are given for the ZDC on the side opposite the gap and for the ZDC on the side with the gap, with a cut of 100 MeV on the transverse momentum of the neutral particles (p_T^0). The p_T^0 > 100 MeV cut is reasonable given the ZDC noise levels. Also, the fraction of events which have at least one neutral particle in the ZDC on the gap side, for different cuts on the transverse momentum of the neutral particles and different cuts on the size of the gap, is given in Table 7 for SDD and in Table 8 for DDD processes. As can be seen, for the events that have an edge gap with a size of Δη > 3 and p_T^0 > 100 MeV, SDD events have almost no neutral particle in this ZDC, while 60.2% of the DDD events have at least one neutral particle with p_T^0 > 100 MeV scattering into these forward rapidities.

Figure 12: The total energy deposition and the multiplicity of the neutral particles with p_T^0 > 100 MeV in the ZDC detectors for the events that have an edge gap in |η| < 5.2 with a gap size Δη > 3 and with a non-zero energy deposition on the side opposite the gap (either at η < 0 or η > 0, depending on the gap position). A cut of p_T > 200 MeV was applied to the final state particles in |η| < 5.2 to find the size of the gap. Total energy deposition is given in (a, b) and particle multiplicity in (c, d) for the two ZDCs (the one on the side opposite the gap and the one on the side with the gap). The distributions are normalized to the number of events, N, that pass the analysis cuts.

Although the ZDC can provide a better distinction between SDD and DDD events, about 40% of the DDD events do not have any particles within the ZDC and, therefore, cannot be distinguished from SDD events in |η| < 5.2.
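The tagging criterion can be sketched as follows; the particle records, field names, and the ±1 convention for the gap side are assumptions for illustration, not the analysis code used in the paper.

```python
# Tag likely low-mass DDD events by neutral activity in the ZDC on the gap side.
NEUTRALS = {"n", "gamma", "pi0"}

def neutral_in_gap_side_zdc(particles, gap_side, pt0_cut=0.1, eta_zdc=8.1):
    """gap_side is +1 if the edge gap is at positive eta, -1 otherwise.
    Returns True if at least one neutral particle with pT > pt0_cut (GeV) lies
    beyond |eta| = eta_zdc on the same side as the gap -- the signature used to
    tag low-mass DDD events that would otherwise look like SDD."""
    for p in particles:
        if p["name"] not in NEUTRALS:
            continue
        same_side = p["eta"] * gap_side > 0
        if same_side and abs(p["eta"]) > eta_zdc and p["pt"] > pt0_cut:
            return True
    return False
```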
The selection efficiency can be improved by extending the nominal rapidity coverage of detectors.
4.5. Bias from Vertex.
One of the background sources for diffractive processes at the LHC is the radiation coming from noncolliding bunches. The number of charged particles, N_ch, associated with a primary vertex can be different for different processes. In some cases, such as low-mass diffractive dissociation, the system can dissociate into the very forward rapidities and thus all the particles may appear at small polar angles. The problem in this case is the limited detector instrumentation in this region. The trackers of ATLAS and CMS, which are used to measure charged particles, cover the pseudorapidity region |η| < 2.5. Therefore, it will not be possible to measure charged particles when the system dissociates into the very forward rapidities. Such events may not form a reconstructable primary vertex if all the particles are outside of the tracker region.

Figure 13: Diffractive dissociation events without a primary vertex and with a primary vertex for different numbers of charged particles, N_ch. The tracker region is taken as |η| < 2.5, and charged particles with p_T > 200 MeV are used to form the vertex. Only the events that have an edge gap Δη > 3 with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2 are considered. The vertex requirement suppresses the events that have Δη > 8, which correspond to the very low-mass soft diffractive processes.
Figure 13 shows the large rapidity gap distribution for diffractive events with and without a primary vertex. In this limited tracker range, a primary vertex cut is not practical for performing measurements of diffractive dissociation events.
Model Dependency.
As discussed in the previous sections, a calculation of the diffractive mass can be made through its relation to the size of the rapidity gap. However, this is model dependent: for a given gap size, the range of ξ values can be different for different models. As an example, two different sizes of edge gap, 3.0 < Δη < 3.2 and 5.0 < Δη < 5.2, are considered and the diffractive masses of the dissociation systems are calculated for SDD events. The events are simulated by PYTHIA 8 and PYTHIA 6-D6T [18], which use different sets of parameters for the simulation. The ranges of ξ values and the difference in the range between the models are given in Figure 14.
A study of the model dependence of correcting an inclusive minimum bias measurement to one for SDD processes only is presented in Figure 15. The difference is quite small, especially in the Δη > 3 region, but it could of course be larger with other models. The correction to get to SDD is large (about a factor of 2), and it is preferable to measure the Δη distribution without performing such a correction, such that the dependence on MC models is minimized.
Conclusions
Methods to select soft diffractive dissociation at the LHC experiments ATLAS and CMS have been studied by using large rapidity gaps in the events. It is shown that the larger the rapidity coverage, the more precisely the measurements of diffractive dissociation events can be made. A primary vertex requirement in the event selection is not practical due to the limited tracker range |η| < 2.5. In particular, very low-mass soft diffractive events which dissociate into the very forward rapidities and have a gap with a size of Δη > 8 are suppressed by the primary vertex requirement.
In the limited detector rapidity coverage, |η| < 5.2, one can select a sample of events of which 98.8% are diffractive dissociation, according to PYTHIA 8, by requiring an edge gap with a gap size Δη > 3 and a cut of p_T > 200 MeV for all final state particles. However, with this event selection, 42.1% of the diffractive dissociation events will be DDD events, and it does not seem possible to fully separate SDD from DDD events by edge-gap reconstruction in |η| < 5.2. Central gaps look like a better candidate for distinguishing SDD and DDD events; however, only a small fraction of DDD events have a central gap. Using the Zero Degree Calorimeters, a more accurate distinction between SDD and DDD events can be made to a certain degree. This study also puts forward the importance of the ZDC in diffractive measurements.
Figure 2: Distribution of the diffractive mass for SDD events simulated by PYTHIA 8.142. Generated diffractive mass (red line) and calculated diffractive mass from all particles in the full coverage (green line) and in the limited coverage |η| < 5.2 (blue line).
Figure 3: The relation between the size of the large rapidity gap, Δη, and log10 ξ for SDD events. The gap is defined as an edge gap. (a) The whole pseudorapidity range is used without any transverse momentum (p_T) threshold on particles. (b) Particles within |η| < 5.2 without any p_T threshold. (c) Particles within |η| < 5.2 with p_T > 200 MeV.
Figure 7: dσ/dΔη (left) and dσ/d log10 ξ (right) for different event classes. The gap is defined as a central gap, and no p_T cut (a, b), p_T > 200 MeV (c, d), and p_T > 500 MeV (e, f) cuts are applied for all final state particles in |η| < 5.2.
Figure 8: Large rapidity gap distribution for the events that have an edge gap Δη > 3 with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2. The MinBias event class is used as a reference for the ratio.
Figure 9: Large rapidity gap distribution for the events that have a central gap Δη > 4 with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2. The MinBias event class is used as a reference for the ratio.
Figure 10: Distributions of Σ(E ± p_z) (a), total energy deposition (b), and particle multiplicity (c) for the events that have an edge gap Δη > 3 with a p_T > 200 MeV cut on final state particles in |η| < 5.2. The distributions are normalized to the number of events, N, that pass the analysis cuts. The MinBias event class is used as a reference for the ratio.
Figure 11: Distributions of Σ(E ± p_z) (a), total energy deposition (b), and particle multiplicity (c) for the events that have a central gap Δη > 4 with a p_T > 200 MeV cut on final state particles in |η| < 5.2. The distributions are normalized to the number of events, N, that pass the analysis cuts. The MinBias event class is used as a reference for the ratio.
Figure 14: The range of ξ values for SDD events with a gap size of (a) 3.0 < Δη < 3.2 and (b) 5.0 < Δη < 5.2. Events are simulated by PYTHIA 8 and PYTHIA 6-D6T, and only edge gaps are considered, with a cut of p_T > 200 MeV for the final state particles in |η| < 5.2.
Figure 4: Large rapidity gap distribution for single and double diffractive dissociation events in the different detector rapidity acceptances, |η| < 5.2 and |η| < 8.1. The gap is defined as an edge gap. The ratio of single to double diffractive dissociation events is given on the ratio plot.
Table 2: Visible cross sections for different Δη cuts. A cut of p_T > 200 MeV is applied for all final state particles in |η| < 5.2 and the gap is defined as an edge gap.
Figure 5: Large rapidity gap distribution for minimum bias events (SDD + DDD + ND) for different low-p_T thresholds (no p_T cut, p_T > 100 MeV, p_T > 200 MeV, and p_T > 500 MeV). The gap is defined as an edge gap from all final state particles in |η| < 5.2. The "MinBias, edge gaps, no p_T cut" selection is used as a reference on the ratio plot.
Table 3: Visible cross sections for different cuts on the transverse momentum of the final state particles in |η| < 5.2 for the Δη > 3 cut. The gap is defined as an edge gap.
Table 4: Visible cross sections for different Δη cuts. A cut of p_T > 200 MeV is applied for all final state particles in |η| < 5.2 and the gap is defined as a central gap.
Table 5: Visible cross sections for different cuts on the transverse momentum of the final state particles in |η| < 5.2 for the Δη > 4 cut. The gap is defined as a central gap.
Table 6: The fraction of DDD events which can be tagged as SDD events in the limited rapidity range |η| < 5.2. The gap is defined as an edge gap.
Table 7: The fraction of SDD events which have at least one neutral particle in the ZDC on the side with the gap, for different cuts on the size of Δη and with different thresholds on the transverse momentum of the neutral particles. The gap is defined as an edge gap and the final state particles within |η| < 5.2 with p_T > 200 MeV are used to find the size of the gap.
Table 8: The fraction of DDD events which have at least one neutral particle in the ZDC on the side with the gap, for different cuts on the size of Δη and with different thresholds on the transverse momentum of the neutral particles. The gap is defined as an edge gap and the final state particles within |η| < 5.2 with p_T > 200 MeV are used to find the size of the gap.
Table 9: The fraction of events without a primary vertex and with a primary vertex for different numbers of charged particles, N_ch. The tracker region is taken as |η| < 2.5 and charged particles with p_T > 200 MeV are used to form the vertex. Only the events that have an edge gap Δη > 3 with a cut of p_T > 200 MeV for all final state particles in |η| < 5.2 are considered.
The fraction of events without a primary vertex and with a primary vertex for different numbers of charged particles, N_ch, is given in Table 9 for the different event classes. Requiring a primary vertex reconstructed from two or more charged particles suppresses at least 33.2% of the diffractive events. In particular, very low-mass soft diffractive dissociation events with a very large gap, Δη > 8, are suppressed by the primary vertex cut. Therefore, instead of a primary vertex cut, one should investigate other experimental ways of eliminating the background coming from noncolliding bunches.
"Physics"
] |
On Third Hankel Determinant for Certain Subclass of Bi-Univalent Functions
This study presents a subclass S(β) of bi-univalent functions within the open unit disk region D. The objective of this class is to determine the bounds of the Hankel determinant of order 3, H₃(1). In this study, new constraints for the estimates of the third Hankel determinant for the class S(β) are presented, which are of considerable interest in various fields of mathematics, including complex analysis and geometric function theory. Here, we define these bi-univalent functions as S(β) and impose constraints on the coefficients |a_n|. Our investigation provides the upper bounds for the bi-univalent functions in this newly developed subclass, specifically for n = 2, 3, 4, and 5. We then derive the third Hankel determinant for this particular class, which reveals several intriguing scenarios. These findings contribute to the broader understanding of bi-univalent functions and their potential applications in diverse mathematical contexts. Notably, the results obtained may serve as a foundation for future investigations into the properties and applications of bi-univalent functions and their subclasses.
Introduction
Let A denote the collection of functions analytic in the open unit disk D = {z : z ∈ ℂ and |z| < 1}. An analytic function f ∈ A has a Taylor series expansion of the form:

f(z) = z + a_2 z^2 + a_3 z^3 + ... = z + Σ_{n=2}^∞ a_n z^n.   (1)

The class of all functions in A which are univalent in D is denoted by S. The Koebe One-Quarter Theorem [1] ensures that the image of D under each f ∈ S contains a disk of radius 1/4. A function f ∈ Σ is said to be bi-univalent in D if both f(z) and its inverse f⁻¹(z) are univalent in D.
In 1967, Lewin [2] obtained the coefficient bound |a_2| < 1.51 for all functions f ∈ Σ of the form (1), and he studied the class Σ of bi-univalent functions in D. Clunie and Brannan [3] conjectured that |a_2| ≤ √2 for f ∈ Σ. After that, estimates on the first two Taylor-Maclaurin coefficients were found for various subclasses (see [7][8][9][10]). Several authors introduced initial Maclaurin coefficient bounds for subclasses of bi-univalent functions (see [11,12]). Many researchers ([11,13,14]) have studied numerous curious subclasses of the bi-univalent function class Ω and observed non-sharp bounds on the first two Taylor-Maclaurin coefficients. Moreover, the coefficient problem for the Taylor-Maclaurin coefficients |a_n|, n = 3, 4, ..., is as yet an open problem ([2]). Also, let P represent the class of analytic functions p in D that are normalized by the condition p(0) = 1 and satisfy Re p(z) > 0. Noonan and Thomas [15] defined the q-th Hankel determinant of f, in 1976, for q ≥ 1 and n ≥ 1, by

H_q(n) = det[a_{n+i+j-2}]_{i,j=1}^{q}   (a_1 = 1).
By applying the triangle inequality to H₃(1), we obtain an upper bound in terms of |a_2|, |a_3|, |a_4|, and |a_5| (see the standard expansion sketched below). Our paper studies a subclass S(β) of bi-univalent functions within the open unit disk region D. The objective of this class is to determine the bounds of the Hankel determinant of order 3, H₃(1). In this study, new constraints for the estimates of the third Hankel determinant for the class S(β) are presented.
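For reference, a standard form of this determinant and the bound obtained from the triangle inequality are sketched below in LaTeX. This is the expansion commonly used in the literature under the convention a_1 = 1; it is given here as background, not as the paper's final estimate for S(β).

```latex
% Standard expansion of the third Hankel determinant (a_1 = 1) and the
% triangle-inequality bound it yields; the sharp constants for S(beta)
% are the subject of the Main Results section.
H_3(1) =
\begin{vmatrix}
a_1 & a_2 & a_3\\
a_2 & a_3 & a_4\\
a_3 & a_4 & a_5
\end{vmatrix}
= a_3\,(a_2 a_4 - a_3^{2}) - a_4\,(a_4 - a_2 a_3) + a_5\,(a_3 - a_2^{2}),
\qquad
|H_3(1)| \le |a_3|\,|a_2 a_4 - a_3^{2}| + |a_4|\,|a_2 a_3 - a_4| + |a_5|\,|a_3 - a_2^{2}|.
```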
The subsequent lemmas are important for establishing our results:
Main Results
Definition 1. A function f belonging to the class Σ, as defined by equation (1), is considered to be in the class S(β) if it fulfills the following requirements (one condition on f(z) and one on g(w) = f⁻¹(w)), where 0 < β ≤ 1 and z, w ∈ D.
Conclusions
This article presented a comprehensive investigation of the third Hankel determinant H₃(1) for a certain subclass of bi-univalent functions, S(β). This subclass is of significant interest in various mathematical fields, including complex analysis and geometric function theory. We defined the bi-univalent function class S(β) and imposed constraints on the coefficients |a_n|. Our findings provided the upper bounds for the bi-univalent functions in this newly developed subclass, specifically for n = 2, 3, 4, and 5. Furthermore, we advanced the understanding of these functions by deriving the third Hankel determinant for this particular class, which revealed several intriguing scenarios. This achievement led to an improvement of the bound of the third Hankel determinant for the class of bi-univalent functions S(β). Our study contributes to the broader understanding of bi-univalent functions, their subclasses, and their potential applications in diverse mathematical contexts. The results obtained may serve as a foundation for future investigations into the properties and applications of bi-univalent functions and their subclasses. Future research could explore further refinements of the bounds, as well as examine other subclasses of bi-univalent functions to uncover novel insights into their characteristics and potential applications. Ultimately, this study paves the way for a deeper exploration of the fascinating world of bi-univalent functions and their role in the realm of mathematics.
Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes
A method for 3D image representation that reduces the number of frames based on characteristics of human eyes is proposed, together with representation of 3D depth by changing pixel transparency. Through experiments, it is found that the proposed method allows a reduction of the number of frames by a factor of 1/6. Also, it can represent 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method. Keywords—3D image representation; Volume rendering; NTSC image display
I. INTRODUCTION
Computer input by human eyes only has been proposed and implemented [1]-[3], together with its application to many fields: communication aids, electric wheelchair control, meal-assistance aids, information collection aids (phoning, search engines, watching TV, listening to the radio, e-book/e-comic/e-learning, etc.), domestic helper robotics, and so on [4]-[15]. In particular, the proposed computer input system by human eyes only works like a keyboard as well as a mouse. Therefore, not only key-in operations but also mouse operations (right and left button click, drag and drop, single and double click) are available in the proposed system.
It is well known that hand and finger operation is much slower than line-of-sight movements. It is also known that an accidental blink is completed within 0.3 s. Therefore, the proposed computer input system by human eyes only selects the specified key or location when the line-of-sight vector is fixed at a certain position on the computer display for more than 0.3 s. In other words, the system can update the key or the location every 0.3 s, which is fast enough for most application fields.
Meanwhile, 3D image display and manipulation can be done with a 3D display. Attempts have also been made to display and manipulate 3D images on a 2D display [16], [17]. Most previous attempts are based on touch-panel manipulation by hands and fingers. As mentioned above, eye operations are much faster than hand and finger operations. Therefore, a 3D image display and manipulation method by human eyes only is proposed in this paper.
3D image representations are widely used for a variety of applications such as medical diagnostic image display, LSI pattern design, and so on. Volume rendering is in general the most popular method for 3D image representation. It, however, requires huge computational resources.
Volume rendering is in general costly; real-time representation is therefore not easy, even when grid computing is used. For instance, grid computing with 6 PCs reduces the processing time of the 3D representation with volume rendering by only 35%. In order to reduce the processing time of volume rendering, the number of frames that have to be displayed is reduced by exploiting the afterimage phenomenon. The time resolution of human eyes ranges from 50 to 100 ms: illumination switching between on and off within 50 to 100 ms is perceived as continuous illumination. Therefore, the number of frames for 3D object image representation can be reduced by using a multi-layer representation. On the other hand, the refresh cycle of the NTSC video signal corresponds to 33 ms; therefore, the refresh interval has to be within 33 ms for the proposed volume rendering.
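The arithmetic behind this frame reduction can be sketched as follows. The mapping from the 50-100 ms integration window and the 17 ms refresh interval used in the experiment to the reported 1/6 reduction is an interpretation on our part, so the function below should be read as a back-of-the-envelope illustration rather than as the paper's algorithm.

```python
# Rough estimate of how many successive layer frames fuse into one perceived image.
def fused_frames(refresh_ms, window_ms=100.0, ntsc_limit_ms=33.0):
    """Number of successive layer frames perceived as a single image, provided
    the refresh interval also respects the NTSC display limit."""
    if refresh_ms > ntsc_limit_ms:
        raise ValueError("refresh interval must stay within the NTSC cycle")
    return window_ms / refresh_ms

print(fused_frames(17.0))   # ~5.9, i.e. about 6 frames fuse -> ~1/6 reduction
```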
The next section describes the proposed system followed by experiment.Then concluding remarks are described with some discussions.
A. Basic Idea
Basic idea of the proposed method: a 3D model displays a picture or item in a form that appears to be physically present with a designated structure. Essentially, it allows items that appear flat to the human eye to be displayed in a form that represents various dimensions, including width, depth, and height. A 3D model can be displayed as a two-dimensional image through a process called 3D rendering, or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.
There are some previously proposed methods for 3D display, such as tracing object contours and reconstructing the 3D object with line drawings, and wireframe representation with display of the 3D object by volume rendering. There is another method, OCT (Optical Coherence Tomography); it is, however, considerably more expensive than the others. The fundamental idea of the proposed method is the afterimage; the time resolution of human eyes is 50 ms to 100 ms. An afterimage (ghost image, or image burn-in) is an optical illusion in which an image continues to appear in one's vision after the exposure to the original image has ceased. One of the most common afterimages is the bright glow that seems to float before one's eyes after looking into a light source for a few seconds. Afterimages come in two forms: negative (inverted) and positive (retaining the original color). The process behind positive afterimages is unknown, though it is thought to be related to neural adaptation. Negative afterimages, on the other hand, are a retinal phenomenon and are well understood. An example of a 3D image on a 2D display is shown in Fig. 1; this is the proposed system concept. In the example, the 3D image of the multifidus marked "A", which was acquired with a CT scanner, is displayed on the computer screen; "B", "C", ... are behind it.
B. Displayed Image in Automatic Mode
Implementation of the proposed system was conducted. By using mouse operation by human eyes only, 3D images with different aspects can be recreated and displayed. It is confirmed that conventional image processing and analysis can be done with mouse operations by human eyes only. Fig. 2 shows an example of the displayed layered images. In this example, 1024 layer images are prepared for the 3D object. By displaying the prepared layered images alternately in automatic mode, the 3D object appears on the screen. Furthermore, as shown in Fig. 2, the internal structure is visible in addition to the surface of the 3D object; it looks like a semitransparent 3D surface together with the internal structure inside the 3D object.
Another example is shown in Fig. 3. In the figure, the 1st to 768th layer images are shown together with a slant view of the 3D object image composed of the several layer images. 1024 layer images are created for the 3D object by using OpenGL. Surface rendering with the 1024 layer images produces a 3D object rendering like that in Fig. 3.
C. Changing the Transparency
In order to represent the depth information, transparency is used. The purpose of the depth information representation is to express the internal structure of the 3D object. By changing the transparency, some depth information can be represented, as shown in Fig. 6. In the figure, 100% denotes a 0% transparent image, while 5% denotes a 95% transparent image. It is clear that the internal structure can be seen by changing the transparency. Therefore, users can select the transparency depending on which portion of the internal structure they would like to see through visual perception.
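The effect of the transparency setting can be illustrated with a simple back-to-front alpha-compositing sketch. This is an illustrative NumPy reimplementation, not the paper's OpenGL code; the array shapes, the single global opacity value, and the synthetic layer data are assumptions.

```python
# Back-to-front alpha compositing of layer images; the per-layer opacity
# controls how much of the internal structure remains visible.
import numpy as np

def composite(layers, opacity=0.5):
    """layers: array of shape (n_layers, H, W) ordered back to front, values in [0, 1].
    opacity=1.0 shows only the front layer (0% transparent); a small opacity
    lets the deeper layers -- the internal structure -- shine through."""
    out = np.zeros(layers.shape[1:])
    for layer in layers:                      # back to front
        out = (1.0 - opacity) * out + opacity * layer
    return out

# Example: 1024 synthetic layers; compare opacity 1.0 vs 0.05 to mimic Fig. 6.
layers = np.random.rand(1024, 64, 64)
img = composite(layers, opacity=0.05)
```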
D. Foggy Representation for Depth Representations
In order to represent the depth information, not only transparency but also a foggy representation is added. Because the depth cue becomes weak depending on the transparency, some other representation of depth information needs to be added. One such effect is a foggy representation with a different color.
E. Implementation
Implementation is based on OpenGL. Parameter setting can be done with the slide rulers shown in Fig. 8. Slide rulers for the rotation angles about the x, y, and z axes (ranging from 0 to 355 degrees), the refresh cycle time interval (ranging from 1 to 100 ms), and the object interval (the number of frames, ranging from 1 to 127) are available. Other parameters, such as the transparency (ranging from 0 to 100%) and the foggy representation, can be set automatically.
As mentioned above, the time resolution of human eyes ranges from 50 to 100 ms. Illumination switching between on and off within 50 to 100 ms is perceived as continuous illumination by human eyes. Therefore, the number of frames for 3D object image representation can be reduced by using a multi-layer representation. On the other hand, the refresh cycle of the NTSC video signal corresponds to 33 ms; therefore, the refresh interval has to be within 33 ms for the proposed volume rendering. An example of a frame image of the proposed volume rendering is shown in Fig. 9. The image datasets used in this experiment were from the Laboratory of Human Anatomy and Embryology, University of Brussels (ULB), Belgium. Fig. 10 shows examples of the resultant images. The 3D images used are derived from CT scan images. In these cases, the number of frames used is 127 and the transparency is set at 50%. Meanwhile, the refresh cycle time interval is set at 17 ms, which corresponds to 58 f/s. Thus, a reduction of the number of frames by a factor of 1/6 is achieved successfully.
IV. CONCLUSION
A method for 3D image representation with a reduced number of frames, based on characteristics of human eyes, is proposed together with representation of 3D depth by changing pixel transparency. Through experiments, it is found that the proposed method allows a reduction of the number of frames by a factor of 1/6. Also, it can represent 3D depth through visual perception. Thus, a real-time representation of a 3D object can be displayed on the computer screen.
Further investigations are required to improve the frame reduction ratio. The refresh cycle has to be optimized. Also, the transparency and the foggy representation have to be optimized. In other words, a method for optimization of the parameters has to be created.
Fig. 1. System concept. The layered 3D images are aligned along the depth direction. The characters "A" to "Z" are transparent and displayed just beside the layered images at different locations. Therefore, the user can recognize a character and select it by eye. The arrow shows the line-of-sight vector. The cursor can be controlled by human eyes only. By sweeping across the characters, the 3D images are displayed layer by layer; it therefore looks like a time-division display of the 3D images.
Fig. 6. Examples of 3D object image representation with changing transparency.
Fig. 7. Examples of fog-added 3D images for depth representation. Fig. 7 shows examples of the foggy representation of depth information.
Fig. 8. Scroll slide bars for designation of the parameters of 3D object image representations (the slide rulers are for the rotation angles about the x, y, and z axes, the refresh cycle time interval of the display, and the object frame interval, which ranges from 1 to 127).
Fig. 9. Example of a frame image of the proposed volume rendering.
III. EXPERIMENT
Fig. 10. 3D object image representation based on the proposed method; the image datasets used in this experiment were from the Laboratory of Human Anatomy and Embryology, University of Brussels (ULB), Belgium.
"Computer Science"
] |
Unsupervised Does Not Mean Uninterpretable: The Case for Word Sense Induction and Disambiguation
The current trend in NLP is the use of highly opaque models, e.g. neural networks and word embeddings. While these models yield state-of-the-art results on a range of tasks, their drawback is poor interpretability. On the example of word sense induction and disambiguation (WSID), we show that it is possible to develop an interpretable model that matches the state-of-the-art models in accuracy. Namely, we present an unsupervised, knowledge-free WSID approach, which is interpretable at three levels: word sense inventory, sense feature representations, and disambiguation procedure. Experiments show that our model performs on par with state-of-the-art word sense embeddings and other unsupervised systems while offering the possibility to justify its decisions in human-readable form.
Introduction
A word sense disambiguation (WSD) system takes as input a target word t and its context C. The system returns an identifier of a word sense s_i from the word sense inventory {s_1, ..., s_n} of t, where the senses are typically defined manually in advance. Despite significant progress in methodology during the last two decades (Ide and Véronis, 1998; Agirre and Edmonds, 2007; Moro and Navigli, 2015), WSD is still not widespread in applications (Navigli, 2009), which indicates the need for further progress. The difficulty of the problem largely stems from the lack of domain-specific training data. A fixed sense inventory, such as the one of WordNet (Miller, 1995), may contain irrelevant senses for the given application and at the same time lack relevant domain-specific senses.
Word sense induction from domain-specific corpora is supposed to solve this problem. However, most approaches to word sense induction and disambiguation, e.g. (Schütze, 1998; Li and Jurafsky, 2015; Bartunov et al., 2016), rely on clustering methods and dense vector representations that make a WSD model uninterpretable compared to knowledge-based WSD methods.
Interpretability of a statistical model is important as it lets us understand the reasons behind its predictions (Vellido et al., 2011; Freitas, 2014; Li et al., 2016). Interpretability of WSD models (1) lets a user understand why a given sense was observed in the given context (e.g., for educational applications); and (2) enables a comprehensive analysis of correct and erroneous predictions, giving rise to improved disambiguation models.
The contribution of this paper is an interpretable unsupervised knowledge-free WSD method. The novelty of our method is in (1) a technique for disambiguation that relies on induced inventories as a pivot for learning sense feature representations, and (2) a technique for making induced sense representations interpretable by labeling them with hypernyms and images.
Our method tackles the interpretability issue of the prior methods; it is interpretable at the levels of (1) sense inventory, (2) sense feature representation, and (3) disambiguation procedure. In contrast to word sense induction by context clustering (Schütze (1998), inter alia), our method constructs an explicit word sense inventory. The method yields performance comparable to the state-of-the-art unsupervised systems, including two methods based on word sense embeddings. An open source implementation of the method featuring a live demo of several pre-trained models is available online. 1
Related Work
Multiple designs of WSD systems were proposed (Agirre and Edmonds, 2007;Navigli, 2009). They vary according to the level of supervision and the amount of external knowledge used. Most current systems either make use of lexical resources and/or rely on an explicitly annotated sense corpus.
Supervised approaches use a sense-labeled corpus to train a model, usually building one submodel per target word (Ng, 1997;Lee and Ng, 2002;Klein et al., 2002;Wee, 2010). The IMS system by Zhong and Ng (2010) provides an implementation of the supervised approach to WSD that yields state-of-the-art results. While supervised approaches demonstrate top performance in competitions, they require large amounts of senselabeled examples per target word.
Knowledge-based approaches rely on a lexical resource that provides a sense inventory and features for disambiguation, and vary from the classical Lesk (1986) algorithm that uses word definitions to the Babelfy (Moro et al., 2014) system that harnesses a multilingual lexical-semantic network. Classical examples of such approaches include (Banerjee and Pedersen, 2002; Pedersen et al., 2005; Miller et al., 2012). More recently, several methods were proposed to learn sense embeddings on the basis of the sense inventory of a lexical resource (Rothe and Schütze, 2015; Camacho-Collados et al., 2015; Iacobacci et al., 2015; Nieto Piña and Johansson, 2016).
Unsupervised knowledge-free approaches use neither handcrafted lexical resources nor handannotated sense-labeled corpora. Instead, they induce word sense inventories automatically from corpora. Unsupervised WSD methods fall into two main categories: context clustering and word ego-network clustering.
Context clustering approaches, e.g. (Pedersen and Bruce, 1997; Schütze, 1998), represent an instance usually by a vector that characterizes its context, where the definition of context can vary greatly. These vectors of each instance are then clustered. Multi-prototype extensions of the skip-gram model (Mikolov et al., 2013) that use no predefined sense inventory learn one embedding vector per word sense and are commonly fitted with a disambiguation mechanism (Huang et al., 2012; Tian et al., 2014; Neelakantan et al., 2014; Bartunov et al., 2016; Li and Jurafsky, 2015; Pelevina et al., 2016). Comparisons of AdaGram (Bartunov et al., 2016) to (Neelakantan et al., 2014) on three SemEval word sense induction and disambiguation datasets show the advantage of the former. For this reason, we use AdaGram as a representative of the state-of-the-art word sense embeddings in our experiments. In addition, we compare to SenseGram, an alternative sense-embedding-based approach by Pelevina et al. (2016). What makes the comparison to the latter method interesting is that this approach is similar to ours, but the authors rely on word embeddings instead of sparse representations, making their approach less interpretable.
Word ego-network clustering methods (Lin, 1998;Pantel and Lin, 2002;Widdows and Dorow, 2002;Biemann, 2006;Hope and Keller, 2013) cluster graphs of words semantically related to the ambiguous word. An ego network consists of a single node (ego) together with the nodes they are connected to (alters) and all the edges among those alters (Everett and Borgatti, 2005). In our case, such a network is a local neighborhood of one word. Nodes of the ego-network can be (1) words semantically similar to the target word, as in our approach, or (2) context words relevant to the target, as in the UoS system (Hope and Keller, 2013). Graph edges represent semantic relations between words derived using corpus-based methods (e.g. distributional semantics) or gathered from dictionaries. The sense induction process using word graphs is explored by (Widdows and Dorow, 2002;Biemann, 2006;Hope and Keller, 2013). Disambiguation of instances is performed by assigning the sense with the highest overlap between the instance's context words and the words of the sense cluster. Véronis (2004) compiles a corpus with contexts of polysemous nouns using a search engine. A word graph is built by drawing edges between co-occurring words in the gathered corpus, where edges below a certain similarity threshold were discarded. His HyperLex algorithm detects hubs of this graph, which are interpreted as word senses. Disambiguation is this experiment is performed by computing the distance between context words and hubs in this graph.
Di Marco and Navigli (2013) presents a comprehensive study of several graph-based WSI methods including Chinese Whispers, HyperLex, curvature clustering (Dorow et al., 2005). Besides, authors propose two novel algorithms: Balanced Maximum Spanning Tree Clustering and Squares (B-MST), Triangles and Diamonds (SquaT++). To construct graphs, authors use first-order and second-order relations extracted from a background corpus as well as keywords from snippets. This research goes beyond intrinsic evaluations of induced senses and measures impact of the WSI in the context of an information retrieval via clustering and diversifying Web search results. Depending on the dataset, HyperLex, B-MST or Chinese-Whispers provided the best results.
Our system combines several of the above ideas and adds features ensuring interpretability. Most notably, we use a word sense inventory based on clustering word similarities (Pantel and Lin, 2002); for disambiguation we rely on syntactic context features, co-occurrences (Hope and Keller, 2013), and language models (Yuret, 2012).
Interpretable approaches. The need for methods that interpret the results of opaque statistical models is widely recognised (Vellido et al., 2011; Vellido et al., 2012; Freitas, 2014; Li et al., 2016; Park et al., 2016). An interpretable WSD system is expected to provide (1) a human-readable sense inventory and (2) human-readable reasons why a given sense s_i was detected in a given context c. Lexical resources, such as WordNet, solve the first problem by providing manually-crafted definitions of senses, examples of usage, hypernyms, and synonyms. BabelNet (Navigli and Ponzetto, 2010) integrates all these sense representations, adding links to external resources, such as Wikipedia, topical category labels, and images representing the sense. The unsupervised models listed above do not feature any of these representations, making them much less interpretable than the knowledge-based models. Ruppert et al. (2015) proposed a system for visualising sense inventories derived in an unsupervised way using graph-based distributional semantics. Panchenko (2016) proposed a method for making an inventory of word sense embeddings interpretable by mapping it to BabelNet.
Our approach was inspired by the knowledge-based system Babelfy (Moro et al., 2014). While the inventory of Babelfy is interpretable, as it relies on BabelNet, the system provides no underlying reasons behind its sense predictions. Our objective was to reach the interpretability level of knowledge-based models within an unsupervised framework.
Method: Unsupervised Interpretable Word Sense Disambiguation
Our unsupervised word sense disambiguation method consists of the steps illustrated in Figure 1: extraction of context features (Section 3.1); computing word and feature similarities (Section 3.2); word sense induction (Section 3.3); labeling of clusters with hypernyms and images (Section 3.4); disambiguation of words in context based on the induced inventory (Section 3.5); and finally interpretation of the model (Section 3.6). The feature similarity and co-occurrence computation steps (drawn with dashed lines) are optional, since they did not consistently improve performance.
Extraction of Context Features
The goal of this step is to extract word-feature counts from the input corpus. In particular, we extract three types of features. Dependency Features. These features represent a word by a syntactic dependency such as "nn(•,writing)" or "prep_at(sit,•)", extracted from the Stanford Dependencies (De Marneffe et al., 2006) obtained with the PCFG model of the Stanford parser (Klein and Manning, 2003). Weights are computed using Local Mutual Information (LMI) (Evert, 2005). Each word is represented by its 1,000 most significant features.
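To make the LMI weighting concrete, here is a minimal Python sketch assuming the common definition of Local Mutual Information as the joint word-feature frequency multiplied by the pointwise mutual information; the function name, toy counts, and log base are our own illustration rather than code from the described system.

```python
import math

def lmi(count_wf, count_w, count_f, total):
    """Local Mutual Information, understood here as joint frequency times PMI."""
    if count_wf == 0:
        return 0.0
    pmi = math.log2((count_wf * total) / (count_w * count_f))
    return count_wf * pmi

# Toy counts: the feature "nn(•,writing)" co-occurs 12 times with the word "table".
print(lmi(count_wf=12, count_w=5000, count_f=300, total=10_000_000))
```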
Co-occurrence Features. This type of feature represents a word by another word. We extract the list of words that significantly co-occur in a sentence with the target word in the input corpus, using the log-likelihood ratio as the word-feature weight (Dunning, 1993).
Language Model Features. These features are based on a trigram model with Kneser-Ney smoothing (Kneser and Ney, 1995). In particular, a word is represented by (1) its right and left context words, e.g. "office • and", (2) the two preceding words, e.g. "new office •", and (3) the two succeeding words, e.g. "• and chairs". We use the conditional probabilities of the resulting trigrams as word-feature weights.
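The following sketch illustrates, under our own naming assumptions, how the three trigram-shaped context patterns could be read off a tokenized sentence; the padding symbols and the "•" placeholder are illustrative choices, not part of the original implementation.

```python
def lm_context_features(tokens, i):
    """Return the three trigram-shaped context patterns around position i,
    with '•' standing for the target slot."""
    pad = ["<s>", "<s>"] + tokens + ["</s>", "</s>"]
    j = i + 2  # position of the target word in the padded sentence
    return [
        (pad[j - 1], "•", pad[j + 1]),   # e.g. "office • and"
        (pad[j - 2], pad[j - 1], "•"),   # e.g. "new office •"
        ("•", pad[j + 1], pad[j + 2]),   # e.g. "• and chairs"
    ]

print(lm_context_features(["the", "new", "office", "table", "and", "chairs"], 3))
# [('office', '•', 'and'), ('new', 'office', '•'), ('•', 'and', 'chairs')]
```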
Computing Word and Feature Similarities
The goal of this step is to build a graph of word similarities, such as the distributional thesaurus of (2013), as it yields performance on semantic similarity comparable to state-of-the-art dense representations (Mikolov et al., 2013) when evaluated against WordNet as a gold standard (Riedl, 2016), but is interpretable, as words are represented by sparse interpretable features.
[Figure 1: Outline of our unsupervised interpretable method for word sense induction and disambiguation.]
Namely, we use dependency-based features since, according to prior evaluations, this kind of feature provides state-of-the-art semantic relatedness scores (Padó and Lapata, 2007; Van de Cruys, 2010; Panchenko and Morozova, 2012; Levy and Goldberg, 2014). First, the features of each word are ranked using the LMI metric (Evert, 2005). Second, the word representations are pruned, keeping the 1,000 most salient features per word and the 1,000 most salient words per feature. The pruning reduces computational complexity and noise. Finally, word similarities are computed as the number of common features shared by two words. This is again followed by a pruning step in which only the 200 most similar terms are kept for every word. The resulting word similarities are browsable online. 2 Note that while words can be characterized by distributions over features, features can vice versa be characterized by distributions over words. We use this duality to compute feature similarities using the same mechanism and explore their use in disambiguation below.
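As an illustration of this step, the sketch below counts shared features between pruned word representations; it assumes LMI weights have already been computed, approximates the per-feature pruning, and all names are hypothetical rather than taken from the system's code.

```python
from collections import defaultdict

def word_similarities(word_features, top_features=1000, top_k=200):
    """word_features: dict word -> dict feature -> LMI weight (assumed precomputed)."""
    # Keep the most salient features per word.
    pruned = {w: set(sorted(f, key=f.get, reverse=True)[:top_features])
              for w, f in word_features.items()}
    # Invert to feature -> words (per-feature pruning omitted here for brevity).
    feat_to_words = defaultdict(set)
    for w, feats in pruned.items():
        for f in feats:
            feat_to_words[f].add(w)
    # Similarity of two words = number of shared features; keep the top_k neighbours.
    sims = {}
    for w, feats in pruned.items():
        counts = defaultdict(int)
        for f in feats:
            for other in feat_to_words[f]:
                if other != w:
                    counts[other] += 1
        sims[w] = sorted(counts.items(), key=lambda x: -x[1])[:top_k]
    return sims
```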
Word Sense Induction
We induce a sense inventory by clustering the ego-networks of similar words. In our case, the inventory represents a sense by a word cluster, such as "chair, bed, bench, stool, sofa, desk, cabinet" for the "furniture" sense of the word "table".
Sense induction processes one word t of the distributional thesaurus T per iteration. First, we retrieve the nodes of the ego-network G of t, namely the N most similar words of t according to T (see Figure 2 (1)). Note that the target word t itself is not part of the ego-network. Second, we connect each node in G to its n most similar words according to T. Finally, the ego-network is clustered with Chinese Whispers (Biemann, 2006), a non-parametric algorithm that discovers the number of senses automatically. The n parameter regulates the granularity of the inventory: we experiment with n ∈ {200, 100, 50} and N = 200.
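A minimal sketch of this induction step is given below, assuming the distributional thesaurus is available as a Python dictionary; the Chinese Whispers implementation follows the usual label-propagation formulation, and all identifiers are our own illustrative choices.

```python
import random
from collections import defaultdict

def build_ego_network(target, thesaurus, N=200, n=50):
    """thesaurus[word] is a list of (similar_word, similarity) sorted by similarity.
    The target itself is excluded from its own ego-network."""
    nodes = [w for w, _ in thesaurus[target][:N]]
    node_set = set(nodes)
    edges = {}
    for u in nodes:
        for v, sim in thesaurus.get(u, [])[:n]:
            if v in node_set and v != u:
                edges[tuple(sorted((u, v)))] = sim
    return nodes, edges

def chinese_whispers(nodes, edges, iterations=20, seed=0):
    """Non-parametric graph clustering: every node adopts the label with the
    highest total edge weight among its neighbours."""
    rng = random.Random(seed)
    label = {node: i for i, node in enumerate(nodes)}
    adj = defaultdict(dict)
    for (u, v), w in edges.items():
        adj[u][v] = w
        adj[v][u] = w
    for _ in range(iterations):
        order = nodes[:]
        rng.shuffle(order)
        for node in order:
            if not adj[node]:
                continue
            scores = defaultdict(float)
            for neighbour, w in adj[node].items():
                scores[label[neighbour]] += w
            label[node] = max(scores, key=scores.get)
    return label  # nodes sharing a label form one induced sense cluster
```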
The choice of Chinese Whispers over other algorithms, such as HyperLex (Véronis, 2004) or MCL (Van Dongen, 2008), was motivated by its absence of meta-parameters and its performance on the WSI task, which is comparable to the state-of-the-art (Di Marco and Navigli, 2013).
Labeling Induced Senses with Hypernyms and Images
Each sense cluster is automatically labeled to improve its interpretability. First, we extract hypernyms from the input corpus using Hearst (1992) patterns. Second, we rank hypernyms relevant to the cluster by the product of two scores: the hypernym relevance score, calculated as Σ_{w∈cluster} sim(t, w) · freq(w, h), and the hypernym coverage score, calculated as Σ_{w∈cluster} min(freq(w, h), 1).
Here sim(t, w) is the relatedness of the cluster word w to the target word t, and freq(w, h) is the frequency of the hypernymy relation (w, h) as extracted via the patterns. Thus, a high-ranked hypernym h has high relevance, but is also confirmed by several cluster words. This stage results in a ranked list of labels that specify the word sense, of which we show here the first one, e.g. "table (furniture)" or "table (data)". Faralli and Navigli (2012) showed that web search engines can be used to bootstrap sense-related information. To further improve the interpretability of induced senses, we assign an image to each word in the cluster (see Figure 2) by querying the Bing image search API 3 using a query composed of the target word and its hypernym, e.g. "jaguar car". The first hit of this query is selected to represent the induced word sense.
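The two scores can be combined as in the following small sketch, which takes `sim` and `freq` as user-supplied callables; it is an illustration of the formulas above, not the authors' code.

```python
def rank_hypernyms(target, cluster, sim, freq, candidates):
    """Rank hypernym candidates h for one sense cluster.
    sim(t, w): relatedness of cluster word w to the target t;
    freq(w, h): corpus frequency of the Hearst-pattern relation (w, h)."""
    scored = []
    for h in candidates:
        relevance = sum(sim(target, w) * freq(w, h) for w in cluster)
        coverage = sum(min(freq(w, h), 1) for w in cluster)
        scored.append((h, relevance * coverage))
    return sorted(scored, key=lambda x: -x[1])
```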
Algorithm 1: Unsupervised WSD of the word t based on the induced word sense inventory I.
input: Word t, context features C, sense inventory I, word-feature table F, use largest cluster back-off LCB, use feature expansion FE. output: Sense of the target word t in inventory I and a confidence score. To disambiguate a target word t in context, we extract context features C and pass them to Algorithm 1. We use the induced sense inventory I and select the sense that has the largest weighted feature overlap with the context features, or fall back to the largest cluster back-off when the context features C do not match the learned sense representations. The algorithm starts by retrieving the induced sense clusters of the target word (line 1).
Next, the method accumulates the context feature weights of each sense in α[sense]. Each word w in a sense cluster contributes all its word-feature counts F(w, c): see lines 5-12. Finally, the sense that maximizes the mean weight across all context features is chosen (lines 13-21). Optionally, we can resort to the largest cluster back-off (LCB) strategy in case no context features match the sense representations.
Note that the induced inventory I is used as a pivot to aggregate the word-feature counts F(w, c) of the words in the cluster in order to build feature representations of each induced sense. We assume that the sets of similar words per sense are compatible with each other's contexts. Thus, we can aggregate ambiguous feature representations of the words in a sense cluster. In a way, the occurrences of cluster members form the training set for the sense, i.e., contexts of {chair, bed, bench, stool, sofa, desk, cabinet} add to the representation of "table (furniture)" in the model. Here, ambiguous cluster members like "chair" (which could also mean "chairman") add some noise, but their influence is dwarfed by the aggregation over all cluster members. Besides, it is unlikely that the target ("table") and a cluster member ("chair") share the same homonymy, thus noisy context features hardly play a role when disambiguating the target in context. For instance, for scoring using language model features, we retrieve the context of the target word and substitute the target word, one by one, with each of the cluster words. To close the gap between the aggregated dependencies per sense α[sense] and the dependencies observed in the target's context C, we use the similarity of features: we expand every feature c ∈ C with its 200 most similar features and use them as additional features (lines 2-4).
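The sketch below approximates the weighted-overlap selection with optional feature expansion and the largest cluster back-off, roughly in the spirit of Algorithm 1; the data structures (`inventory`, `F`, `feature_sims`) and the exact scoring details are our own simplifying assumptions.

```python
def disambiguate(target, context_features, inventory, F,
                 feature_sims=None, use_lcb=True, expand_k=200):
    """Pick the sense of `target` whose aggregated cluster features best match
    the context. inventory[target]: list of sense clusters (lists of words);
    F[w][c]: word-feature weight; feature_sims[c]: similar features (optional)."""
    C = set(context_features)
    if feature_sims:  # optional feature expansion
        for c in list(C):
            C.update(f for f, _ in feature_sims.get(c, [])[:expand_k])
    best_sense, best_score = None, 0.0
    for sense_id, cluster in enumerate(inventory[target]):
        # every cluster word contributes its word-feature weights
        alpha = sum(F.get(w, {}).get(c, 0.0) for w in cluster for c in C)
        score = alpha / max(len(C), 1)  # mean weight across context features
        if score > best_score:
            best_sense, best_score = sense_id, score
    if best_sense is None and use_lcb:
        # largest cluster back-off when no context feature matched
        best_sense = max(range(len(inventory[target])),
                         key=lambda s: len(inventory[target][s]))
    return best_sense, best_score
```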
We run disambiguation independently for each of the feature types listed above, e.g. dependencies or co-occurrences. Next, independent predictions are combined using the majority-voting rule.
Interpretability of the Method
Results of disambiguation can be interpreted by humans, as illustrated by Figure 2. In particular, our approach is interpretable at three levels: 1. Word sense inventory. To make the induced word sense inventories interpretable, we display the senses of each word as an ego-network of its semantically related words. For instance, the network of the word "table" in our example is constructed from two tightly related groups of words that correspond to the "furniture" and "data" senses. These cluster labels are obtained automatically (see Section 3.4).
While alternative methods, such as AdaGram, can generate sense clusters, our approach makes the senses better interpretable due to the hypernym and image labels that summarize the senses.
[Figure 2: …; (2) sense feature representation; (3) results of disambiguation in context. The sense labels ("furniture" and "data") are obtained automatically based on cluster labeling with hypernyms. The images associated with the senses are retrieved using a search engine: "table data" and "table furniture".]
2. Sense feature representation. Each sense in our model is characterized by a list of sparse features ordered by relevance to the sense. Figure 2 (2) shows the most salient dependency features for the senses of the word "table". These feature representations are obtained by aggregating the features of the sense cluster words.
In systems based on dense vector representations, there is no straightforward way to get the most salient features of a sense, which makes the analysis of learned representations problematic.
3. Disambiguation method. To provide the reasons for sense assignment in context, our method highlights the most discriminative context features that caused the prediction. The discriminative power of a feature is defined as the ratio between its weights for different senses.
In Figure 2 (3), the words "information", "cookies", "deployed", and "website" are highlighted, as they are the most discriminative and intuitively indicate the "data" sense of the word "table" as opposed to the "furniture" sense. The same is observed for other types of features. For instance, the syntactic dependency on the word "information" is specific to the "data" sense.
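A possible way to compute such highlights is sketched below, assuming per-sense aggregated feature weights are available; the ratio-based scoring mirrors the definition above, but the function and its arguments are hypothetical.

```python
def discriminative_features(context_features, sense_weights, eps=1e-9):
    """Rank context features by how strongly they prefer the predicted sense.
    sense_weights[s][c]: aggregated weight of feature c for sense s."""
    if len(sense_weights) < 2:
        return []
    winner = max(sense_weights, key=lambda s: sum(sense_weights[s].values()))
    ranked = {}
    for c in context_features:
        w_win = sense_weights[winner].get(c, 0.0)
        w_other = max(sense_weights[s].get(c, 0.0)
                      for s in sense_weights if s != winner)
        ranked[c] = w_win / (w_other + eps)  # ratio between weights for different senses
    return sorted(ranked.items(), key=lambda x: -x[1])
```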
Alternative unsupervised WSD methods that rely on word sense embeddings make it difficult to explain sense assignment in context due to the use of dense features whose dimensions are not interpretable.
Experiments
We use two lexical sample collections suitable for the evaluation of unsupervised WSD systems. The first one is the Turk Bootstrap Word Sense Inventory (TWSI) dataset introduced by Biemann (2012). It is used for testing different configurations of our approach. The second collection, the SemEval 2013 word sense induction dataset by Jurgens and Klapaftis (2013), is used to compare our approach to existing systems. In both datasets, to measure WSD performance, induced senses are mapped to gold standard senses. In the experiments with the TWSI dataset, the models were trained on the Wikipedia corpus, 4 while in the experiments with the SemEval dataset, models were trained on the ukWaC corpus (Baroni et al., 2009) for a fair comparison with other participants.
Dataset and Evaluation Metrics
This test collection is based on a crowdsourced resource that comprises 1,012 frequent nouns with 2,333 senses and an average polysemy of 2.31 senses per word. For these nouns, 145,140 annotated sentences are provided. Besides, a sense inventory is explicitly provided, where each sense is represented by a list of words that can substitute the target noun in a given sentence. The sense distribution across sentences in the dataset is highly skewed, as 79% of contexts are assigned to the most frequent senses. Thus, in addition to the full TWSI dataset, we also use a balanced subset featuring five contexts per sense and 6,166 sentences in total to assess the quality of the disambiguation mechanism for smaller senses. This subset contains no monosemous words, in order to completely remove the bias of the most frequent sense. Note that de-biasing the evaluation set does not de-bias the word sense inventory, thus the task becomes harder on the balanced subset.
For the TWSI evaluation, we create an explicit mapping between the system-provided sense inventory and the TWSI word senses: senses are represented as bags of words, which are compared using cosine similarity. Every induced sense is assigned at most one TWSI sense. Once the mapping is completed, we calculate Precision, Recall, and F-measure. We use the following baselines to facilitate interpretation of the results: (1) MFS of the TWSI inventory always assigns the most frequent sense in the TWSI dataset; (2) LCB of the induced inventory always assigns the largest sense cluster; (3) the upper bound of the induced inventory always selects the correct sense for the context, but only if a mapping exists for this sense; (4) random sense of the TWSI and the induced inventories.
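The mapping step could look roughly like the following sketch, which compares bag-of-words sense representations with cosine similarity and keeps at most one gold sense per induced sense; variable names and the absence of a similarity threshold are our own assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words dictionaries."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_senses(induced, gold):
    """Assign each induced sense at most one gold (TWSI) sense."""
    mapping = {}
    for i_id, i_bow in induced.items():
        best, best_sim = None, 0.0
        for g_id, g_bow in gold.items():
            sim = cosine(i_bow, g_bow)
            if sim > best_sim:
                best, best_sim = g_id, sim
        if best is not None:
            mapping[i_id] = best
    return mapping
```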
Discussion of Results
The results of the TWSI evaluation are presented in Table 1. In accordance with prior art in word sense disambiguation, the most frequent sense (MFS) proved to be a strong baseline, reaching an F-score of 0.787, while the random sense over the TWSI inventory drops to 0.536. The upper bound on our induced inventory (F-score of 0.900) shows that the sense mapping technique used prior to evaluation does not drastically distort the evaluation scores. The LCB baseline of the induced inventory achieves an F-score of 0.691, demonstrating the efficiency of the LCB technique.
Let us first consider the models based on single features. Dependency features yield the highest precision of 0.728, but have a moderate recall of 0.343, since they rarely match due to their sparsity. The LCB strategy for these rejected contexts helps to improve recall at the cost of precision. Co-occurrence features yield significantly lower precision than the dependency-based features, but their recall is higher. Finally, the language model features yield very balanced results in terms of both precision and recall. Yet, the precision of the model based on this feature type is significantly lower than that of the dependencies.
Not all combinations improve results; e.g., the combination of all three types of features yields inferior results compared to the language model alone. However, a combination of the language model with dependency features does provide an improvement over the single models, as both models bring a strong signal of complementary nature about the semantics of the context: the dependency features represent syntactic information, while the LM features represent lexical information. This improvement is even more pronounced in the case of the balanced TWSI dataset. This combined model yields the best F-scores overall.

Table 2 presents the effect of feature expansion based on the graph of similar features. For a low-recall model such as the one based on syntactic dependencies, feature expansion makes a lot of sense: it almost doubles recall, while losing some precision. The gain in F-score using this technique is almost 20 points on the full TWSI dataset. However, the need for such expansion vanishes when two principally different types of features (precise syntactic dependencies and a high-coverage trigram language model) are combined. Both the precision and the F-score of this combined model outperform those of the dependency-based model with feature expansion by a large margin.

Figure 3 illustrates how the granularity of the induced sense inventory influences WSD performance. For this experiment, we constructed three inventories, setting the number of most similar words in the ego-network n to 200, 100, and 50. These settings produced inventories with, respectively, 1.96, 2.98, and 5.21 average senses per target word. We observe that a higher sense granularity leads to lower F-scores. This can be explained by (1) the fact that the granularity of the TWSI is similar to the granularity of the most coarse-grained inventory; (2) the higher the number of senses, the higher the chance of a wrong sense assignment; and (3) the reduced size of individual clusters, due to which we get less signal per sense cluster and noise becomes more pronounced.
To summarize, the best precision is reached by a model based on un-expanded dependencies and the best F-score can be obtained by a combination of models based on un-expanded dependency features and language model features.
Dataset and Evaluation Metrics
The task of word sense induction for graded and non-graded senses provides 20 nouns, 20 verbs, and 10 adjectives in WordNet-sense-tagged contexts. It contains 20-100 contexts per word, and 4,664 contexts in total, with 6.73 senses per word on average. Participants were asked to cluster instances into groups corresponding to distinct word senses. Instances with multiple senses were labeled with a score between 0 and 1. Performance is measured with three measures that require a mapping of inventories (Jaccard Index, Tau, WNDCG) and two cluster comparison measures (Fuzzy NMI, Fuzzy B-Cubed). Table 3 presents the results of the evaluation of the best configuration of our approach trained on the ukWaC corpus. We compare our approach to four SemEval participants and two state-of-the-art systems based on word sense embeddings: AdaGram (Bartunov et al., 2016), based on a Bayesian stick-breaking process (https://github.com/sbos/AdaGram.jl), and SenseGram (Pelevina et al., 2016), based on clustering of ego-networks generated using word embeddings. 6 The AI-KU system (Baskaya et al., 2013) directly clusters test contexts using the k-means algorithm based on lexical substitution features. The Unimelb system (Lau et al., 2013) uses a hierarchical topic model to induce and disambiguate the senses of a word. The UoS system (Hope and Keller, 2013) induces senses by building an ego-network of a word using dependency relations, which is subsequently clustered using the MaxMax clustering algorithm. The La Sapienza system (Jurgens and Klapaftis, 2013) relies on WordNet for the sense inventory and disambiguation.
Discussion of Results
In contrast to the TWSI evaluation, the most fine-grained model yields the best scores; yet the inventory of this task is also more fine-grained than that of the TWSI (7.08 vs. 2.31 avg. senses per word). Our method outperforms the knowledge-based system of La Sapienza according to two of three metrics and the SenseGram system based on sense embeddings according to four of five metrics. Note that SenseGram outperforms all other systems according to the Fuzzy B-Cubed metric, which is maximized in the "All instances, One sense" setting. Thus this result may be due to the difference in granularities: the average polysemy of the SenseGram model is 1.56, while the polysemy of our models ranges from 1.96 to 5.21. Besides, our system performs comparably to the top unsupervised systems that participated in the competition: it is on par with the top SemEval submissions (AI-KU and UoS) and with the other system based on embeddings (AdaGram) in terms of four out of five metrics (Jaccard Index, Tau, Fuzzy B-Cubed, Fuzzy NMI).
[Figure 3: Impact of word sense inventory granularity on WSD performance: the TWSI dataset.]
[Table 3: WSD performance of the best configuration of our method identified on the TWSI dataset as compared to participants of the SemEval 2013 Task 13 and two systems based on word sense embeddings (AdaGram and SenseGram). All models were trained on the ukWaC corpus.]
Therefore, we conclude that our system yields comparable results to the state-of-the-art unsupervised systems. Note, however, that none of the rivaling systems has a comparable level of interpretability to our approach. This is where our method is unique in the class of unsupervised methods: feature representations and disambiguation procedure of the neural-based AdaGram and SenseGram systems cannot be straightforwardly interpreted. Besides, inventories of the existing systems are represented as ranked lists of words lacking features that improve readability, such as hypernyms and images.
Conclusion
In this paper, we have presented a novel method for word sense induction and disambiguation that relies on a meta-combination of dependency features with a language model. The majority of existing unsupervised approaches focus on optimizing the accuracy of the method, sacrificing its interpretability due to the use of opaque models, such as neural networks. In contrast, our approach places a focus on interpretability with the help of sparse readable features. While being interpretable at three levels (sense inventory, sense representations, and disambiguation), our method is competitive with the state-of-the-art, including two recent approaches based on sense embeddings, in a word sense induction task. Therefore, it is possible to match the performance of accurate but opaque methods when interpretability matters. | 6,851.6 | 2017-04-01T00:00:00.000 | [
"Computer Science"
] |
SOCIAL WELL-BEING OF THE POPULATION OF THE BORDER RUSSIAN REGION IN TERMS OF HUMANITARIAN CHALLENGES (THE EXAMPLE OF THE ROSTOV REGION)
The issues related to the study of the social well-being of the population are considered in terms of the humanitarian challenges experienced in Russian regions. The paper identifies latent sources of social tension, with border regions classified as high-risk areas of special research interest. In this article, the authors identify and analyze the main components affecting the well-being of the population in the Rostov region, a border region of Russia, from a macrosocial perspective. The empirical basis for the work includes the materials of sociological research conducted by scientists of the Southern Federal University and the South Russian Branch of the Federal Research Sociological Center of the Russian Academy of Sciences in 2013–2017. Based on the results of the empirical measurements, the authors propose that an increase in the population’s overall level of social well-being is vital in forming the long-term strategic goal of the public authorities for social stability in the Rostov border region. The study findings suggest that the main measures need to be aimed at finding ways to reduce the conflict potential of migration; increase the openness of the labor market and reduce the level of unemployment; and improve security measures, taking into account the proximity to the state border. UDC Classification: 316.4; DOI: http://dx.doi.org/10.12955/cbup.v6.1243
Introduction
The Rostov region, located in the southern part of the East European Plain and the North Caucasus region, occupies a vast area of 100,800 km² in the basin of the Lower Don. It is part of the Southern Federal District, the center of which is Rostov-on-Don, a city with a million-plus population. The region has a state border with Ukraine, 660 km in length. The specific nature of border regions requires a special approach to their development. Researchers attribute to these areas a lower level of social and economic development than in the interior regions. Slightly less than half of the regions with a depressed type of development and the majority of deeply depressed regions of Russia are located in border territories (Badarchi & Dabiev, 2012). Humanitarian challenges, primarily related to the increase in migrant flows, the instability of the social and economic situation, and intensifying social differentiation, have had an impact on society in the borderlands. The consequences of this impact affect the social well-being of the population of the border region, creating sources of social tension. This underscores the need for an analysis of the main components of the population's social well-being in the border areas in terms of humanitarian challenges. At the same time, this topic is particularly acute in regions whose geopolitical position characterizes them as areas of increased risk. The Rostov region belongs to this type of region, having a common border with the south-eastern regions of Ukraine and the status of the 'gate to the Caucasus'. These are additional factors that heighten frustration among its population. In this regard, this article presents the results of a study of the social well-being, particularly in the face of humanitarian challenges, of the people who live in this border region of Russia. Social well-being is an important factor affecting an individual's quality of life and attitude towards the main social structures. In terms of humanitarian challenges, it becomes a 'litmus paper' of societal status and substantially supplements the economic statistics. In the scientific literature, there are different views on the components of social well-being (Morrow, 1999; Ryan & Deci, 2000; Gorshkov et al., 2016; Volkov et al., 2013).
Data and Methodology
Taking into account the specifics of the studied region (its border location), the main research focused on the following variables:
1. The general emotional and psychological state of the population of the border region;
2. The level of tension in society as perceived by the inhabitants of the region;
3. The perceived acuteness of the problems that have a decisive impact on the quality of life and the social infrastructure of the settlement;
4. The level of personal security, taking into account the proximity to the state border;
5. Interethnic relations and the effect of migration on them; and
6. The social well-being of the population in the regional labor market in terms of humanitarian challenges.
The analysis of the identified components of social well-being required a preliminary description of its macrosocial context, which is related to the peculiarities of the geopolitical position of the region and its social and economic state. Data were obtained from the Territorial Body of the Federal State Statistics Service (FSSS) (ROSSTAT, 2018) in the Rostov Region. The results of recent sociological research provided the basis for characterizing the local labor market as a potential deterrent to social and professional mobility. These data were obtained from the South Russian Branch of the Federal Research Sociological Center (FRSC) of the Russian Academy of Sciences.
Results and Discussion
There are several points to note about the ethnic composition of the population of the region and the flow of migrants. Apart from the numerical predominance of Russians, the region has historically evolved as a multicultural one. According to the Territorial Body of the FSSS of the Russian Federation in the Rostov region, today about 150 ethnic groups live in the region. This number rises annually due to the expanding flow of migrants from the North Caucasus republics and from neighboring countries (Territorial Body of the FSSS in the Rostov Region). This has been caused, on the one hand, by the attractiveness of the region for migration and, on the other, by its proximity to areas of local conflict. Altogether, this increases the burden on the social infrastructure of host settlements and creates sources of social tension, which makes the flow of migrants a serious challenge for the social stability of the studied region (Serikov, 2015).

A number of important features of the regional labor market are evident as part of the macrosocial context of the social well-being of the Rostov population. According to the official statistics for 2000-2017, the level of registered unemployment (according to the International Labour Organization (ILO) methodology) declined steadily from 15% in 2000 to 5.3% in 2017 (Territorial Body of the Federal State Statistics Service in the Rostov Region). This indicator slightly exceeded the level of unemployment in the country as a whole, though it was the lowest in the Southern Federal District at the time. Although the level of registered unemployment in the Rostov region as a whole showed no deviation from the all-Russian indicators, the data of the South Russian branch of the FRSC of the Russian Academy of Sciences showed that almost half of the working population acknowledged the risk of job loss. Inequality in accessing jobs was consistently among the three most frequently identified inequalities. The work of Posukhova (2014) showed that 23% of the regional residents worked concurrently at several workplaces to improve their financial situation; 21% used any opportunity for one-time temporary earnings; and 19% of the respondents accepted overtime or part-time jobs. In a period of social and economic instability, the labor market, as an indicator of social well-being, substantially increased in significance.

The production and social infrastructure of the region were highly differentiated due to two main trends. The first is the deformation of some economic sectors that form the basis of survival for a number of settlements (mainly in the extractive industries). The second is the restructuring of the agrarian sector, the basis of the rural economy, which slowed the development of industrial and social infrastructure. As a result of these processes, a distinct social and professional structure and labor market model formed in each of the territories. There was a sharp polarization of the territories according to their power and their economic, human, and cultural capital (Panfilova, 2017). The noted tendencies intensified social inequality, which represents one of the humanitarian challenges for the regions. The social well-being of the population in the Rostov region showed general trends. The macrosocial changes influenced the state of the social well-being of the population in the border region.
The results of the triennial measurements show that, since 2015, the relative balance between positive and negative assessments of the daily emotional and psychological state has remained the same, on the whole repeating the all-Russian trends (Table 1). A sense of calm and balance dominated the responses consistently, apart from a minor decrease in 2016; the 2017 level matched that of 2015. Therefore, practices of forced social adaptation to life's complexity were widespread among the residents of the area in the face of contemporary challenges. However, coinciding with the period of social and economic instability, in 2017 every fifth respondent reported that everyday life was accompanied by apathy and indifference. This result was 5% and 8% higher than in the previous two years. The presence of a depressive state among some respondents is also confirmed by an increase in reports of general anxiety, together with a decrease in the share of the most positive assessments (compared to 2015). Nonetheless, the more negative states ('I feel irritation, anger, aggression') showed a slight decrease (from 19.1% to 12.8%).
Source: Authors
With regard to the level of tension in society, Table 2 shows the results of the research. Possibly, the specifics of the geopolitical position of the Rostov region impose a certain effect on the population's social well-being. According to a survey conducted by the Southern Federal University, residents of the region noted an increase in tension more often than those of Russia as a whole (65.6% and 52.2%, respectively (Volkov et al., 2016)). The general well-being of a population is largely determined by a range of parameters related to the problems a person experiences in daily life. As the scientists of the FRSC of the Russian Academy of Sciences emphasized, the problems noted by the population are the most relevant indicators of the level of socioeconomic development of the country and the region (Gorshkov et al., 2016). The results of the regional research show that the problems relating to the satisfaction of basic needs (food, health, housing conditions, and personal security) were not of the greatest urgency for the residents of the region ('good' estimates for these needs ranged from 28.9% to 48.6%, and 'bad' estimates were within 12.6%). The main troublesome areas were the quality of medical care and the opportunities for recreation during vacation and leisure time (Table 3). Financial problems were more common in small towns and urban-type settlements, where residents often noted dissatisfaction with housing conditions and clothing. There were also low assessments of health care quality (69.2% in urban-type settlements), reflecting problems in this sphere that have become particularly acute in small towns (Vyalykh, 2015). As for rural settlements, contrary to expectations, the overall degree of positive assessments was quite high. The quality of medicine (36.8%), leisure (33.3%), and recreation during vacation (39.3%) were among the most urgent problems considered by rural respondents. Meanwhile, the villagers were most satisfied with life in general (only 1.9% chose the 'bad' option).
Source: Authors
Personal security was estimated for the population of the border region. The concept of a 'border region' implies that the territory included in it experiences a significant impact from the state border, which in turn affects assessments of the level of personal security. In the case of the Rostov region, the situation is complicated by the proximity to the regions of the North Caucasus. Against the backdrop of the events in Syria, the threat of the spread of radical Islamism has increased. However, the empirical measurements of the last three years show that personal security remains a sphere in which the population of the region sees significant improvement (49%-56%); in comparison with the all-Russian indicators, the level of positive attitudes was almost 10 points higher among the inhabitants of the region (Volkov and Serikov, 2015). This result reflects the important work of state institutions in this area in the context of the humanitarian challenges. Nevertheless, the level of anxiety about personal safety increases with the complexity of the territorial infrastructure (7.7% in rural areas and 15.5% in the regional center).

The assessment of interethnic relations was an important component of the social well-being of the population of this multi-ethnic region. The results of the research by the scientists of the Southern Federal University (Volkov et al., 2016) show that, despite some distancing, the nature of the interaction between the ethnic groups remained calm (Table 4). However, concerning the ethnic flows of labor migration as a factor in interethnic relations, the estimates of the region's population were more polarized. The researchers of the Southern Federal University recognized migration to the Rostov region as having a distinctly ethnic character. Their research results showed a differentiation of the respondents' answers: one group held that migration had no influence on interethnic relations (30.8%), while others indicated varying levels of negative consequences (58.5%). It is also noted that 10.7% of the respondents found it difficult to provide a definite assessment. These residents were considered a 'risk group' since, in case of a deteriorating situation, they would likely join the group with negative attitudes (Volkov et al., 2016). Heightened tension occurs primarily in places of concentrated migrant settlement (the area in the southeast of the region with favorable landscape and climatic conditions for economic activity). In 2014, a more complicated situation arose due to the migrant influx from the southeast of Ukraine. Experts emphasized that the receiving environment could not absorb the migration without distress to local communities caused by the integration of the incoming population. The competition for economic, labor, and land resources in a situation of largely low employment in the rural areas led to accumulated discontent among the local population and a worsening of the general social and economic well-being. The high level of migrant group cohesion and their isolation and behavioral norms amplified hostilities in the host community (Serikov & Bedrik, 2017). Thus, labor migration to the region has a certain conflict potential.
While interethnic relations, as a component of social well-being, are viewed positively by the general population of the Rostov region, the expanding migration flows are a factor of increasing general anxiety. For the social well-being of those within the regional labor market, in terms of humanitarian challenges, the following trends were observed. The position and opportunities in the regional labor market are an important component of the social well-being of the residents of the border region in the face of economic instability, internal and external migration, and the increased competition of ethnic groups for certain niches within the employment areas. According to the data of the South Russia Branch of the FRSC of the Russian Academy of Sciences, the share of positive ratings in this sphere was quite low. Apart from the extreme negative gradings (below 11%), the majority of the respondents viewed favorably the chances for successful employment (56.4%) and the opportunities for professional fulfillment (52.4%). These indicators had not changed over the previous five years. The comparison of the spheres of potential employment indicated that the most prestigious social and professional positions in the region are difficult to obtain. The monitoring of the regional vacancy market showed that vacancies related to administrative personnel, banking and financial workers, and information technology rarely appear for general access. The territorial deformation of the regional labor market, mentioned above, affects the assessment of career opportunities. The absence of a variety of areas of employment in rural areas and small towns, caused by a small set of industries, significantly narrows professional choice. At the same time, the opportunities for obtaining additional knowledge were viewed positively by a third of the respondents (33.9% answered 'good'). This is most likely due to the rather high level of educational infrastructure development in the studied region.
Conclusion
The results of this study show that the frontier position of the Rostov region influences the main components of the population's social well-being. The humanitarian challenges faced by Russian regions in recent years, i.e., the increase in migrants, socio-economic instability, and social polarization, are experienced more strongly by the residents of the border area, which is subject to a complex geopolitical situation. This situation affects the nature of social well-being in the following ways, and a number of important conclusions can be drawn.

First, the border area, in most cases, becomes an area of slow economic development. The Rostov region belongs to the type of region with a rate of development below the national average. As a consequence, the slow economy limits the opportunities for material well-being of several population groups and creates tension in the regional labor market, which is a destructive factor for social well-being.

Second, the additional burden on the social and domestic infrastructure and the local labor market caused by external and internal migrants is perceived adversely by the host population and becomes a factor of expanding frustration. Such a trend is especially observed in small towns and rural settlements, which occupy the least favorable positions in the territorial differentiation because of low incomes, a lack of professional fulfillment and opportunities for human capital, and problems with social services. As a result, the level of migrant phobia has significantly increased: migrant laborers are considered potential carriers of social risks, and almost any domestic conflict is deemed interethnic.

Third, the proximity of local sources of conflict causes a higher level of tension in the border region than in Russia as a whole. The analysis of the general emotional and psychological state of the population showed a prevalence of residents who felt calm and balanced in everyday life. On the one hand, this indicates a high adaptive potential of the population in the face of challenges; on the other hand, it indicates their satisfaction with personal security, as evidenced by the high assessments of the state institutions' activities in this field.

Finally, to improve social stability, it is recommended that the public authorities of the Rostov region adopt a long-term strategy to increase the overall level of social well-being of the population. First and foremost, to mitigate the conflict potential of the migrant influx, the measures should increase labor market access, reduce unemployment, and further improve security in view of the proximity to the state border.
"Economics"
] |
Ensuring the Confidentiality of Nuclear Information at Cloud Using Modular Encryption Standard
Introduction
The IAEA (International Atomic Energy Agency) has provided recommendations on nuclear security for radioactive material and its associated resources. The general goal of a nuclear security regime is to ensure the security of people, society, and the environment from malevolent acts [1]. A "nuclear security plan" is created as a component of the nuclear security regime of a state. The confidentiality of nuclear information is a critical constituent of this plan. The demand for security of confidential information is not a new phenomenon [1]. Cryptography has been utilized for thousands of years in distinct domains, for example, in battles and in political and judicial affairs. It is the study of techniques for secure transmission, which dates back to ancient Greece and Egypt.
Confidential nuclear information is highly vulnerable these days because of the rise of cyber warfare, cyberterrorism, and hacking. One quite specific domain of information confidentiality with expanding risks is nuclear information security. Information confidentiality is a vital part of nuclear security. The security of nuclear systems encompasses the protection of nuclear information from the threats specific to nuclear frameworks. The aims of nuclear information security include protection against the theft of assets (both informational and physical); protection against cyberterrorist acts; and protection against a combined cyber assault, guaranteeing business continuity, nuclear protection, and assurance against the loss of nuclear confidential or otherwise classified information [1, 2]. The following section explains the cloud-computing backdrop.
Cloud computing (CC) refers to network-based computing which provides shared computing resources rather than relying on local servers or personal devices for high-level computation.
The generic model for cloud computing is shown in Figure 1.
But when we use CC for nuclear information monitoring, confidentiality is the first and foremost issue to consider [5]. So, this paper is concerned with nuclear information security in cloud computing using cryptography techniques.
The structure of the rest of the paper is as follows. Section 2 presents the literature review. Section 3 presents an overview of the proposed framework. Section 4 demonstrates the complete working of MES. Section 5 presents the performance analysis. Section 6 provides the application domain of the proposed scheme, and finally, the paper concludes with Section 7.
Problem Statement.
The general objective of nuclear security administration is to ensure the security of people, society, property, and the environment from the harmful utilization of nuclear information. Now, with the immense increase in cyberattacks, nuclear information security has gathered worldwide attention. Individuals and groups aiming to devise any malicious action involving radioactive or nuclear material may benefit from access to confidential information. Information monitoring and outsourcing to the cloud ought to be managed in order to guarantee that the information is not unintentionally imparted to or exposed to attackers or other intruders. Such information maintenance ought to be done by means of identification, classification, and securing with adequate measures according to the user-specified preferences.
Literature Review
Many researchers have proposed different approaches for the security of confidential information.
The Data Encryption Standard (DES) is the US Government-approved enciphering protocol. Its development embraced open review, and it is highly acclaimed and broadly acknowledged as a symmetric ciphering scheme in organizations and businesses. ANSI (American National Standards Institute) has acknowledged it as a US National Standard. It comprises 16 Feistel-cipher rounds. This algorithm takes 64-bit plaintext as the input and, using a 64-bit key, produces 64-bit ciphertext. The actual transformation is performed by 56 bits of the key, which are treated as independent bits, while the remaining 8 bits are used for error detection [7].
Triple DES was proposed after the DES protocol, introduced in the mid-1970s, which utilizes a 56-bit key. The effective security of 3DES relies on 112 bits against meet-in-the-middle attacks. Triple DES runs several times slower than DES yet is much more secure whenever utilized appropriately.
The method for deciphering is equivalent to the method for enciphering, except that it runs in the reverse direction [8].
In 1997, the US Government and NIST ("National Institute of Standards and Technology") initiated the search for a DES alternative. Because of advances in technology and processing power, DES had come to be regarded as less secure. The aim of this campaign by the US and NIST was the identification of a DES alternative for security, with a subsequent effort targeting nonmilitary applications: it would be useful for commercial as well as nongovernmental purposes. Among all the submissions, AES was selected as the most recent and most secure protocol, a block cipher with three distinct key sizes ranging from 128 bits to 256 bits [9].
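For readers who want to see what AES usage looks like in practice, the following hedged example uses the third-party PyCryptodome package with a 256-bit key in an authenticated (GCM) mode; it is a generic illustration of AES, not part of the scheme discussed in this paper.

```python
# Requires the third-party PyCryptodome package: pip install pycryptodome
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)                      # 256-bit key, the largest AES key size
plaintext = b"confidential record"

cipher = AES.new(key, AES.MODE_GCM)             # authenticated encryption mode
ciphertext, tag = cipher.encrypt_and_digest(plaintext)
nonce = cipher.nonce

# Decryption requires the same key and nonce, and verifies the integrity tag.
decipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
recovered = decipher.decrypt_and_verify(ciphertext, tag)
assert recovered == plaintext
```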
Blowfish is a Feistel-structure block cipher. As input, it takes a 64-bit data block. Its key size varies between 32 and 448 bits. Its execution encompasses 16 rounds. Two significant functions of this protocol are key expansion and data enciphering. There is no reliance on fixed S-boxes. It requires more execution time because of its variable key size. As the computation of the subkey sets uses extra time, a brute-force attack becomes extraordinarily difficult. Long-term security is provided by Blowfish, with no concealed weaknesses. Its reliability is strongly affected by the broad utilization of less secure keys. The first four rounds are exposed to differential attacks [9].
RC5 is a symmetric-key block cipher, prominent for its simplicity. Designed by Ronald Rivest in 1994, RC stands for "Rivest Cipher." A notable feature of RC5 is its heavy use of data-dependent rotations. It supports variable word sizes, a variable number of rounds, and a variable-length secret key. It comprises three parts: the encryption, decryption, and key expansion algorithms [10].
Data coloring and software watermarking techniques are used for protecting shared data objects with multiway authentication, which tightens access control over confidential information in any type of cloud, private or public. In [11], the authors propose that each file uploaded by the data owner is encrypted into a TTP-based hash code and the user receives a secure e-mail with the complete file. Other key details of secure cloud data sharing using a trusted third party are discussed there.
The privacy of confidential information remains the most significant concern while utilizing public cloud storage. One of the emerging techniques for addressing this problem is cryptography. Bentajer et al. [12] presented an ID-based encryption design, CS-IBE, for public cloud storage that aims to secure the confidentiality of sensitive information. In CS-IBE, configuration files are associated with single-file access practice, and the client ID is utilized as the encryption key. The user ID is used to encrypt the files before uploading them to third-party cloud providers, adding a layer of security for the outsourced information. Moreover, CS-IBE operates as an overlay framework over cloud storage. Dorairaj et al. proposed a system that depends on a multiaspect/multilevel scheme for secure data storage in the cloud. Initially, the data are evaluated, characterized, and divided appropriately if necessary. Furthermore, data are secured from intruders by utilizing enciphering techniques depending on their sensitivity and criticality. The confidential information is secured from unauthorized access through Mandatory Access Control that incorporates multifactor verification based on categorization. Consequently, all types of access, authorized or unauthorized, are recorded in a log register, which can be utilized later for foreseeing attacks and taking countermeasures, by reclassifying or updating the security measures to adapt to the changing sensitivity of the data as indicated by the business requirements [13].
Different security attacks were identified by classifying security concerns at multiple levels.A new dimension was provided in [14] by highlighting threats at each cloud layer.
These security layers can also be assigned access levels of low, medium, and high. The security requirements, e.g., data privacy, multitenancy, and data encryption, were mapped to different cloud security issues for achieving confidentiality and integrity in the cloud environment.
To enable a complete security transparency analysis in the cloud, a framework was proposed in [15]. In the cloud, security transparency is likely to become the key concern supporting a proper disclosure of security practices and designs that strengthen client confidence in cloud services. In this paper, the authors presented a structure that enables an investigation of security transparency for cloud-enabled frameworks. Specifically, they considered security transparency at three distinct degrees of abstraction, i.e., the organizational, technical, and conceptual levels, and from the perspective of these levels they also identified several important facets.
Data are protected if they satisfy three conditions, namely the CIA triad: Confidentiality, Integrity, and Availability. Confidentiality is attained with the assistance of cryptography in cloud computing. Symmetric enciphering algorithms, notably the Blowfish scheme, have shown remarkable results. In [16], a distinct variant of the Blowfish algorithm utilizing a Shuffle algorithm was proposed for ensuring data confidentiality in the cloud.
Remote information storage presents difficulties such as the unsystematic use of assets and the threat of insider attacks on data at rest in distributed storage. In [17], an architecture is proposed for the distributed allocation of storage for the reasonable use of resources, together with an integrated end-to-end protection system for data at rest to eliminate insider threats in cloud storage.
For addressing security requirements, HGKA-OA (i.e., a hierarchical group key agreement protocol using orientable attributes) is presented in [18]. Using this scheme, diverse confidential data can be shared among many individuals who have distinct levels of authority. Given distinct levels of authorization, an individual who holds confidential data can exchange them with certain individuals (i.e., those who have the specific degree of security permissions) instead of all members of the group.
When a client enciphers his information with a deterministic enciphering technique, a frequency attack is a critical issue to consider. Moreover, clients' information security is likely to be exposed to cloud servers when their encrypted information is updated. In order to tackle these issues, in [19], the author proposed an efficient enciphering query technique over data that are outsourced to a third-party provider. Using this protocol, clients' information is enciphered based on every single feasible query, to fulfill clients' requests. Moreover, a twofold AES enciphering strategy is proposed to tackle frequency attacks on deterministic encryption.
In [21], the author proposed a security module based on a "Field Programmable Gate Array (FPGA)" to mitigate man-in-the-middle attacks in nuclear power plants. It additionally provides support for applications that demand cybersecurity with embedded computing (i.e., a model-based engineering technique is provided). This FPGA-based security module is proposed to tackle attacks specifically intended to gain access to confidential information by obtaining physical access to the system.
A steganography tool based on the DCT is implemented in [2] to secure the nuclear reactor's confidential and sensitive data, utilizing a middle-frequency-based "sequential embedding strategy." It was shown that the proposed scheme improves the accuracy and security of the information and provides a large embedding capacity without distorting the resulting visual image.
Overview of the Proposed System
This scheme starts with the identification and classification of the intended information. The identification and classification are carried out from the perspective of the degree of security required. First of all, at the user side (the first level), an auto-generated key is provided through the auto-generated key module (i.e., entropy-based key generation with randomly generated bits, to make the key hintless). The data owner has a choice of keys and can choose any of the keys according to his/her requirements, e.g., as mentioned in the classification step. At the next level, data are encrypted to some extent through the extender/contractor module (this module accepts 56 bits and extends them to 64 bits by adding 4 random bits at the beginning and 4 random bits at the end). After passing through the extender/contractor module, the data are passed to the intermediary cloud. So, data are not handed over to the third-party (intermediary) cloud in their actual form but in the extended form. Here, the intermediary cloud is responsible for cryptography (performing step 3, i.e., securing); the data are split into various blocks, and these blocks are stored at different clouds in encrypted form.
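A minimal sketch of the extender/contractor idea is given below, treating a 56-bit block as an integer and assuming the 4 random bits are simply prepended and appended as described; the exact bit layout used by the actual module is not specified here, so this is an illustrative interpretation.

```python
import secrets

def extend(block56: int) -> int:
    """Extend a 56-bit block to 64 bits by adding 4 random bits at the
    beginning and 4 random bits at the end (as described for the module)."""
    assert 0 <= block56 < (1 << 56)
    head = secrets.randbits(4)
    tail = secrets.randbits(4)
    return (head << 60) | (block56 << 4) | tail

def contract(block64: int) -> int:
    """Drop the 4 leading and 4 trailing padding bits to recover the 56-bit block."""
    return (block64 >> 4) & ((1 << 56) - 1)

original = secrets.randbits(56)
assert contract(extend(original)) == original
```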
As this is a modular encryption algorithm, one module (the autogenerated key) runs on the user side, a second module performs extension/contraction before uploading to the cloud, and the remaining modules run on the intermediary cloud; this modular approach at distinct levels yields multilevel security. Thus, even a CSP cannot access the confidential data, because each CSP holds only one block of data (in encrypted form) rather than the entire data set, giving protection not just from outsiders but from insider access as well. When a user requests data, the request goes to the intermediary cloud as well as to the data owner. The requested data are returned to the user through the intermediary cloud, while the encryption scheme and the extender/contractor parameters are shared by the data owner with the user through a secure message. The multilevel modeling of our proposed algorithm can be seen in Figure 2. Our proposed framework does not specify any particular type of cloud (private, public, community, or hybrid) and can be used in any cloud environment, but we have proposed a new cloud storage framework utilizing the multicloud and modular paradigm.
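As a rough illustration of the flow just described (extension of the data before upload and block-wise distribution across several clouds by the intermediary), the following sketch models the block-splitting and multicloud storage step in Python. All names here, such as split_into_blocks, distribute, and the dictionary-based cloud stores, are illustrative stand-ins and not part of the proposed system.

```python
import secrets

def split_into_blocks(data: bytes, block_size: int = 7) -> list:
    """Split data into 56-bit (7-byte) blocks, zero-padding the last block."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    if blocks and len(blocks[-1]) < block_size:
        blocks[-1] = blocks[-1].ljust(block_size, b"\x00")
    return blocks

def distribute(blocks, clouds):
    """Round-robin storage of (already encrypted) blocks across several cloud
    stores, so no single provider ever holds the complete data."""
    for idx, block in enumerate(blocks):
        clouds[idx % len(clouds)][f"block-{idx}"] = block

# Illustrative use: three hypothetical cloud stores modelled as dictionaries.
cloud_stores = [{}, {}, {}]
payload = secrets.token_bytes(35)   # stand-in for extended/encrypted data
distribute(split_into_blocks(payload), cloud_stores)
```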
Our proposed algorithm lies in category "c" as identified in [6]. Hence, this approach protects not only against insider attacks but also against unauthorized outsider access, and it can serve any other requirement calling for a cryptography-based symmetric block cipher.
Methodology
A multilevel security approach for nuclear information in the cloud, built on MES, is proposed as shown in Figure 3. Our scheme comprises three major steps: Identification, Classification, and Securing. This solution can work with any type of cloud.
Identification.
The need for protecting nuclear information is governed by performing identification and classification according to the degree of risk. Here, we have to identify the sensitivity and criticality of the data. The identification of nuclear information is based upon user-specified requirements. It generally has two broad categories (with further subcategories): confidential information, which requires protection, and public information, which does not.
Public information: (i) Category 1: publicized information.
Confidential information: (ii) Category 2: information selected to be kept confidential, but whose disclosure would not result in any risk; (iii) Category 3: information whose disclosure would threaten damage to nuclear warheads; (iv) Category 4: information whose disclosure would almost certainly result in a serious threat to nuclear weapons or to people; (v) Category 5: information whose disclosure would almost certainly result in a severe threat related to nuclear warfare.
Accordingly, there are five categories based on the confidentiality level of the information. The user specifies his preferences for securing the information (category 1, category 2, etc.) based upon its sensitivity.
4.2. Classification. The nuclear information classification scheme decides the level of confidentiality of the information. This is helpful in choosing the information that actually needs protection, and it ultimately reduces the cost of security. We have classified these two categories into five different subcategories (based upon the sensitivity level).
These five subcategories are listed below. In the securing step, we propose five different types of keys, one for each subcategory (based upon the sensitivity level of the nuclear information):
Public information: (i) nonconfidential information.
Confidential information: (ii) less confidential information; (iii) moderately confidential information; (iv) highly confidential information; (v) extremely highly confidential information.
4.3. Securing. Here, we explain the entire securing step, after the type of information has been classified, as shown in Figure 4.
This algorithm is a symmetric block cipher; it accepts 56-bit plaintext as input (the block size). The categorization of our proposed algorithm can be seen in Figure 3, and a simplified encryption model is illustrated in Figure 5.
Extension/Contraction.
The extension (on the encryption side) is the addition of 4 random bits at the beginning and 4 random bits at the end of the 56-bit chunk. Conversely, the contraction on the decryption side removes 4 bits from each side. The data then pass through the key-whitening step. The main reason for this extension is that data are not handed to the third-party cloud provider in their actual form but in extended form.
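A minimal sketch of the extension and contraction steps as described above, operating on bit strings for clarity (the bit-string representation is our assumption; the paper does not prescribe an implementation):

```python
import secrets

def extend_56_to_64(bits56: str) -> str:
    """Extension (encryption side): prepend and append 4 random bits."""
    assert len(bits56) == 56
    return format(secrets.randbits(4), "04b") + bits56 + format(secrets.randbits(4), "04b")

def contract_64_to_56(bits64: str) -> str:
    """Contraction (decryption side): drop 4 bits from each end."""
    assert len(bits64) == 64
    return bits64[4:-4]

chunk = format(secrets.randbits(56), "056b")
assert contract_64_to_56(extend_56_to_64(chunk)) == chunk
```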
Key Whitening.
This step consists of three substeps: expansion, key addition, and contraction. We first expand the data from 64 bits to 128 bits. The expansion process is shown in Figure 6.
This step processes the data before the 1st round by performing key addition (merging the data with portions of the key). A large key is used here to increase security, and for key addition the data size must equal the key size. Therefore, we expand the data first and, after key whitening, contract them back to their actual size.
One data block has eight bytes, which are converted to sixteen bytes by generating two bytes from each one.
For this, we XOR every pair of bits and insert the result as a third bit after each pair, converting every eight bits into twelve bits. We then XOR every group of three bits and insert the result as a fourth bit after each group, so that every byte (8 bits) has been converted to 16 bits. This step is performed on all 8 bytes, yielding 16 bytes (the expansion). Key whitening is then performed by XNORing key 0 (XNOR checks the logical equality of the input bits) with the expanded 128-bit data. After this key addition, we contract the data by discarding every two bits that follow two kept bits (the 3rd, 4th, 7th, 8th, and so on), leaving 64-bit data at the end of the key-whitening step. The data then pass to round 1.
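One plausible reading of the expansion, key addition (XNOR with key 0), and contraction just described is sketched below; the bit-grouping conventions are our interpretation of the text, not a reference implementation.

```python
def expand_byte(b8: str) -> str:
    """8 -> 12 bits: append the XOR of each bit pair after that pair;
    12 -> 16 bits: append the XOR of each bit triple after that triple."""
    twelve = "".join(b8[i:i + 2] + str(int(b8[i]) ^ int(b8[i + 1]))
                     for i in range(0, 8, 2))
    return "".join(twelve[i:i + 3]
                   + str(int(twelve[i]) ^ int(twelve[i + 1]) ^ int(twelve[i + 2]))
                   for i in range(0, 12, 3))

def xnor(a: str, b: str) -> str:
    return "".join("1" if x == y else "0" for x, y in zip(a, b))

def key_whitening(data64: str, key0_128: str) -> str:
    """Expand 64 -> 128 bits, XNOR with key 0, then contract 128 -> 64 bits
    by keeping two bits and dropping the next two (3rd, 4th, 7th, 8th, ...)."""
    expanded = "".join(expand_byte(data64[i:i + 8]) for i in range(0, 64, 8))
    whitened = xnor(expanded, key0_128)
    return "".join(whitened[i:i + 2] for i in range(0, 128, 4))
```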
Encryption
(1) Round 1. (i) Permutation (odd round): the 64-bit data are arranged in an 8 × 8 matrix by taking the first two rows (16 bits) from left to right; the numbers in Figure 7 represent the positions of the bits. We process the matrix two rows at a time: the first two rows are taken from left to right and placed in a 4 × 4 matrix, as shown in Figure 8; the next two rows of the 8 × 8 matrix are then taken from right to left according to Figure 7, and so on. The 4 × 4-matrix permutation process is shown in Figure 9. First, the first two rows of Figure 7 (the first 16 bits) are placed in a 4 × 4 matrix.
The permutation is performed as follows (Figure 9): the bit in the 1st position replaces the bit in the 16th position, the 4th replaces the 13th, the 6th replaces the 11th, and the 7th replaces the 10th, just like a small matrix inside a larger one. Figure 10 shows the resulting 4 × 4 matrix after permutation.
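Reading the described replacements as pairwise swaps within each 16-bit group, the odd-round permutation could be sketched as follows (the left-to-right/right-to-left row-reading order of Figures 7-9 is omitted here for brevity):

```python
def odd_round_permute_16(bits16: str) -> str:
    """Permute one 16-bit group (a 4 x 4 matrix): swap the bit pairs
    (1, 16), (4, 13), (6, 11), (7, 10), using 1-based positions."""
    assert len(bits16) == 16
    out = list(bits16)
    for a, b in [(1, 16), (4, 13), (6, 11), (7, 10)]:
        out[a - 1], out[b - 1] = out[b - 1], out[a - 1]
    return "".join(out)

def odd_round_permute_64(bits64: str) -> str:
    """Apply the 4 x 4 permutation to each of the four 16-bit groups."""
    return "".join(odd_round_permute_16(bits64[i:i + 16]) for i in range(0, 64, 16))
```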
(ii) Shifting: the 64-bit matrix is shifted in an "X" pattern. The bit in the 1st position replaces the bit in the 8th position, the 10th replaces the 15th, the 19th replaces the 22nd, the 28th replaces the 29th, and so on. The triangles below, above, left, and right of the cross are exchanged as follows: the left triangle replaces the right triangle, and the upper triangle replaces the lower one. This step provides diffusion. Figure 11 shows the matrix before shifting and Figure 12 the 8 × 8 matrix after shifting.
(iii) Substitution: substitution is performed by looking up values in the s-box (lookup table), i.e., Table 1. The input is the 64-bit matrix with 8 bits per row; each row is divided into two nibbles (the first four bits of a row form one nibble and the second four bits the other). Each nibble is substituted with the corresponding s-box value: the first two bits of the nibble select the s-box row and the last two bits select the column, which together identify the s-box cell whose value is the substituted nibble. This step provides confusion (making the output hintless).
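A sketch of the nibble-wise substitution follows; the 4 × 4 s-box below is only a placeholder, since the actual values of Table 1 are not reproduced in the text.

```python
# Placeholder 4 x 4 s-box of nibble values; the paper's Table 1 is not reproduced here.
SBOX = [
    [0x6, 0x4, 0xC, 0x5],
    [0x0, 0x7, 0x2, 0xE],
    [0x1, 0xF, 0x3, 0xD],
    [0x8, 0xA, 0x9, 0xB],
]

def substitute_nibble(nib: str) -> str:
    """First two bits select the s-box row, last two bits select the column."""
    row, col = int(nib[:2], 2), int(nib[2:], 2)
    return format(SBOX[row][col], "04b")

def substitute_64(bits64: str) -> str:
    return "".join(substitute_nibble(bits64[i:i + 4]) for i in range(0, 64, 4))
```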
(iv) Key addition: the 64-bit data are XORed with the first 64 bits of key 1. (v) Key subtraction: the data are then XNORed with the next 64 bits of key 1.
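The key addition and key subtraction of a round can be sketched as bitwise XOR and XNOR with the two halves of the 128-bit round key (the bit-string representation is again illustrative):

```python
def xor_bits(a: str, b: str) -> str:
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

def xnor_bits(a: str, b: str) -> str:
    return "".join(str(1 - (int(x) ^ int(y))) for x, y in zip(a, b))

def key_add_subtract(data64: str, round_key_128: str) -> str:
    """Key addition: XOR with the first 64 key bits;
    key subtraction: XNOR with the next 64 key bits."""
    added = xor_bits(data64, round_key_128[:64])
    return xnor_bits(added, round_key_128[64:128])
```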
(2) Round 2. (i) Permutation (even round): in round 2, the permutation differs from round 1. Figure 13 shows the 64-bit matrix. Extract the first sixteen bits and write them into a 4 × 4 matrix (as depicted in Figure 14); each row of that matrix (a nibble) is then written into a 2 × 2 matrix and permuted by swapping its diagonal bits (the 2nd and 3rd), as shown in Figure 15. After processing the four rows in this way and writing them back into a 4 × 4 matrix, we permute this 4 × 4 matrix again by exchanging the first and last bits (1st and 16th) and replacing each diagonal digit with the other entry on the same diagonal, as shown in Figure 16. The bit in the 2nd position replaces the bit in the 5th position, the 3rd replaces the 9th, the 6th remains unshifted, the 7th replaces the 10th, the 4th replaces the 13th, the 8th replaces the 14th, the 11th remains unshifted, and the 12th replaces the 15th. In this way, 16 bits are permuted at a time; these steps are repeated for the complete 64 bits by taking 16 bits of data at a time. The same permutation is used in the 3rd round, while in the 4th round the 1st round's permutation is used. Therefore, the 1st, 4th, and 7th rounds share one permutation and all other rounds share the other. Figure 15 shows the 2 × 2-matrix permutation, and the 4 × 4-matrix permutation is shown in Figure 16. The rest of the steps are the same as in round 1.
Decryption
(1) Round 1. (i) Key subtraction: the 64-bit ciphertext is XORed with 64 bits of key 9. (ii) Key addition: the data are XNORed with the next 64 bits of key 9. (iii) Inverse substitution: values are looked up in the inverse s-box. (iv) Inverse shifting: the same as the shifting in encryption. (v) Inverse permutation: the same as in encryption but in the opposite direction (the arrows are reversed). In the second round, all other steps are the same as in round 1 and only the inverse permutation differs, mirroring the round-2 permutation defined for encryption.
Key Transformation (5 Types of Keys)
(1) For a 128-Bit Key. The key is divided into 4 chunks (32 bits, i.e., 4 bytes, per chunk).
Step 1: multiplications are performed between the 1st and 3rd chunks and between the 2nd and 4th chunks (byte-by-byte multiplication). This yields two chunks, each consisting of four parts.
Step 2: by taking the highest common factor (HCF) of the 1st parts of the two chunks, of their 2nd parts, of their 3rd parts, and of their 4th parts, we obtain a single chunk with four parts.
Step 3: each part may exceed 8 bits; it is reduced to 8 bits by dividing each part by the fixed prime polynomial x^8 + x^6 + x^3 + x^2 + x. The chunk obtained thus has four parts, each equal to 1 byte (32 bits in total).
Step 4: for each byte, we compute the XOR of the bits of its first nibble and the XOR of the bits of its second nibble. This gives two bits per byte, which are XORed with each other, yielding a single bit per byte. Since the chunk has four bytes, processing all bytes gives 4 bits. XNORing the first two bits and the 3rd and 4th bits gives 2 bits; XORing these two bits reduces the whole four-byte chunk to a single bit.
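Step 4 can be sketched as follows, assuming the chunk is supplied as a 32-bit string (our assumption about representation):

```python
def nibble_parity(bits: str) -> int:
    """XOR of all bits in a nibble."""
    return sum(map(int, bits)) % 2

def fold_chunk_to_bit(chunk32: str) -> int:
    """Step 4: per byte, XOR the two nibble parities (one bit per byte, 4 bits total);
    XNOR bits 1-2 and bits 3-4, then XOR the two results into a single bit."""
    assert len(chunk32) == 32
    byte_bits = [nibble_parity(chunk32[i:i + 4]) ^ nibble_parity(chunk32[i + 4:i + 8])
                 for i in range(0, 32, 8)]
    xnor = lambda x, y: 1 - (x ^ y)
    return xnor(byte_bits[0], byte_bits[1]) ^ xnor(byte_bits[2], byte_bits[3])
```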
Step 5: this single bit then processes each byte of the chunk obtained after step 3 (derived from the mathematical concept of synthetic division). We name this operation "Synthesion"; the process is shown in Figure 17.
Step 6: this yields a new chunk of four bytes. This single chunk is XNORed with each of the four chunks obtained at the beginning, before step 1 (on key 0). The new chunk obtained in this way is key 1. The same steps are repeated for key 2, key 3, key 4, and so on up to key 9. Thus, 10 keys are generated by transforming key 0 nine times.
(2) For a 160-Bit Key. The key is divided into 5 chunks (each of 32 bits). The fifth chunk is not used at the beginning; instead, it is XNORed with the chunk obtained after step 3. The same steps are then performed in the same sequence, except step 6, where only the last four chunks of key 0 are used for the XNOR (instead of the complete key 0).
(3) For a 192-Bit Key. Step 1: modulo-2 multiplications are performed between the 1st and 4th chunks, the 2nd and 5th chunks, and the 3rd and 6th chunks, resulting in three chunks; the first and third of these are then multiplied again (modulo-2). This output chunk is placed first, followed by the second chunk, leaving two chunks.
Step 2: the HCF is taken of the 1st parts of the chunks, their 2nd parts, their 3rd parts, and their 4th parts; this gives a single chunk. The remaining steps are the same as for the 128-bit key, except step 6, where only the middle four chunks of key 0 are used for the XNOR (instead of the complete key 0).
(4) For a 224-Bit Key. The key is divided into 7 chunks. The seventh chunk is used only for the XNOR with the chunk obtained after step 3, leaving the first six chunks. The rest of the key transformation follows the same step sequence and method as the 192-bit key.
(5) For a 256-Bit Key. Step 1: modulo-2 multiplications are performed between the 1st and 5th chunks, the 2nd and 6th chunks, the 3rd and 7th chunks, and the 4th and 8th chunks, giving four chunks. The first and fourth and the second and third of these are then multiplied (modulo-2), giving two chunks.
Step 2: the HCF is taken of the 1st parts of the chunks, their 2nd parts, their 3rd parts, and their 4th parts, giving a single chunk. The remaining steps are the same except step 6, where key 0 is split into two parts (one part formed from the first two and last two chunks, each chunk with 4 sub-parts, and the second part from the middle four chunks). These two parts are first XNORed, resulting in a single chunk (with four sub-parts), which is then XNORed with the result of step 5. The rest of the key transformation is the same as explained for 128 bits, and the subsequent keys (key 2, key 3, up to key 9) also use the 128-bit key transformation steps. Regardless of whether the key is 128, 160, 192, 224, or 256 bits, the key-whitening step uses only 128 bits, and key addition and key subtraction each use 64 bits at a time.
Key Encryption and Decryption
(1) Selection of the Forward (Encryption) and Backward (Decryption) Key Number. After encrypting the data and converting them to ciphertext, we encrypt the key as well. For example, for the 128-bit key (4 chunks of 32 bits), each chunk has 4 bytes, i.e., 8 nibbles. XORing the bits of each nibble gives 8 bits, and XORing those 8 bits gives a single bit; therefore, each chunk (32 bits, 4 bytes, or 8 nibbles) is reduced to a single bit, and the 4 chunks give 4 bits. These four bits are converted to decimal and reduced modulo 26. The remainder is used for the forward and backward intervals (as in the Caesar cipher, except that the Caesar cipher uses a fixed shift of 3 whereas we use the computed remainder).
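A sketch of how the forward/backward interval could be derived from a 128-bit key as described (the per-chunk folding uses nibble parities, as in the text; the representation as a bit string is our assumption):

```python
def chunk_to_bit(chunk32: str) -> int:
    """XOR each of the 8 nibbles down to one bit, then XOR those 8 bits together."""
    bits = [sum(map(int, chunk32[i:i + 4])) % 2 for i in range(0, 32, 4)]
    out = 0
    for b in bits:
        out ^= b
    return out

def caesar_shift_from_key(key128: str) -> int:
    """Fold each 32-bit chunk to one bit, read the 4 bits as a decimal number,
    and reduce modulo 26; the result is the forward/backward interval."""
    folded = "".join(str(chunk_to_bit(key128[i:i + 32])) for i in range(0, 128, 32))
    return int(folded, 2) % 26
```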
(2) Key Encryption. First, each nibble is converted to decimal, and this number is matched against the alphabetic table.
Then the matched entry is moved forward by as many positions as the remainder, and the letter reached after this forward shift is the ciphertext form of that nibble.
(3) Key Decryption. For decryption, the same letter is matched in the table, moved backward by as many positions as the remainder (this number is shared between the sender and receiver, in particular for data sharing between the data owner and the user), and the matched number is converted back to binary (the required nibble). The key encryption and decryption schemes can be seen in Figure 18, and the proposed system hierarchy is shown in Figure 19.
(4) S-Box. The s-box and inverse s-box are designed in a nonlinear way. Table 1 shows the s-box values.
Significance of the Proposed Algorithm
(i) Our proposed algorithm provides five keys (two more than AES) with no extra bit utilization (256 bits, the same as AES). (ii) It includes key encryption as well (to enhance security). (iii) It is a modular algorithm: some modules run on the user side (the autogenerated key), the extender/contractor module extends the data before they are uploaded to any cloud, and the remaining modules run on the intermediary cloud; the overall benefit is that data are never handed over to any third party in their actual form. (iv) Every key is used in two different ways, i.e., one half for XORing and the other half for XNORing (except in key whitening). (v) Every key transforms the data twice (except in key whitening); therefore the key is mixed with the data 18 times rather than 9 times (for the 9 rounds), since key addition and key subtraction each subsume the data with the key. This is the main reason for using 64-bit data blocks and a large key: a single key subsumes a single block of data (the 8 × 8 matrix) twice. The technique is proposed against insider and outsider attacks thanks to its multilevel security and multicloud utilization. A comparative analysis of the proposed scheme with commonly used algorithms is also presented in Table 3.
(1) Sender. Step 1: we start from the actual 56-bit data. Step 2: the 56-bit data are extended to 64 bits, equation (1). Step 3: the data are temporarily expanded from 64 bits to 128 bits for key whitening, equation (2). Step 4: the data are contracted from 128 bits back to 64 bits, equation (3). Step 5: permutation is applied according to whether the round is even or odd, equation (4). Step 6: substitution based on the s-box (lookup table), followed by key addition and key subtraction, equation (5).
(2) Receiver. Step 1: at the receiver side, key subtraction first cancels the effect of the key subtraction performed at the sender side, equation (6); the XOR with K_Ri on the receiver side cancels the XNOR with K_Ri on the sender side. Step 2: the XNOR with K_Li on the receiver side cancels the XOR with K_Li on the sender side, equation (7). Step 3: inverse substitution cancels the substitution performed on the sender side, equation (8). Step 4: inverse permutation cancels the permutation performed on the sender side, equation (9). Step 5: this key addition cancels the effect of the key whitening on the sender side, equation (10). Step 6: the 64-bit data are contracted to 56-bit data. Equation (12) shows the resulting plaintext.
4.3.9. The Proposed System Hierarchy. Algorithm 1 shows the data encryption scheme, Algorithm 2 the key transformation, and Algorithm 3 the key encryption scheme.
Algorithm 1: Data encryption. (1) Key selection out of the five keys based upon identification and classification. (2) Entropy-based key generation through the autogenerated key module. (3) Extension of the 56-bit data chunks to 64 bits through the extender. (4) Key transformation. (5) Expansion of the 64-bit data to 128 bits. (6) XNOR of key 0 with the expanded 128-bit data. (7) …
Algorithm 2: Key transformation. (1) Key generation. (2) Division of the key into chunks. (3) From the HCF, obtain a single chunk. (4) Division by the fixed prime polynomial to obtain the standardized form. (5) Reduction of the four bytes to a single bit. (6) Single-bit processing ("Synthesion") with the output of step 5. (7) XNOR of the output of step 6 with the key obtained in step 1. (8) Resulting key. (9) Exit.
Performance Analysis
The proposed encryption protocol is implemented with the following hardware and software specifications.
This section presents the outcomes of the performance analysis of our proposed scheme. We performed a module-wise CPU time consumption analysis of MES as well as a comparative analysis of MES and AES encryption times. The experimental setup is described in Table 4.
CPU Consumption. Table 5 shows the module-wise CPU consumption in seconds.
CPU utilization is the time a CPU consumes for a specific computing process and mirrors the CPU load: the more the CPU is used in the encryption procedure, the greater the load. The execution time of the major modules of the proposed scheme is provided here. We measured the CPU cycle time for each round using different input data sizes. Key transformation consumes relatively more CPU cycles than the other modules. The execution times obtained for the proposed scheme demonstrate its practicality.
Table 6 shows the results of the performance analysis of MES on distinct processor types, compared across different data sizes. We ran this experiment on multiple Intel generations and measured their CPU consumption time, in order to check efficiency across platforms. The elapsed times of MES and AES encryption were also measured; Figure 20 shows the comparison of AES and MES in terms of CPU cycle consumption.
Encryption time is the time taken by the encryption scheme to convert plaintext to ciphertext. It is used to calculate the throughput and indicates the encryption speed: less encryption time means higher throughput, and the higher the throughput, the lower the power consumption of the enciphering scheme. We used different input sizes and obtained the results below: the elapsed time for AES was consistently higher than that for MES. Figure 20 shows these results in seconds.
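The comparison above can be reproduced in spirit with a simple timing harness like the one below; it is a generic sketch, not the authors' measurement setup, and `encrypt` stands for any MES or AES implementation supplied by the reader.

```python
import time

def measure_encryption(encrypt, plaintext: bytes, repeats: int = 10):
    """Return (elapsed_seconds, throughput_bytes_per_second) for an encryption callable."""
    start = time.perf_counter()
    for _ in range(repeats):
        encrypt(plaintext)
    elapsed = time.perf_counter() - start
    return elapsed, repeats * len(plaintext) / elapsed

# Example (hypothetical callables):
# elapsed_mes, tput_mes = measure_encryption(mes_encrypt, data)
# elapsed_aes, tput_aes = measure_encryption(aes_encrypt, data)
```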
Memory Consumption.
One of the most significant parameters for performance analysis is memory consumption.
Figure 21 shows the memory consumption graph of MES. This experiment was carried out with the Visual Studio diagnostic (analysis) tools.
The diagnostic session of MES lasted 10.804 seconds, with memory consumption on the order of kilobytes (KB).
Figure 22 shows the memory consumption graph of AES. The diagnostic session of AES lasted 15.266 seconds, and the memory consumption reached the order of megabytes (MB).
Key Variances.
Figure 23 shows the key variance, i.e., the range of key options available to the user for attaining the desired security level. MES provides five distinct key types, matching the five classifications of nuclear information, whereas AES provides three key types and DES and IDEA provide a single key type. Figure 23 therefore shows that MES offers the highest degree of key variance.
Single-Round Key Subsuming. Every single key transforms the data twice (except in key whitening). Figure 24 compares MES, DES, AES, and IDEA from the single-round key-subsuming perspective: MES transforms the data twice within a single round, unlike AES, DES, and IDEA.
Application of the Proposed Scheme
A few regions inside nuclear systems that could be susceptible to cyberattacks are: (i) information exchanged among monitoring stations; (ii) information from monitoring centers to missiles and missile stations; (iii) telemetry information from the projectile to space- and ground-based monitoring resources; (iv) information from space-based frameworks, including navigational, positional, and timing information for worldwide navigation systems; (v) climate information from ground-, air-, and space-based sensors; (vi) positioning information for deployed platforms (for example, submarines); (vii) ground station information; (viii) information exchanged among joined controlling stations [20]. Such nuclear information is highly vulnerable to cyberattacks; therefore, we propose an enciphering scheme to protect nuclear information confidentiality.
Algorithm 3: Key encryption. (1) From the 4 chunks of the key, we obtain 4 bits. (2) Convert them to decimal and take the remainder modulo 26. (3) The remainder is used for the forward and backward intervals. (4) Begin. (5) Convert each nibble to decimal. (6) Match that number in the alphabetic table. (7) Move the matched entry forward by as many positions as the remainder. (8) Resulting ciphertext. (9) Resulting key. (10) Exit.
Conclusion
The duty of guaranteeing the presence and efficient functioning of a state's nuclear security regime rests with the state's government. Guaranteeing the security of confidential information is a fundamental constituent of the nuclear security regime that the state ought to uphold. Cloud computing is renowned for providing information technology services.
These days, organizations and communities are keen on moving their large computations and data into the cloud. Because data must be protected throughout their lifetime, data confidentiality in the cloud is a significant issue to tackle, given that confidential nuclear information is under an outsider's control. The threat to nuclear information confidentiality can come from insiders (the CSP) or from outsiders (intruders). Applying security at multiple levels makes a system more secure than applying it at a single level; with this intention, we proposed a protection-oriented modular encryption scheme (MES). Performance analysis of the proposed scheme shows favorable results compared to other commonly used schemes. | 8,888.6 | 2019-12-18T00:00:00.000 | [
"Computer Science"
] |
Deconstructing flavor anomalously
Abstract: Flavor deconstruction refers to ultraviolet completions of the Standard Model in which the gauge group is split into multiple factors under which fermions transform non-universally. We propose a mechanism for charging same-family fermions under different factors of a deconstructed gauge theory in a way that avoids gauge anomalies. The mechanism relies on the inclusion of a strongly-coupled sector responsible for both anomaly cancellation and the breaking of the non-universal gauge symmetry. As an application, we propose different flavor deconstructions of the Standard Model that, instead of complete families, uniquely identify specific third-family fermions. All these deconstructions allow for a new-physics scale that can be as low as a few TeV and provide an excellent starting point for explaining the Standard Model flavor hierarchies.
Introduction
Flavor universality of the Standard Model (SM) gauge sector could be an emergent property below the TeV scale. The possibility of new physics (NP) that extends the SM gauge sector in a flavor non-universal manner opens an interesting path to explain the SM flavor hierarchies while providing a rich phenomenology. Although flavor observables such as meson mixing put strong bounds on non-universal physics between the first and second families, pushing the corresponding scale to the PeV range, the scale of universality breaking between the third and light families can be as low as a few TeV [1].
This hierarchy of energies suggests that flavor can be addressed by having multiple NP scales [2-5], something that is achieved by deconstructing flavor. In analogy with the deconstruction of an extra dimension [6-8], we can extend some of the factors of the SM gauge group into identical products that break to their diagonal subgroup at low energies. Different SM fermions can then be charged under the different factors, resulting in violations of flavor universality. Whereas it is phenomenologically viable to break universality between the first and second families only above the PeV scale, these models are effectively described around the TeV scale by a gauge group that charges the light families universally and the third family differently. Examples of models deconstructing some of the SM factors, or their UV completions, can be found in [9-18]. Among them, a particularly interesting realization is the so-called 4321 models [10, 11], a color deconstruction based on the G_4321 ≡ SU(4) × SU(3)' × SU(2)_L × U(1)_X gauge group. This class of models has been extensively studied [19-23], as it provides one of the most compelling explanations for the observed hints of deviations from the SM predictions in B decays [24, 25]. While these hints have become less significant nowadays, 4321 models present multiple theoretically appealing features that go beyond the explanation of these anomalies. For instance, they allow for low-scale unification of third-family quarks and leptons à la Pati-Salam [26] and provide a natural explanation for the smallness of the Cabibbo-Kobayashi-Maskawa (CKM) mixing with the third family.
In general, flavor-deconstructed models require a sector responsible for the breaking of the extended symmetry to the SM gauge group. A common approach consists in the inclusion of link scalar fields: fields charged under two group factors that spontaneously break them to their diagonal subgroup once they acquire a vacuum expectation value (vev). In such scenarios, if there are no fermions other than those in the SM, the cancellation of gauge anomalies requires charging full SM generations under each group factor.¹ This imposes some rigidity on the construction of these models that, in certain situations, it could be interesting to relax. A radiatively stable alternative for the symmetry breaking, analogous to technicolor theories of electroweak (EW) symmetry breaking, is to add a hyper-sector with strong dynamics that develops condensates breaking the UV symmetry. This avoids the inclusion of extra ad hoc scalars. Furthermore, if the composite sector responsible for the lowest breaking confines at the TeV scale, it could be connected to composite-Higgs solutions of the hierarchy problem, similarly to the 4321 model presented in [29].
In this article, we explore the possibility of charging new fermionic degrees of freedom of a hypothetical strongly-coupled sector so that we can split the SM fermions among different factors of a deconstructed SM symmetry while avoiding gauge anomalies. Although these fermionic fields are chiral and have to be massless, the strong dynamics ensures that they do not appear as asymptotic states but only as partons of new hyper-baryons of the composite sector at the confinement scale. A variant of this idea is actually realized in nature: the breaking of SU(N_f)_L × SU(N_f)_R × U(1)_{B−L} → SU(N_f)_V × U(1)_{B−L} triggered by the quark condensate once QCD becomes strongly coupled, yielding a breaking of the electroweak group to electromagnetism at the QCD scale. If we imagine a parallel universe where the Higgs does not take a vev (or is absent), below the QCD scale we would only see leptons generating anomalies of the EW symmetry, which are canceled only when the quark sector is taken into account. We extrapolate this mechanism to beyond-the-SM physics. This paper is structured as follows: in Section 2 we explore the deconstruction of two toy models that encode the main ideas and show how anomaly cancellation is described both in an effective chiral description and in a holographic description of the strong dynamics. The application of these ideas to specific SM deconstructions is discussed in Section 3. As a prototypical example, we present a novel 4321 implementation where the top quark is uniquely identified by the gauge symmetry, as justified by the smallness of the bottom and all other Yukawa couplings. We further discuss alternative SM deconstructions relying on this mechanism. We conclude in Section 4.
General idea
Before tackling specific SM extensions, we present in this section two toy model examples that encapsulate, in a simpler manner, how we can use a composite sector for flavor deconstructions of a gauge symmetry.
Toy model I
Let us consider the deconstruction of an SU(N) gauge theory with several families of a fermion field ψ^i_{L,R} (i = 1, ..., N_ψ) transforming in a complex vector-like representation. The vector-like character of the theory makes it trivially anomaly-free. Let us assume we want to extend the original symmetry to SU(N)_1 × SU(N)_2. For fields charged under SU(N)_{1(2)}, we will say they belong to the first (second) site. The breaking of this extended gauge group to the diagonal SU(N)_V can be triggered by a composite sector with a gauge symmetry SU(N_HC) and N hyper-quarks ζ_{L,R} transforming in a vector-like complex representation. This sector then has a global symmetry SU(N)_L × SU(N)_R × U(1)_V, so the SU(N) factors can be identified with the extended gauge symmetry, SU(N)_{1(2)} ≡ SU(N)_{L(R)}. When the hyper-sector confines, the hyper-quarks form a condensate, ⟨ζ̄^α_L ζ^β_R⟩ ∝ δ^{αβ}, that breaks the symmetry to its diagonal subgroup, so below the scale of this condensate we recover the starting SU(N) model. This breaking yields N² − 1 would-be Goldstone bosons, all of them eaten by the gauge bosons that get mass.
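For reference, the condensate and breaking pattern described in this paragraph can be written compactly as follows (a restatement of the text in standard notation, not a new result):

```latex
\langle \bar{\zeta}_L^{\,\alpha}\, \zeta_R^{\,\beta} \rangle \;\propto\; \delta^{\alpha\beta}
\quad\Longrightarrow\quad
SU(N)_1 \times SU(N)_2 \,\equiv\, SU(N)_L \times SU(N)_R \;\longrightarrow\; SU(N)_V\,,
```

with the N² − 1 would-be Goldstone bosons eaten by the gauge bosons of the broken combination.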
In what follows, we assume that all complex representations are the fundamental, which we denote as □. For the ψ fermions this is a natural choice, as we intend to identify them with the SM fermions; for the hyper-quarks we do so for simplicity. We can charge both chiralities of ψ in the fundamental representation of the same SU(N)_X (with X = L, R), or charge each of them under a different SU(N)_X group. In either case, the theory potentially contains gauge anomalies that need to be canceled by introducing new fields. We start by considering N ≥ 3. Since all our fields are in fundamental representations of SU(N)_R and SU(N)_L respectively, they generate local gauge anomalies of the type SU(N)_X − SU(N)_X − SU(N)_X. To cancel them, we add N_χ right-handed fermions, χ_R, in the fundamental of SU(N)_L, and N_χ left-handed fermions, χ_L, in the fundamental of SU(N)_R, all of them singlets of SU(N_HC). As we show below, these fermions can get a mass in the broken phase and be integrated out, leaving the same low-energy spectrum as our initial model. The exact value of N_χ depends on the charge assignment of the ψ fields. An example is illustrated in Table 1, where we charge ψ_L under SU(N)_L and ψ_R under SU(N)_R. If we take N_χ = N_HC + N_ψ, all gauge anomalies are canceled. For N = 2, there are no local gauge anomalies, but there could be global (non-perturbative) ones [30]. In particular, for every SU(2) component, there should be an even number of fields charged in the fundamental representation [31]; if not, the partition function flips sign under large gauge transformations. If the number of fundamental fields on each site is even, we do not need χ_{L,R} fermions; otherwise, an odd number of χ_{L,R} fields will be needed. Thus, for Table 1, we would only need one fermion χ_{L,R} if N_HC is even and none if N_HC is odd. Finally, to give mass to these new fermions, we assume the existence of an extended hyper-color sector that generates four-fermion operators, where a is the flavor index of χ_{L,R} running from 1 to N_χ. When the chiral symmetry is broken, this interaction generates mass terms so that χ_{L,R} become vector-like fermions under the SU(N)_V gauge symmetry.
Table 1. Toy model I: to cancel gauge anomalies, we need N_χ = N_HC + N_ψ, where N_ψ and N_χ are the number of ψ_{L,R} and χ_{L,R} fermions, respectively. Here, i = 1, ..., N_ψ and a = 1, ..., N_χ.
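The four-fermion operators themselves are not reproduced in this extraction. A schematic form consistent with the surrounding description, which we write only as an illustrative assumption (with Λ_HC a generic hyper-color scale and c_a dimensionless coefficients, not necessarily the paper's exact expression or normalization), would be:

```latex
\mathcal{L} \;\supset\; \frac{c_{a}}{\Lambda_{\mathrm{HC}}^{2}}
\left(\bar{\zeta}_{L}\,\zeta_{R}\right)\left(\bar{\chi}_{R}^{\,a}\,\chi_{L}^{\,a}\right)
+ \mathrm{h.c.}
\;\;\longrightarrow\;\;
m_{a}\,\bar{\chi}_{R}^{\,a}\,\chi_{L}^{\,a} + \mathrm{h.c.},
\qquad
m_{a} \,\sim\, c_{a}\,\frac{\langle \bar{\zeta}_{L}\zeta_{R}\rangle}{\Lambda_{\mathrm{HC}}^{2}}\,,
```

so that, once the condensate forms, the χ fields pair up into SU(N)_V vector-like fermions as stated in the text.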
Toy model II
Let us now consider as starting point an IR model with gauge symmetry SU(N) × U(1) and multiple fermion families in the complex vector-like representation ψ^i_{L,R} ∼ (□, 1) (with i = 1, ..., N_ψ). As before, this model is anomaly-free due to its vector-like character.
We can now deconstruct the SU(N) factor of the gauge symmetry while keeping the U(1) factor universal, so we extend the gauge symmetry to SU(N)_1 × SU(N)_2 × U(1). The composite sector can be chosen as before: we assume a hyper-sector with gauge symmetry SU(N_HC) and N hyper-quarks ζ_{L,R} triggering the same symmetry breaking described in Eq. (2.1). In this case, there are several kinds of anomalies to consider. The first involve only the U(1) factor, with contributions weighted by the U(1) charges q_ψ of all charged fermions ψ and a sign s_ψ = 1 (0) for LH (RH) fermions; these anomalies trivially cancel because U(1) is universal and already cancels in the IR model.
Table 2. Toy model II (top) and its possible UV completion (bottom). To cancel gauge anomalies, we need N_χ = N_HC + N_ψ, where N_ψ and N_χ are the number of ψ_{L,R} and χ_{L,R} fermions, respectively.
(3) Pure SU(N)_i anomalies, both local SU(N)_X − SU(N)_X − SU(N)_X and global ones.
They can be canceled by introducing new fermions χ_{L,R} as in toy model I. (4) Mixed anomalies involving one U(1) factor, where the relevant sum is now restricted to all the fermions charged under the same SU(N) (which we assume to transform in the fundamental representation). In principle, these only cancel if both ψ_L and ψ_R are located on the same site.
This last type of anomaly appears to be an obstacle to placing ψ_L and ψ_R on different sites, one not fixable by introducing new degrees of freedom. However, as shown in the example of Table 2 (top), these anomalies can be canceled by appropriately charging the hyper-quarks under the U(1) symmetry. The hyper-quarks then also contribute to the sums of Eq. (2.6) and make them vanish, without creating other U(1) anomalies thanks to their vector-like character under U(1).
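Schematically, and up to sign conventions for left- and right-handed fields, the cancellation described here amounts to requiring (our paraphrase of the condition behind Eq. (2.6), not its verbatim form):

```latex
\mathcal{A}\!\left[SU(N)_X\!-\!SU(N)_X\!-\!U(1)\right]
\;\propto\;
\sum_{\psi \,\in\, \mathrm{site}\;X} (-1)^{s_\psi}\, q_\psi \;=\; 0\,,
```

where the sum runs over all fermions in the fundamental of SU(N)_X, now including the hyper-quarks ζ, with s_ψ = 1 (0) for LH (RH) fields and q_ψ their U(1) charge.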
Interestingly, this solution has a very natural UV completion that avoids fractional charges: we can promote SU(N_HC) to SU(N_HC + N_ψ) and arrange the fields as shown in Table 2 (bottom). Then, if SU(N_HC + N_ψ) breaks to SU(N_HC) × U(1) at some high scale, we recover the original model. This approach is similar to the Pati-Salam (PS) unification of quarks and leptons [26], but now applied to the hyper-sector. In this case, the ψ fields become the hyper-leptons of the hyper-quarks.
Effective chiral description
When the composite sector confines, its degrees of freedom change from hyper-colored states to Goldstone bosons and resonances. We can describe our models using these degrees of freedom in the same way chiral perturbation theory (χPT) describes the low-energy limit of QCD. In such an effective chiral description, non-hyper-colored fermions generate anomaly contributions that are only canceled by the so-called Wess-Zumino-Witten (WZW) terms in the effective Lagrangian [32-34], constructed with the (would-be) Goldstone bosons resulting from the breaking. In this subsection, we explicitly build such terms for our toy models and the models discussed in the next section. This subsection and Section 2.4 are more technical than the rest of the article, and they could be skipped on a first reading.
In this subsection, we work in Euclidean time on a compactified 4d spacetime, M_4 = S^4. For field configurations on this spacetime that can be continuously deformed to the constant map, the easiest way to write WZW terms is to follow Witten's construction [35]. WZW terms are the integral of a 5-form ω_5^WZW over a fictitious 5d disk M_5 whose boundary is the physical 4d space, M_4 = ∂M_5, Eq. (2.7). The form ω_5^WZW has to be invariant under the symmetries of the theory and closed when extended to a higher-dimensional space with d > 5, dω_5^WZW = 0.² The term can then be written as the integral over the physical 4d space M_4 of a primitive 4-form ω_4^WZW; a quantization condition on the overall coefficient is then necessary [35, 36] to ensure that different 5d extensions give the same value of e^{iS_WZW}, which is the physically relevant object. Thus, the particular choice of extension of the fields into the fictitious 5d space is irrelevant and therefore unphysical. However, it is typically easier to write ω_5^WZW, which, contrary to ω_4^WZW, explicitly preserves the symmetries of the theory.³
Before building the WZW terms, it is convenient to introduce the Chern-Simons (CS) forms ω_{2n+1}^CS and some of their properties [39]. Let A ≡ A^a_μ T^a dx^μ be a matrix-valued gauge connection that, for our purposes, will be a connection of SU(N). We assume the gauge fields are normalized such that the kinetic term comes with the inverse square of the gauge coupling constant. CS terms are defined as (2n+1)-forms satisfying the descent relation of Eq. (2.9), and they can be built explicitly as an integral involving F_t, the field-strength tensor of the gauge connection A_t = tA. Although the CS term is not invariant under gauge transformations, its variation under an infinitesimal gauge transformation α is the differential of a 2n-form, ω^1_{2n}. Interestingly, ω^1_{2n} describes possible anomalies from fermion loops. Indeed, in the case of toy models I and II, non-hyper-colored fermion loops contribute to the SU(N)^3_X anomaly as in Eq. (2.13), where α = (α_L, α_R), Γ is the quantum effective action, and B_4 are local counterterms that one may add to shift the anomaly between the different currents [40] and that depend on the regularization procedure of the loop integrals [39]. We assume here that the currents associated with all generators are treated symmetrically, and therefore B_4 = 0. Note that for SU(2), ω_5^CS = 0 because the SU(2) generators satisfy Tr(T^a {T^b, T^c}) = 0, showing that, indeed, SU(2) cannot have pure local anomalies.
² In components, (dω)_{μ_0...μ_p} ≡ (p + 1) ∂_{[μ_0} ω_{μ_1...μ_p]}.
³ An extra, more technical assumption on ω_5^WZW, called the Manton condition, is necessary to properly define G-invariant WZW terms on a coset space G/H, with H a closed subgroup of G [37, 38]. This condition is automatically satisfied if the 4th homology group of the coset space is trivial, H_4(G/H) = 0, or if G is semisimple. For the coset spaces we consider here, G/H ≅ SU(N) and H_4(SU(N)) = 0.
For the mixed anomaly of toy model II, it is also convenient to introduce the mixed CS term, satisfying an analogous descent relation, Eq. (2.14), where A_{U(1)} and F_{U(1)} are the gauge field and field-strength tensor of the extra U(1). Its explicit form contains a parameter ξ multiplying an exact form that we may add without affecting Eq. (2.14). Similarly to the pure CS form, its variation is an exact form, δ_{(α, α_{U(1)})} ω̃_5^CS = dω^1_4, with α_{U(1)} the gauge parameter of the U(1) group. Thus, the N_ψ fermions ψ_{L,R} with charge 1 under U(1) in toy model II contribute to the mixed anomalies SU(N)_X − SU(N)_X − U(1) as in Eq. (2.17), where now α = (α_L, α_R, α_{U(1)}), and ξ parametrizes how the anomaly is shared between the U(1) current and the SU(N) currents. For instance, ξ = 0 (1) shifts the anomaly completely to the U(1) current (SU(N) currents).
To construct the WZW terms, we follow [40], where the authors give the general construction for a broad class of G/H coset spaces. We particularize to the case relevant here and introduce the interpolated form ω̃_{2n+1}(A_0, A_1), where F_t is the field-strength tensor of the gauge connection A_t = (1 − t)A_0 + tA_1. This form is invariant under simultaneous gauge transformations of A_0 and A_1, and it satisfies the relation given in [40]. Note also that ω_{2n+1}^CS(A) = ω̃_{2n+1}(0, A). We first consider the case of SU(N) with N ≥ 3.⁴ The WZW terms are functions of the gauge fields and the would-be Goldstone bosons. The latter are written as a matrix Σ ∈ SU(N), transforming under the total group according to the coset structure. In the expressions below, this matrix will be written as Σ = g_L^{-1} g_R for some choice of g_L and g_R. A first candidate for the WZW term, up to the appropriate normalization, is given in Eq. (2.21). The particular way Σ is split into g_L and g_R is irrelevant, and all choices give the same result due to Eq. (2.19) (a possible choice is, for instance, g_L = 1 and g_R = Σ). It is easy to check that this form is invariant under gauge transformations and, using Eq. (2.20), that it is closed if only the unbroken group is gauged, so that A_L = A_R. We would like, however, to gauge the full group. For that, we add the appropriate CS terms, defining our SU(N)^3 WZW term as in Eq. (2.23) [41, 42], where, as discussed above, the overall normalization has to be quantized, m ∈ Z.⁵ Due to the CS forms, the WZW term is closed but invariant only under gauge transformations in the unbroken group SU(N)_V. The variation under a general gauge transformation is proportional to the integral of dω^1_4(α_L, A_L) − dω^1_4(α_R, A_R) over the 5d space. Using the Gauss theorem, it can be written as a term localized on the boundary, i.e., in the physical space. If m = N_χ − N_ψ = N_HC, this contribution precisely cancels the SU(N)^3_X anomaly caused by the non-hyper-colored fermions of Eq. (2.13).
For SU(N) groups with N = 2, the SU(2)^3 WZW term vanishes. However, if N_HC = N_χ − N_ψ is odd, the chiral description of toy models I and II has a non-perturbative WZW-like term (different from the ones discussed here) that reproduces the global-anomaly contribution from the hyper-quarks and cancels the contribution from the non-hyper-colored fermions. This is achieved if this term flips the sign of the partition function for configurations of the Σ field that cannot be deformed to the constant map.⁶ Coming back to general values of N ≥ 2, in toy model II there is also a U(1) gauge field that allows us to define a new form, Eq. (2.24) [42]. As before, this form is gauge invariant, but not closed under general gauge transformations. To make it closed, we add mixed CS forms and define the SU(N)^2 − U(1) WZW term, Eq. (2.26). As in the previous case, the WZW term is now invariant only under the unbroken group but not under the complete group. The normalization here can be chosen such that the variation of the CS terms exactly cancels the anomaly contribution from the N_ψ fermions ψ^i_{L,R} of Eq. (2.17), m = −N_ψ. Thus, this contribution makes the effective theory anomaly free. Remarkably, in this case we can directly write a primitive 4-form with dω_4^WZW = ω_5^WZW, avoiding fictitious 5d extensions, Eq. (2.27).
Holographic realization
Holographic models are a useful way to describe strongly-coupled sectors in the large-N_HC limit. Due to holography, or the AdS/CFT correspondence [43, 44], one expects that theories living in a slice of a 5d space with the metric of Eq. (2.28), where σ(y) ∼ y when y → −∞, effectively describe strongly-coupled sectors in 4d. This 5d space is asymptotically AdS, and it is bounded by a 4d UV brane at y = y_UV and a 4d IR brane at y = y_IR > y_UV. In this description, the hyper-colored degrees of freedom are replaced by this 5d bulk with weakly-coupled physics. It is instructive to see how this 5d dynamics provides mechanisms to cancel the anomaly contribution of the non-hyper-colored fermions. The holographic dictionary [45] guides us in mapping the features of the strongly-coupled sector into the 5d model, and we can then build the holographic versions of toy models I and II. In this subsection, we only consider models with SU(N) groups with N ≥ 3. They consist of an SU(N)_L × SU(N)_R × U(1)_V gauge theory in the 5d bulk, corresponding to a composite sector with an SU(N)_L × SU(N)_R × U(1)_V global symmetry.⁷ Reproducing the anomalies of the symmetries of the composite sector requires the inclusion of CS terms in the bulk, which we discuss below [44, 46]. Let us focus first on the SU(N)_L × SU(N)_R part, which is common to both toy models. A condensate breaking SU(N)_L × SU(N)_R to the diagonal SU(N)_V can be implemented through the breaking of the symmetry on the IR brane. This is done by choosing boundary conditions for the gauge fields that set to zero the 4d components associated with the broken generators, so that SU(N)_V is preserved on the IR brane. The UV brane preserves the full SU(N)_L × SU(N)_R symmetry because it is gauged in the dual 4d model. The fundamental degrees of freedom χ_{L,R} and ψ_{L,R} appear as fermions localized on the UV brane. In principle, they create an anomaly localized on the UV brane, Eq. (2.29), similar to the one in Eq. (2.13). This anomaly is canceled by the CS terms in the bulk, Eq. (2.30) [47, 48]. The variation of these terms exactly cancels the anomalous contribution of Eq. (2.29), and does not create a new one on the IR brane because, by boundary conditions, A_L(y_IR) = A_R(y_IR). Concerning U(1)_V ≡ U(1) in the holographic toy model I, it is preserved on the IR brane because it is respected by the condensate, but broken on the UV brane by the boundary conditions A_{U(1)}|_{y_UV} = 0 on the 4d components, because U(1) is not gauged in the dual theory. However, in the holographic toy model II, the U(1) symmetry is preserved everywhere, including the UV brane. In this case, the mixed anomaly generated on the UV brane, similar to Eq. (2.17), is canceled by the mixed CS term in the bulk, Eq. (2.31), with m = −N_ψ, which, as before, does not generate any anomaly on the IR brane because A_L(y_IR) = A_R(y_IR).⁸
The cancellation of these anomalies is reminiscent of the cancellation of the same anomalies in the effective chiral description with WZW terms. In both cases, the same CS terms cancel the anomalies from the non-hyper-colored fermions, in one case in the AdS space and in the other in a fictitious 5d space. The only condition needed to write such terms is that the CS forms must vanish when evaluated on the unbroken gauge fields. In the AdS description this condition is required in order not to create anomalies on the IR brane, while in the effective chiral description it is needed to build the WZW term. This is not surprising, as it is known that CS terms in the 5d bulk are dual to WZW terms in the 4d description. Indeed, one can show that after integrating out the bulk degrees of freedom of a 5d theory, CS terms in the 5d action generate WZW terms in the so-called holographic action [49].
Deconstructing the Standard Model
We now apply the ideas from the previous section to build realistic models based on SM deconstructions. Before doing so, it is useful to embed the SM gauge group into larger global symmetries present in the SM kinetic terms, which are anomaly free. For instance, U(1)_Y = U(1)_R ⊕ U(1)_{B−L}, where U(1)_R charges the up (down) fermions with ±1/2, and U(1)_{B−L} charges leptons (quarks) with −1/2 (1/6). Furthermore, U(1)_R ⊂ SU(2)_R and SU(3)_c × U(1)_{B−L} ⊂ SU(4)_PS. We thus arrive at the PS symmetry [26]: if we add one singlet fermion per family, i.e., three right-handed neutrinos ν_R, each SM family can be embedded into two multiplets Ψ_L ∼ (4, 2, 1) and Ψ_R ∼ (4, 1, 2) of the PS group SU(4)_PS × SU(2)_L × SU(2)_R. One advantage of this embedding is that anomaly cancellation for each SM family becomes transparent: neither mixed nor gravitational anomalies can appear because the group is semisimple, the SU(4)_PS anomaly cancels between Ψ_L and Ψ_R (as it is a vector-like symmetry), and SU(2)_L and SU(2)_R are anomaly-safe groups. Furthermore, there are no global anomalies associated with SU(2)_{L,R} because for each group there are 4 doublets per family forming a fundamental of SU(4)_PS. Another advantage of considering the PS embedding, compared to other unification groups such as SU(5) or SO(10), is that the gauging of this symmetry does not introduce proton decay, thus allowing for typically lower (partial) unification scales.⁹ For our purposes, even when we do not gauge the PS group entirely, it will be convenient to think in terms of this symmetry. When deconstructing the SM gauge symmetry, we distinguish the following scenarios: i) When full PS multiplets are present in a given site, no extra U(1) symmetries are involved and the situation is similar to toy model I. In this case, we only need to choose appropriately the number of vector-like fermions χ_{L,R}, if needed, to make the deconstruction anomaly-free.
ii) When a PS multiplet is distributed among different sites and the only deconstructed groups are semisimple, a universal U(1) symmetry, which can be hypercharge or some component of it, will cause mixed anomalies. In this situation, one can use the mechanisms of toy model II for anomaly cancellation.
iii) A last possibility appears when a PS multiplet is distributed among different sites and the deconstructed symmetry involves U(1) factors. In this class of models, the mechanisms discussed in models I and II are not sufficient to cancel gauge anomalies, so we disregard this possibility in what follows.
We provide phenomenologically relevant examples of SM deconstructions of type i) and ii) in the next subsections.
4321 deconstructions
In this section, we focus on deconstructions of the SM color group based on the gauge symmetry G_4321 ≡ SU(4) × SU(3)' × SU(2)_L × U(1)_X, where SU(2)_L acts universally on the three SM fermion families, while the SU(4), SU(3)' and U(1)_X groups act non-universally. Around the TeV scale, this gauge group is assumed to be spontaneously broken to the SM subgroup, such that SU(3)_c and U(1)_Y arise as diagonal combinations of the non-universal factors, while SU(2)_L is the SM group factor. Hence, gauge universality of the SM appears as an emergent property at low scales for this class of models. Rather than discussing the fermion charges in terms of the 4321 gauge symmetry, it is more convenient to present them in terms of the larger (global) symmetry G_4421 ≡ SU(4) × SU(4)' × SU(2)_L × SU(2)_R, with SU(4)' ⊃ SU(3)' × U(1)' being non-universal on the SM families and, similarly to SU(2)_L, the SU(2)_R ⊃ U(1)_R symmetry assumed to be universal. The U(1)_X factor in the 4321 symmetry is obtained from the diagonal combination of the U(1) factors in SU(4)' and SU(2)_R. In analogy with the toy-model examples from Section 2, we will say that fields charged under SU(4) (SU(4)') belong to the first (second) site. Different charge assignments for the SM fermions correspond to different 4321 implementations. The most common one locates first- and second-family fermions in the second site by charging them as in the SM under SU(3)' × SU(2)_L × U(1)_X, whereas third-generation quarks and leptons (together with a right-handed neutrino) are unified into SU(4) fourplets,¹⁰ thus belonging to the first site. The breaking to the SM group is typically achieved through a set of scalar fields charged under SU(4) that acquire a vev. An alternative possibility consists in breaking the 4321 symmetry through the condensate of hyper-fermions from a strongly-coupled sector [29].¹¹ The strongly-coupled option follows a structure similar to the one exemplified by toy model I, with new will-be-vector-like fermions responsible for compensating the gauge anomalies introduced by the hyper-fermions (see Table 3).¹² This choice of charges for the SM fermions singles out the third family, resulting in a U(2)_q × U(2)_u × U(2)_d × U(2)_ℓ × U(2)_e flavor symmetry before 4321 symmetry breaking. Among other things, this symmetry provides an explanation for the smallness of the 2-3 CKM matrix elements and yields extra protection of the NP sector against flavor constraints. Interestingly, if there is a mass term between χ^i_R and q^{1,2}_L, the will-be-vector-like fermions induce, when they are integrated out, the Yukawa couplings that are a priori forbidden by the gauge symmetry. Moreover, it becomes easy to embed this construction into a more complete setup where the Higgs is also localized in the first site, hence explaining the smallness of the first- and second-generation Yukawas. However, the smallness of the bottom and tau Yukawas typically remains unexplained in this implementation, as all third-family fermions are put on the same footing.
Table 3. 4321 deconstruction from [29]. To cancel the anomalies, we need N_HC = 2N_χ.
Changing the number of will-be-vector-like fermions with respect to N_HC leads to different arrangements of the SM fields. If N_HC = 2N_χ + 2, anomalies would cancel if, for instance, the third-family right-handed fields are charged under SU(4)'. This possibility results in a U(2)_q × U(3)_u × U(3)_d × U(2)_ℓ × U(3)_e flavor symmetry before 4321 symmetry breaking, which is enough to explain the suppression of the light Yukawas and of the 2-3 CKM matrix elements [54]. Indeed, if the SM Higgs is embedded in a scalar field charged under G_4321 as (4, 3, 2)_{−1/2}, only the top Yukawa coupling can be written at the renormalizable level. A perhaps more natural choice of fermionic charges is to isolate the top quark by locating ψ³_L = (q³_L ℓ³_L) and ψ^{3u}_R = (t_R ν³_R) in the first site and ψ^{3d}_R = (b_R τ_R), together with the first- and second-family fermions, in the second. This choice gives rise to a U(2)_q × U(2)_u × U(3)_d × U(2)_ℓ × U(3)_e flavor symmetry in the gauge sector, which coincides with the approximate symmetry of the Yukawa sector and thus offers a good starting point to explain its structure dynamically.¹³ As before, it is possible to argue that the Higgs field is located in the first site, so that only the top Yukawa is generated in first approximation, whereas all other Yukawas arise through subleading mass-mixing effects. A dedicated study of the corresponding flavor-symmetry spurions, as well as of their possible dynamical origin, is beyond the scope of this work and will be discussed elsewhere. Possible gauge anomalies are more clearly seen using the bigger global group, in which SU(2)_L × U(1)_R acts universally. Anomaly cancellation of the gauge group G_4321 is inherited from anomaly cancellation of G_4421. Cubic and gravitational anomalies of U(1)_R cancel as in the SM because U(1)_R is universal. Cubic anomalies of SU(4) or SU(4)' are canceled by choosing appropriately the number of will-be-vector-like fermions, N_HC = 2N_χ + 1.
Table 4. 4321 deconstruction (top) and possible UV completion (bottom). To cancel the anomalies, we need N_HC = 2N_χ + 1. At some high scale, SU(N_HC + 1) → SU(N_HC) × U(1)_HC, and U(1)_HC × SU(2)_R → U(1)_X. At this scale, U_L and U_R can get a mass and be integrated out, recovering the top table.
As with toy model II, it is possible to extend this model to a larger UV symmetry where the U(1) symmetry is embedded into other SU(N) factors, see Table 4 (bottom). In this case, the right-handed top appears as the hyper-lepton associated to the hyper-quarks in the hyper-Pati-Salam extension. If the Higgs is realized as a pseudo-Nambu-Goldstone boson by extending the composite sector in a similar manner to [29], four-fermion operators that induce the top Yukawa coupling after confinement could be generated by the same NP responsible for breaking SU(N_HC + 1) × SU(2)_R → SU(N_HC) × U(1)_X. Depending on the details of this NP sector, which should lie around the 10 TeV scale, the largeness of the top Yukawa compared to the other Yukawas could be dynamically explained, given the different charges of the right-handed top in this UV completion. Notice that this completion with a semisimple group makes the cancellation of the mixed anomalies of our 4321 model more transparent, in a similar way as the PS completion does for the SM.
At energies below the confinement scale, the hyper-sector is better described by an effective chiral action of the would-be Goldstone bosons containing the WZW terms of Section 2.3. Identifying SU(4) with SU(N)_R and SU(4)′ with SU(N)_L, the effective description for all these 4321 models has the would-be Goldstone bosons of the coset SU(4)_L × SU(4)_R/SU(4)_V, all of them eaten by the gauge bosons. With them we can write the pure WZW term of Eq. (2.23) and, for the model of Table 4 (top), also the mixed WZW term of Eqs. (2.26) and (2.27) with m = −1/2 and U(1) ≡ U(1)_R. Since only G_4321 is gauged, the gauge fields appearing in these WZW terms should be taken as in Eq. (3.6) and, for the model of Table 4 (top), also as in Eq. (3.7), where A^{1,...,15}_4, A^{1,...,8}_3 and A_X correspond to the gauge fields of SU(4), SU(3)′ and U(1)_X respectively, and T^a are the usual SU(4) generators with the canonical normalization Tr(T^a T^b) = (1/2) δ^{ab}, so that T^a with a = 1, ..., 8 generate the SU(3) subgroup and T^15 = (1/(2√6)) diag(1, 1, 1, −3). Holographic realizations are also possible if we include the CS terms as in Section 2.4. Elementary fermions are then localized on the UV brane, and the 5d bulk with the geometry of Eq. (2.28) effectively describes the composite sector. Its global symmetries, SU(4)_L × SU(4)_R × U(1)_V with SU(4)_R ≡ SU(4) and SU(4)_L ≡ SU(4)′, are implemented in the holographic description as gauge symmetries in the 5d bulk. As for the boundary conditions of these 5d gauge fields, on the IR brane they describe the appearance of a condensate breaking the symmetry SU(4)_L × SU(4)_R → SU(4)_V, A_L = A_R, while on the UV brane they are dictated by the gauging and are given by Eq. (3.6), and also by Eq. (3.7) with U(1) ≡ U(1)_V for the model of Table 4 (top). The 5d actions for all the 4321 models discussed have the CS forms of Eqs. (2.30) and (2.31). They cancel all anomaly contributions of the elementary fermions on the UV brane. In particular, for the model of Table 4 (top), the CS form of Eq. (2.31) cancels the mixed anomalies. Using the identification U(1) ≡ U(1)_R, it requires m = −1/2.
Regarding the phenomenological implications of these models, all 4321 implementations discussed here predict a massive vector leptoquark that is able to address the tensions observed in B-decays [24,25], particularly in the R_{D^(*)} measurements [56]. However, an added advantage of the implementation where b_R and τ_R are charged under SU(4)′ (cf. Table 4) is that the absence of leptoquark couplings to the right-handed bottom quark weakens the bounds from LHC searches [57,58]. This implementation further predicts R_D/R_D^SM = R_{D*}/R_{D*}^SM, which might be experimentally testable if the anomaly persists.
SU (2) L deconstructions
A similar approach can be used for deconstructing other simple factors of the SM gauge group. For instance, the deconstruction of SU(2)_L has been extensively discussed (see for instance [16,18,59–66]). That is, the extension of G_SM to SU(3)_c × SU(2)_l × SU(2)_h × U(1)_Y, where SU(2)_l charges the light families and SU(2)_h the third family. This deconstruction realizes the U(2)_q × U(3)_u × U(3)_d × U(2)_ℓ × U(3)_e flavor symmetry, which only allows for third-family Yukawas at the renormalizable level if the Higgs is charged under SU(2)_h. However, as pointed out in [28], this flavor symmetry suggests a wrong pattern for the PMNS matrix because it imposes selection rules on the Weinberg operator. We can then be less ambitious and use SU(2)_L deconstructions to address the flavor hierarchies only in the quark sector. Keeping the same structure for quarks but charging all lepton doublets under the same group factor would promote U(2)_ℓ to U(3)_ℓ as in the SM. Such models do not address the hierarchy between the τ and light-lepton masses, but can have interesting phenomenological implications [18]. Of course, as in toy model II, these charge assignments create mixed anomalies between SU(2)_{l,h} and U(1)_Y, or more specifically the U(1)_{B−L} component of hypercharge. We can then use them to illustrate how the anomaly-cancellation mechanism is implemented. For concreteness, we will assume that we want to charge all leptons under SU(2)_h (other possibilities can be built following the same logic). Let us say the breaking SU(2)_l × SU(2)_h → SU(2)_L is triggered by a condensate of a strong sector with two flavors of hyper-quarks in the fundamental representation of SU(N_HC). If the right-handed hyper-quarks are arranged into doublets of the SU(2)_l symmetry, and the left-handed hyper-quarks into doublets of SU(2)_h, all local anomalies are canceled provided the hyper-quarks also carry hypercharge 1/N_HC (see Table 5 (top)). Moreover, global anomalies require N_HC to be even. Interestingly, this condition ensures that the hyper-baryons of the strongly-coupled sector have integer electric charge: they fit in real representations of SU(2)_L (and have hypercharge 1).
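The cancellation of the mixed SU(2)_{l,h}² × U(1)_Y anomalies under this charge assignment can be checked explicitly. The following worked example is a quick sanity check rather than part of the original text; it assumes the SM hypercharges Y(q_L) = 1/6 and Y(ℓ_L) = −1/2, that all three lepton doublets sit in SU(2)_h, that the hyper-quark doublets are denoted ζ as above, and that right-handed fermions contribute with opposite sign to the anomaly sums:

```latex
\begin{align*}
% SU(2)_l^2 x U(1)_Y: light-family quark doublets vs. right-handed hyper-quarks
\mathcal{A}_{SU(2)_l^2\times U(1)_Y} &\propto
  \underbrace{2\cdot 3\cdot \tfrac{1}{6}}_{q_L^{1,2}}
  - \underbrace{N_{\mathrm{HC}}\cdot \tfrac{1}{N_{\mathrm{HC}}}}_{\zeta_R}
  = 1 - 1 = 0,\\
% SU(2)_h^2 x U(1)_Y: third-family quarks, all lepton doublets, left-handed hyper-quarks
\mathcal{A}_{SU(2)_h^2\times U(1)_Y} &\propto
  \underbrace{3\cdot \tfrac{1}{6}}_{q_L^{3}}
  + \underbrace{3\cdot\left(-\tfrac{1}{2}\right)}_{\ell_L^{1,2,3}}
  + \underbrace{N_{\mathrm{HC}}\cdot \tfrac{1}{N_{\mathrm{HC}}}}_{\zeta_L}
  = \tfrac{1}{2} - \tfrac{3}{2} + 1 = 0.
\end{align*}
```

Under the same counting, the global (Witten) anomaly sees 6 + N_HC doublets at each SU(2) site, which is even precisely when N_HC is even, in agreement with the condition stated above.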
Multiple-factor deconstructions
So far, we have only considered UV completions where just one simple factor of the SM gauge group is deconstructed, but several group factors can be deconstructed simultaneously. For instance, in [12] it has been suggested that the deconstruction of SU(4)_PS × SU(2)_R, while SU(2)_L remains universal, has particularly interesting properties to explain the SM flavor structure. In those cases, there are several choices for the strongly-coupled sector triggering the breaking:
• One single representation of hyper-quarks that realizes a hyper-colored flavor symmetry containing all the factors that we want to deconstruct. For instance, to deconstruct SU(4)_PS × SU(2)_R, we can have 8 flavors of hyper-quarks realizing SU(8)_L × SU(8)_R so that SU(4)_PS × SU(2)_R ⊂ SU(8)_V.
• Several complex representations of hyper-quarks, each realizing as its flavor symmetry one of the group factors that we want to deconstruct. In this case, each representation develops a condensate responsible for the breaking to the diagonal subgroup of the corresponding factor.
• Several strong sectors, each of which is responsible for triggering the breaking of one of the deconstructed group factors.
In analogy with the model examples described above, it is possible to employ different variants of toy models I and II to implement various anomalous arrangements of the SM fermions in some multiple-factor deconstructions. A dedicated study of these possibilities is, however, beyond the scope of this paper.
Conclusions
In this article, we have proposed new possibilities for building NP models in the context of flavor deconstructions: UV completions of the SM where the gauge group is extended to multiple copies that act non-universally on the fermions. These constructions offer an interesting approach to address the flavor puzzle, as they forbid at the renormalizable level some of the SM Yukawa couplings. These couplings are then generated dynamically from NP contributions at higher scales, thus providing a multi-scale explanation of the flavor hierarchies. Since, at the lowest energy scale, these explanations share most of the accidental global symmetries of the SM Yukawa sector, the corresponding NP at that scale is typically protected from sensitive flavor observables and can lie around the TeV scale.
A new sector that breaks the extended gauge symmetry to the SM one is an essential ingredient in any flavor deconstruction. If this sector consists of scalar fields that acquire a vev and no new fermions are added besides the SM ones, gauge-anomaly cancellation requires charging complete SM families under the same factors. Thus, when the deconstructed group is (semi)simple, the splitting of SM families among the deconstructed factors typically creates mixed anomalies between these factors and some U(1) group related to hypercharge. In this article, we have shown that, if the breaking is triggered by the condensate of a new strongly-coupled hyper-sector, the fundamental fermionic degrees of freedom of the hyper-sector (hyper-quarks) can be charged fractionally under this U(1) group in a way that relaxes the anomaly-cancellation condition. Fermions of the same generation can thus be split among different group factors. We also identified UV completions of these models that avoid fractional charges. These are analogous to Pati-Salam unification, but now applied to the new hyper-sector. Besides relaxing the anomaly-cancellation condition, other advantages of using a strongly-coupled sector for the symmetry breaking are its radiative stability and the possibility of extending it to incorporate a composite Higgs, linking the multi-scale picture to solutions of the Higgs hierarchy problem. We have provided the fundamental description of these models in terms of hyper-quarks, but also reviewed the cancellation of anomalies in their effective chiral description through WZW terms and in holographic realizations. Interestingly, anomaly cancellation in the chiral description and in the holographic picture share some formal resemblance, with Chern-Simons forms playing similar roles.
The application of these ideas opens new ways to build novel and well-motivated SM deconstructions. As a practical example, we concentrated on the explanation of the hierarchy between the top and bottom quarks, which normally remains unexplained by standard deconstructions. With the new mechanisms explored in this paper, models with a gauge structure that uniquely singles out the top quark from the other SM fermions become possible and natural. Following this logic, a 4321 model featuring quark-lepton unification of the third family, but excluding the right-handed bottom and tau fields, has been proposed. Other possibilities that we have discussed are an SU(2)_L deconstruction breaking universality in the quark but not in the lepton sector, and the deconstruction of multiple semisimple groups.
To conclude, deconstructing flavor anomalously offers the possibility of exploring NP models at the TeV scale that realize different flavor symmetries than those found in standard flavor deconstructions. In some cases, these could be more convenient for describing the SM flavor patterns. For instance, a gauge sector realizing a U(2)_q × U(2)_u × U(3)_d flavor symmetry in the quark sector naturally addresses the top-bottom hierarchy and could be a better starting point for a multi-scale explanation of the flavor hierarchies.
… ω^WZW_5, which does not change under continuous deformations of the 5d extension. If several classes of 5d extensions that are not connected by a continuous deformation exist, they may give different values of S_WZW. A quantization condition for the normalization of ω^WZW_5 …
As happened with the 4321 deconstruction, we can UV-complete the model to explain the fractional charges of the hyper-quarks. Let us take SU(N_HC + 2) × SU(3)_c × SU(2)_l × SU(2)_h × U(1)′_Y and arrange the fields as in Table 5 (bottom). Then, at some high scale above the confinement scale of SU(N_HC), SU(N_HC + 2) × U(1)′_Y breaks to SU(N_HC) × U(1)_Y, with U(1)_Y = U(1)_HC ⊕ U(1)′_Y and the charges of {ζ_L,R} and {ℓ^{1,2}_L, L^{1,2}_R} under U(1)_HC ⊂ SU(N_HC + 2) being 1/N_HC and −1/2, respectively. Furthermore, L^{1,2}_L and L^{1,2}_R form a Dirac pair after this breaking and can naturally get a mass at the high scale. After integrating out these extra fermions, we recover the model of Table 5 (top).
Table 5 .
Anomalous SU (2) L deconstruction (top) and UV completion (bottom). Only fields charged under SU (2) l,h are shown. Other charge assignments are the same as in the SM. | 11,323.2 | 2024-02-14T00:00:00.000 | [
"Physics"
] |
TOWARDS NEUTRON DRIP LINE VIA TRANSFER-TYPE REACTIONS
Abstract Possibilities of production of light neutron-rich isotopes 24,26 O, 32 Ne, 36,38 Mg, 42 Si and 56,58,60 Ca in transfer-type reactions are analyzed. The optimal conditions for their production are suggested. The measurement of the excitation function can allow us to estimate the binding energy of exotic nuclei.
Successes of accelerator techniques and experimental methodologies in the last several years have made it possible to produce light neutron-rich nuclei with Z ≤ 30 close to the nucleon stability line. New phenomena have been discovered that make us review our understanding of magic numbers and of the stabilizing role of shell effects. It has been demonstrated that for neutron-superrich nuclei new magic numbers appear at N = 16 and N = 26 (instead of N = 20 and N = 28). It turned out that nuclei around these magic numbers are strongly deformed and their stability is determined by the deformation. A region has also been discovered where two shapes of nuclei (spherical and deformed) coexist (shape-coexistence region). All this imposes certain restrictions on the possibility of predicting the stability of nuclei that are close to the nucleon stability line. Thus, the magic nuclei 28 O and 40 Mg have not been registered in experiments aimed at their direct identification, although, in line with some theoretical predictions, they should be bound. The very method of synthesizing these nuclei has also turned out to be problematic. Since they are rather short-lived (∼10^−3 s), fragment separators and fragmentation reactions at intermediate energies are usually used for their identification. But this direct method of nuclei production and identification is not always efficient for synthesizing nuclei that are far from the nucleon stability line. Firstly, the cross section for the formation of these nuclei is small; secondly, the excitation energy of the fragments is rather large, which reduces to a minimum the probability of survival for weakly-bound nuclei. Besides, it is impossible to apply the missing-mass method to fragmentation reaction products (it works only for two-body processes) in order to obtain information on the stability of the sought-for nucleus. In this connection, the possibility of using multinucleon transfer reactions for the synthesis of nuclei far from the stability line has been actively discussed lately.
Multinucleon transfer reactions have been known as a way of producing exotic nuclei for many years [1–3]. The reactions of fragmentation [4–7] are widely used for this purpose as well because of the larger experimental efficiency in the collection of exotic products. While in the fragmentation reactions the products are focused at forward angles, the products of multinucleon transfer reactions have wider angular distributions. However, in the fragmentation process the control of the excitation energy of the produced exotic isotopes is difficult because of large fluctuations and the considerable excitation available in the system. In the transfer reactions the total excitation energy is smaller and only binary processes are possible, in which the control of the excitation energy of the reaction products is simpler. One can produce a given exotic isotope within a narrow interval of excitation in transfer-type reactions, which is probably an advantage for producing nuclei near the neutron drip line. These primary nuclei should be as cold as possible, otherwise they will be transformed into secondary nuclei with a smaller number of neutrons because of deexcitation by neutron emission. While the excited primary nuclei at the neutron drip line feed the yield of the isotopes with fewer neutrons, nothing can feed the yield of the heaviest isotopes. This is opposite to the situation near the proton drip line [8]. The cross sections for exotic nuclei production can be much larger in reactions in which the binary mechanism dominates [8] than in high-energy fragmentation reactions.
The purpose of the present study is to show the possibility of using heavy-ion transfer reactions to produce the neutron-rich isotopes 24,26 O, 32 Ne, 36,38 Mg, 42 Si and 56,58,60 Ca. As was shown in [1, 2, 9–13], quasifission and fusion as well as transfer-type reactions can be described as an evolution of a dinuclear system (DNS) which is formed in the entrance channel during the capture stage of the reaction after dissipation of the kinetic energy of the collision. The dynamics of these processes is considered as a diffusion of the DNS in the charge and mass asymmetry coordinates, which are here defined by the charge and mass numbers Z and A of the light nucleus of the DNS. During the evolution in the mass and charge asymmetry coordinates, the excited DNS can decay into two fragments by diffusion in the relative distance R between the centers of the nuclei. The charge, mass and kinetic energy distributions of the transfer reactions and the quasifission process were successfully treated with the DNS model in the microscopical transport approach [9,13]. The quasifission and multinucleon transfer processes are ruled by the same mechanism in the sense that both are diffusion processes in the same relevant collective coordinates: charge (mass) asymmetry and relative distance. The reaction products resulting from the decay of a DNS that is much more symmetric than the initial (entrance) DNS are usually called quasifission products. The multinucleon transfer reactions are usually related to smaller changes of charge (mass) asymmetry in the products with respect to the initial DNS. The DNS evolution in charge (mass) asymmetry competes with the DNS decay in R.
Here, we consider the multinucleon transfers which transform the initial DNS into a DNS with smaller Z or into a DNS with larger N at fixed Z.
The cross section σ_Z,N for the production of the primary light nucleus in a transfer reaction is the product of the capture cross section σ_cap in the entrance reaction channel and the formation-decay probability Y_Z,N of the DNS configuration with charge and mass asymmetries given by Z and N,
σ_Z,N = σ_cap Y_Z,N.   (1)
The considered primary light neutron-rich nuclei are mainly deexcited by neutron or gamma emission. We treat only the reactions leading to excitation energies of the light neutron-rich nuclei smaller than their neutron separation energies S_n(Z, N). In this case the primary and secondary yields coincide. If the projectiles and targets are deformed, the value of E^min_c.m., at which the collisions of nuclei at all orientations become possible, is larger than the Coulomb barrier calculated for spherical nuclei. In collisions with smaller E_c.m. the formation of the DNS is expected to be suppressed. Therefore, we treat E_c.m. ≥ E^min_c.m., for which the capture cross section is estimated as
σ_cap ≈ πℏ²/(2µE_c.m.) Σ_{J=0}^{J_cap} (2J + 1),   (2)
where µ is the reduced mass for projectile and target. Indeed, in the considered reactions E_c.m. is always larger than E^min_c.m., so there is enough energy for the formation of a DNS with a very neutron-rich nucleus. The stability of the light neutron-rich nucleus is expected to be smaller in excited rotational states than in the ground state. In order to be sure that the exotic nucleus is produced with almost zero angular momentum, only the partial waves with J ≤ 30 should be considered. Here, we assume that the total angular momentum J is distributed in the DNS proportionally to the corresponding moments of inertia. For the calculations of σ_cap with (2), we set J_cap = 30.
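For orientation, the sharp-cutoff estimate in Eq. (2) is straightforward to evaluate numerically. The snippet below is a minimal sketch, not part of the original analysis; it assumes the partial-wave form reconstructed above with J_cap = 30 and uses the 48 Ca + 248 Cm entrance channel at an assumed E_c.m. purely as an illustration.

```python
import numpy as np

HBARC = 197.327  # MeV*fm
AMU = 931.494    # MeV/c^2, mass of 1 u

def sigma_cap(a_proj, a_targ, e_cm, j_cap=30):
    """Sharp-cutoff capture cross section (Eq. 2), returned in mb."""
    mu = AMU * a_proj * a_targ / (a_proj + a_targ)           # reduced mass, MeV/c^2
    lambda_bar_sq = HBARC**2 / (2.0 * mu * e_cm)             # reduced wavelength squared, fm^2
    partial_sum = sum(2 * j + 1 for j in range(j_cap + 1))   # = (j_cap + 1)^2
    return np.pi * lambda_bar_sq * partial_sum * 10.0        # 1 fm^2 = 10 mb

# Illustrative numbers only: 48Ca + 248Cm at an assumed E_c.m. of 210 MeV
print(f"sigma_cap ~ {sigma_cap(48, 248, 210.0):.1f} mb")
```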
The primary charge and mass yield Y_Z,N of the decay fragments can be expressed as in [11,12] (Eq. (3)), where P_Z,N is the probability of formation of the corresponding DNS configuration in the multinucleon transfer process and the decay rate of this configuration in R is associated with the one-dimensional Kramers rate [14,15]. The time of reaction t_0 is defined as in [13] from the normalization condition Σ_{Z,N} Y_Z,N = 1.
For J ≤ 30, the value of P_Z,N depends weakly on J and the factorization (1) is justified.
Using the microscopical method suggested in Ref. [13], one can find P_Z,N(t) from the master equation with the initial condition P_Z,N(0) = δ_{Z,Z_i} δ_{N,N_i} and the microscopically defined transport coefficients for proton (∆^(±,0)_{Z,N}) and neutron (∆^(0,±)_{Z,N}) transfers between the DNS nuclei. In order to determine the transport coefficients for the DNS containing the neutron-rich nuclei, one needs the single-particle level schemes for the nuclei from this region. The calculation of these schemes suffers from uncertainties. Therefore, in the present paper we use a simple statistical method to calculate Y_Z,N. As was shown in [16], this method and the method based on Eq. (4) lead to close results when the yields of nuclei not far from the line of stability are treated.
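To make the role of the transport coefficients concrete, the sketch below propagates a nearest-neighbour master equation for P_Z,N(t) on a small (Z, N) grid. It is an illustration only, not the microscopical calculation of Ref. [13]: the rate arrays stand in for the coefficients ∆^(±,0)_{Z,N} and ∆^(0,±)_{Z,N}, and here they are filled with placeholder constants.

```python
import numpy as np

def propagate(P0, D_plus_Z, D_minus_Z, D_plus_N, D_minus_N, dt, n_steps):
    """Euler integration of dP/dt = gain from neighbours - loss, on a (Z, N) grid."""
    P = P0.copy()
    for _ in range(n_steps):
        loss = (D_plus_Z + D_minus_Z + D_plus_N + D_minus_N) * P
        gain = np.zeros_like(P)
        gain[1:, :] += D_plus_Z[:-1, :] * P[:-1, :]    # transfer Z-1 -> Z
        gain[:-1, :] += D_minus_Z[1:, :] * P[1:, :]    # transfer Z+1 -> Z
        gain[:, 1:] += D_plus_N[:, :-1] * P[:, :-1]    # transfer N-1 -> N
        gain[:, :-1] += D_minus_N[:, 1:] * P[:, 1:]    # transfer N+1 -> N
        P += dt * (gain - loss)
    return P

nz, nn = 25, 35
P0 = np.zeros((nz, nn)); P0[12, 17] = 1.0              # initial DNS (Z_i, N_i)
rate = np.full((nz, nn), 0.2)                           # placeholder transport coefficients
P = propagate(P0, rate, rate, rate, rate, dt=0.05, n_steps=200)
print("normalization:", P.sum())                        # ~1 while probability stays away from the grid edges
```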
The statistical method for finding Y_Z,N uses the DNS potential energy calculated as in [11],
U(R, Z, N, J) = B_L + B_H + V(R, Z, N, J),   (5)
where B_L and B_H are the mass excesses of the light and heavy fragments, respectively, which are taken from [17] for known nuclei and from [18] for unknown nuclei, and V(R, Z, N, J) is the nucleus–nucleus potential [11]. The dependence of V in (5) on N is weak and can be disregarded. The decaying DNS with given Z and N has to overcome the potential barrier in R at R_b = R_m + 1 fm on the potential energy surface.
One can conclude from the calculations with Eq. (4) that the quasistationary regime is established quite fast in the considered DNS, especially along the trajectory in charge (mass) asymmetry corresponding to N/Z equilibrium in the DNS. For this trajectory, N = N_0(Z), i.e. the neutron number follows Z. Therefore, the formation probability for the configuration with Z and N_0(Z) is estimated as
P_{Z,N_0} ≈ C exp{−[U(R_m, Z, N_0, J) − U(R_m, Z_i, N_i, J)]/Θ(Z_i, N_i)},
where C is a normalization constant. The temperature Θ(Z_i, N_i) is calculated by using the Fermi-gas expression Θ = √(E*/a) with the excitation energy E*(Z_i, N_i) of the initial DNS and with the level-density parameter a = A_tot/12 MeV⁻¹, where A_tot is the total mass number of the system. The formation of the DNS containing the light neutron-rich nucleus with given Z is considered as a two-step process. The formation of the DNS with Z and N_0 is firstly treated. Then one should calculate the probability G_{Z,N} = Λ^R_{Z,N,N_0} t_0 of the formation and decay of the DNS with the exotic nucleus. Since the DNS with Z and N_0 is in the conditional minimum of the potential energy surface, we use the Kramers-type expression for the quasistationary rate Λ^R_{Z,N,N_0} of decay through the barrier B_R(Z, N) = U(R_b, Z, N, J) − U(R_m, Z, N_0, J), which this DNS should overcome for the decay of the DNS with Z and N to be observed,
Λ^R_{Z,N,N_0} ≈ κ_R(Z, N, N_0) exp[−B_R(Z, N)/Θ(Z, N_0)],
where the pre-exponential factor κ_R depends on the friction and on the stiffness of the potential at the minimum and at the barrier. The temperature Θ(Z, N_0) is calculated for the excitation energy of the DNS with Z and N_0. The main factor which restricts the time t_0 of the reaction and prohibits the formation of the DNS containing the exotic nuclei is the evolution of the initial DNS to more symmetric configurations and the decay of the DNS during this process. Therefore, the time of decay in R from the initial configuration or from more symmetric configurations mainly determines t_0. We use again Kramers-type expressions for the quasistationary rate Λ^R_{Z_i,N_i} of decay in R through the barrier B_R(Z_i, N_i) and for the rate Λ^{η_sym}_{Z_i,N_i} of symmetrization of the initial DNS through the barrier B_{η_sym} in the direction of more symmetric configurations, with t_0 ≈ 1/Λ^{η_sym}_{Z_i,N_i}. Therefore, we can calculate Y_Z,N as
Y_{Z,N} ≈ P_{Z,N_0} Λ^R_{Z,N,N_0} t_0 = κ exp{−[U(R_m, Z, N_0, J) − U(R_m, Z_i, N_i, J)]/Θ(Z_i, N_i) − B_R(Z, N)/Θ(Z, N_0) + B_{η_sym}/Θ(Z_i, N_i)},   (10)
where κ = Cκ_R(Z, N, N_0)/κ_{η_sym}(Z_i, N_i) ≈ 0.5 from the comparison with the results obtained with Eq. (4) for the formation of different DNS with Z and N_0. For example, Eq. (10) leads to Y_{Z_i,N_i} ≈ 0.05, which is consistent with our previous results [13]. In the calculation of Y_Z,N the uncertainty related to the definition of κ is estimated to be within a factor of 1.5. The suggested simplified approach is suitable if the initial DNS point of the reaction is located close to the N/Z equilibrium, which is true for the reactions considered. If the injection point is considerably displaced from the N/Z equilibrium, dynamical effects mainly contribute to the production of nuclei near the injection point and our statistical approach underestimates their yields.
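The two-step estimate just described lends itself to a compact numerical illustration. The sketch below is not the authors' code; it simply chains the Boltzmann formation factor, the Kramers-type decay factor and the reaction-time estimate, with all potential-energy differences, barriers and excitation energies entered as placeholder numbers for a hypothetical 48 Ca + 248 Cm system.

```python
import numpy as np

def temperature(e_star, a_tot):
    """Fermi-gas temperature, Theta = sqrt(E*/a) with a = A_tot/12 MeV^-1."""
    a = a_tot / 12.0
    return np.sqrt(max(e_star, 0.0) / a)

def yield_ZN(dU_form, B_R, B_sym, e_star_i, e_star_Zn0, a_tot, kappa=0.5):
    """Y_{Z,N} ~ kappa * exp(-dU_form/Theta_i - B_R/Theta_Zn0 + B_sym/Theta_i)."""
    theta_i = temperature(e_star_i, a_tot)
    theta_f = temperature(e_star_Zn0, a_tot)
    return kappa * np.exp(-dU_form / theta_i - B_R / theta_f + B_sym / theta_i)

# Placeholder inputs (all in MeV) for one hypothetical DNS step toward a neutron-rich product
A_tot = 48 + 248
print(yield_ZN(dU_form=6.0, B_R=4.5, B_sym=5.0,
               e_star_i=40.0, e_star_Zn0=30.0, a_tot=A_tot))
```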
In the present paper we consider such multinucleon transfers which transform the initial DNS to smaller Z and/or to larger N. For larger yields of the neutron-rich nuclei with Z and N, the potential energies of the DNS containing these nuclei should be closer to the potential energy of the initial DNS. This is the main criterion for selecting the projectile and target. The excitation energy of the initial DNS should not exceed the threshold above which the excitation energy of the neutron-rich product is larger than the neutron separation energy. As was mentioned above, the nuclei produced near the neutron drip line should be quite cold to avoid their loss in the deexcitation process.
The exotic nuclei, as well as any nuclei far from the entrance channel of the reaction, are the result of multinucleon transfers between the projectile-like and target-like parts of the DNS. Thus, we can assume thermal equilibrium in the DNS containing the exotic nucleus, or in any DNS which is quite far from the initial DNS in the (N, Z) space. Indeed, for the formation of these DNS one needs quite a long time, t_0 ≈ 10⁻²⁰ s at J ≤ 30. This allows us to assume the same temperature in the DNS nuclei and to define the excitation energy of the light nucleus with mass number A_L as E*_L(Z, N) = E* A_L/A_tot. A deviation from thermal equilibrium is expected only for the DNS decays near the injection point, where the temperature of the heavy nucleus is smaller than the temperature of the light nucleus. Thus, assuming thermal equilibrium in the DNS, we can underestimate the excitation of the light primary nucleus and predict an upper limit for E*(Z_i, N_i). Note that the partition of excitation energy in the DNS weakly influences Y_Z,N. Since in our calculations of the DNS potential energy the deformations of the nuclei [19] are close to their values in the ground states, the excitation energies of the DNS nuclei remain almost unchanged after the DNS decays.
In order to test our method of calculation of σ_Z,N, we treat the production of Ti in the multinucleon transfer reactions 58 Ni + 208 Pb (E_c.m. = 256.8 MeV) and 64 Ni + 238 U (E_c.m. = 307.4 MeV) [20,21]. The excitation energies available in these reactions allow for two-neutron evaporation from the primary Ti isotopes having the maximal yields. In the 58 Ni + 208 Pb reaction, 50 Ti and 52 Ti are produced with cross sections of 1 and 0.2 mb [20], respectively, which are consistent with our calculated cross sections of 0.6 and 0.35 mb, respectively. In the 64 Ni + 238 U reaction the experimental [21] and theoretical cross sections for 52 Ti are 0.5 and 1.6 mb, respectively. Therefore, the suggested method is suitable for predicting the cross sections of the products of multinucleon transfer reactions.
Since the value of Y_Z,N increases with E*(Z_i, N_i), the cross section σ_Z,N for the production of the exotic nucleus (Z, N) increases as well, up to the moment when E*_L(Z, N) becomes equal to S_n(Z, N). The arrows in Figs. 1 and 2 indicate the energies at which E*_L(Z, N) reaches S_n(Z, N) taken from the finite-range liquid drop model [18]. Since the predictions of S_n(Z, N) have some uncertainties, we slightly continue the excitation functions to the right of the arrows. If the predicted value of S_n(Z, N) were smaller by δ, the arrows in Figs. 1 and 2 would be shifted to the left by δA_tot/A_L. The excitation functions are quite steep on the left side. In order to produce the neutron-rich isotopes of Ca, one can use the reactions 48 Ca + 232 Th and 48 Ca + 248 Cm. These reactions are more favorable than the reaction 48 Ca + 124 Sn. | 3,826.2 | 2005-08-11T00:00:00.000 | [
"Physics",
"Chemistry"
] |
A Comprehensive Approach for Detecting Brake Pad Defects Using Histogram and Wavelet Features with Nested Dichotomy Family Classifiers
As a vital module, the brake system requires careful attention and continuous monitoring. This study specifically focuses on monitoring the hydraulic brake system using vibration signals through experimentation. Vibration signals from the brake pad assembly of commercial vehicles were captured under both good and defective conditions. Relevant histogram and wavelet features were extracted from these signals. The selected features were then categorized using nested dichotomy family classifiers. The accuracy of all the algorithms during categorization was evaluated. Among the algorithms tested, the class-balanced nested dichotomy algorithm with a wavelet filter achieved a maximum accuracy of 99.45%. This indicates a highly effective method for accurately categorizing the brake system condition based on vibration signals. By implementing such a monitoring system, the reliability of the hydraulic brake system can be ensured, which is crucial for the safe and efficient operation of commercial vehicles in the market.
Introduction
The hydraulic brake system is a critical component of an automobile and is responsible for controlling the vehicle's speed and reducing the stopping distance. Any malfunction in the brake system can have a detrimental impact on the vehicle's stability and the safety of its passengers. According to the National Motor Vehicle Crash Causation Survey, 22% of accidents occur as a result of brake system failures [1]. In 2007, BMW recalled over 155,000 Sports Utility Vehicles (SUVs) due to brake fluid issues. This recall was initiated to address concerns regarding the performance and functionality of the brake system in these vehicles, and it aimed to ensure the safety of the vehicles and their occupants by rectifying the brake-fluid-related issues. Similarly, Chrysler also faced a recall in 2007 involving approximately 60,000 vehicles. The recall was due to braking issues, specifically deficiencies in the braking system. Chrysler undertook this recall to address the identified deficiencies and ensure the proper functioning of the brakes in the affected vehicles [2].
These recalls demonstrate the commitment of automotive manufacturers to prioritize safety and take prompt action to rectify any issues related to braking systems. The statistic above underscores the significance of maintaining a reliable and properly functioning brake system to prevent accidents and ensure the safety of both drivers and passengers. Regular maintenance, inspections, and testing are crucial for vehicle owners, manufacturers, and regulatory authorities to mitigate the risk of brake system failures. By implementing effective monitoring systems and taking preventive measures, potential issues can be identified early on, allowing for timely corrective actions and reducing the occurrence of accidents stemming from brake system failures. Examining such systems for defects will reduce the number of crashes and enhance safety to a certain extent. Currently, a specific method for monitoring brake system failure is unavailable. Therefore, it is essential to develop a suitable method to monitor failures related to the brake system. The brake system may become flawed owing to pad wear, mechanical fade, inadequate oil pressure in the control cylinder, or oil leakage. If these flaws are not detected, accidents will take place. Hence, proactive maintenance is recommended to prevent accidents, and it can be attained through continuous monitoring [3,4].
Condition monitoring (CM) is the continuous observation of a system's operational attributes so that the necessary steps can be initiated before a breakdown occurs. Many CM methods utilize dedicated sensors and data assessment tools to examine a specific type of disparity in the functional attributes [5]. Thomas et al. outlined a new communications module for identifying brake faults through sensors; the module detects the malfunction through the recorded signals from the brake system [6]. A method was patented for monitoring the vehicle condition and generating a maintenance plan according to the vehicle condition [7–9]. A new approach for monitoring the aircraft brake was proposed to determine the brake condition [10].
Nevertheless, to the best knowledge of the authors, there is no available solution for monitoring brake pad conditions. Hence, a new method is required to diagnose brake-pad-related failure. Fault diagnosis is a sub-area of CM in which specific system conditions are continuously studied. Fault identification can be performed in numerous ways, such as visual testing, acoustic emission (AE) [11], thermal imagers, ultrasonic testing, and vibration signals [12]. Visual testing is the most commonly used technique in industry. However, visual inspection has drawbacks, since it requires skilled laborers who can identify the problems through experience. Hence, it is unsuitable for monitoring hidden components such as the brake system [13].
A passive infrared thermal imager was utilized to monitor the weld quality of a specimen, but sophisticated equipment was needed to measure the temperature variance [14]. Since it is an offline process, it is also unsuitable for brake fault diagnosis. AE can be used to monitor the machining process [15]. The bearing fault in a helicopter gearbox was identified by measuring AE and vibration signals [16]. The spur gear condition was monitored using AE and vibration signals [17]. The study proved that vibration analysis is more appropriate for fault detection than AE signal analysis. Moreover, AE signal acquisition is complicated for machine components such as a brake.
Recently, most fault diagnostic studies have focused on the vibration signal. Vibration signals are examined through frequency-domain analysis, wavelet analysis, waveform examination, spectral analysis, statistical learning, histogram learning, etc. These examinations give the data needed to decide on the maintenance plan. The results of such examinations are utilized in fault investigation to determine the fault's root cause. The brake system produces vibrations under various working circumstances. The measured vibration signals correlate closely with the fault conditions and can thus be used to identify the defects in the system. Therefore, vibration signals can be considered appropriate for condition monitoring through feature-based analysis.
A feature is a specific, measurable characteristic of the phenomenon being examined. Selecting, separating, and categorizing independent features is essential in classification, pattern recognition, and regression. Numerous features, specifically statistical [18], histogram [19], and wavelet [20] features, can be obtained from the acquired signals. The application of wavelets plays a crucial role in CM. The main advantage of the wavelet transform (WT) is that it can denoise or compress a signal without degrading the original signal and with modest computational time. WT has been successfully applied in various fault diagnosis studies. Hence, WT has been used for diagnosing brake failure in this study.
After extracting the features, the most significant features have to be selected. The attribute evaluator, the influence of the number of features study, and the Decision Tree (DT) are the three most important methods for selecting features. The DT utilizes a graphical structure to select the most significant features [21]. The attribute evaluator calculates the value of a feature subset by studying each feature's predictive capacity and level of redundancy. The influence of the number of features study is one of the most helpful techniques in feature selection: instead of taking all the features, the performance of specific feature subsets is tested. This study of the influence of the number of features was applied to diagnose brake faults and identify the best feature combination that produces the maximum classification accuracy [22].
Feature classification is the last phase of the classifier. The selected features should be classified to predict performance. Commonly used classifiers are the Decision Tree [23,24], Fuzzy classifiers [25], the Artificial Neural Network [20], Bayes Net [26], Naive Bayes [20], the Support Vector Machine (SVM) [27], the Convolutional Neural Network (CNN) [28] and the Hidden Markov Model [29]. Different classifiers have been used to monitor tool wear using vibration signals with discrete wavelet features [20]. So, there is considerable scope for wavelet features with other machine learning algorithms in a brake monitoring system. The nested dichotomy (ND) classifier has been used for fault diagnosis in the hydraulic brake system, and it was shown that the class-balanced nested dichotomy (CBND) prediction accuracy was higher than that of the SVM and DT classifiers. The authors used statistical features alone for classification; other features can also be considered to make a more detailed study.
To the authors' best knowledge, there is no literature on CM of a brake system using histogram and wavelet features. Hence, histogram and wavelet features are considered here to monitor the hydraulic brake system. An attribute evaluator is used for feature selection. The feature selection through the attribute evaluator is validated using the influence of the number of features study. The preferred wavelet and histogram features were categorized using the family of ND classifiers, and the results were compared and discussed.
Experimental Procedure
Experimentation was conducted to monitor the brake system using histogram and wavelet features extracted from the vibration signals. A brake system test rig was used to perform the experiments. The experimental test rig and procedure are described briefly in the subsequent sections.
Experimental Test Rig
A commercial vehicle, as depicted in Figure 1, was utilized for the experiment. Its drive shaft consistently operated at 331 rpm on a test bed equipped with rollers. A piezo-electric uni-axial accelerometer (Make: Dytran, Sensitivity: 10 mV/g) was securely positioned at the center of the flat surface of the brake caliper, as illustrated in Figure 1c. As the drive shaft rotated, various dynamic forces and loads impacted the brake pad and caliper assembly. These forces resulted from the interaction between the brake pad and the brake rotor, leading to vibrations within the brake pad and caliper assembly. As the brake pad wears, it undergoes changes in both rigidity and mass distribution, which in turn cause fluctuations in the assembly's stiffness and mass. These variations in vibration within the brake pad assembly, associated with the brake pad's stiffness, are crucial parameters for understanding and identifying brake pad defects. The attached piezo-electric accelerometer captures these variations as vibration signals, which are then processed by the NI 9234 data acquisition (DAQ) system and converted into digital data. The encoded data from the DAQ system is subsequently processed and presented in the LabVIEW 2021 software, providing results in the time domain as presented in Figure 1b. The experiments were performed with different brake pad conditions, as depicted in Figure 1d.
Experimental Procedure
The frequently occurring failures, such as Brake Oil Spill (BOS), Drum Brake Mechanical Fade (DBM), Disc Pad Wear (Uneven) Inner (DBP(UE)I), Disc Pad Wear (Uneven) Inner and Outer (DBP(UE)IO), Disc Pad Wear (Even) Inner (DBPI), Disc Pad Wear (Even) Inner and Outer (DBPIO), Drum Brake Pad Wear (DBPW), Air in Brake Fluid (ABF), and Reservoir Leak (RL), were considered for simulating the fault conditions. The vibration signals were acquired from the test setup under the good condition and the various simulated faulty brake conditions. The LabVIEW program used for vibration signal acquisition is shown in Figure 2. The data acquisition was performed with the following specifications: sample length and number of samples: 2^13 and 55 (arbitrarily chosen); sampling frequency: 24 kHz (as per the Nyquist sampling theorem); wheel speed and brake force: 331 rpm (~30 km/h) and 68.7 N.
The wheel speed, which was 331 rpm, was measured and controlled using a tachometer.The brakes were applied gradually and evenly to prevent RPM fluctuations, and, importantly, as the brakes were applied, minor manual adjustments were made to the throttle to ensure a consistent RPM was maintained.
Histogram Features
The histogram data is used as a feature in brake CM. The magnitude range of the vibration is split into various sub-ranges (bins) from the smallest to the largest value of the vibration signal. The number of data points whose vibration magnitude falls inside a specific bin is counted. The bins and counts form the histogram's x and y axes. The histogram for each defect is plotted as an independent graph using the vibration signals. The goal is to discover bins whose y-axis values are the same for a specific defect but different for another defect. The histogram value for a specific brake defect might be small while being large for other defects, so the bin range chosen should suit all defects. The strategy for estimating the lower bound of the bin range (the least magnitude) is as follows:
a. Compute the least magnitude of each signal in a specific condition;
b. Repeat step (a) for all conditions;
c. Identify the least of those least magnitudes.
Similarly, the upper bound (the largest magnitude) is computed. The bin width should be fixed so that the bin heights differ for the various brake conditions. This need not hold for every bin, but at least a few of the bins should satisfy this criterion so that they can be utilized as features for recognizing the different conditions. This study used the bin width and range to plot the histogram, as shown in Figure 3a. The range between the smallest and largest vibration values was divided into a number of bins; 69 different bin settings, from 2 to 70 bins, were considered. Figure 3a-j shows the histograms obtained from the vibration signals.
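A minimal sketch of this histogram feature extraction is given below. It is not the authors' implementation: the global bin edges are derived from the pooled minimum and maximum described above, the number of bins is passed as a parameter (e.g., the 59 used later in the study), and `signals_by_condition` is a hypothetical dictionary mapping each brake condition to its list of recorded vibration signals.

```python
import numpy as np

def histogram_features(signals_by_condition, n_bins=59):
    """Return per-signal bin counts using common bin edges across all conditions."""
    all_signals = [s for sigs in signals_by_condition.values() for s in sigs]
    lo = min(np.min(s) for s in all_signals)   # least of the least magnitudes
    hi = max(np.max(s) for s in all_signals)   # largest of the largest magnitudes
    edges = np.linspace(lo, hi, n_bins + 1)

    features, labels = [], []
    for condition, signals in signals_by_condition.items():
        for s in signals:
            counts, _ = np.histogram(s, bins=edges)
            features.append(counts)
            labels.append(condition)
    return np.array(features), np.array(labels), edges

# Example with synthetic data standing in for the measured signals
rng = np.random.default_rng(0)
demo = {"GOOD": [rng.normal(0, 0.5, 8192) for _ in range(3)],
        "DBPI": [rng.normal(0, 0.9, 8192) for _ in range(3)]}
X, y, edges = histogram_features(demo)
print(X.shape, y[:2])
```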
Wavelet Filter
The wavelet transform (WT) is a powerful time- and frequency-domain signal analysis tool. WT can be continuous or discrete, permitting the investigation of transient, irregular, or localized components. The continuous WT can expose more insight about a signal but is computationally intensive. For fault analysis, the discrete wavelet transform (DWT) aims to characterize a signal proficiently with fewer parameters and less computation time. DWT is therefore an appropriate method for this study [30].
Wavelet Transform
The WT of a signal x(t) is written as
W(p, q) = ∫ x(t) ψ*_{p,q}(t) dt,
where ψ_{p,q}(t) is the shifted and scaled version of the "mother wavelet", defined as
ψ_{p,q}(t) = (1/√p) ψ((t − q)/p),
where t, p, q ∈ ℝ and p ≠ 0 are continuously varying quantities, p is the scale factor, q is the shift factor, and the factor 1/√p is used for energy conservation. If p is small, high-frequency components are analyzed, and vice versa.
Discrete Wavelet Transform
DWT performs the scaling and translation procedures in discrete steps, restricting the selection of the scales (a) and translations (τ) to specific values, typically the dyadic choice a = 2^j and τ = k·2^j. The analysis is nevertheless sufficiently precise and is expressed by the following equation:
ψ_{j,k}(t) = 2^{−j/2} ψ(2^{−j} t − k),
where j and k are integers. Because the wavelets are discretized, the time-scale plane is sampled at distinct intervals; this process, called decomposition, yields a sequence of wavelet coefficients [31]. The wavelet features were obtained for every condition utilizing the Daubechies wavelets "db1" to "db10". The wavelets studied for this examination are provided below: The WT removed the noise from the raw vibration signal and the signals were reconstructed. Successive approximation filters out more and more of the high-frequency content of the raw signal, so substantial information loss is possible with this method alone. Therefore, a more careful method is required for denoising. Optimal denoising demands a more sensitive approach known as thresholding, in which the detail coefficients are compared with a specific threshold and the insignificant ones are discarded. The threshold is adapted from level to level. For the current study, a level-5 approximation was applied. The wavelet features of the eight wavelet families were extracted for every condition. Figure 4a-j shows the denoised sample signals attained through DWT for the Coiflets wavelet. From the denoised signals, relevant statistical features were collected.
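A minimal sketch of this level-5 DWT denoising is shown below using the PyWavelets package. The paper does not state the exact thresholding rule, so the universal (VisuShrink) soft threshold estimated from the finest detail level is an assumption made here purely for illustration; the wavelet name 'coif1' stands in for the Coiflets family member actually used.

```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="coif1", level=5):
    """Decompose to `level`, soft-threshold the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)       # [cA5, cD5, ..., cD1]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise estimate (assumption)
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))           # universal threshold (assumption)
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Example on a synthetic noisy signal standing in for a measured vibration record
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8192)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
x_den = dwt_denoise(x)
print(x.shape, x_den.shape)
```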
Feature Selection
The attribute evaluator, the DT, and the influence of the number of features study were used to select the most significant features. This process is called feature selection. The attribute evaluator was utilized to find the best feature set from the obtained features. In this study, two search methods, Correlation-based Feature Selection (CFS) Subset Eval and Best First Search, were applied to select the sequence of the significant features. The chosen features were then tested through the influence of the number of features study. The first feature suggested by the attribute evaluator was classified first, and the accuracy was recorded. Then, the second feature recommended by the evaluator was combined with the first feature and classified; the classification accuracy with the two-feature combination was also recorded. The process was repeated until all the features were combined and classified. The feature combination that provided the maximum accuracy was selected for further study. The DT was used for choosing the most significant features from the histogram features.
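The incremental procedure described above maps naturally onto a short loop. The sketch below is an illustration rather than the authors' WEKA workflow: it assumes the features in `X_ranked` are already ordered by the attribute evaluator, and it uses a scikit-learn decision tree with cross-validation simply as a stand-in for the classifiers actually employed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def influence_of_feature_count(X_ranked, y, cv=10):
    """Accuracy when using the top-1, top-2, ..., top-k ranked features."""
    scores = []
    for k in range(1, X_ranked.shape[1] + 1):
        clf = DecisionTreeClassifier(random_state=0)
        acc = cross_val_score(clf, X_ranked[:, :k], y, cv=cv).mean()
        scores.append(acc)
    best_k = int(np.argmax(scores)) + 1
    return scores, best_k

# Synthetic stand-in for the ranked wavelet-feature matrix (550 samples, 9 features)
rng = np.random.default_rng(2)
y = np.repeat(np.arange(10), 55)
X_ranked = rng.normal(size=(550, 9)) + y[:, None] * np.linspace(0.5, 0.0, 9)
scores, best_k = influence_of_feature_count(X_ranked, y)
print(best_k, [round(s, 3) for s in scores])
```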
Feature Classification
The selected features should be classified to obtain the classification result. A system of nested dichotomies (ND) is employed to decompose a multiclass problem into multiple binary problems. The decomposition is described using a binary tree. A set of class labels is stored in each tree node, together with a binary classifier learned from the training data. The training data at a node is divided into two subsets corresponding to the two meta-classes, and one subset of the training data is treated as the positive examples.
In contrast, the other subset is treated as the negative examples. The two child nodes below the node receive the two subsets of class labels with their corresponding training data. A tree is constructed by recursively applying this process, and the recursion terminates at a leaf node when the node contains only one label. The choice of tree structure significantly impacts the accuracy.
The CBND method is based on balancing the number of classes at every node. Instead of exploring the space of all possible trees, CBND explores the space of balanced trees, so the number of possible trees in CBND is smaller than for the full set of NDs. The detailed algorithm for developing a system of CBND is given in [32]. The usual difficulty with the CBND method is that some multiclass problems are not balanced, with some classes more populated than others; CBND does not keep the amount of data balanced, and this can undesirably affect the computational time of the algorithm. DNBND randomly assigns the classes to the two subsets until the training data size in one subset surpasses 50% of the training data at the node. With the DNBND algorithm, the resulting split depends on the class distribution in the data, and the balance suffers when the class distribution is skewed.
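The class-balanced tree-building step can be sketched compactly. The code below is only an illustration of the idea, not the WEKA implementation cited in [32]: it splits the class set into two halves of (nearly) equal size at every node, trains one binary base classifier per internal node, and classifies a sample by descending the tree. The choice of logistic regression as the base learner is an arbitrary stand-in for the best-first tree used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ClassBalancedND:
    """One class-balanced nested dichotomy: balanced random class splits at each node."""
    def __init__(self, rng=None):
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, y):
        self.tree_ = self._build(X, y, sorted(set(y)))
        return self

    def _build(self, X, y, classes):
        if len(classes) == 1:
            return {"leaf": classes[0]}
        perm = list(self.rng.permutation(classes))
        left, right = perm[: len(perm) // 2], perm[len(perm) // 2:]   # balanced split
        clf = LogisticRegression(max_iter=1000).fit(X, np.isin(y, left).astype(int))
        keep_l, keep_r = np.isin(y, left), np.isin(y, right)
        return {"clf": clf,
                "left": self._build(X[keep_l], y[keep_l], left),
                "right": self._build(X[keep_r], y[keep_r], right)}

    def predict(self, X):
        return np.array([self._descend(self.tree_, x) for x in X])

    def _descend(self, node, x):
        if "leaf" in node:
            return node["leaf"]
        go_left = node["clf"].predict(x.reshape(1, -1))[0] == 1
        return self._descend(node["left"] if go_left else node["right"], x)

# Toy separable data with 10 classes of 55 samples each, mimicking the study's layout
rng = np.random.default_rng(3)
y = np.repeat(np.arange(10), 55)
X = rng.normal(size=(550, 8)) + y[:, None]
print((ClassBalancedND().fit(X, y).predict(X) == y).mean())
```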
Results and Discussion
The wavelet and histogram features were extracted, and the most significant features were chosen. The chosen features were classified using ND family classifiers.
Histogram Feature Classification Using Nested Dichotomy Family Classifiers
The DT algorithm chose the most significant features [23]. All 69 bin settings, ranging from 2 to 70, were classified using the DT algorithm one after another. The classification accuracy for the different bin ranges is shown in Figure 5. The ND classifier delivers a maximum classification accuracy of 97.64% for the 59-bin setting. The algorithm uses decision rules for classification. These decision rules are graphically represented as a DT, as shown in Figure 6. Hence, the graphical DT shows the contributing bin ranges. Even though 59 bin ranges were used as input to the algorithm, it uses only 7 of them for classification. Figure 6 corresponds to the contributing bin range (59) alone, which gave the maximum classification accuracy (97.64%). The features were categorized using the various ND-family algorithms, including ND, CBND, and DNBND. The parameters used for the classification are given below:
• Base learner: best first tree;
• Tree depth: unlimited.
The categorization accuracy is displayed in Figure 7. In Figure 7, the first cell signifies the number of data points conforming to the "GOOD" class. The sum of the data points in the first row is 55. All are correctly categorized, yielding a classification accuracy of 100%. Likewise, the other elements in the first row represent false categorizations; there are no data points in the first row besides the first column, meaning that there is no false classification. Comparably, the sixth row, sixth column signifies the DBPI condition. Among the 55 data points, 53 were categorized accurately, and two were wrongly classified under the DBP(UE)I condition. The classification accuracy was estimated as 96.4%. Among the 550 data points, 538 were accurately classified, giving an overall classification accuracy of 97.5%. The rows and columns represent the anticipated and actual conditions, respectively. The diagonal cells correspond to data points that are appropriately categorized; the off-diagonal cells represent wrongly categorized data. The number of data points is shown in each cell, and the exact prediction accuracy is shown in the last cell. The last two columns on the right show the percentage of all the data predicted to belong to each condition correctly (precision) and wrongly (false discovery rate). The last two rows at the bottom show the percentage of the data of each condition predicted correctly (True Positive Rate (TPR)) and wrongly (False Negative Rate (FNR)). The cell at the bottom right shows the overall accuracy.
Similarly, the selected features were classified using the CBND and DNBND classifiers, and the corresponding classification accuracies were found. The overall accuracy for CBND and DNBND was 97% and 99%, respectively. Figures 8 and 9 present the confusion matrices obtained for CBND and DNBND with histogram features.
Wavelet Feature Classification Using ND Family Classifiers
Eight families of wavelets were selected, and a denoised signal was obtained with each family. From the denoised signals, twelve statistical features were extracted. The attribute evaluator was used to select the most significant features, and nine statistical features were acknowledged and considered as input for the classifier. The nominated nine features were appraised using the influence of the number of features study. Table 1 shows the influence of the number of features study performed on the Coiflets wavelet. The CBND produces the best accuracy (99.45%) with the top eight features.
The feature selection process was applied to all wavelet families, and the resulting accuracy was recorded and presented in Table 2. In Table 2, the Coiflets wavelet (Coif) produced the maximum classification accuracy compared with the other wavelets; therefore, the Coiflets wavelet was selected. From the influence of the number of features study, the eight features that contributed to the classification were chosen for the study. They were as follows: (1) Mean, (2) Standard Error, (3) Median, (4) Standard Deviation, (5) Kurtosis, (6) Skewness, (7) Range, and (8) Minimum. The classification accuracy for the ND family (ND, CBND, and DNBND) algorithms was found. Figure 10 presents the confusion matrix for ND and reveals an overall classification accuracy of 99%. Comparably, the remaining data points in the first row signify the false classification details; all values in the first row other than the first element are zero, revealing no false classification. Similarly, the last element in the last row represents RL. Among the 55 data points, 53 belong to RL and are accurately categorized, and 2 were wrongly categorized as the DBM condition. Among the 550 data points, 543 were categorized accurately, with an accuracy of 99%.
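For completeness, the eight selected statistical descriptors are simple to compute from each denoised signal. The sketch below is a plain NumPy/SciPy illustration of those quantities, not the spreadsheet or WEKA routine the authors may have used; the standard error is taken as the standard deviation divided by √n, which is the usual definition but an assumption here.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def selected_features(x):
    """Mean, standard error, median, std, kurtosis, skewness, range, minimum."""
    x = np.asarray(x, dtype=float)
    std = x.std(ddof=1)
    return np.array([
        x.mean(),
        std / np.sqrt(x.size),      # standard error (assumed definition)
        np.median(x),
        std,
        kurtosis(x),
        skew(x),
        x.max() - x.min(),
        x.min(),
    ])

rng = np.random.default_rng(4)
print(selected_features(rng.normal(size=8192)).round(4))
```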
Likewise, the classification accuracy was computed for CBND and DNBND. The two classifiers produced classification accuracies of 99.45% and 99.27%, respectively. Figures 11 and 12 show the graphical representation of the confusion matrices obtained for CBND and DNBND. The overall accuracy of the ND family classifiers is presented in Table 3. The CBND gives the best accuracy, 99.45%, among the three ND family classifiers; among the 550 data points, 547 were accurately classified. Denoising using the wavelet significantly reduced the noise and increased the classification accuracy.
Comprehensive Accuracy by Condition
The per-condition performance of the ND classifiers can be studied using a comprehensive accuracy-by-condition analysis in terms of TPR, FPR, Precision, and ROC. The TPR corresponds to the rate at which a conditional class is accurately identified, and the FPR is the rate at which classes are wrongly classified. Ideally, for a machine learning algorithm, the TPR should be 1 and the FPR should be 0. In this study, the TPR and FPR values are close to 1 and 0, respectively; similarly, the other parameter values are close to their ideal values. Table 4 presents the per-condition classification accuracy for CBND with the wavelet features. The captured raw vibration signals were denoised through the WT. The carefully chosen features were categorized using ND, CBND, and DNBND, and the overall accuracy is shown in Table 5. The results of the wavelet features were compared with the histogram features. Table 5 shows that the WT of the vibration signal with CBND reveals a maximum overall accuracy of 99.45%. The comparative results show that the wavelet features and the ND family outperform the statistical features and all other classifiers considered in this study. Hence, the wavelets with CBND can be considered an appropriate classifier for diagnosing brake faults.
Conclusions
This paper discussed the wavelet and histogram features for monitoring the brake system using vibration signals. The present study considered the good condition and nine simulated faults of the brake system. The vibration signals were recorded under all simulated conditions. The wavelet and histogram features were extracted from the vibration signals, and the most significant features were selected. Then, the preferred features were classified using ND, CBND, and DNBND. The results reveal that the CBND classifier performed better than ND and DNBND in classification accuracy. The competency of the wavelet features was evaluated against the histogram features, and the wavelet features yielded an overall prediction accuracy of 99.45%. Henceforth, CBND with wavelet features can be recommended for online brake fault diagnostics. This research can be extended to actual road conditions for monitoring the hydraulic brake system.
Figure 2 .
Figure 2. LabVIEW program for data acquisition.
Figure 3 .
Figure 3. Histogram of different faulty conditions for 7 bin range.
Figure 4 .
Figure 4. Original and denoised signals for various faulty conditions.
Figure 7 .
Figure 7. Confusion matrix for ND with histogram features.
Table 1 .
Influence of no. of feature study with Coiflets wavelet.
Table 2 .
Classification accuracy for various wavelet families.
Table 3 .
Overall classification accuracy of ND family classifiers.
Table 5 .
Classification accuracy of ND classifiers. | 5,586 | 2023-11-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Photoelectron Holographic Atomic Arrangement Imaging of Cleaved Bimetal-intercalated Graphite Superconductor Surface
From the C 1s and K 2p photoelectron holograms, we directly reconstructed atomic images of the cleaved surface of a bimetal-intercalated graphite superconductor, (Ca, K)C8, which differed substantially from the expected bulk crystal structure based on x-ray diffraction (XRD) measurements. Graphene atomic images were collected in the in-plane cross sections of the layers 3.3 Å and 5.7 Å above the photoelectron emitter C atom and the stacking structures were determined as AB- and AA-type, respectively. The intercalant metal atom layer was found between two AA-stacked graphenes. The K atomic image revealing 2 × 2 periodicity, occupying every second centre site of C hexagonal columns, was reconstructed, and the Ca 2p peak intensity in the photoelectron spectra of (Ca, K)C8 from the cleaved surface was less than a few hundredths of the K 2p peak intensity. These observations indicated that cleavage preferentially occurs at the KC8 layers containing no Ca atoms.
Bulk structures
The bulk crystal structure of KC8 is the face-centered orthorhombic (fco) structure and the space group is Fddd. The stacking arrangement of metal atoms and graphene sheets is written as 'AαAβAγAδ'; 'A' corresponds to a graphene sheet, while α, β, γ and δ refer to the sites occupied by the metal atoms. The lattice constants a, b and c of KC8 are 4.92, 8.52 and 21.4 Å, respectively. The bulk crystal structure of CaC6 is trigonal and the space group is R-3m.
The arrangements of metal atom and graphene sheet is shown as 'AαAβAγ'. The lattice constants, a and c, of KC8 are 4.434 and 13.572 Å, respectively. Figure S1: Bulk structure model of KC8 and CaC6.
2π-steradian photoelectron intensity angular distribution
The soft X-rays were incident along the surface-normal direction. The sample azimuthal orientation was varied from 0° to 360° in 9° steps. The dwell time was 3 min for each PIAD acquisition, giving a total acquisition time of about 2 hours. However, the resulting pattern contains artifacts due to detection inhomogeneity.
The transmission function of the two-dimensional detector, i.e., the inhomogeneous detection-efficiency distribution, was obtained by simply averaging all the measured data. The position of the photoelectron diffraction pattern on the screen varies as the sample orientation changes, whereas the artifact pattern due to the inhomogeneous detection efficiency does not change position. By normalizing the measured patterns using the averaged pattern, the inhomogeneous detection efficiency was removed, as shown in Fig. S2. The normalized patterns were assembled into one 2π-steradian PIAD. Note that the polar-angle-dependence information, especially the diffraction pattern in the [001] direction (centre of the pattern), is lost in this normalization process. A six-fold symmetry operation was applied to the obtained 2π-steradian PIAD patterns. Figure S2: A series of 41 C 1s photoelectron intensity angular distributions (PIAD) normalized by the transmission function of the detector and plotted using the azimuthal equidistant projection.
The centre and periphery of each circle correspond to the surface-normal and horizon directions, respectively. They are combined into the 2π-steradian C 1s PIAD pattern shown at the bottom right.
Photoelectron pattern simultaneous equations
Figures S3a and S3b show the measured full-hemisphere photoelectron holograms for C 1s and K 2p, respectively. A diffraction feature resembling the C 1s hologram also appears as a background in the measured K 2p hologram because of the energy-loss components of the C 1s photoelectrons, and the K 2p peak partially overlaps the C 1s peak. The measured C 1s and K 2p holograms can therefore be expressed as linear combinations of the intrinsic C 1s and K 2p holograms.
The intrinsic C 1s and K 2p holograms, shown in Fig. 2b and 2c, respectively, were obtained by solving the corresponding simultaneous equations.
The first mixing coefficient was determined to be 0.05 by considering the graphite spectrum shown in Fig. 2a. The second was determined to be 0.475 by inspecting the possible combinations and judging from the residual C 1s background pattern in the K 2p hologram, as shown in Fig. S3c.
The raw C 1s and K 2p holograms thus consisted of 0.95 × (intrinsic C 1s) + 0.05 × (intrinsic K 2p) and 0.475 × (intrinsic C 1s) + 0.525 × (intrinsic K 2p), respectively.
Scattering pattern matrix
Photoelectron holography is a method for atomic structural analysis. The first reconstruction algorithm, proposed by Barton in 1988, was based on the Fourier transformation; however, a clear atomic image could not be obtained with this algorithm because the phase shift introduced by the scattering process was neglected. In addition, a conjugate image, i.e., a virtual image located at the point-symmetric position, appears. The conjugate-image problem can be addressed effectively by using a series of photoelectron holograms recorded at different kinetic energies; however, this does not remove the phase-shift problem. A typical manifestation of the phase shift is the forward-focusing effect, in which electrons are focused toward the direction of the scatterer atom; the forward-focusing peak causes aberrations in the reconstructed atomic image.
On the other hand, we developed a reconstruction algorithm, SPEA-MEM, which does not utilize the Fourier transformation. SPEA-MEM is a holography transformation code based on scattering pattern extraction algorithm and maximum entropy method. This algorithm solves the problem of the phase shift and the conjugate image, and the atomic arrangement can be obtained without using the multi-energy format. We succeeded in reconstructing a three-dimensional atomic arrangement from a single-energy hologram with a resolution of 0.02 nm. This algorithm enables the atomic arrangement to be measured from a photoelectron hologram using an X-ray tube or an Auger electron hologram.
SPEA-MEM was used for the real-space atomic-arrangement image reconstruction in the present study. One-dimensional polar-angle photoelectron intensity profiles were derived from the measured holograms for every emission direction. Oscillatory structures resulting from the diffraction rings appeared in the directions of neighbouring atoms but not in directions where no atoms exist. We then calculated the scattering pattern matrix at a kinetic energy of 600 eV. Figure S3 shows an example of the scattering pattern matrix for C 1s core-level emission, consisting of a set of fundamental diffraction patterns for polar-angle scans of 0° to 180° (vertical axis) and C-to-C atomic distances of 0.1 to 1.1 nm (horizontal axis). The one-dimensional profiles were fitted using the maximum entropy method with these fundamental diffraction patterns corresponding to the positions of 10-pm cubic voxels within 0.7 nm of the emitter atom.
The three-dimensional voxels that represent the atomic distribution are linearly related to the photoelectron hologram pixels through the scattering pattern matrix. Given a photoelectron hologram image, the solution of this linear equation yields the atomic distribution. It is, however, difficult to solve directly because the number of voxels is much larger than the number of hologram pixels. To address this, SPEA-MEM adopts a fitting-based algorithm using the maximum entropy method, as illustrated below.
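As a rough numerical illustration of this underdetermined linear problem (not the actual SPEA-MEM implementation), the sketch below builds a random stand-in scattering-pattern matrix and fits a voxel distribution with a ridge-regularised least-squares step in place of the maximum-entropy fit; all dimensions are far smaller than in the real analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_voxels = 500, 2000        # hologram pixels << candidate voxels

# Stand-in scattering-pattern matrix: column j is the diffraction pattern a
# single atom in voxel j would imprint on the hologram pixels.
M = rng.normal(size=(n_pixels, n_voxels))

# Sparse "atomic distribution" and the hologram it generates
x_true = np.zeros(n_voxels)
x_true[rng.choice(n_voxels, 5, replace=False)] = 1.0
chi = M @ x_true

# chi = M x is underdetermined, so solve it in the regularised (dual) form
# x = M^T (M M^T + lam I)^-1 chi.  Unlike the maximum-entropy fit, this
# stand-in does not enforce positivity or sparsity, but it does reproduce
# the hologram.
lam = 1e-3
x_est = M.T @ np.linalg.solve(M @ M.T + lam * np.eye(n_pixels), chi)
print(np.linalg.norm(M @ x_est - chi))   # residual of the reconstructed hologram
```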
Further details of the SPEA-MEM algorithm are given in the published references.
"Materials Science",
"Physics"
] |
Anisotropic Electron–Phonon Interactions in 2D Lead-Halide Perovskites
Two-dimensional (2D) hybrid organic–inorganic metal halide perovskites offer enhanced stability for perovskite-based applications. Their crystal structure’s soft and ionic nature gives rise to strong interaction between charge carriers and ionic rearrangements. Here, we investigate the interaction of photogenerated electrons and ionic polarizations in single-crystal 2D perovskite butylammonium lead iodide (BAPI), varying the inorganic lamellae thickness in the 2D single crystals. We determine the directionality of the transition dipole moments (TDMs) of the relevant phonon modes (in the 0.3–3 THz range) by the angle- and polarization-dependent THz transmission measurements. We find a clear anisotropy of the in-plane photoconductivity, with a ∼10% reduction along the axis parallel with the transition dipole moment of the most strongly coupled phonon. Detailed calculations, based on Feynman polaron theory, indicate that the anisotropy originates from directional electron–phonon interactions.
INTRODUCTION
Hybrid lead-halide perovskites are one of the first ionic semiconductors where the diffusion of the photoexcited charge carriers exceeds 1 micron 1 .Several of the advantageous properties of hybrid perovskites, including reduced carrier-impurity interactions 2 , long charge carrier lifetimes 3 , and high defect tolerance 4 have been associated with the material's distinctive electron-phonon interactions.These interactions also set a limit for the maximum carrier density and mobility in perovskites 5 .Despite outstanding characteristics, the instability of the halide perovskites in an ambient environment has hindered its applicability.One developmental direction is reducing the connectivity of the octahedral networks in the perovskite structure to two dimensions (2D).Since 2D perovskites exhibit enhanced stability and enable improved performance of 2D/3D heterostructures [6][7][8][9] , it would be ideal to systematically investigate potentially interesting optoelectronic properties arising from the confinement.
The perovskite structure can be transformed from 3D into 2D by replacing some of the smaller A-site cations (e.g.methylammonium or formamidinium) with organic molecules with longer C-chains 10 , such as butylammonium (BA) 11 , phenylethylammonium and the chiral methylbenzylammonium 12,13 .These longer spacer molecules push the lattice apart into twodimensional metal-halide lamellae with tunable thicknesses, which alters the energetic landscape of photoexcited carriers due to quantum-and dielectric confinement 14 .The vibrational properties of low-dimensional perovskites also differ from their 3D counterparts.
Due to a large impedance mismatch between the inorganic and organic layers in 2D perovskites, acoustic phonon propagation, i.e., heat transport, is two times slower compared to 3D perovskites 15 .Butylammonium lead iodide (BAPI) perovskites have been studied extensively 16,17 , but especially when synthesized through a synthesis protocol involving spin coating 11 , it is hard to directly obtain large-area crystals which are phase pure in terms of the thickness of the inorganic structure n.In small flakes, room-temperature Raman microspectroscopy measurements in literature have revealed in-plane anisotropy of the Ramanactive modes 18 , which were shown to be strongly heterogeneously broadened upon lowering the temperature.The broadening of the individual modes as a function of temperature is heavily related to the strong anharmonicity of the halide perovskite phonon modes, in particular via coupling to octahedral tilting 19 .Due to the small size of the perovskite crystal flakes, these experiments from literature were done microscopically, hampering direct access to optically active (absorptive) modes, and their anisotropy, in the THz frequency range.
The vibrational properties of 2D perovskites are expected to play the same crucial role in determining electronic properties as in 3D halide perovskites [20][21][22] .In particular, thermally accessible vibrational modes (up to 25 meV (6 THz) at room temperature) of perovskites originate mostly from displacements of the inorganic framework 23 .Previous studies have shown that these low-frequency phonon modes couple strongly to electronic states around the band extrema of the 3D perovskite 24 , which can be rationalized by the fact that these electronic bands are made from a basis of lead-and halide atomic orbitals.Specifically, in 3D methylammonium lead iodide (MAPI), the displacement of the structure along the 1 THz phonon coordinate was revealed as the mode that dominates electron-phonon coupling and has been used to explain the temperature dependence of the bandgap of MAPI [24][25][26] .Although the exact structure of the polaron in metal halide perovskites is still under debate 27 , it is clear that the coupling between vibrations and charge carriers in these materials impacts their static optoelectronic properties and dynamic phenomena, e.g., carrier cooling 28,29 .Anisotropic electron mobilities have been observed in, and predicted for, various semiconductor and metallic materials, such as TiO2 30,31 , phosphorus carbide 32 , 2D niobium selenide 33 , materials with tilted Dirac cones (Borophene) 34 and the organic semiconductor tetracene 35 .In tetracene, the anisotropy of the electron mobility is directly correlated with the vibrational properties of the material.In 2D perovskites, depending on the exciton-binding energy, there is a subtle balance between the formation of excitons and free carriers 36 .Perhaps counter-intuitively, the diffusivity of excitons in-between the inorganic layers is efficient due to dipolar energy-transfer mechanism 37,38 .While five times slower than intra-layer transport, this is less anisotropic than charge-carrier transport, which proceeds via exponentially decaying electron wavefunction overlap.While previous studies on transport mechanisms in 2D perovskites have focused on transport anisotropy between the inorganic layers 37,38 , little is known about anisotropy in carrier diffusion inside the inorganic layers 39 and its direct relation with electron-phonon interactions.
It is therefore important to investigate the anisotropic nature of electron transport inside the inorganic lamella, whether or not it is inherited from coupling with directional phonon TDMs, as well as the effect of confinement on both electronic and vibrational properties.In this work, we systematically study the anisotropy of electron transport within the inorganic lamella, the potential contribution from coupling with specific phonons, as well as the effect of confinement on both electronic and vibrational properties.To this end, we synthesize large-area butylammonium lead iodide single crystals, with precise control over the inorganic layer thickness, n, and investigate the electron conduction mechanisms and optically active vibrational modes in the 0-3 THz spectral range with polarized THz time-domain spectroscopy (TDS) at a fixed level of confinement.We identify the crystallographic direction of the TDMs of all optically active phonon modes, and observe the time/frequency-resolved photoconductivity along, and perpendicular to, the direction of the 1 THz phonon TDM (i.e. the [102] and [201] directions).We observe a clear anisotropy in the photoconductivity, where the mobility is ~5-10% larger/smaller in perpendicular/parallel direction.Combining density functional theory (DFT) simulations and Feynman polaron theory, we unveil that the apparent anisotropy in photoconductivity originates from anisotropic electron-vibration interactions with the 1 THz phonon mode.
The results here shed light on anisotropy in optically active vibrational modes and carriertransport properties in single-crystalline 2D perovskite materials and open the door to designing devices relying on enhanced electron mobilities along specific crystallographic directions.
Steady-state structural and optical characterization.
We used a synthesis protocol adapted from literature 40 (see methods section SI1 in the SI) to grow BAPI single crystals with a large lateral area (> 1cm 2 ) and high n purity, sufficiently thin (~10-200 m) to transmit photons in the THz frequency range.Large-area single crystals are a prerequisite for probing anisotropy through THz spectroscopy due to the diffraction-limited size of our THz pulse (~1 mm diameter).Figure 1(a) shows photographs of representative crystals at the liquid-air interface of the precursor solution.BAPI crystals consist of layered, primarily inorganic sheets, electronically decoupled by BA.By changing the ratio of BA and MA ions, we control and vary the thickness of the inorganic structure between n = 1 -4.The changing bandgap in the visible spectral range going from n = 1 to n = 4, due to the weakening of both quantum-and dielectric confinement, can be observed by eye.The bottom part of the figure shows the extended unit cells of the BAPI crystals, where we have used the convention to denote directions <h 0 l> in the inorganic planes of the structure.The high phase purity of the BAPI crystals is shown both by the powder X-ray diffraction (pXRD) measurements in Figure 1(b) as well as the reflectivity spectra, shown in Figure 1(c).From the pXRD, we determine the inter-lamellar spacing, d010, to increase from 13.77 ± 0.04 Å for n = 1 to 32.17 ± 0.05 Å for n = 4, an increase of 6.1 ± 0.2 Å/n, i.e., precisely the size of one methylammonium lead iodide octahedron (see SI section SI2).
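For reference, the conversion from a (0 k 0) reflection position to the interlayer spacing quoted here is just Bragg's law; the snippet below illustrates it with the Cu Kα wavelength used for the diffraction measurements and hypothetical peak positions chosen only to land near the reported spacings:

```python
import numpy as np

wavelength = 1.5406  # Cu K-alpha wavelength in Angstrom (assumed)

def d_spacing(two_theta_deg, order=1):
    """Interlayer spacing from a reflection position via Bragg's law,
    n * lambda = 2 d sin(theta)."""
    theta = np.deg2rad(two_theta_deg) / 2.0
    return order * wavelength / (2.0 * np.sin(theta))

# Hypothetical first-order peak positions (degrees 2theta), chosen to give
# spacings close to the reported 13.77 A (n = 1) and 32.17 A (n = 4)
for n, two_theta in [(1, 6.42), (4, 2.746)]:
    print(n, round(d_spacing(two_theta), 2), "Angstrom")
```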
The reflectivity spectra in Figure 1(c) for all different layer thicknesses show two features: a transition at higher energy, which corresponds to the excitonic absorption line, and a sub-bandgap feature at longer wavelengths, both of which are indicated by vertical dashed lines in the spectra. The sub-bandgap absorption line, which can also be observed in emission, has been ascribed to various origins, from states localized at the edges of crystals to magnetic dipole emission [41][42][43][44]. A full evaluation and assignment of the reflectivity curves is beyond the scope of this work. The bottom panel shows a THz pulse transmitted through an n = 1 BAPI single crystal as a function of the azimuthal angle, q, of the crystal. In OPTP experiments, the THz field accelerates photoexcited carriers along its electric-field polarization, probing the mobility along that axis. We measure the transmission of a linearly polarized THz pulse (with frequencies between 0.3 and 3 THz) through our sample, where the beam impinges on the BAPI crystals along the surface normal of the inorganic lamellae. By rotating the sample, we effectively align the transition dipole moments (TDMs) in the plane of the crystal with respect to the electric-field vector of the THz photons (with q being the azimuthal angle between ETHz and the TDM). When ETHz is parallel to the TDM (q = 0°), the field couples to the mode and is attenuated; when they are perpendicular (q = 90°), they cannot interact and the field is simply transmitted. An example of THz fields transmitted through an n = 1 BAPI crystal for different q is shown in Figure 1(d). We also performed optical-pump/THz-probe (OPTP) measurements, in which we photoexcite our samples at 400 nm and probe the optical conductivity via the change in the transmitted THz electric field, −ΔE/E0, with ΔE the difference between the photoexcited and unexcited BAPI crystal and E0 the steady-state transmittance of the THz pulse. Here, the charge carriers are accelerated by the THz field in a specific crystallographic direction in the crystal, given by q. As explained further throughout the text, we measured the OPTP transients parallel and perpendicular to the 1 THz TDM, as the modes at this frequency have been shown to dominate the electron-phonon interaction [24][25][26]45.
Linear THz absorption -TDS measurements of low-frequency vibrations in BAPI.
We first discuss linear TDS measurements through the BAPI crystals.Figure 2(a-d) show the absorption spectra in the 0.3 -3 THz frequency range.For n = 1 BAPI, shown in Figure 2(a), there are four distinct modes.Compared to the THz absorption spectrum of bulk MAPI, which has two modes at 0.9 THz (octahedral rocking mode) and 2 THz (Pb-I-Pb stretch) 23,46 , it seems the modes are split.This can be rationalized due to the lower symmetry of the n = 1 BAPI crystal; the in-plane and out-of-plane vibrations in BAPI sample different potential energy surfaces compared to bulk MAPI.Indeed, below the tetragonal-to-orthorhombic phase transition temperature, the two vibrational modes of bulk MAPI split into four, a result of the reduced crystal symmetry, lifting the degeneracy of the modes 47 .Furthermore, the absorption of the modes 0.9, 1.2 and 2.5 THz can be switched on and off by rotation over q, where the former two and the latter are out of phase (i.e., they have perpendicular TDMs).The mode at 2 THz shows no dependence on q, which can be the case if there is a high apparent degeneracy.
Indeed, according to the theory presented below, this peak in the absorption spectrum comprises many different eigenmodes.As these eigenmodes all have different directionalities of their TDMs, this mode can be seen as quasi-degenerate.
When increasing n in Figures 2(b-d), the peak splitting we observed in n = 1 is reduced.For n = 2, we observe that the mode at 1.1 THz has a q-dependent region at the lower frequency side of the mode, and a q-independent part at the high frequency side of the mode.The 'on-off ratio' of this 1 THz mode is strongly reduced compared to n = 1, and decreases further for n = 3 and 4. For all n, the intense mode around 2 THz shows no q-dependence.Note that for n = 4, we could not synthesize sufficiently thin single crystals to achieve reasonable transmission of THz photons with frequencies above 1.6 THz.To understand the drastic change in the angle-dependent behavior of the vibrational spectra observed when changing the layer thickness microscopically, we computed the THz absorption spectra for n = 1 from DFT-based harmonic phonon theory (computational details [48][49][50][51][52][53][54][55][56][57] can be found in the SI).We separately compute spectra for the high-temperature (HT) and lowtemperature (LT) phases (Tc » 274K) of n=1 BAPI based on resolved crystal structures from Ref. [19] 19 , (see SI fig S10).Surprisingly, the calculated spectrum of the LT phase matches the room temperature, i.e. above Tc, experimental data well.We believe this is due to the proposed disordered nature of the HT phase 19 , i.e., that the HT phase locally resembles the LT phase, and we thus choose to use the LT structural model in the following analysis.Figure 2(e) shows the calculated IR spectrum for the LT phase convoluted with a 0.1 THz Gaussian broadening (blue line) and when a small broadening is applied to each phonon mode (orange line).The overall shape of the broadened spectrum, barring a slight blue shift, agrees well with our experimental measurements.In particular, there are four clear peaks with obvious correspondence to the four peaks of the measured extinction spectrum.It thus becomes clear that each of these peaks, in fact, contains various numbers of eigenmodes that all have significant TDMs: the peaks are heterogeneously broadened.The eigendisplacements of the phonon modes with the strongest TDMs for each peak, indicated with an asterisk in the calculated spectrum, are shown on the right-hand side of Figure 2(e).These displacements are made up of various distortions of the PbI6 octahedra, but also significant translational motion of the butylammonium cations, especially for the two higher frequency modes.
Since we perform phase-and amplitude-resolved THz transmission measurements, we can directly obtain the complex refractive indices of the samples without using the Tinkham (thinfilm) approximation 58 .As outlined in the SI (section SI4-5) we numerically fit the transmission functions based on Fresnel equations to those obtained from the experimental data, with the samples' refractive indices as free parameters.The results are shown in Figure S8.Through this, we can separate the absorptive component, the imaginary part of the refractive index, from the dispersive component, the real part of the refractive index.Similar to three-dimensional perovskite (MAPI) 47 , the refractive indices are relatively large compared to standard semiconductors such as Si, due to the ionic and soft nature of the perovskite structure.We have also calculated an angle-resolved IR-spectrum from our phonon simulations as shown in the SI (section SI6).We note that, since we employ the harmonic phonon approximation, the symmetry of the structure implies that the TDMs of all modes lie within the <100>, <010> or <001> family of directions.This does not match our experimental observation, and it is thus likely that higher-order effects, e.g.anharmonic phonon-phonon coupling, result in overall TDM moments in the off-diagonal <1 0 2> directions.
Polarization resolved optical pump/THz probe measurements of the photoconductivity.
Now that we have determined the directions of phonon TDMs of the inorganic substructures of the BAPI crystals, we compare the complex photoconductivity between directions parallel and perpendicular to the directions of the 1 THz TDMs. Figure 4(a-d) shows the real and imaginary parts of the OPTP transients along both directions after photoexcitation at 400 nm.Noticeably, the transient amplitudes exhibit anisotropy persistently for all the measured thicknesses (n = 1-4) over the experimental time window (~20 ps).We note that all the OPTP measurements are performed in the low-fluence, i.e. linear, regime, to exclude the presence of higher-order recombination processes originating from carrier-carrier interactions, and to prevent photodegradation of the crystals.Furthermore, for each n, we kept the same excitation fluence (within 1%) for measurements parallel and perpendicular to the 1 THz TDM.
To dissect the overall carrier dynamics and conduction mechanism as well as their anisotropies, we first consider the decaying components for each thickness, n.For n = 1-3, shown in Figure 4(a-c), the OPTP signal shows a fast decay over the first couple of picoseconds, followed by slower decay at later times.To elucidate the origin of this fast decay, we compare resonant excitation of the crystals to excitation at 400 nm, i.e. above the bandgap energy (see SI Figure S13).Upon resonant excitation, there is only an ingrowth visible, limited by the instrument response function, followed by a slowly decaying component.The OPTP traces obtained using 400 nm excitation decay over the first 1-2 ps to the same level (normalized by the absorbed photon fluence) as for the OPTP traces using resonant excitation, after which both photoconductivity traces are identical.Since the photoexcited densities are in the linear regime, the initial decay upon 400 nm excitation comes from highly mobile hot carriers, which cool within a few ps to less mobile states at the band edges, an effect previously observed for hot holes in the Cs2AgBiBr6 double perovskite 59 .A similar effect was also observed by Hutter et al. for methylammonium lead iodide, but ascribed to a direct-indirect character of the bandgap 60 .The comparison of resonant-and above-bandgap excitation experiments indicates that hot carriers have a higher mobility than cold carriers in BAPI, a comprehensive study of which is beyond the scope of this work.As the layer thickness increases, we expect the decay of the photoconductivity to become second-order-like, as in MAPI, where the photoconductivity does not decay over the first nanoseconds 61,62 .
Next, we examined the in-plane photoconduction mechanism by recording photoconductivity spectra at a pump-probe delay time of 10 ps.We recorded photoconductivity spectra, where we measured the change in the transmitted THz pulse in the time domain and Fourier transformed it into a conductivity spectrum.Since the BAPI crystals are relatively thick (tens of micrometers), we numerically retrieved the refractive index from an analytical model of the transmission of the THz pulse through the sample [63][64][65] .After obtaining the steady-state and excited sample's refractive index, we converted these into dielectric functions and, from the difference, we obtain the complex photoconductivity.The full analysis and method details are outlined in the SI (section SI5).The conductivity spectra along both directions for each thickness, n, are shown in Figure 4(eh).Surprisingly, for all samples, we obtain a conductivity spectrum that does not resemble an excitonic response in our frequency range (a negative imaginary-and zero real photoconductivity, i.e., a Lorentzian oscillator for an interexcitonic transition) at 10 ps pumpprobe delay time.Instead, the positive real, and almost zero imaginary photoconductivity for all n looks Drude-like.Indeed, we are able to fit our data with the Drude model, from which we obtain the plasma frequency (proportional to the carrier density) and the carrier scattering time.
The results of all the photoconductivity experiments are summarized in Figure 5. We fitted the OPTP transients with a model containing the sum of two decaying exponentials convolved with the IRF (except for n = 4, where a single decaying exponential sufficed to fit the data), of which the fitted decay rates are shown in Figure 5(a); the data indicate that the photoconductivity in 2D BAPI is ~10% higher in the direction perpendicular to the 1 THz TDM than parallel to it.
The anisotropy of photoconductivity in all four BAPI thicknesses is highlighted in Figure 5(b).
The scattering times vs. the inorganic layer thickness n are obtained from fitting the Drude model to the conductivity spectra [shown in Figure 4(e-h)].The scattering times for all the different layer thicknesses are quite similar, around 17 fs, but we do, however, observe that in all our samples, the scattering time in the direction perpendicular to the 1 THz transition dipole moment (i.e. the <1 0 2> direction) is consistently higher than in the direction parallel to it (i.e. the <-2 0 1> direction), although they lie within each other's standard deviation from the fit; an indication that the photoconductivity is higher in one direction compared to the other.We also compare the photogenerated carrier quantum yields, defined as the carrier density obtained from the plasma frequencies, divided by photogenerated carrier density (obtained from the absorbed photon fluence), shown in Figure 5(c).Both the plasma frequencies and the quantum yields do not show a clear anisotropic trend vs. n, indicating that in both directions, the generated carrier densities are identical and excluding this as a cause for the apparent anisotropy in photoconductivity.The value of the quantum yield, around 30%, is in line with earlier observations in 3D perovskites 5,66 .
To quantify the anisotropy in the photoconductivity, we compare various measured parameters in Figure 5(d), measured perpendicular and parallel to the 1 THz TDM. Here, we define the anisotropy ratio as the value perpendicular to the 1 THz TDM divided by the value parallel to it. As can be seen, for all n the apparent photoconductivity is higher in the direction perpendicular to the 1 THz transition dipole moment than parallel to it. From this, we estimate the anisotropy of the photoconductivity between the perpendicular and parallel directions to be around 10% for all samples.
To explain these findings, we turn to the Feynman variational polaron solution of the extended Fröhlich polaron model Hamiltonian.The original Feynman theory explicitly includes multiple phonon modes 67 , and anisotropic effective masses 68 .Here, we use the calculated anisotropic modes in the LT phase, associate a Fröhlich dielectric mediated electron-phonon coupling with the infrared activity of each mode, and solve the finite temperature mobility theory for 300 K.
Due to the differing infrared activity along the two in-plane directions, we find dimensionless Fröhlich electron-phonon couplings of 3.44 and 3.75 along the two in-plane lattice vectors, both considerably higher than in 3D halide perovskites. This leads to predicted mobilities of 1.89 cm²/V/s and 1.76 cm²/V/s in the two directions, an anisotropy of 8%, in line with the experimentally obtained anisotropy estimates. The additional electron-phonon coupling also increases polaron localization by a similar amount. We therefore attribute the experimentally observed anisotropy in photoconductivity to different intrinsic carrier mobilities, arising directly from the anisotropy in the dielectrically mediated electron-phonon coupling strength. We cannot, however, disentangle the effects of scattering time and effective mass on the carrier mobility. Overall, it is clear that the impact of the 1 THz phonon mode on the electronic degrees of freedom is remarkable 25.
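For orientation, the single-mode Fröhlich coupling constant that generalizes to the multi-mode treatment used here can be evaluated from a handful of material parameters; the sketch below uses placeholder values of roughly perovskite-like magnitude, not the dielectric data or effective masses employed in the paper:

```python
import numpy as np
from scipy import constants as c

def frohlich_alpha(eps_inf, eps_static, m_eff, f_phonon_THz):
    """Standard single-mode Frohlich coupling constant,
    alpha = (e^2 / (4 pi eps0 hbar)) * (1/eps_inf - 1/eps_static)
            * sqrt(m* / (2 hbar omega))."""
    omega = 2 * np.pi * f_phonon_THz * 1e12   # phonon angular frequency (rad/s)
    m = m_eff * c.m_e                          # effective mass in kg
    return (c.e**2 / (4 * np.pi * c.epsilon_0 * c.hbar)
            * (1 / eps_inf - 1 / eps_static)
            * np.sqrt(m / (2 * c.hbar * omega)))

# Placeholder parameters (illustrative only)
print(frohlich_alpha(eps_inf=5.0, eps_static=25.0, m_eff=0.2, f_phonon_THz=1.0))
```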
CONCLUSIONS
We have studied anisotropy in the phonon transition dipole moments and the optical conductivity of single crystalline two-dimensional butylammonium lead iodide perovskites.
From the linear THz absorption measurements, we determine the transition-dipole-moment vectors of the 1 THz modes to lie in the <1 0 2> / <2 0 1> family of directions, show that the 2 THz mode is angle independent, and discuss similarities and differences between the vibrational modes for different thicknesses of the inorganic layer n. Furthermore, we measured the optical conductivity by optical-pump/THz-probe spectroscopy and show that the photoconductivity is 5-10% higher in the direction perpendicular to the 1 THz transition dipole moment than in the direction parallel to it. Theoretical calculations based on Feynman's variational polaron theory corroborate the observed anisotropy and pinpoint directional electron-phonon interactions as the likely cause of this effect. The impact of nuclear displacement along the 1 THz phonon coordinate on the electronic degrees of freedom in metal-halide perovskites is remarkable. Our results shed new light on some of the fundamental molecular physics governing direction-dependent effects in these quasi-two-dimensional perovskite semiconductors and corroborate the importance of the dynamic interplay between vibrational modes and charge-carrier motion in these materials.
Supporting Information available. Detailed description of the interfacial synthesis method, description of the experimental techniques, and data analysis of the THz TDS and OPTP experiments.
Synthesis procedure. The synthesis of cm-sized crystal flakes was done at a liquid-air interface, based on an adjusted protocol by Wang et al. 1. For the exact amounts, see Table S1 below. We mixed PbO in HI (57% w/w solution in water) and H3PO2 (50% w/w solution in water) in a 20 mL glass vial. The mixture was heated to 80 °C, at which point the PbO dissolved and a clear yellow solution was obtained. The thermocouple that controls the hot-plate temperature was always inserted in a 20 mL glass vial filled with the same volume of water as the perovskite precursor solutions.
Next to this, a solution of n-butylamine (nBAm) and methylammonium (MA) chloride in HI was prepared while cooling the vial holding the nBAm and MA in an ice bath. Care should be taken to add the HI slowly, as the reaction is very exothermic (fumes can be observed forming as the HI is added). This nBAm/MA solution is then added dropwise to the Pb-containing solution.
Upon mixing the two solutions, some small crystallites are formed, which dissolve rapidly.
Depending on the layer thickness that is formed, crystal growth is done either by lowering the temperature from 80 °C to 70 °C for n = 1 and n = 2, in which case crystal growth takes 10-30 minutes, or, for n = 3 and n = 4, by using a temperature controller (JKEM, controller type T), with a glass-coated probe inserted in a vial filled with water on the same hot plate, and ramping the temperature down by 2 °C per hour until nucleation of a crystal at the liquid-air interface, at which point the temperature was kept constant (usually between 55 and 65 °C). After the growth of the crystals at the liquid-air interface, the crystal is scooped gently off the interface with Teflon tweezers, placed on a plastic lid, and dried in a vacuum oven for 5 hours.
It is important to attach a liquid N2 cold trap between the oven and the vacuum pump to trap the remaining water-HI solution. After drying, the samples are stored in a nitrogen-filled glovebox until further use. Note that the overall crystal thickness varies from 10-20 micrometers for n = 1 (making them very fragile) up to 200 micrometers for n = 4.
Additionally, for the layers where n > 1, we used MACl (instead of MAI).For reasons unclear to us, we were able to get interfacial nucleation more consistently (instead of nucleation in the bulk of the liquid), using MACl compared to MAI.
Precursor quantities for the synthesis of (BA)2(MA)n-1PbnI3n+1 single crystals
Optical-Pump/THz-Probe Spectroscopy (OPTP). This technique is able to probe the conductivity of carriers and further retrieve the charge-carrier mobility.
We use an amplified Ti:sapphire laser producing pulses with 800 nm central wavelength and ~50 fs pulse duration at 1 kHz repetition rate.The THz field is generated by optical rectification in a ZnTe(110) crystal (thickness 1 mm).The THz detection is based on the electro-optic (also called Pockels) effect in a second ZnTe crystal with 1 mm thickness.We vary the time delay between the THz field and the 800 nm sampling beam with a motorized delay stage (M-605.2DDpurchased from Physik Instrument (PI)).The time delay between the optical pump and THz probe pulses is controlled by a second motorized delay stage (M521.DD, Physik Instrument (PI)).The pump pulse has a 400 nm central wavelength and is produced by the second harmonic generation of the 800 nm femtosecond pulse in a beta barium borate (BBO) crystal, where the remaining 800nm is filtered afterwards out with a short-pass filter (blue colorglass).For the excitation-wavelength dependent measurements, we used an optical parametric amplifier (OPA) to convert our 800nm fundamental beam into the various pump wavelengths, resonant with the band-edge excitonic transition of our materials, which we filter further using a bandpass filter for the corresponding wavelength directly after the OPA.
Sample stability during pump-probe measurements:
For the OPTP measurements, we verify we are in the low-fluence or linear regime.Furthermore, at higher fluences we observe sample degradation during the measurement, over the course of roughly two hours, which can be observed by eye by the loss of photoluminescence from the sample.After checking initial samples to find a suitable stability window in terms of fluence, the data presented throughout this paper is optimized such that we do not have loss of photoluminescence, which seemed to be a good indicator for sample stability.Furthermore, we do note that spincoated films of BAPI photodegrade much faster than the single crystals we show here.
SI4 - Analytical model for the transmittance of THz pulses and photoconductivity through the quasi-2D perovskite single crystals
We do not use the Tinkham approximation 2, i.e. the thin-film approximation, to calculate the photoconductivity spectra. The reasons are that 1) the crystals are relatively thick (tens to hundreds of micrometers), and 2) the ground-state response in the THz frequency range is not flat due to the presence of vibrational resonances [3][4][5][6]. Instead, we write down the analytical Fresnel expression for the complex transmission of a THz field at a given frequency,
Tcalc(f) = Esample(f)/Efree(f) = t01 · p1 · t10 · FP010 / pair,   (eq. S1)
with Tcalc the complex transmission at a given frequency f, and Esample and Efree the THz electric field after traversing a length L (the thickness of the crystal, measured with a profilometer) with the free-standing single-crystal sample or with nothing in its path, respectively. The transmission through the air-perovskite interface, t01, the propagation through the perovskite crystal, p1, the transmission through the perovskite-air interface, t10, and the propagation of the THz pulse through air over the same length L, pair, are specified. Furthermore, due to the high refractive index of the perovskite material, Fabry-Perot-like multiple reflections are possible, captured by the term FP010. These Fresnel coefficients take the standard normal-incidence forms (see the sketch below).
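A minimal sketch of this slab-transmission model, assuming the standard normal-incidence Fresnel coefficients (illustrative only, not the analysis code used in the paper):

```python
import numpy as np

c0 = 299_792_458.0  # speed of light (m/s)

def slab_transmission(freq, n_c, L):
    """Complex transmission of a free-standing slab of complex index n_c and
    thickness L, referenced to propagation through the same length of air:
    T = t01 * p1 * t10 * FP010 / p_air  (sketch of eq. S1)."""
    k0 = 2 * np.pi * freq / c0
    t01 = 2.0 / (1.0 + n_c)                 # air -> perovskite
    t10 = 2.0 * n_c / (1.0 + n_c)           # perovskite -> air
    r10 = (n_c - 1.0) / (n_c + 1.0)         # internal reflection coefficient
    p1 = np.exp(1j * k0 * n_c * L)          # propagation inside the crystal
    p_air = np.exp(1j * k0 * L)             # propagation through air instead
    fp010 = 1.0 / (1.0 - r10**2 * p1**2)    # Fabry-Perot echoes in the slab
    return t01 * p1 * t10 * fp010 / p_air

# Example: 1 THz through a 50-um slab with an assumed index of 2.5 + 0.05j
print(slab_transmission(1e12, 2.5 + 0.05j, 50e-6))
```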
SI5 - Numerical minimization to obtain complex refractive indices and photoconductivity in the ground- and photoexcited state
Data pre-treatment: acquiring the proper phase. We now use the experimentally obtained complex transmission to obtain the real and imaginary parts of the complex refractive index. Since we measure the electric field of our THz pulse, we are directly able to measure both amplitude and phase, as shown below. We obtain the experimental complex transmission as a function of frequency from our data, shown in Figure S4 above, as Tsample = FFT(Esample)/FFT(Efree). We can then rewrite the complex transmission with the properly unwrapped phase φ, which is important for the numerical minimization further ahead, as Tsample = |Tsample| exp(iφ).
Numerical minimization. Since we now have the complex transmission of the sample, we can numerically minimize the mismatch between the calculated value (from equation S1) and our experimental data. To this end, we compute the differences between the calculated and measured amplitudes and phases, ∆A and ∆φ respectively, and sum the squared differences between experiment and calculation over all frequencies in our THz window. This difference can be numerically minimized against the complex refractive index of the sample, nc, through various strategies. We opted for a double minimization scheme, starting with a Basinhopping minimizer, implemented in the scipy.optimize library of Python. This minimizer starts at different initial guesses for the refractive index, for which we used the refractive index from the Tinkham approximation, and is run for either a fixed or predetermined number of minimization steps, an example of which is shown in Figure S5 below. The advantage of the Basinhopping method is that it searches a large parameter space for a global minimum. The output of the Basinhopping algorithm does not check for convergence, so we pass the complex refractive index obtained through Basinhopping to a second minimizer, scipy.minimize, using a Nelder-Mead algorithm to check for convergence. We can apply the same minimization scheme to obtain the refractive index of the photoexcited sample. For this we calculate the electric field of the THz pulse transmitted through the photoexcited sample as
Ephotoexcited(t) = Eground state(t) + ∆E(t),   (eq. S4)
where we add the measured photoinduced change of the THz field, ∆E(t), to the field transmitted without photoexcitation, Eground state(t). For the minimization, we use the ground-state refractive index as the initial guess for the to-be-optimized refractive index. Obtaining the photoconductivity spectra. We convert the numerically retrieved refractive indices into dielectric functions, which are additive, and separate them into various terms.
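A compact sketch of this two-stage fitting scheme, assuming the slab-transmission function from the previous sketch and, for simplicity, a single frequency-independent complex index (the actual analysis retrieves nc frequency by frequency), could look as follows:

```python
import numpy as np
from scipy.optimize import basinhopping, minimize

def misfit(params, freqs, T_meas, L):
    """Sum of squared amplitude and (unwrapped) phase mismatches, summed over
    all frequencies in the THz window."""
    n_c = params[0] + 1j * params[1]
    # slab_transmission(): the slab model from the SI4 sketch above
    T_calc = slab_transmission(freqs, n_c, L)
    d_amp = np.abs(T_calc) - np.abs(T_meas)
    d_phi = np.unwrap(np.angle(T_calc)) - np.unwrap(np.angle(T_meas))
    return np.sum(d_amp**2) + np.sum(d_phi**2)

def retrieve_index(freqs, T_meas, L, n_guess):
    """Two-stage minimisation: a global Basinhopping search followed by a
    Nelder-Mead polish, mirroring the scheme described in the text."""
    x0 = [n_guess.real, n_guess.imag]
    args = (freqs, T_meas, L)
    coarse = basinhopping(misfit, x0, niter=50,
                          minimizer_kwargs={"args": args, "method": "Nelder-Mead"})
    fine = minimize(misfit, coarse.x, args=args, method="Nelder-Mead")
    return fine.x[0] + 1j * fine.x[1]
```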
SI6 -Computational details
The density functional theory (DFT) calculations were performed using VASP [7][8][9] and the projector augmented wave (PAW) 10 methodology, with the PBE 11 exchange-correlation functional and added Tkatchenko-Scheffler 12 dispersion corrections (PBE+TS). For the structural relaxation and phonon calculations, we used an 800 eV cutoff energy, a 6x6x2 Γ-centered k-point grid (where the simulation cell was aligned so that the c-direction corresponds to the out-of-plane direction), a convergence criterion for the electronic self-consistent iterations of 10⁻⁸ eV, and a Gaussian smearing of the electronic states of 10 meV. Full structural relaxation was performed until the force on any atom was less than 0.5 meV/Å. For each element we used the VASP-recommended PBE PAW potentials, labeled 'Pb_d', 'I', 'N', 'H' and 'C', and evaluated the phonon band structures and IR spectrum.
Directionally resolved IR spectrum
We calculated a directionally resolved IR spectrum by projecting the mode-effective charges onto a unit vector initially along the shortest lattice vector (a), summing over all modes, and then letting this unit vector rotate in-plane, as sketched below.
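A small sketch of that projection (hypothetical in-plane mode-effective-charge vectors; only the geometry of the construction is meant to be illustrated):

```python
import numpy as np

def angle_resolved_ir(mode_charges, angles_deg):
    """IR intensity per mode versus the in-plane probe angle: project each
    mode-effective charge vector Z_m on a rotating unit vector u(theta) and
    take |Z_m . u|^2.  mode_charges: (n_modes, 2) in-plane components.
    Returns an array of shape (n_angles, n_modes); summing over axis=1 gives
    the total angle-resolved spectrum."""
    angles = np.deg2rad(np.asarray(angles_deg))
    u = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # starts along a
    return np.abs(u @ np.asarray(mode_charges).T) ** 2

# Two hypothetical modes with orthogonal in-plane charge vectors: their
# intensities oscillate 90 degrees out of phase as the unit vector rotates
Z = [[1.0, 0.0], [0.0, 0.7]]
print(angle_resolved_ir(Z, [0, 45, 90, 135, 180]))
```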
Figure 1(d) shows a schematic of the polarized THz (pump-probe) spectroscopic experiments.
Figure 2: THz transmission measurements through the single-crystalline BAPI crystals as a function of azimuthal angle and simulated eigenmodes for n = 1 BAPI.(a) Extinction spectra for n = 1 BAPI as a function of q.Four different modes are identified.(b) Extinction spectra for n = 2 BAPI as a function of q.(c) Extinction spectra for n = 3 BAPI as a function of q.(d) Extinction spectra for n = 4 BAPI as a function of q.Notice how, with increasing n, the q-dependence and 'on/off ratio' of the modes decreases.(e) Simulated eigenmodes for n = 1 and the corresponding absorption spectrum.The atomic displacement vectors of the four modes with the highest oscillator strength, indicated with a (*) in the spectrum, are displayed on the right.
Figure 3: transition dipole moment vectors of the vibrational modes in 2D BAPI.(a) Single-crystal diffraction pattern of the n = 1 BAPI crystal (left) and the intensity of the vibrational modes (right).The transition dipole moment (TDM) of the two modes around 1 THz lies in the [201] crystallographic direction, the TDM of the 2.5 THz mode is orthogonal to the former and lies in the [-102] direction.(b) Single-crystal diffraction pattern of the n = 2 BAPI crystal (left) and the intensity of the vibrational modes (right).The TDM of the vibrational mode around 1 THz lies in the [102] crystallographic direction.(c) Single-crystal diffraction pattern of the n = 3 BAPI crystal (left) and the intensity of the vibrational modes (right).The TDM of the vibrational mode around 1 THz lies in the [102] crystallographic direction.(d) Single-crystal diffraction pattern of the n = 4 BAPI crystal (left) and the intensity of the vibrational modes (right).The TDM of the vibrational mode around 1 THz lies in the [102] crystallographic direction.For all different n, the intense mode around 2 THz shows no angle dependence.
Figure 3(a-d) shows the angle-dependence of the observable modes extracted from the data in Figure 2.
Figure 4: Polarization-resolved optical-pump/THz-probe (OPTP) experiments on the 2D BAPI single crystals.All samples are excited at 400 nm at low fluence (i.e. in the linear regime) and additional care was taken for each n to measure the parallel and perpendicular photoconductivity at the same pump fluence.(a) Real and imaginary OPTP traces for n = 1.(b) Real and imaginary OPTP traces for n = 2. (c) Real and imaginary OPTP traces for n = 3.(d) Real and imaginary OPTP traces for n = 4.The blue vertical bars in (a-d) indicate at which pump-probe delay time we acquired conductivity spectra and averaged the OPTP signal.(e-h) Conductivity spectra for different n measured at a pump-probe delay time of 10 ps.Left and right panels show conductivity spectra obtained with the THz polarization perpendicular and parallel to the 1 THz transition dipole moments, respectively.Solid lines are fits to the Drude model.In panel (h), the vertical green bars indicate the spectral range in which the transmittance of the THz field was lower than 2%, which was omitted from further analyses.
(a). All the fitted decay rates show little to no dependence on the direction in which we probe the photoconductivity. The slower decay rate, resulting from carrier recombination, decreases from (4.40 ± 0.01) × 10⁻² ps⁻¹ for n = 1 to (6.32 ± 0.06) × 10⁻³ ps⁻¹ for n = 3, and increases again for n = 4 up to 1.8 × 10⁻² ps⁻¹.
Figure 5: Results from fitting the Drude model to the conductivity spectra and hints of anisotropy in the photoconductivity in 2D single crystalline BAPI.In all panels the blue datapoints are data obtained with a THz polarization perpendicular to the 1 THz transition dipole moments, the red datapoints are obtained parallel to the 1 THz TDM.(a) Slowest decay rate vs. layer thickness, obtained from fitting the real part of the OPTP traces [Figure 3(a-d)].(b) Scattering times vs. n.(c) Quantum yield vs. n, which was obtained by comparing the absorbed photon density to the carrier density calculated from the plasma frequency.(d) Averaged real part of the OPTP signal [blue vertical bars in Figure3(a-d)], averaged real part of the conductivity spectra, and scattering times at 10 ps pump-probe delay time vs. n.Note how all the blue datapoints are higher for all n than the red datapoints, (d), where we show the average of the real photoconductivity from the OPTP traces around 10 ps [shown by the vertical blue bars in Figure 4(a-d)], the average of the real part of the photoconductivity spectrum [shown in Figure 4(e-f)] and the scattering times obtained from the Drude fits as a function of n and for both perpendicular and parallel directions to the 1 THz
SI2 - Additional steady-state characterization
X-ray diffraction was performed on a Rigaku SmartLab diffractometer using copper Kα radiation (1.54 Å), both in Bragg-Brentano geometry for powder XRD and in transmission geometry. From the powder XRD patterns, we fitted the peak positions of the reflections. Since we are only sensitive to specular reflections in the out-of-plane direction in this geometry (i.e. reflections from the inorganic lamellae), we could determine the interlayer spacing for each of the different thicknesses of BAPI.
Figure S1: from the fitted p-XRD peak positions, we calculate the interlayer spacing, as
Figure S2: transmission 2D WAXS patterns from the samples shown throughout the
Figure S3: schematic of the
Figure S4: THz-TDS data and data analysis for n = 1.(a) Example of measured THz
Figure S5: example of Basinhopping minimization, where from left to right, we zoom in more
Figure S6: Example of minimized refractive index.The left panel shows the absolute
Figure S7: Imaginary and real parts of the refractive index (left) for the two extreme angles (90 o apart) for n = 1 BAPI, which can be converted into complex dielectric
Figure S9: Hot carriers are more mobile than cold carriers -origin of the initial fast
Figure S10: Low-frequency parts of the phonon dispersion relations of the (a) LT and (b) HT
Figure S11: Comparison of the low frequency part of the calculated IR-spectrum for the n=1
Table S1: Amount of precursors used in the synthesis of the (BA)2PbI4 crystals (n=1).
Table S2: Amount of precursors used in the synthesis of the (BA)2(MA)Pb2I7 crystals (n=2).
Table S3: Amount of precursors used in the synthesis of the (BA)2(MA)2Pb3I10 crystals (n=3).
Table S4: Amount of precursors used in the synthesis of the (BA)2(MA)3Pb4I13 crystals (n=4).
"Materials Science",
"Physics"
] |
The optical frequency comb fibre spectrometer
Optical frequency comb sources provide thousands of precise and accurate optical lines in a single device, enabling the broadband and high-speed detection required in many applications. A main challenge is to parallelize the detection over the widest possible band while bringing the resolution to the single comb-line level. Here we propose a solution based on the combination of a frequency comb source and a fibre spectrometer, exploiting all-fibre technology. Our system allows for the simultaneous measurement of 500 isolated comb lines over a span of 0.12 THz in a single acquisition; arbitrarily larger spans are demonstrated (3,500 comb lines over 0.85 THz) by performing sequential acquisitions. The potential for precision measurements is proven by spectroscopy of acetylene at 1.53 μm. Being based on all-fibre technology, our system is inherently low-cost and lightweight, and may lead to the development of a new class of broadband high-resolution spectrometers.
Optical frequency combs (OFCs) are nowadays the preferred tool for frequency measurements owing to their excellent frequency accuracy, high spectral purity and broad spectral coverage. Originally developed to provide a link between the optical and radio-frequency domains 1,2, OFCs have spread into many research areas such as attosecond science 3, optical waveform generation 4, remote sensing 5, microwave synthesis 6, optical communications 7 and astrophysics 8. Presently, OFC systems are well established in the visible and near-infrared (IR) spectral regions, and they are expanding into the mid-IR [9][10][11], as well as the terahertz 12 and extreme-ultraviolet 13 regions.
In the frequency domain, the OFC consists of an array of evenly spaced optical lines whose frequencies v_n are defined by v_n = n f_r + f_o, where n represents the comb-line order, f_r the comb-line spacing and f_o the comb offset frequency. When both f_r and f_o are linked to a frequency standard, the comb is said to be fully referenced and acts as a frequency ruler.
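As a quick illustration of this comb equation (the 20 MHz offset below is an assumed, illustrative value; only the 250 MHz repetition rate matches the source used later in the text):

```python
def comb_line_frequency(n, f_rep, f_offset):
    """Absolute frequency of the n-th comb line, v_n = n * f_r + f_o."""
    return n * f_rep + f_offset

# 250 MHz repetition rate, assumed 20 MHz offset; n chosen so that v_n falls
# near 196 THz (~1.53 um)
f_r, f_o = 250e6, 20e6
n = round((196e12 - f_o) / f_r)
print(n, comb_line_frequency(n, f_r, f_o) / 1e12, "THz")
```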
As the OFC has developed over the last years, two general approaches have emerged for its application as an ultraprecise measurement tool. In the first approach, the OFC serves as a frequency ruler against which a continuous-wave (cw) probe laser is referenced 14,15. Once locked to the nth comb line with a frequency offset df, the absolute frequency v_cw of the cw laser can be tracked through the simple relation v_cw = v_n + df, and scanned across a molecular absorption by tuning the repetition frequency f_r. This approach is preferred when a single or a few absorption lines have to be measured with high precision. The second approach employs the OFC to directly probe atoms and molecules and is called direct comb spectroscopy (DCS) [16][17][18]. The great advantage of DCS is the large number of comb components providing a massive set of parallel detection channels to gather spectroscopic information. The ultimate requirement on DCS is to push its resolution so as to simultaneously identify and measure the amplitude of each single mode within the comb. However, this task is highly challenging.
Until now, scientists have developed different techniques for DCS with comb-tooth resolution. A first one is based on the so-called virtually imaged phased array (VIPA), a side-entrance etalon disperser providing resolution better than 1 GHz (ref. 19). VIPAs are employed in free-space set-ups that can hardly be integrated into portable devices. This technique allows one to observe up to ~2,000 separate comb lines (typically 0.2-10 THz bandwidth, depending on the comb-line spacing) in a single measurement taking a few milliseconds, and has been demonstrated in various spectral regions [20][21][22]. A similar approach is based on substituting the VIPA with a scanning Fabry-Perot (FP) cavity operating according to a Vernier scheme 23,24. This approach has demonstrated near-parallel acquisition of ~4,000 comb lines (4-THz bandwidth) in a time of 10 ms. A second technique for DCS is called dual-comb spectroscopy and uses two OFCs with slightly detuned [25][26][27] comb-line spacings. This approach has the advantage of using a single point detector; however, it requires two OFC sources. Dual-comb set-ups based on free-running OFCs can only reach resolutions of a few GHz, which are typically not sufficient for the observation of single comb lines 25. Indeed, pushing the resolution of dual-comb spectroscopy to the comb-tooth level requires complicated set-ups to overcome the relative drifts between the two OFCs 28,29; nonetheless, state-of-the-art systems have been shown to resolve ~120,000 comb lines over 12 THz with acquisition times of 2.7 s (ref. 30). This technique has also been applied to probe two-photon transitions of Rb with Doppler-limited resolution 31. A third technique, Fourier-transform infrared (FTIR) spectroscopy coupled to OFC sources, has become a powerful combination enabling the measurement of broadband spectra with comb-tooth-level resolution 32. As FTIR spectroscopy is based on a Michelson interferometer with a scanning delay line on one arm, reaching comb-tooth resolution requires a sufficiently long delay line (up to 10 m for a comb-line spacing of 100 MHz), which implies correspondingly long measurement times and extended optical layouts sensitive to vibrations.
In the following, we report on a technique for DCS with resolution at the comb-tooth level. The technique is based on a single OFC source and a fibre spectrometer consisting of a multimode (MM) fibre and a camera for acquisition of images. A fibre spectrometer is fundamentally different from other established spectrometers: while other spectrometers are based on a one-to-one spectral-to-spatial mapping (array spectrometers, VIPAs) or spectral-to-spectral mapping (FTIR, Dual-Comb), a fibre spectrometer is based on a complex spectral-to-spatial mapping where each frequency injected into the spectrometer corresponds to a specific intensity profile (speckle pattern), that is stored in a transmission matrix. A reconstruction algorithm is then used to retrieve an unknown input spectrum based on the measured intensity distribution. The fibre spectrometer was originally proposed as a general purpose device similar to array spectrometers [33][34][35] featuring high-resolution operation with spectra constituted by a single line and limited to a span of 12 GHz. The device also allowed for broadband operation, but at reduced resolution of 4 nm. Here, we demonstrate that a fibre spectrometer coupled to an OFC source can be used as a broadband metrology-grade spectrometer for high-resolution DCS experiments. We demonstrate spectral fingerprinting of acetylene at 1.53 mm through parallel acquisition of B500 comb lines, corresponding to a span of 0.12 THz, with an instrument resolution of 120 MHz. The span has also been extended to 7 nm (0.9 THz) with unchanged resolution, through sequential acquisition of speckle patterns. We observed absorption lines at signal-to-noise ratio (SNR) of 85 with acquisition times of 10 ms; the overall accuracy on the line centre frequency is ± 3.5 MHz.
The fibre spectrometer has been tested for amplitude retrieval of comb lines, showing high fidelity and reproducibility at levels of 5%. Moreover, absolute frequency calibration has been demonstrated, preserving the accuracy and precision inherited by the OFC. The resolution of 120 MHz observed with the fibre spectrometer represents a B10-fold improvement over state-of-art VIPA-based and free-running dual-comb systems. In addition, as the resolution scales with parameters of the MM fibre such as the length or the numerical aperture (NA), further improvements are possible by tuning the fibre design. Noise sources and instabilities have been reduced, and the overall improved performance avoids time lags due to reconstruction algorithm complexity. The system is simple, robust, long-term stable and can be easily integrated in a compact device.
Results
Optical setup, calibration and spectral reconstruction. The layout of the system used for the experiments is shown in Fig. 1. The OFC is based on an Er:fibre femtosecond laser with an output power of ≈0.5 W and a repetition frequency of 250 MHz. Both the repetition frequency f_r and the offset frequency f_o of the comb have been locked to a radio-frequency reference for long-term stability and absolute calibration of the frequency axis. The bandwidth of the comb is reduced by a fibre-coupled tunable filter having a −10 dB band of ≈0.5 nm. Optionally, the comb lines are also filtered out by a factor of 4 (or higher) by means of a FP cavity having a free spectral range of 1 GHz (or higher), depending on the comb line spacing required for the experiment. After that, the comb is passed through a 142-mm cell containing acetylene at a pressure of 10 mbar. The comb is multiplexed with a fibre-coupled single-frequency (SF) tunable laser at 1.55 μm, which is necessary for the calibration of the spectrometer. The principle of operation of a fibre spectrometer relies on wavelength-dependent speckle patterns formed by interference between the guided modes in a MM optical fibre. In addition to the input wavelength, the speckle pattern also depends on the spatial mode and polarization of the light coupled into the MM fibre. To avoid variations in the modes excited in the MM fibre, light is first passed through a polarizing beam-splitter and then coupled to a single-mode, polarization-maintaining (PM) fibre. Finally, the beams are coupled into a standard MM fibre having a core diameter of 105 μm, an NA of 0.22 and a length of 100 m. The MM fibre is housed inside a temperature-controlled aluminium chamber. The speckle pattern at the output of the MM fibre is imaged onto an InGaAs camera with a resolution of 320 × 480 pixels.
The output speckle patterns act as a fingerprint, uniquely identifying the spectral components of the light coupled into the fibre. To use the MM fibre as a spectrometer, we first calibrated a transmission matrix describing the spectral-to-spatial mapping properties of the fibre (see 'Methods' section). The transmission matrix T relates the discretized input spectrum S to a vector describing the discretized speckle pattern P, as P = T × S (ref. 35). The calibration is accomplished by tuning the SF laser within the range of interest at proper frequency steps and recording the corresponding speckle patterns produced at the end of the MM fibre, thereby measuring one column of T at a time. After calibration, an arbitrary input spectrum can be reconstructed from the speckle pattern it produces as S = T⁻¹ × P. Here, the inverse of the transmission matrix was calculated using a singular value decomposition combined with a truncation procedure to discard the weak singular values 35. Note that the inversion procedure is performed only once as part of the calibration step, after which an arbitrary input spectrum can be reconstructed using a single matrix multiplication; hence the calculation of the spectrum is extremely fast and takes <100 ms on a standard laptop PC.
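To make the reconstruction step concrete, the sketch below assumes a calibrated transmission matrix and applies the truncated-SVD pseudo-inverse described above; the array sizes, truncation threshold and variable names are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

def truncated_pinv(T, rel_threshold=1e-2):
    """Pseudo-inverse of the transmission matrix T via SVD,
    discarding singular values below rel_threshold * max(s)."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    keep = s > rel_threshold * s.max()
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    # T ~ U @ diag(s) @ Vt  =>  T_pinv ~ V @ diag(1/s) @ U.T
    return (Vt.T * s_inv) @ U.T

# --- calibration: one speckle pattern (flattened camera frame) per probe frequency ---
n_pixels, n_freqs = 15625, 500          # 15,625 spatial channels; 500 spectral channels per measurement
rng = np.random.default_rng(0)
T = rng.random((n_pixels, n_freqs))     # stand-in for measured speckle patterns, one column per frequency

T_pinv = truncated_pinv(T)              # computed once, as part of the calibration step

# --- measurement: reconstruct an unknown spectrum from a single speckle pattern ---
S_true = np.zeros(n_freqs)
S_true[::10] = 1.0                      # toy "comb" spectrum, one line every 10 channels
P = T @ S_true + 0.01 * rng.standard_normal(n_pixels)   # speckle pattern plus camera noise
S_rec = T_pinv @ P                      # a single matrix-vector product per reconstructed spectrum
```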
Characterization by single-frequency laser illumination. The fibre spectrometer has first been characterized under SF illumination conditions. The SF laser used for the calibration is used as a probe source of unknown spectrum, and its frequency is set at a position in the middle of the calibration range. After recording the speckle pattern, the reconstruction algorithm retrieves the spectrum reported in Fig. 2a, where a band of 0.96 pm can be inferred for the SF laser, limited by the resolution of the fibre spectrometer. Figure 2b shows the correlation coefficient between the first speckle pattern and all the following patterns acquired during calibration at steps of 250 MHz (≈2 pm) in the same spectral range (see 'Methods' section for details). As clearly shown, all patterns can be considered uncorrelated at a level of 96%; the uncorrelation level represents the ability of the reconstruction algorithm to separate the different spectral components constituting the spectrum under measurement. Reaching a correlation among all patterns close to zero (the ideal condition) requires that the speckles are very sparse and small, and have non-zero intensity at disjoint (non-overlapping) pixel positions; depending on the design of the fibre spectrometer, the correlation coefficient can be considerably above zero. It is worth noting that at each step during the calibration, the beatnote frequency between the SF laser and the nearest comb tooth is acquired and stored in a separate vector so that it can be used for the absolute calibration of the frequency axis; however, there is no feedback loop to lock the SF laser to the comb. More specifically, the SF laser is scanned through the frequency interval of interest at nominal steps of 2 pm (250 MHz) using a built-in function embedded in the laser software; however, because the SF laser is not locked to the comb, the exact change in frequency at each step is variable. This enormously simplifies the system, as there is no need for feedback-loop electronics and a controller, hence the calibration process is overall very simple and robust. At the end of the calibration process, there is a complete set of speckle patterns with corresponding absolute frequencies. The vector of beatnote frequencies is shown in Fig. 2c; the frequencies stay within a range of 14 MHz, with an average value of 66 MHz. This average frequency value is very important for the comb spectroscopy because, at the end of the calibration process, the comb is rigidly shifted by the same amount (offset frequency tuning) so that the comb lines sit at the same frequencies where the calibration has been performed.
The primary challenge when using such a high-resolution fibre spectrometer is ensuring that the transmission matrix remains stable after calibration. Indeed, the operating principle of the fibre spectrometer relies on the same wavelength always producing the same speckle pattern; hence mechanical perturbations or variations in the fibre environment (for example, temperature variations) that change the speckle pattern will corrupt the transmission matrix. In this work, the MM fibre was placed in a temperature-controlled, sealed aluminium chamber (see Fig. 3) to minimize environmental perturbations. During experiments, a moderate vacuum was kept inside the chamber at a pressure of 10⁻³ mbar; however, it turned out that the key factor in guaranteeing highly reproducible speckle patterns is keeping the temperature of the chamber (fibre) constant, rather than the vacuum inside the chamber. To characterize the stability of the spectrometer, the SF laser was locked to the OFC at a wavelength of ≈1,535.3 nm to ensure that its frequency stays constant over a long time, and a speckle pattern was then recorded every ten seconds and correlated with the first one. The result reported in Fig. 4b shows that the speckle pattern experienced a negligible ≈1% loss of correlation within 3 h, owing to the temperature stability of the chamber, which means stable operation for hours after calibrating the transmission matrix. In addition, the excellent fibre stability allowed us to use a simple matrix inversion procedure to reconstruct the input spectrum without the need for the non-linear optimization algorithm used in previous demonstrations of a MM fibre spectrometer 34,35. The ripples in the autocorrelation trace of Fig. 4b could be ascribed to small oscillations of the locking point in the electronics used to lock the SF laser to the comb, but this effect does not influence the spectrum reconstruction process.
DCS spectroscopy. The principle of DCS is based on simultaneously exploiting and demultiplexing the vast number of spectral channels enabled by the OFC. In the ideal case, the system should provide a resolution down to the single-comb-line level. In this respect, the OFC fibre spectrometer has been tested under broadband illumination conditions. Experimentally, the OFC (stabilized to a global positioning system (GPS)-disciplined Rb standard) is passed through a gas cell filled with acetylene (¹²C₂H₂) at 10 mbar. The absorption lines of acetylene are very well known (see the HITRAN database 36) and represent a good target for a DCS test. The comb is then launched directly into the fibre spectrometer (FP cavity not used), and the transmission spectrum is reconstructed starting from the speckle pattern, as reported in Fig. 5. In particular, Fig. 5a shows the transmission spectrum consisting of 500 isolated comb lines measured in a single camera shot with an exposure time of 40 ms, with a comb line spacing of 250 MHz. Two absorption lines of the ν₁ + ν₃ band of acetylene, P(17) and P(7), are resolved. Figure 5b shows the corresponding absorption coefficient calculated from the ratio of the transmission spectra of the cell in filled/empty conditions. The single-shot measurement (blue circles) fitted by a Voigt profile is characterized by a notable accuracy on the line centre frequency and amplitude of 0.6 MHz and 5%, respectively. The SNR of 39 obtained in this case is due to the high number of comb lines illuminating the spectrometer, which causes a reduced contrast (see 'Methods' section for details) in the measured speckle pattern; hence the camera noise has higher correlations with the speckle patterns at the various frequency components across the calibration range, and the performance of the reconstruction algorithm is limited. Nonetheless, for applications requiring only a quick detection of a specified absorption line, this limited SNR can be sufficient, especially in view of the very short exposure time of 40 ms, which could allow for the detection of a fast transient signal via a real-time 'absorption line movie'. Furthermore, as shown by the averaged multi-shot measurement (red circles) in Fig. 5b, the impact of the camera noise can be easily reduced and a satisfactory SNR of 85 can be obtained by averaging 10 camera frames, which requires an acquisition time of only 10 ms, limited by the maximum frame rate allowed by the camera; this is similar to the DCS techniques with comb-tooth resolution reported so far. There are various noise sources acting on the spectrometer that typically perturb the speckle patterns, with the net effect of adding noise to the reconstructed OFC spectra. In particular, a comb speckle pattern can be viewed as the linear combination of the speckle patterns corresponding to each comb frequency component. During calibration, the SF laser is scanned through the frequency range of interest to acquire the speckle patterns at frequencies virtually coincident with the comb components (the ideal condition), eventually resulting in a spectrum free of reconstruction noise. However, as shown in Fig. 2c, the calibration patterns are acquired at frequencies characterized by a root-mean-square error of ±7 MHz with respect to the comb components, which is one main source of reconstruction noise, hereafter referred to as 'calibration noise'. The noise limit set by calibration noise can be quantified numerically in terms of the s.d.
of the absorption noise in the spectra retrieved by the reconstruction algorithm (see 'Methods' section for details). The results are shown in Fig. 6a. Assuming a near-complete suppression of the calibration noise, the absorption noise would be reduced to ≈2.6 × 10⁻³ with respect to the present condition, where the calibration noise amounts to ±7 MHz; this corresponds to an increase of the SNR to ≈384. The calibration noise can be suppressed by continuously scanning the SF laser along the frequency axis (no discrete steps) and triggering the camera acquisition when the laser frequency is coincident with a comb component. It should be noted that, similarly to the calibration noise, the frequency noise of the OFC, that is, the frequency jitter of each comb component along the frequency axis, could also be a possible noise source affecting the reconstructed spectrum. However, the frequency jitter of the comb components is typically much lower than the calibration noise, hence this noise contribution from the OFC is negligible.
As a second source of reconstruction noise, amplitude noise should be considered. More specifically, amplitude noise of the speckle patterns is due to amplitude instability of the comb and to electronic noise or drift of the InGaAs camera. Both of these noise sources can be attenuated by averaging the speckle patterns. As with the calibration noise, the noise limit set by amplitude noise has been quantified by measuring the s.d. of the absorption noise in the reconstructed spectra. The results are shown in Fig. 5d as a function of the number N of averaged speckle patterns and the corresponding acquisition time (10 ms per camera frame). The absorption noise decreases as 1/√N for N < 20, but is limited at 0.9 × 10⁻³ (SNR of ≈111) for higher values of N. This lower limit is due to flicker amplitude noise in the low-frequency range below 100 Hz of the OFC illuminating the spectrometer; flicker noise is commonly observed in OFC sources 37,38, and can even be enhanced when the comb offset frequency is stabilized by acting on the pump diode current. The flicker noise could, in principle, also be ascribed to the electronics of the acquisition camera; this has been excluded by performing the measurement of absorption noise with the spectrometer illuminated by a high-stability tungsten lamp (band-pass filtered at 1,530 nm) and observing that the absorption noise follows the 1/√N attenuation law even for a number of averages N ≫ 20, that is, in the regime where flicker noise of the comb limits the performance.
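The averaging behaviour described above (a 1/√N decay of the absorption noise until a flicker-noise floor is reached) can be mimicked with a toy simulation; the noise amplitudes below are arbitrary assumptions chosen only to reproduce the qualitative trend, not the measured values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch = 500
true_spectrum = np.ones(n_ch)       # flat comb envelope, arbitrary units
sigma_frame = 0.05                  # white per-frame noise (illustrative)
sigma_flicker = 0.001               # slow common-mode drift that does not average down (illustrative)

def averaged_spectrum(n_avg):
    """Average n_avg noisy frames, then add a residual drift that averaging cannot remove."""
    frames = true_spectrum + rng.normal(0, sigma_frame, (n_avg, n_ch))
    return frames.mean(axis=0) + rng.normal(0, sigma_flicker, n_ch)

def absorption_noise(n_avg, n_repeats=100):
    """s.d. of ln(S1/S2) between two independently averaged spectra."""
    vals = [np.std(np.log(averaged_spectrum(n_avg) / averaged_spectrum(n_avg)))
            for _ in range(n_repeats)]
    return np.mean(vals)

for N in (1, 5, 10, 20, 100, 500):
    print(f"N = {N:4d}   absorption noise = {absorption_noise(N):.2e}")
    # decreases roughly as 1/sqrt(N) before flattening at the drift-limited floor
```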
Among the main noise sources, the temperature instability of the MM fibre has to be considered. Changes in temperature affect both the refractive index and the fibre length, and hence the phases of the interfering modes propagating in the MM fibre. As shown in Fig. 4b, a fibre temperature drift of ≈20 mK reduces the correlation of a single speckle pattern by ≈1%. The effect of this can be quantified by looking at Fig. 6b, where the absorption noise is plotted as a function of the uncorrelation level between the speckle patterns acquired during calibration. Considering an uncorrelation level of 96%, a reduction by 1% degrades the absorption noise by 1.5 × 10⁻³ × 1 = 1.5 × 10⁻³, where 1.5 × 10⁻³ is the slope of the exponential fitting function around the considered operating point. Hence, a temperature drift of ≈20 mK produces only a minor increase of the absorption noise. On the other hand, assuming a linear dependence of correlation on temperature outside the range shown in Fig. 4b, a temperature drift of 0.2 K would reduce the correlation by 10% and increase the absorption noise by 1.56 × 10⁻². Correspondingly, the SNR in Fig. 5b would be reduced to ≈36, which represents a major increase in reconstruction noise.
Mechanical perturbations of the MM fibre, such as acoustic and seismic vibrations or air flow, represent another potential noise source. Typically, in laboratory environments, the main concern is related to small fibre deformations or vibrations induced by air flow, a problem which can be avoided by enclosing the MM fibre in a rigid container such as the aluminium chamber used in this work. For applications in environments subject to strong vibrations, the fibre can be fixed with resin so that the whole system (fibre + spool + fibre coupling optics) behaves as a rigid body, becoming virtually insensitive to mechanical perturbations 34,35.
The sensitivity of the fibre spectrometer depends on the uncorrelation level of the speckle patterns acquired during calibration. As a test of this characteristic, the absorption noise has been measured at different uncorrelation levels. To this purpose, the uncorrelation has been intentionally degraded with respect to Fig. 2b by acting on the coupling between the single-mode PM fibre and the MM fibre. In particular, a small lateral mismatch between the axes of the two fibres excites only low-order modes in the MM fibre, which means larger typical dimensions (radii) of the speckles and hence lower uncorrelation levels; on the other hand, a large lateral mismatch, which is easily tolerated by the MM fibre without affecting the coupling efficiency, allows for excitation of high-order modes yielding higher uncorrelation levels. At each degradation step of the uncorrelation, the calibration procedure has been repeated and then the absorption noise of a typical measurement performed using the OFC has been calculated. The results of this procedure are reported in Fig. 6b. The s.d. of the absorption noise closely follows a decreasing exponential function at uncorrelation levels above 40%; for application to DCS experiments, uncorrelation levels above 80% would be preferable. It is worth noting that the spectral resolution of the fibre spectrometer is virtually independent of the uncorrelation of the speckle patterns, as experimentally verified by reconstructing the spectrum of the SF laser at different uncorrelation levels, and also by observing the correlation traces of the speckle patterns acquired within each calibration procedure performed after changing the uncorrelation level. However, when the spectrometer is illuminated by the OFC and the uncorrelation is set to very low levels, the quality of the reconstructed spectra may be sufficiently degraded by the noise that the resolving power of the system would be compromised.
In addition, the dependence of the absorption noise on the number of comb lines contained in a single speckle pattern has been analyzed. To this purpose, the OFC has been filtered by the FP cavity set to different lengths to select subsets of comb lines with the desired comb line spacing (0.25, 1, 2.5, 5, 10, 12 GHz) and number of comb lines (M = 500, 125, 50, 25, 10 lines). The absorption noise has been calculated after running the reconstruction algorithm at each of the different FP cavity lengths, and the corresponding results are shown in Fig. 6c, where a linear dependence on the number of comb lines can be seen. Also, the speckle contrast has been calculated as a function of the number of comb lines; in this case, the data are well fitted by a 1/√M function, M being the number of comb lines. Notably, an SNR of ≈1,000 could be reached when the number of comb lines incident onto the spectrometer is limited to 50. Any reduction of the overall absorption noise obtained by acting on the noise sources discussed would allow a proportional increase in the number of comb lines retrieved in a single acquisition. As an example, a potential ≈4.5-fold improvement of the SNR from 84 to 384 could be obtained by suppression of the calibration noise from 7 MHz down to a negligible value, as outlined before; at this point, increasing the number of comb lines to 4.5 × 500 ≈ 2,250 would restore the SNR to 84 and increase the span to ≈0.5 THz.
As another strategy to increase the band, the redundancy of pixels of the acquisition camera could be exploited to parallelize speckle-pattern acquisition without affecting the overall absorption noise (see the 'Discussion' section for details).
The results presented here have been obtained using a cell characterized by an absorption path length of 142 mm. To extend the system to situations where very weak absorbers have to be detected or trace-gas measurements performed, the absorption path length can be largely increased by using a multi-pass absorption cell, such as a Herriott cell with typical absorption path lengths in the range 30–200 m, or even a resonant cavity approach when ultra-high sensitivity is required, reaching equivalent absorption path lengths of up to 10⁴–10⁵ m. As the fibre spectrometer system has been characterized in terms of the absorption noise, that is, the product of the minimum observable absorption coefficient α_min and the absorption path length l, it is straightforward to calculate the minimum absorption coefficient that could be measured by increasing the absorption path length: as an example, considering the absorption noise of 0.012 obtained in Fig. 5d, a multi-pass cell with an absorption path of 200 m would limit the sensitivity of the fibre spectrometer to a minimum absorption coefficient of 0.012/200 m = 6 × 10⁻⁵ m⁻¹. Finally, it should be mentioned that an all-fibre architecture of the system featuring enhanced sensitivity, and including also the sample absorption section, could be implemented by substituting the free-space absorption cell with a hollow-core fibre enclosed in a vacuum-tight tank filled with the gas sample under analysis; with a proper choice of the hollow-core fibre characteristics, the beam generated by the OFC has been demonstrated to propagate with fibre losses of 30 dB km⁻¹, hence absorption path lengths of 50–100 m are easily tolerated 27.
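A short numerical sketch of this sensitivity scaling follows; the absorption-noise value is the one quoted above, while the list of path lengths is an illustrative assumption covering a single-pass cell, Herriott cells and an equivalent cavity-enhanced path.

```python
# Sensitivity scaling with absorption path length.
# The spectrometer is characterised by its (dimensionless) absorption noise sigma_n = alpha_min * l,
# so a longer path l lowers the minimum detectable absorption coefficient alpha_min.
absorption_noise = 0.012                      # as quoted for the averaged measurement above
for path_m in (0.142, 30.0, 200.0, 1e4):      # 142-mm cell, Herriott cells, cavity-equivalent path
    alpha_min = absorption_noise / path_m     # minimum detectable absorption coefficient, in m^-1
    print(f"l = {path_m:9.3f} m  ->  alpha_min = {alpha_min:.2e} m^-1")
```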
The stability of the spectrometer has been tested using the OFC, and the results are shown in Fig. 4d. In particular, the speckle pattern of the comb in locked conditions has been repeatedly acquired at intervals of 10 s over a time window of 3 h, with a resulting overall loss of correlation of 0.01%, that is, a factor of ≈100 less than that obtained with the SF laser under similar conditions (that is, a similar temperature drift of the aluminium chamber). This result is ascribed to the reduced contrast of 0.078 characterizing the OFC speckle pattern, which is a factor of ≈10 less than the contrast of 0.71 obtained with the SF laser; as the correlation coefficient (see 'Methods' section for details) depends on the pixel-by-pixel product of two speckle patterns, this leads to a reduction factor of ≈100 in the autocorrelation of the OFC with respect to the SF laser, under similar conditions. The OFC autocorrelation trace does not show the oscillation observed with the SF laser, owing to the higher long-term stability of the locking electronics dedicated to the comb. Depending on the temperature stability of the laboratory, the calibration might not be preserved over a long time window of 24 h. The resolution capability of the OFC fibre spectrometer has been investigated in detail. As already shown under SF laser illumination conditions, the fibre spectrometer has a resolution of 120 MHz, which is in principle more than sufficient to resolve single comb lines with our OFC, which has a line spacing of 250 MHz. However, under broadband illumination conditions, the contrast of the speckle pattern at the output of the MM fibre is reduced; hence the ability to resolve single lines could be compromised and has to be confirmed. To this purpose, the comb has been filtered by a FP cavity having a free spectral range of 1 GHz to select a subset of lines, and the transmission spectrum is then calculated, as reported in Fig. 7b. It can be seen that each comb line is followed by three zero-amplitude lines, which means the comb lines are fully resolved and there is no interference between adjacent components. To further confirm the resolution capability of the spectrometer, the FP resonances have been shifted progressively to select each of the following subsets of comb lines, and the corresponding spectra have been calculated. As shown in Fig. 7c–e, the reconstruction algorithm again provides the expected results. Once the four subsets of comb lines have been acquired, the interleaving of the subsets can be represented as in Fig. 7a,f, where the P(9) absorption line of the ν₁ + ν₃ band of ¹²C₂H₂ is clearly visible. Figure 7g shows the Voigt fit to the experimental absorption data and the line profile calculated from HITRAN; the overall agreement in terms of amplitude and frequency is very good. The higher SNR of ≈150 obtained in this case for the Voigt fit is due to the fourfold decrease in the number of comb lines in each speckle pattern, leading to sharper patterns characterized by higher contrast.
The broadband potential of our DCS technique is further demonstrated in Fig. 8, which shows a sequence of spectra corresponding to 11 camera acquisitions (each averaged over 10 frames) and a total span of ≈0.9 THz (≈7 nm). Each single camera acquisition contains the spectral information from ≈500 comb lines, which means ≈3,500 resolved comb lines over the whole covered span. This acquisition has been made possible by calibrating the spectrometer in a single session from 1,530 to 1,537 nm, and then changing the central wavelength of the tunable filter by predefined regular steps to cover the desired spectral range. At each step of the tunable filter, the camera acquisition is launched and the corresponding spectrum is calculated by a simple and fast matrix product in 10 ms. The red line in Fig. 8b represents the single-line fits to the experimental data, whereas the blue line represents the data from HITRAN. Notably, the line centre frequencies and amplitudes estimated from the experimental data are within 3.5 MHz and 5% (frequency and amplitude accuracy), respectively, of the data from HITRAN.
Discussion
While the approach used to extend the bandwidth to 7 nm (as shown in Fig. 8) required a series of sequential measurements, thereby increasing the acquisition time, it is also possible to increase the spectrometer bandwidth in parallel. Specifically, a wavelength-division multiplexer could be used to first divide an unknown input spectrum into a number of subbands 39. In many spectral regimes (including the telecom band considered in this work), compact, fibre-based wavelength-division multiplexer modules are readily available. Each subband could then be coupled to a separate MM fibre spectrometer to record the high-resolution spectrum, and the overall spectrum could be concatenated from the separate measurements. Moreover, by simultaneously imaging the speckle patterns formed at the outputs of several MM fibres onto a single camera, this method enables parallel operation of multiple fibre spectrometers with a single camera. In a fibre spectrometer, the number of pixels on the camera approximately dictates the number of spectral channels which can be measured (without relying on compressed sensing techniques). Since most 2D cameras have many more pixels (for example, >10⁵) than required for a single fibre spectrometer, which might have ≈1,000 spectral channels, there is significant potential for parallelization.
The resolution of the fibre spectrometer could be further improved to ≈12 MHz. As shown in a previous work 35, the resolution δν of the fibre spectrometer scales as the inverse of the fibre length L and the inverse square of the NA (δν ∝ L⁻¹ NA⁻²). Assuming the NA of the MM fibre is changed from 0.22 to 0.5, the resolution would be improved by a factor of (0.5/0.22)² ≈ 5.2. By also increasing the fibre length to 200 m, that is, a factor of 2 with respect to the configuration presented in this work, the overall improvement of the resolution would be a factor of 5.2 × 2 ≈ 10. Hence, under the conditions outlined, 500 spectral elements could be simultaneously measured with an instrument resolution of ≈12 MHz, which is nearly two orders of magnitude better than state-of-the-art VIPA systems. However, it is worth noting that, to keep the absorption noise level constant, the span has to be reduced proportionally to the resolution, because the absorption noise depends on the number of spectral elements resolved.
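The projected resolution can be checked with a one-line scaling calculation; the baseline numbers are those of this work, while the target fibre length and NA are the hypothetical design values discussed above.

```python
# Projected resolution from the scaling dnu ∝ 1/(L * NA^2) reported in ref. 35.
dnu_0, L_0, NA_0 = 120e6, 100.0, 0.22       # 120 MHz with a 100-m, NA 0.22 fibre (this work)
L_1, NA_1 = 200.0, 0.5                      # hypothetical longer, higher-NA fibre
dnu_1 = dnu_0 * (L_0 / L_1) * (NA_0 / NA_1) ** 2
print(f"projected resolution: {dnu_1 / 1e6:.1f} MHz")   # ~11.6 MHz, i.e. ≈12 MHz
```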
Finally, although this initial demonstration was performed in the short-wave infrared, the same principle could be extended to operation in the visible or mid-infrared. The only requirements are to select an OFC and a SF laser source, a MM fibre and an acquisition camera appropriate to the desired spectral region. In particular, OFCs can nowadays be generated in the near- and mid-infrared spectral regions with Watt-level powers 10,37,40. Quantum cascade lasers represent a reliable and cost-effective technology for the implementation of tunable SF lasers in the mid-infrared up to 18 μm. The technology of infrared fibres based on fluoride (ZrF₄, InF₃), chalcogenide (As₂S₃, As₂Se₃) or polycrystalline (KBl:KCl) materials allows for excellent mechanical flexibility, good environmental stability and high transmission 41. The best results have been achieved with fluoride fibres, which can give propagation losses as low as a few dB km⁻¹ around 2.5 μm, to be compared with 0.2 dB km⁻¹ for silica around 1.5 μm; in the mid-infrared, propagation losses of ≈0.1 dB m⁻¹ at 10.6 μm have been demonstrated with polycrystalline fibres 41. A 100-m long MM fibre would give typical insertion losses of ≈10 dB in the near- and mid-infrared, which is compatible with the power levels of existing OFCs and quantum cascade laser sources. Finally, compact acquisition cameras in the near- and mid-infrared are currently available based on different detector types (InSb, HgCdTe), with resolutions up to 1,240 × 1,024 pixels and 14-bit amplitude dynamic range.
We have demonstrated a DCS technique allowing for parallel detection of individual comb lines. The technique is based on a fully referenced OFC source and a fibre spectrometer. We found that a standard MM fibre can be operated as a broadband high-resolution spectrometer by stabilizing its temperature to a few tens of millikelvin in a proper container and providing isolation from air flow and mechanical perturbations. A 100-m long fibre is sufficient to resolve comb lines with a frequency spacing of 250 MHz. This is a simple and viable solution to reach such a high resolution, as fibres are cheap, lightweight and can be coiled into small volumes. The performance of the system, tested by precision spectroscopy experiments on acetylene, resulted in estimation uncertainties of the P-branch line centre frequencies at 1.53 μm within ±3.5 MHz and line amplitudes within 5%. The absorption lines have been measured at an SNR of 85 within a time of 10 ms, with absolute calibration of the frequency axis. Notably, 500 isolated comb lines in a span of 0.12 THz have been characterized with single camera acquisitions, or up to 3,500 comb lines over 0.85 THz by performing sequential acquisitions at predefined adjacent spectral intervals to cover the band of interest. The span observed with a single camera acquisition can also be scaled to the THz level by adopting simple fibre solutions. Our system represents arguably the first real application of a speckle-based spectrometer and could inspire a new class of high-resolution, broadband spectrometers and photonic sensors.
Methods
Experimental apparatus. A commercial OFC (Menlo Systems, FC1500-250) has been used for the experiments. In particular, the OFC is constituted by a low-power Er:fibre femtosecond oscillator operating at a repetition frequency of 250 MHz, amplified by an Er:fibre amplifier to an average power level of ≈0.5 W in a band from 1,500 to 1,620 nm. A secondary output of the low-power oscillator is also amplified and coupled into a highly non-linear fibre for generation of supercontinuum radiation covering an octave-spanning spectral range from 1,000 to 2,100 nm. The offset and repetition frequencies of the comb are locked to a GPS-disciplined Rb frequency standard at 10 MHz (Precision Test Systems, GPS10RBN) by acting on the pump power and the piezo cavity mirror of the low-power Er:fibre oscillator. The fibre-coupled tunable filter has a −10 dB bandwidth of 0.6 nm (Santec, OTF-930). The FP cavity (Burleigh, RC-110) is equipped with two mirrors having a radius of curvature of 500 mm, a reflectivity of 99% and a flattened dispersion profile. The FP cavity is locked to the comb by the lock-in technique. The SF laser (Agilent 81600B) used for calibration and testing of the fibre spectrometer is a fibre-coupled semiconductor laser based on an extended cavity design; it can be tuned from 1,440 to 1,650 nm and is characterized by a linewidth of 100 kHz over an observation time of 1 ms. The SF laser was factory-calibrated with an accuracy better than ±f_r/2, f_r being the repetition frequency of the OFC. Hence, the comb tooth number n can be straightforwardly obtained using the relation n = ν_cw/f_r, where ν_cw is the factory-calibrated frequency of the cw laser. In case the cw laser has to be calibrated, a wavemeter with accuracy better than half the comb line spacing, or even a few well-known absorption lines of a gas, could be used. The MM fibre (Thorlabs, FG105LCA), characterized by an NA of 0.22 and a core diameter of 105 μm, is coiled and housed inside an aluminium chamber with dimensions of 150 × 150 × 220 mm³. The temperature of the chamber is stabilized within a range of ≈0.1 °C by a standard temperature controller (Lightwave, LDT-5980) and four 10-W Peltier elements. An objective with an NA of 0.5 is used to image the speckle patterns onto a 320 × 256 InGaAs camera (HighFinesse, LC320). During calibration, the SF laser is scanned at steps of 250 MHz (≈2 pm) over a span of 0.12 THz (≈1 nm). Each frequency step takes 300 ms to let the laser stabilize, and the whole calibration is completed within ≈15 min; the exposure time of the camera is set to 40 ms. All instruments are driven by a laptop PC through a home-made software routine.
Reconstruction algorithm, correlations and speckle contrast. The MM fibre spectrometer is based on a complex spectral-to-spatial mapping where each frequency injected into the MM fibre produces a unique spatial intensity distribution. This intensity distribution is stored in a transmission matrix which describes the speckle pattern at each wavelength. We calibrated a transmission matrix consisting of 15,625 spatial channels (pixels) and 3,500 spectral channels. The spectral channels ranged from 1,530 to 1,537 nm in steps of 250 MHz (≈2 pm). For each measurement, we used a subset of the measured transmission matrix corresponding to the bandwidth selected by the tunable filter. To reconstruct the original spectra, we adopted a matrix pseudo-inversion algorithm based on singular value decomposition. The singular values below a threshold are truncated to optimize the SNR in the reconstructed spectra.
The speckle contrast c_A of a given speckle pattern A_mn is calculated as c_A = σ_A/⟨A⟩, where σ_A is the standard deviation of A_mn and ⟨A⟩ = Σ_{m,n} A_mn/(MN), M and N being the number of rows and columns of A_mn. The correlation coefficient r_AB between two speckle patterns A_mn and B_mn is calculated according to the formula r_AB = ⟨(A_mn − ⟨A⟩)(B_mn − ⟨B⟩)⟩/(σ_A σ_B).
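The two quantities defined above translate directly into short routines; the toy patterns at the end are illustrative, not measured data.

```python
import numpy as np

def speckle_contrast(A):
    """Contrast c_A = sigma_A / <A> of a speckle pattern A (2-D array)."""
    return A.std() / A.mean()

def correlation_coefficient(A, B):
    """r_AB = <(A - <A>)(B - <B>)> / (sigma_A * sigma_B)."""
    dA, dB = A - A.mean(), B - B.mean()
    return (dA * dB).mean() / (A.std() * B.std())

# toy example: two noisy realisations of the same pattern on a 320 x 256 grid
rng = np.random.default_rng(2)
A = rng.random((320, 256))
B = A + 0.05 * rng.standard_normal(A.shape)
print(speckle_contrast(A), correlation_coefficient(A, B))
```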
Noise limit of the fibre spectrometer. The noise limits of the fibre spectrometer have been characterized in terms of the s.d. of the absorption noise, σ_n. Once two spectra S₁, S₂ have been calculated through the reconstruction algorithm under the same conditions, the natural logarithm of their ratio can be expressed as ln(S₁/S₂) = ln(1 + n) ≈ n = α_min·l (n ≪ 1), where α_min represents the minimum measurable absorption coefficient, l is the absorption path length and n represents the noise of a typical absorption coefficient measurement assuming unitary absorption path length (absorption noise); the corresponding s.d. σ_n sets the fundamental noise limit of the spectrometer. The SNR of a measurement limited by the absorption noise σ_n is given by SNR = 1/σ_n.
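A minimal sketch of this noise-limit estimate, assuming two reconstructed spectra with purely white noise (the 1% level is an arbitrary illustration):

```python
import numpy as np

def absorption_noise_and_snr(S1, S2):
    """Noise limit from two spectra reconstructed under identical conditions:
    sigma_n = s.d. of ln(S1/S2) (= alpha_min * l for small n), and SNR = 1 / sigma_n."""
    n = np.log(S1 / S2)
    sigma_n = n.std()
    return sigma_n, 1.0 / sigma_n

# toy example: two reconstructions of the same flat spectrum with 1% noise
rng = np.random.default_rng(3)
S1 = 1.0 + 0.01 * rng.standard_normal(500)
S2 = 1.0 + 0.01 * rng.standard_normal(500)
print(absorption_noise_and_snr(S1, S2))
```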
The effect of calibration noise has been measured by using a high-spectral-purity Er:fibre laser (NKT Photonics, Basik) during calibration, in place of the original semiconductor laser (Agilent 81600B). The Er:fibre laser has a linewidth of ≈3 kHz over 10 ms and can only be tuned in the range 1,535 ± 0.5 nm. A feedback loop has been added to the system to lock the Er:fibre laser to the OFC and to guarantee that, during the calibration procedure, the speckle patterns are acquired at exactly the same frequencies as the comb components. This virtually rejects all of the calibration noise and allows the noise limit to be quantified in the optimal condition. Moreover, the calibration noise can be arbitrarily increased by adding a random error (voltage) of proper s.d. to the locking point at each acquisition step of the calibration, until reaching the ±7 MHz obtained when using the original semiconductor laser. The absorption noise has been calculated for different s.d.'s of the locking-point error in the range from 0 to 40 MHz.
For evaluation of the noise limit set by amplitude noise, the two spectra S₁ and S₂ have been calculated from two sequential sets of N averaged speckle patterns. The absorption noise has been measured for N spanning the range from 1 to 500, corresponding to acquisition times from 10 ms to 5 s (camera set to 10 ms per frame).
Data availability. The data that support the findings of this study are available from the corresponding author on request.
| 10,238.2 | 2016-10-03T00:00:00.000 | ["Physics"] |
Characterization of Skew CR-Warped Product Submanifolds in Complex Space Forms via Differential Equations
Recently, we obtained Ricci curvature inequalities for skew CR-warped product submanifolds in the framework of a complex space form. By applying Bochner's formula to these inequalities, we show that, under certain conditions, the base of these submanifolds is isometric to Euclidean space. Furthermore, we study the impact of some differential equations on skew CR-warped product submanifolds and prove that, under some geometric conditions, the base is isometric to a special type of warped product.
Introduction
The studies [1,2] provide important intrinsic geometric as well as isometric properties of Riemannian manifolds via differential equations. It is well known that the classification of differential equations has a significant effect on the global study of Riemannian manifolds. In 1978, Tanno [2] studied various aspects of differential equations on Riemannian manifolds. In particular, the authors of [3,4] characterized the Euclidean sphere by the approach of differential equations. The analysis performed in [2,5] proved that a nonconstant function λ on a complete Riemannian manifold (Uⁿ, g) satisfies the differential equation in question if and only if (Uⁿ, g) is isometric to Euclidean space Rⁿ, where c is a constant. Moreover, Garcia-Rio et al. [4] proved that, under some restrictions, the Riemannian manifold is isometric to the warped product U ×_f R, where U is a complete Riemannian manifold, R is the Euclidean line, and f is the warping function. Moreover, the warping function f satisfies the second-order differential equation if and only if there exists a nonconstant function ϕ: Uⁿ ⟶ R with a negative eigenvalue μ₁ ≤ 0 which is a solution of the corresponding differential equation. The categorization of differential equations on Riemannian manifolds has become a fascinating topic of research and has been investigated by numerous researchers, for instance, [6][7][8][9].
Recently, Al-Dayel et al. [6] studied the impact of differential equation (2) on a Riemannian manifold (Lⁿ, g) admitting a concircular vector field and proved that, under certain conditions, the Riemannian manifold (Lⁿ, g) is isometric to the Euclidean space Rⁿ. Similarly, by taking a gradient conformal vector field, Chen et al. [10] showed that the Riemannian manifold (Nⁿ, g) is isometric to the Euclidean space Rⁿ. However, in [11], it has been proved that a complete totally real submanifold in CPⁿ (complex projective space) with bounded Ricci curvature satisfying (3) is isometric to a special class of hyperbolic space.
In contrast, Bishop and O'Neill [12] studied the geometry of manifolds of negative curvature and noted that nontrivial Riemannian product manifolds can never have negative curvature. As a result, they proposed the notion of warped product manifolds, which are defined as follows.
Consider two Riemannian manifolds (L₁, g₁) and (L₂, g₂) with Riemannian metrics g₁ and g₂, and let ψ: L₁ ⟶ R be a positive differentiable function. Let x and y be the projection maps x: L₁ × L₂ ⟶ L₁ and y: L₁ × L₂ ⟶ L₂ defined by x(m, n) = m and y(m, n) = n for all (m, n) ∈ L₁ × L₂. Then L = L₁ × L₂ is called a warped product manifold if the Riemannian structure on L satisfies g(E, F) = g₁(x∗E, x∗F) + (ψ ∘ x)² g₂(y∗E, y∗F) for all E, F ∈ TL. The function ψ represents the warping function of L₁ ×_ψ L₂. A Riemannian product manifold is a special case of a warped product manifold in which the warping function is constant. The study of Bishop and O'Neill [12] revealed that these types of manifolds have a wide range of applications in physics and the theory of relativity. It is well known that the warping function is the solution of some partial differential equations; for example, the Einstein field equations can be solved by the approach of the warped product [13]. The warped product is also applicable in the study of spacetime near black holes [14].
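For orientation, a worked example is recorded below in LaTeX notation; it restates the standard Bishop–O'Neill metric condition and adds, purely as an illustration not taken from this paper, the punctured Euclidean space written as a warped product over a ray.

```latex
% Warped product metric in the notation above (Bishop-O'Neill):
g(E,F) \;=\; g_{1}(x_{*}E,\,x_{*}F)\;+\;(\psi\circ x)^{2}\,g_{2}(y_{*}E,\,y_{*}F),
\qquad E,F\in TL .

% Illustration: punctured Euclidean space as a warped product of a ray with the unit sphere,
% \mathbb{R}^{n}\setminus\{0\}=(0,\infty)\times_{\psi}S^{n-1} with warping function \psi(t)=t,
% since in polar coordinates the Euclidean metric reads
g_{\mathrm{eucl}} \;=\; dt^{2}\;+\;t^{2}\,g_{S^{n-1}} .
```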
Recently, Ali et al. [7] characterized warped product submanifolds of Sasakian space forms by the approach of differential equations. The purpose of this paper is to study the impact of differential equations on skew CR-warped product submanifolds in the framework of the complex space form.
Preliminaries
Let L̄ be an almost Hermitian manifold with an almost complex structure J and a Hermitian metric g, i.e., J² = −I and g(JX, JY) = g(X, Y), for all vector fields X, Y on L̄. If the almost complex structure J satisfies (D̄_X J)Y = 0 for all X, Y ∈ TL̄, where D̄ is the Levi-Civita connection on L̄, then (L̄, J) is called a Kaehler manifold. A Kaehler manifold L̄ is called a complex space form if it has constant holomorphic sectional curvature c, and it is denoted by L̄(c). The curvature tensor of the complex space form L̄(c) takes the standard form determined by c, for any X, Y, Z, W ∈ TL̄.
Let L be an n-dimensional Riemannian manifold isometrically immersed in an m-dimensional Riemannian manifold L̄. Then the Gauss and Weingarten formulas are D̄_X Y = D_X Y + h(X, Y) and D̄_X ξ = −A_ξ X + D⊥_X ξ, respectively, for all X, Y ∈ TL and ξ ∈ T⊥L, where D is the induced Levi-Civita connection on L, ξ is a vector field normal to L, h is the second fundamental form of L, D⊥ is the normal connection in the normal bundle T⊥L, and A_ξ is the shape operator associated with the second fundamental form. For any X ∈ TL and N ∈ T⊥L, JX and JN can be decomposed as JX = PX + FX and JN = tN + fN, where PX (respectively, tN) is the tangential and FX (respectively, fN) is the normal component of JX (respectively, JN). It is evident that g(JX, Y) = g(PX, Y), for any X, Y ∈ T_x L; this implies that g(PX, Y) + g(X, PY) = 0. Thus, P² is a symmetric operator on the tangent space T_x L, for all x ∈ L. The eigenvalues of P² are real, and P² is diagonalizable. Moreover, for each x ∈ L, the eigenspace decomposition of P² can be expressed in terms of the identity transformation I on T_x L. Furthermore, it is easy to observe that KerF = S¹_x and KerP = S⁰_x, where S¹_x is the maximal holomorphic subspace of T_x L and S⁰_x is the maximal totally real subspace of T_x L; these distributions are denoted by S_T and S_⊥, respectively. If −μ²₁(x), . . . , −μ²_k(x) are the eigenvalues of P² at x, then T_x L admits a corresponding orthogonal decomposition, for any x ∈ L.
If, in addition, each μᵢ is constant on L, then L is called a skew CR-submanifold [15]. It is significant to note that CR-submanifolds are a particular class of skew CR-submanifolds with k = 1. Definition 1. A submanifold L of a Kaehler manifold L̄ is said to be a skew CR-submanifold of order 1 if L is a skew CR-submanifold with k = 1 and μ₁ is constant. For any orthonormal basis e₁, e₂, . . . , e_n of the tangent space T_x L, the mean curvature vector Ω(x) and its squared norm are defined as Ω(x) = (1/n) Σᵢ h(eᵢ, eᵢ) and ‖Ω‖² = (1/n²) Σ_{i,j} g(h(eᵢ, eᵢ), h(e_j, e_j)), where n is the dimension of L. If h = 0, the submanifold is said to be totally geodesic, and it is minimal if Ω = 0. If h(E₁, E₂) = g(E₁, E₂)Ω for all E₁, E₂ ∈ TL, then L is called totally umbilical. The scalar curvature of an m-dimensional Riemannian manifold L̄ is denoted by τ(L̄) and is defined as τ(L̄) = Σ_{1≤α<β≤m} κ_αβ, where κ_αβ = κ(e_α ∧ e_β) and m is the dimension of the Riemannian manifold L̄.
The global tensor field S, for an orthonormal frame of vector fields e₁, . . . , e_n on L, is defined as S(E₁, E₂) = Σᵢ g(R(eᵢ, E₁)E₂, eᵢ) for all E₁, E₂ ∈ T_x L, where R is the Riemannian curvature tensor. The above tensor is called the Ricci tensor. If we fix a distinct vector e_u from e₁, . . . , e_n on Lⁿ, denoted by χ, then the Ricci curvature is defined by Ric(χ) = Σ_{i≠u} κ(eᵢ ∧ e_u). A submanifold L of a Kaehler manifold L̄ is said to be a skew CR-warped product submanifold if it is a warped product of the type L = L₁ ×_f L⊥, where L₁ is a semislant submanifold, which was defined by N. Papaghiuc [16], and L⊥ is a totally real submanifold. Sahin [17] proved the existence of skew CR-warped product submanifolds. Recently, Ali Khan and Al-Dayel [18] studied skew CR-warped product submanifolds of this form, where L⊥ is a totally real submanifold; more precisely, they obtained Ricci curvature inequalities for these submanifolds. Let f be a real-valued differentiable function on a Riemannian manifold Lⁿ; then the Bochner formula [19] states that (1/2)Δ|∇f|² = |H(f)|² + R_L(∇f, ∇f) + g(∇f, ∇(Δf)), where R_L denotes the Ricci tensor and H(f) is the Hessian of the function f.
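As a quick sanity check, added here for illustration only, the Bochner formula can be verified on flat Euclidean space, where the Ricci term vanishes:

```latex
% Bochner formula for a smooth function f on (L^{n},g):
\tfrac{1}{2}\,\Delta\lvert\nabla f\rvert^{2}
 \;=\;\lvert H(f)\rvert^{2}
 \;+\;R_{L}(\nabla f,\nabla f)
 \;+\;g\bigl(\nabla f,\ \nabla(\Delta f)\bigr).

% Check on flat \mathbb{R}^{n} with f(x)=\tfrac{1}{2}\lvert x\rvert^{2}:
% \nabla f = x,\quad H(f)=\mathrm{Id},\quad \Delta f = n,\quad R_{\mathbb{R}^{n}}=0,
% so both sides equal n:
\tfrac{1}{2}\,\Delta\lvert x\rvert^{2}\;=\;n\;=\;\lvert\mathrm{Id}\rvert^{2}+0+0 .
```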
Main Results
In this section, we obtain some characterizations by the application of the Bochner formula.
If χ ∈ S and the equality is satisfied, then the base submanifold L⊥ is isometric to Euclidean space, and λ₁ is the eigenvalue corresponding to the eigenfunction ln f.
Proof. Since χ ∈ S, by equation (16) and by the assumption that R_L(χ) ≥ K, we obtain the corresponding inequality. Since R_L(χ) ≥ K, we apply the theorem of Myers [1], according to which, if the Ricci curvature is bounded below by a positive constant, then the base manifold L₁ is compact. On integrating (22) and using Green's theorem, we obtain the next relation. Let H(ln f) be the Hessian of the warping function ln f; then we have |H(ln f) − nI|² = |H(ln f)|² + n²|I|² − 2n g(I, H(ln f)), where n is a real number. Putting n = λ₁/(d₁ + d₂) and integrating the last equation with respect to dV (the volume element), we obtain the corresponding estimate. Using (19), together with the fact that Δln f = λ₁ ln f, we obtain a further relation. Combining (27) and (28), we derive the next inequality. By the assumption R_L(∇f, ∇f) ≥ K, the above equation changes accordingly. Using (24), the last inequality leads to the required bound. If (20) holds, then the above inequality yields the equality case. Therefore, we have H(ln f)(X, X) = λ₁/(d₁ + d₂). Hence, by the application of Tashiro's result [5], the fibre L₁ is isometric to the Euclidean space R^{d₁+d₂}.
□
Let ln f be an eigenfunction corresponding to the eigenvalue λ₁ satisfying Δln f = λ₁ ln f; then we have (38). Again, using Δln f = λ₁ ln f, it is easy to see a relation which, on integrating, provides (40). Putting K = λ₁/(d₁ + d₂) in (40), we obtain a further estimate. Furthermore, integrating (16) and applying Green's lemma, we find the corresponding identity. □ From the above two expressions, on using the assumption that R_L(χ) ≥ K, for K > 0, we have, equivalently,
| 2,668.4 | 2021-07-05T00:00:00.000 | ["Mathematics"] |
Listening to speech with a guinea pig-to-human brain-to-brain interface
Nicolelis wrote in his 2003 review on brain-machine interfaces (BMIs) that the design of a successful BMI relies on general physiological principles describing how neuronal signals are encoded. Our study explored whether neural information exchanged between brains of different species is possible, similar to the information exchange between computers. We show for the first time that single words processed by the guinea pig auditory system are intelligible to humans who receive the processed information via a cochlear implant. We recorded the neural response patterns to single spoken words with multi-channel electrodes from the guinea pig inferior colliculus. The recordings served as a blueprint for trains of biphasic, charge-balanced electrical pulses, which a cochlear implant delivered to the cochlear implant user's ear. Study participants completed a four-word forced-choice test and identified the correct word in 34.8% of trials. The participants' recognition, defined by the ability to choose the same word twice, whether right or wrong, was 53.6%. For all sessions, the participants received no training and no feedback. The results show that lexical information can be transmitted from an animal to a human auditory system. In the discussion, we will contemplate how learning from the animals might help develop novel coding strategies.
It is a human dream to communicate directly from brain to brain, to control machines by direct brain-to-machine connections, or to use devices serving as an input to the neural system [1][2][3][4]. Brain-to-brain communication in particular is intriguing and has been attempted successfully in the past. Previous human-to-human or human-to-animal interfaces used various, non-corresponding brain areas or sensory systems to show information transfer between the brains.
In one experiment, two players participated in a computer game. The game's task was to defend a city from rocket attacks by shooting down the rockets with a cannon before they reached the city, while avoiding shooting down friendly flying objects. While one participant (the sender) had visual control over the game but no touchpad to activate the cannon, the second participant (the receiver) had only the touchpad but could not see the game. Verbal or visual communication among the test subjects was impossible because the participants were in different buildings, separated by about one mile. Instead, electroencephalography (EEG) was used to record signals from one human test subject, transmit the information via the internet, and stimulate a second subject's brain through transcranial magnetic stimulation 3. The brain-to-brain communication was assessed by the computer game's performance and showed a rudimentary form of direct information transfer from one human brain to another 3.
In a different experiment, Yoo and coworkers used the EEG recordings evoked by a visual flicker stimulus to control rats' brains via transcranial focused ultrasound and showed that the flicker response translates into the rat's tail movement 5 .
Matching brain areas were used to study brain-to-brain communication in rats. During the experiment, neural activity was recorded in one rat (the coder rat) while it performed a task. The activity was then converted into electrical pulses to stimulate the matching brain areas of a second rat (the decoder rat). The second rat showed similar behavior.
Figure 1. (a) Waveform of the word "shore" and (b) its corresponding spectrogram. (c) Spike raster plot of the recorded neurons played to subjects 5 to 9; the maximum rate was 132 pulses per second (pps), the average rate 58.5 ± 35 pps, and the frequency map presented to these subjects was composed of 6 different frequency channels ranging from 500 to 1500 Hz. (d) Sum of all spikes across the seven channels over time; note the bin size is ~22 µs.
Pulse trains generated with a timing pattern based on these spike raster series served as the signals fed into different frequency channels of the patients' CIs. Note that frequency content below 500 Hz is underrepresented due to the challenge of recording from neurons with a low CF in the ICC. The lack of channels available to transmit information limited the data presented to the patients. The temporal and spectral spike rates can also be investigated by analyzing the spike patterns shown in the spike raster plot. Figure 1d shows, for the word "shore", the temporal sum, which is the number of spikes occurring at a selected time summed over all channels. This temporal sum mimics the original wave file's amplitude and shows that concurrent pulses occur within the speech signal. The rate is still low, though, not exceeding 6 with a bin size of 22 µs for any of the words presented throughout all trials. The spectral spike rates, defined as the pulses per second (pps) for one frequency channel, are much lower than those of current CIs. The average spike rate of the information presented to patients was 42.6 ± 31.9 pps, with a maximum of 149 pps. The instantaneous rate presented to patients had a maximum of 1760 pps, with a mean of 400 ± 412 pps. The upper limit of the rate pitch is not a concern because the stimulus presented via the CIs does not contain a carrier with a fixed stimulation rate. The electrical pulses occur in a stochastic time pattern with a fixed amplitude, which decreased the total charge delivered to a patient compared with the charge delivered using a conventional CI coding strategy. Frequency information is given by the electrode contact location and the sound intensity by the pulse rate. It is unlikely that rate encodes pitch (rate-pitch) as suggested for contemporary coding strategies, because the pulse patterns have a Poisson-like distribution with preferred intervals whose maxima are separated by 1/CF of the frequency band under investigation. It is unclear whether patients can determine the mode of the inter-pulse time intervals. The rate reported in this study is the average rate of pulses presented through a CI. Contemporary CIs' coding strategies differ; they modulate a fixed-rate carrier with the acoustic signal's envelope information. Differences in spike rate can be attributed to the recorded neurons' activity, though some fluctuation is due to differences in down-sampling across testing periods.
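A minimal sketch of the conversion described above, from recorded spike times to a per-channel train of biphasic pulses, together with the average and instantaneous rate measures quoted in the text, is given below; the channel count, pulse width, current amplitude and toy raster are illustrative assumptions, not the study's fitting parameters.

```python
import numpy as np

def spikes_to_pulse_train(spike_times_per_channel, phase_us=32.0, amplitude_ua=100.0):
    """Turn recorded spike times (seconds, one list per frequency channel) into a
    time-ordered list of biphasic, charge-balanced pulses for the matching CI electrodes.
    Pulse width and amplitude are illustrative placeholders."""
    pulses = []
    for ch, times in enumerate(spike_times_per_channel):
        for t in np.sort(np.asarray(times)):
            pulses.append({"channel": ch, "onset_s": float(t),
                           "phase_us": phase_us, "amplitude_ua": amplitude_ua})
    return sorted(pulses, key=lambda p: p["onset_s"])

def rates(spike_times, duration_s):
    """Average rate (pulses per second) and maximum instantaneous rate (1 / shortest interval)."""
    t = np.sort(np.asarray(spike_times))
    avg = t.size / duration_s
    inst_max = 1.0 / np.min(np.diff(t)) if t.size > 1 else 0.0
    return avg, inst_max

# toy raster: 6 channels (e.g. a 500-1500 Hz map), Poisson-like spike times over a ~0.5 s word
rng = np.random.default_rng(4)
raster = [np.cumsum(rng.exponential(1 / 60, size=30)) for _ in range(6)]   # ~60 pps per channel
train = spikes_to_pulse_train(raster)
print(len(train), rates(raster[0], duration_s=0.5))
```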
Patient trials. Description of the transferred information.
At the beginning of the testing session, after the signal's loudness was adjusted to a comfortable hearing level (for details, see "Methods"), the first two subjects (S1 and S2) had to describe their hearing experience. S1 and S2 did not know that the sound signals were encoded words. They were able to discriminate length, rhythm, and loudness differences. Although the biphasic pulses' amplitude was constant, so that the current amplitude did not encode loudness, the patients reported loudness differences within one word. An explanation is that the pulse rate across channels (Fig. 2a,c) and the pulse rate within one channel (Fig. 2b,d) sum to encode the sound intensity. When told that the sound signals were words, and given the lexical content of the words they had listened to during the training portion, both patients stated that what they heard sounded different.
During the following section of the testing session, both subjects knew that they were listening to words. They were not told the lexical content but had to communicate what they understood; each subject assigned lexical information to the information they received. While a patient's answer was sometimes wrong, it remained the same across multiple trials. For example, when played the word "ditch," one subject stated the word was "Willhelm." When "ditch" was replayed without the subject knowing which word it was, they once again said that it sounded like "Willhelm." Patients received, identified, and interpreted the same cues twice in the same way.
The lexical content of the transferred signal. Subjects S5-S12 performed a four-word forced-choice test. The communication between the computer and the CI occurred with the Bionic Ear Data Collection System (BEDCS) in subjects S5 through S9 and with the HR-Stream in S6b, S7b, S9b, and S10 through S12. We used both systems for S6/S6b, S7/S7b, and S9/S9b. The test subjects listened to the signal transmitted via their CI and subsequently selected one out of four possible words. Each subject completed the word list twice, trial 1 and trial 2, in the same order. Subjects were not informed that, or when, the repeat started. The patients did not receive feedback on whether their answer was correct or wrong. Figure 3, Table 1, and STable 2 and STable 3 show each subject's performance on the test. Figure 4 shows the aggregate results, with Fig. 4a showing the results from test sessions done with the BEDCS and Fig. 4b the results from test sessions done with the HR-Stream. Figure 4c shows the results from the combined data (twice the same word). The ordinate gives the number of correct answers out of 21 questions. With the BEDCS, the correct answers were 28.2% in trial 1 and 31.1% in trial 2. The performance was better with the HR-Stream, with 31.0% in trial 1 and 36.5% in trial 2. In addition to scoring the correct answers, we also counted selections of the same wrong word twice. The rationale for this evaluation was that the subject must have recognized the selected word from its spectral and intensity cues but assigned the wrong lexical content. Selecting the same wrong word twice occurred in 37.9% of cases with the BEDCS and 30.2% with the HR-Stream. For the test subjects selecting a word twice (correct or false), the scores were 54.4% with the BEDCS and 50.8% with the HR-Stream. No feedback was given throughout the testing session, preventing the subjects from learning or making adjustments during the session.
It should also be noted that the test subjects recognized some words better than others. The words selected correctly more than 50% of the time were bomb, ditch, sun, make, patch, and boat with the BEDCS (STable 1) and ditch, sun, van, make, and boat with the HR-Stream (STable 2). Test subjects performed especially poorly (≤10%) at recognizing shore, bean, seize, and lease with the BEDCS (STable 1), and identified the correct word ≤10% of the time for merge, tough, seize, lease, and knife with the HR-Stream (STable 2). Figure 5 shows the spike patterns recorded for the correct word together with the spectrograms of the set of four words presented during the forced-choice test. It is not apparent why one word should be favored over another. It is not clear why a test subject should perform better for the words bomb or ditch than for the words lease or merge. The corresponding average rate in one channel and the maximum rate, calculated from the shortest time between two pulses, are also provided. The average rate is low; however, when stimulation occurs, it is between 300 and 500 Hz. The rate changes dynamically. The intensity pattern could be used as a cue in the processing of the encoded acoustic information.
Figure 3.
The results from each subject. Recognition rates are often above 50%; some test scores are near 50% correct, and recognition scores (the same word selected twice) approach 80%. Chance was at 42.5%. Initially, only the BEDCS was available to us for testing; subjects S1-S9 were tested with this system, and the forced-choice comparisons were completed only in S5-S9.
Performance above chance. As described in "Methods", we corrected the threshold for chance for the small sample size using the MATLAB function binoinv(). For an N of 21 in a four-word forced-choice test, a subject's performance was above chance if the score was 42.5% or more. Performance differed among the test subjects (Fig. 3). None of the subjects tested with the BEDCS performed above chance during their first trial (trial 1), and only S6 did so during the second trial (trial 2). For the same word selected twice, S7 and S8 scored above chance. The scores were higher with the HR-Stream: S9b (retested S9) and S12 performed above chance during trial 1; S9b, S10, and S12 scored above chance during trial 2; and S9b through S12 did so for the same word selected twice. Using a two-tailed binomial test (myBinomTest(); MATLAB_R2020b), we tested whether the number of questions answered correctly exceeded chance for trial 1, trial 2, and twice the same word (correct or wrong). Table 2 shows the results; the values are the probabilities of accepting the hypothesis that word selection during the forced-choice test occurred by chance. Performance on the word recognition task was above chance for the combined data set when the probability shown was smaller than 0.05.
Discussion
Overall, patients performed better than chance in trial 2 and in the recognition test (the same word selected in trials 1 and 2, correct or wrong). On average, subjects performed noticeably better on the recognition test than when tested for the accurate lexical content. These results indicate that low-level speech perception occurs, though open-set speech perception has yet to be achieved. The acoustic signal, encoded by an animal's (guinea pig) auditory system, can be deciphered by the human auditory system. This study's results have led to the development of a novel coding strategy to be implemented and used in CIs 7,8 . Tests in human CI users with the novel coding strategy are currently ongoing.
CIs have restored hearing in more than 550,000 severely-to-profoundly deaf people who have received implants 9 . While some CI users' performance is exceptional, many users complain about poor performance in noisy listening environments, difficulties with tonal languages, and difficulties with music perception. What is missing? A normal-hearing listener's auditory system performs a spectral analysis of sound using an array of overlapping auditory filters. The output of each filter is a bandpass-filtered version of the sound. It contains two forms of information: the relatively slow variation in amplitude over time (E, envelope) and the rapid oscillations with a rate close to the filter's center frequency (TFS, temporal fine structure). E cues alone can lead to high intelligibility for speech in quiet. However, in noisy listening environments and for music recognition, normal-hearing subjects also take advantage of TFS. Although it has been demonstrated in normal-hearing subjects that TFS is important for speech recognition in noise [10][11][12][13][14][15][16][17] , for processing tonal languages 18 , and for music perception 19,20 , most of today's cochlear implant (CI) coding strategies rely primarily on the envelope (E; for more details, see below). With the limited success of the few coding strategies that specifically claim to encode TFS, it is important to determine whether CI patients could improve in performance with TFS included in the coding strategy. This study's results help progress by underlining that the neural activity recorded in an animal's auditory nerve or midbrain can serve as a surrogate for a similar approach in a cochlear coding strategy. Based on the results from this study, we propose encoding E and TFS by modulating a stochastic pulse pattern with the center frequency of the selected frequency band. To model the frequency place-map along the cochlea, the stochastic pulse train at each electrode contact, with a Poisson-like distribution of inter-pulse intervals, is frequency-modulated with the frequency corresponding to that contact's location along the cochlea.
Table 1. The left four columns show the 21 sets of four words presented to the test subject. The word with the green background is the correct word in a set of four, matching the information presented to the test subject via their cochlear implant. Words selected by the test subject are shown in the cumulative tables for the BEDCS, the HR-Stream, and the combined data. N indicates the number of times a word set is included in the analysis; for example, for the BEDCS system, N is 5 test subjects × 2 presentations of a word set, which equals 10. No feedback was given to the patient on the correct answer; therefore, the presentation of the same set twice was considered independent. For the nine columns assigned to each of "BEDCS", "HR-Stream", and "combined", the following applies: the four columns with the red backgrounds show how often a word was selected in trial 1 (T1), and the four columns with the blue background show how often a word was chosen in trial 2 (T2). The column with the header "T1" counts how often the correct word was selected in trial 1; the column with the heading "T2" shows the number of correct words selected in trial 2. The rows in the column "%" show the overall percentage correct for a given word.
The current amplitude of the pulses is constant and does not contribute to loudness growth; instead, loudness is coded by spatial and temporal changes in pulse rate, as demonstrated in Fig. 2. Overall rate changes code E, and temporal correlations code TFS for each channel. This coding strategy was developed recently and tested in a pilot study with 17 implant users with implants from 2 major CI manufacturers. The research is ongoing, and the results will be published elsewhere.
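The sketch below is only an illustration of the idea of a stochastic, Poisson-like pulse train tied to a channel's center frequency; it is not the authors' published coding strategy, and the modulation rule, rate_scale factor, and time step are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_pulse_train(center_freq_hz, envelope, dt=1e-4, rate_scale=0.05):
    """Pulse times whose instantaneous Poisson rate follows the channel envelope,
    scaled from the channel's center frequency (illustrative assumption)."""
    pulses, t = [], 0.0
    for e in envelope:                                  # one envelope sample per dt step
        rate = center_freq_hz * rate_scale * e          # Hz
        if rate > 0 and rng.random() < rate * dt:       # Bernoulli approximation of a Poisson process
            pulses.append(t)
        t += dt
    return np.array(pulses)

# Hypothetical 0.2-s rising/falling envelope on a 1 kHz channel
env = np.concatenate([np.linspace(0, 1, 1000), np.linspace(1, 0, 1000)])
print(len(stochastic_pulse_train(1000.0, env)))         # a handful of stochastic pulses
```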
A limitation of the experimental design is the use of the forced-choice test, because it provides the candidate words in the first place. The measured performance is therefore not open-set speech recognition but the ability to select the best fit. Several phonetic cues are available to the test subject, including loudness changes, rhythm, and word length. To identify possible cues, all patients were asked at the beginning of the session to describe what they heard, without being told that the signals were single words. They characterized the words by their loudness changes, their rhythm, their length, and their sound quality. When told that the signals they listened to were words, they assigned lexical content, a 'label', to each word. The lexical content could be right or wrong. When presented with the same word later in the session, 6 out of 8 test subjects offered the same label in more than 50% of cases.
It is noteworthy that the test subjects recognized the loudness variations even though the amplitude and duration of the biphasic, charge-balanced pulses were constant; only the pattern of pulse occurrence changed.
The results also show that speech recognition depends on the frequencies selected. Unfortunately, the neurons recorded in the guinea pig ICC only cover parts of the frequency information required for the given words (Fig. 5). In many of the examples, crucial frequency information needed to distinguish clearly between two words is missing. The importance of the number of frequencies or frequency bands for performance in normal hearing, hard of hearing, or even cochlear implant users has been stressed before [21][22][23][24][25][26] . The number of available frequency bands will affect performance during the test. Studies with normal-hearing human subjects addressed how many independent channels are required to perceive combinations of multiple pitches or a single pitch with interfering tones 23,24,27 . Extracting spectral pitch requires 32-64 channels without any spectral overlap and a filter slope of at least 72 dB/octave. In a different set of experiments, Stafford and coworkers 27 addressed this question with normal-hearing listeners.
On average, the two words on the left in Fig. 5 (bomb and ditch) were identified correctly in over 50% of the cases, whereas the two words on the right (lease and merge) were identified correctly in less than 10%. The raster plots for the correct word overlay each spectrogram in one set; each black dot indicates the timing at which a pulse is delivered. The spectrogram and electrodogram can be compared directly. From the figure, it is obvious that the frequencies and the timing are poorly matched. It is also clear from the electrodogram that the test person had cues from intensity patterns or the length of the stimulation. The electrodograms also show that either an artificial code or a better-distributed pulse pattern is required to improve performance.
Test subjects in this study were cochlear implant users with a damaged auditory system and subsequent neural degeneration. Only a limited number of frequency bands could be used. Cochlear implant patients' speech performance, such as consonant recognition, is a function of the available number of channels 28 . In some patients, performance improved as the number of channels was increased up to 6; a further increase in the number of channels did not improve performance 28 . Other studies demonstrated that CI users' sentence recognition increased from one to ten channels, but there was no difference in performance if more channels were used 21,22,26 . Overlapping electric current fields might have limited the number of channels during stimulation at neighboring electrodes, caused by factors such as electrode configuration, mode of stimulation (monopolar, bipolar), placement of the electrode array (insertion depth, distance between electrode and spiral ganglion neurons), or degeneration of the spiral ganglion along the cochlea. It is understandable that speech performance is limited with such a rudimentary frequency representation taken from an animal; the low number of channels used while testing with the BEDCS device likely hinders speech recognition. Increasing the number of channels through the HR-Stream helped some patients boost their scores but resulted in no significant overall performance difference. This result fits expectations, since only 2-4 channels are needed for speech recognition and more than four channels were always used 25 .
Another potential factor limiting patient understanding is the frequency map presented to the patients. Previously published work showed that frequency-to-place mapping manipulations in the cochlea decrease speech recognition [29][30][31][32][33] . In the present study, the map used for patients S1 to S9 differed from the map used in their own processors. The map presented to subject S9 during their second round of testing better matched the typical Advanced Bionics frequency map that the patients used with their implant. The map typically used in most implants may itself be non-ideal and cause some degree of frequency distortion. From the plots where the pulse pattern is superimposed on the word's frequency representation (spectrogram), it is clear that information is often missing, which could reflect an animal-human difference or the influence of anesthesia (Fig. 4).
Subjects S1 and S2, who were not tested with the forced-choice test but simply asked to describe what they heard, had some training (< 2 h). All patients who completed the forced-choice test, however, had no training. They were never told the correct word they were listening to and only heard the word presented 1-3 times in each forced-choice comparison. Adapting to a vastly different neural code, such as the one presented here, may require significant training to engage the top-down mechanisms of speech understanding. A more recent study showed that CI users identified CNC words correctly in a quiet listening environment in 47.2% of cases three months after CI activation and in 57.5% at one year 34 . Performance improved even after multiple years of use with their processing system. Those results were achieved under conditions that allowed ample top-down processing [35][36][37][38][39][40][41][42][43][44][45][46] . Additionally, in that study, patients were given example words before testing to allow some training 47 . Since our subjects were unaccustomed to this code, allowing for a training period could markedly improve speech perception.
Conclusions
For the animal neural system to function as the basis of a sound-processing system in cochlear implants, important speech cues must be accounted for. Currently used coding strategies mainly focus on place theory over volley theory, possibly ignoring the important dependence of speech perception on the neurons' phase-locked responses. Processing speech via a functioning cochlea avoids this issue by naturally encoding speech, including its phase information. This can be seen by comparing the spectrogram with the corresponding raster plot of the neural data. Mapping the recordings from a particular neuron to the tonotopic placement of that neuron's CF on the cochlea acts as a spectral and waveform information extractor of sorts.
Loudness is typically encoded through amplitude modulation; in our experiments, the stimulation current remained constant while the pulse rate varied. The patients nevertheless reported loudness fluctuations while stimulated at constant current. This may lead to an increase in dynamic range resolution, though further studies are underway to verify this. Notably, while pulse rates vary, the maximum rate is still far below that used in current implants. This low-rate stimulus can increase selectivity by limiting the current spread and can raise the neurons' spontaneous activity by stimulating at threshold. Increased selectivity could lead to more genuinely independent channels, greatly benefiting speech intelligibility. At the same time, low-rate stimulation could lead to longer battery life and compatibility with optics-based stimulation.
This study is encouraging. Over 50% recognition was achieved in multiple tests, and up to 70% was achieved on the recognition test. This success was achieved even though the patients were only presented with a crude representation of the neural code: guinea pig neural responses recorded in the ICC that were successfully translated to the human auditory nerve. This transfer from a higher processing center back to a lower one indicates that temporal fine structure (TFS, frequencies > 500 Hz) and temporal envelope (TE, frequencies ≤ 50 Hz) cues are transferable between the ICC and the auditory nerve and can help provide further insight into how auditory information is encoded. Additionally, improvements in the preparation and in the recordings themselves, together with training of the patients, may significantly improve speech perception. This strategy could push cochlear implant technology toward better speech-in-noise perception and music appreciation through a more complex sound encoding. By allowing a perfectly functioning auditory system to analyze sounds, improvements can be made to speech processing either by using that system directly, as done in this study, or by analyzing the animal code to find new insights into how speech is encoded.
Methods
Ethics declaration. All animal procedures followed the NIH Guide for Care and Use of Laboratory Animals and received approval from the Institutional Animal Care and Use Committee at Northwestern University. The study was carried out in compliance with the ARRIVE guidelines. All experimental procedures with human subjects followed the institutional research committee's ethical standards and the 1964 Helsinki Declaration.
General description of the approach. The study's objective was to test whether neural responses to speech, recorded from the brainstem [central nucleus of the inferior colliculus (ICC)] in guinea pigs, can be deciphered by the human brain. To code speech, which is composed of a complex pattern of different frequencies, neural activity from auditory neurons with a wide range of best frequencies is necessary. In this study, multi-channel recording electrodes were inserted into the ICC, recording many frequency bands simultaneously during speech presentation via a speaker. Each train of action potentials recorded at one of the electrode contacts in the ICC was then converted into a sequence of biphasic, charge-balanced electrical pulses with the same temporal pattern as the train of action potentials. The train of electrical pulses was presented to a cochlear implant user's brain via her/his cochlear implant. The amplitude of the electrical pulses was fixed such that a comfortable loudness level was achieved. Single words encoded by the guinea pig's auditory system were played to the test subjects, who had to identify the word out of four (forced-choice). During the testing sessions, no feedback was given to the patient on whether they selected the correct word, to reduce learning. The four words in a group were chosen such that additional information beyond the frequency information was minimized; in other words, loudness patterns, length, etc., were similar for the four words. Two systems were used to interface with the cochlear implants, the BEDCS and the HR-Stream. The second system allowed more channels and a better frequency representation of the speech signal.
Animal data collection and preparation. Animals. Animal procedures are the same as we have used in the past and have been published previously [48][49][50][51][52] . We collected no new animal data; the animal data originate from a study on distorting temporal fine structure by phase-shifting, published in 2017 in Scientific Reports 52 . The recordings from four guinea pigs of either sex were suitable for this study. Three of the animals were about seven months old and weighed 1045 g, 1090 g, and 860 g; the fourth was five weeks old and weighed 558 g. The following sections briefly describe the animal data collection.
Animal anesthesia. A mixture of Ketamine (44-80 mg/kg) and Xylazine (5-10 mg/kg) was injected intraperitoneally to induce anesthesia. While the animal was under deep anesthesia, its body temperature was maintained at 38 °C by placing it on a heated blanket. A tracheotomy was made, and a plastic tube (1.9 mm outer diameter, 1.1 mm inner diameter, Zeus Inc., Orangeburg, SC) was secured in the trachea. The tube was connected to an anesthesia system (Hallowell EMC, Pittsfield, MA), including a vaporizer (VetEquip, Pleasanton, CA), to maintain anesthesia with isoflurane (1-3%). During the experiments, the depth of anesthesia was assessed by the paw withdrawal reflex, and the isoflurane concentration was adjusted accordingly. Body temperature, breathing rate, and heart rate were monitored continuously and were logged every 15 min.
Placing the multi-channel ICC recording electrode. To access the ICC, the right temporalis muscle was reflected. An approximately 5 × 5-mm opening was made in the right parietal bone, just dorsal to the parietal/temporal suture and rostral to the tentorium. A small incision was made in the dura mater. A silicon-substrate, thin-film multichannel penetrating electrode array (A1 × 16-5 mm-100-177, NeuroNexus Technologies, Ann Arbor, MI) was advanced with a 3D micromanipulator (Stoelting, Kiel, WI) through the occipital cortex into the ICC. The trajectory was dorsolateral to ventromedial, at approximately 45º off the parasagittal plane in the coronal plane. The electrode array passed through the central nucleus of the ICC approximately orthogonal to its iso-frequency laminae 48,53,54 . After the initial placement of the electrode's distal tip into the ICC, the electrode was advanced while a pure tone stimulus was presented to the left ear. The final placement of the electrode was achieved when neural responses at the array's distal contact could be evoked by a pure tone stimulus between 16 and 25 kHz. In some instances, the electrode was advanced several times into the ICC before the desired placement was achieved. After placing the electrode array, the exposed skull and dura mater were covered and protected from dehydration with gauze sponges (Dukal Corporation, Hauppauge, NY) soaked with lactated Ringer's solution.
Pure tone and speech stimuli. Voltage commands for the acoustic stimuli, generated with a personal computer (PC) equipped with an I/O board (KPCI 3110, Keithley, Cleveland, OH), drove a Beyer DT 770Pro headphone speaker (Beyerdynamic, Farmingdale, NY). The speaker's speculum, fitted with a short, 3-mm-diameter plastic tube, was inserted into the opening of the cartilaginous ear canal. Acoustic stimuli were tone pips (12 or 20 ms duration, including a 1 ms rise/fall) with different carrier frequencies, presented at a rate of 4 Hz. We measured the speaker's sound level at the speculum opening with a 1/8-inch microphone (Bruel & Kjaer North America Inc., Norcross, GA). In addition to pure-tone bursts, the stimuli included single words from the Hearing in Noise Test (HINT), played via the Beyer DT 770Pro headphone speaker to the anesthetized guinea pigs.
Data acquisition and electrical pulse trains. We recorded the neural activity with a multi-channel electrode array and a Plexon data acquisition system (16-channel, Model MAP 2007-001, Plexon Inc, Dallas, TX) as described before 48 . Neural activity was recorded at a 40 kHz sampling rate on each channel, with 16-bit analog/digital (A/D) input conversion. The recorded signal was bandpass filtered, 0.1-8 kHz. Spike times were determined online with Plexon's data acquisition software and stored for all 16 active electrodes. Following filtering to remove the field potential response, a user-defined threshold determined the neural activity considered for further analysis. Neural activity was recorded at each site along the electrode array during the presentation of pure tone stimuli at different carrier frequencies and sound levels. Responses to pure tone stimuli were used to construct a tuning curve and determine the best frequency of each recorded neuron. The best frequency of a recording site was the pure tone stimulus frequency that required the lowest sound level to evoke a response at this site. Words from the HINT test were played at about 60 dB average sound pressure level (SPL = sound level re 20 µPa). Recordings from 100 distinct neuronal units with best frequencies from 500 Hz to 16,000 Hz were used. Recordings of neural activity were accepted if the action potential amplitude had a signal-to-noise ratio of more than 6 dB and showed no apparent artifacts from the heartbeat's electrical signal or from breathing. The recordings were then converted into a vector containing the timing information of the action potentials. The vector had zeros and ones, with ones encoding the times when the recording crossed a predefined threshold level from low to high. An overview of the spike train generation can be seen in Fig. 6.
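The conversion into a binary spike vector can be sketched as follows; this is a minimal illustration of the upward threshold-crossing rule described above, not the acquisition code used in the study.

```python
import numpy as np

def spike_vector(trace, threshold):
    """Return a 0/1 vector with a 1 at every sample where the trace crosses the threshold from below."""
    trace = np.asarray(trace, dtype=float)
    above = trace >= threshold
    crossings = np.zeros(trace.shape, dtype=int)
    crossings[1:] = (~above[:-1] & above[1:]).astype(int)   # low -> high transitions only
    return crossings

sig = np.array([0.0, 0.2, 1.3, 0.4, -0.1, 1.6, 1.8, 0.3])
print(spike_vector(sig, threshold=1.0))  # [0 0 1 0 0 1 0 0]
```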
Each spike train corresponds to a neuron with a characteristic frequency (CF), which was then used to determine the closest electrode to stimulate, based on the tonotopic organization of the cochlea. The frequency distribution among the 16 available electrodes of the Advanced Bionics implant changed over the course of the experiments, as shown in Fig. 7.
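A hedged sketch of this assignment step is shown below: a recorded unit's CF is matched to the electrode whose mapped frequency is closest on a logarithmic scale. The electrode frequencies listed are hypothetical placeholders, not the actual maps of Fig. 7.

```python
import numpy as np

def nearest_electrode(cf_hz, electrode_freqs_hz):
    """Index of the electrode whose mapped frequency is closest to the neuron's CF (log-frequency distance)."""
    freqs = np.asarray(electrode_freqs_hz, dtype=float)
    return int(np.argmin(np.abs(np.log2(freqs) - np.log2(cf_hz))))

example_map_hz = [500, 800, 1200, 2000, 3400, 5500, 9000, 16000]   # hypothetical tonotopic map
print(nearest_electrode(2700.0, example_map_hz))  # -> 4, i.e., the 3400 Hz contact
```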
The map is non-ideal both because ICC recordings could not be obtained at every desired frequency and because of limitations of the first device (BEDCS) used for subject testing (subjects 1 through 9). That device lacked the random-access memory (buffer) necessary to present across all 16 channels simultaneously; thus, between 6 and 12 channels were active for the initial nine subjects, depending on the word presented. The maximum pulses per second were calculated and averaged across all files presented to patients. Similarly, the instantaneous rate, defined as the rate between two successive pulses, was calculated for all files. The instantaneous rate was limited to below 1.4 kHz because the spike trains had to be down-sampled to fit the RAM limitations of the device. After switching to a new, less limited device (HR-Stream), the instantaneous rate limit increased to 2.4 kHz. The pulse trains delivered to adjacent channels were cross-correlated, and the cross-correlations graphed, to assess their stochasticity.
Subject testing. To assess the recordings' lexical content, adult cochlear implant users (n = 12) were played the trains of electrical pulses and asked to complete specific tasks. Patient demographics can be viewed in Table 3.
Of the 12 patients, six were female and six were male, with a mean age of 65 ± 12 years. Testing of the first nine patients was conducted using the Bionic Ear Data Collection System (BEDCS). The three new patients and two returning patients were tested with the HR-Stream; both devices were on loan from Advanced Bionics. The loudness of the stimuli was adjusted globally, starting from 0 µA and increasing the current until a comfortable listening level was reached. Comfort was defined using a standard audiology scale that ranges from 0 to 10, where 0 indicates no auditory perception, 10 extreme discomfort, and 5 a comfortable listening level.
During the first two testing sessions, subjects S1 and S2 were played 21 different words, with between 6 and 12 channels used to present the information, varying with the word presented. The frequency maps used for testing sessions 1 and 2 can be seen in Fig. 2. The first part of testing consisted of a training session in which the subject was informed of the word being played. After the training session, subjects were played words without being told which word they were hearing. They were asked to describe what they heard and whether they could identify the word. The testing protocol changed after these sessions. During sessions 3 and 4, the test subjects S3 and S4 were again asked to describe what they heard and whether they could identify the word with no context. After that, they were asked to complete a forced-choice comparison among four different words, one of which corresponded to the word being presented. The same set of 21 words was presented using the same frequency map as in testing sessions 1 and 2. At this point, subjects were no longer provided with training before the presentation of each word. Subjects were not informed of the correct answer after making the forced-choice comparison. After completing the first trial (T1) of 21 forced-choice comparisons, S3 and S4 were asked to retest a few words of interest.
For the remaining testing sessions 5-15, subjects were given a standardized list of 21 forced-choice comparisons. Once again, no training was provided, and the subjects were not given the correct answer. Subjects were also asked to take a retest (trial 2) of the 21 forced-choice comparisons. Subjects 5 and 6 completed only 20 out of the 21 forced-choice comparisons due to a technical issue during testing. The frequency map used to present these words to the subjects was altered as more data were collected; each map can be seen in Fig. 7. For sessions 5 to 9, between 6 and 10 channels were used to present information, varying with the word presented. For session 10, the protocol remained the same, although the frequency map was shifted, as seen in Fig. 7, and only 6 channels were used to present each word to the subject. The protocol also remained the same for testing sessions 11 to 15; with the HR-Stream system in use for these sessions, subjects were presented with information across 15 channels for every word, and the altered frequency map can be seen in Fig. 7.
Data analysis. The threshold for chance is nominally 25%, obtained by dividing 100% by the number of classes (c), four in this study. However, this is only valid for large sample sizes 55,56 . For smaller sample sizes, such as an N of 21 in our test, the threshold for chance was corrected using the MATLAB function binoinv() 55 : threshold = binoinv(1-alpha,N,1/c)*100/N, where alpha = 0.05, c = 4, and N = 21. The resulting threshold was 42.5%, indicating that the test subjects using the BEDCS performed below the chance threshold.
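For illustration, the same chance-threshold correction can be written with scipy's binomial quantile function in place of MATLAB's binoinv(); this is only a sketch of the published formula, and with these exact inputs the quantile works out to about 42.9% (9/21), close to the 42.5% threshold reported above.

```python
from scipy.stats import binom

alpha, n_items, n_choices = 0.05, 21, 4
# Smallest number of correct answers whose cumulative probability under pure guessing is >= 1 - alpha
threshold_pct = binom.ppf(1 - alpha, n_items, 1.0 / n_choices) * 100.0 / n_items
print(round(threshold_pct, 1))  # ~42.9 for N = 21; scores at or above this are unlikely to be guessing
```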
For each testing session, trials 1 and 2 were scored to provide the number of correct answers for each subject. The trials were also scored for recognition (the same label given twice); an item counted as "recognized" if the answer was the same in trial 1 and trial 2, regardless of whether it was right or wrong. Whether the number of questions answered correctly exceeded chance was tested with a two-tailed binomial test (myBinomTest(); MATLAB) at a significance level of α = 0.05.
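An equivalent significance test can be sketched with scipy's exact binomial test in place of the MATLAB myBinomTest() function; the counts below are hypothetical.

```python
from scipy.stats import binomtest

n_correct, n_items, p_chance = 11, 21, 0.25           # hypothetical score on a 4-alternative test
result = binomtest(n_correct, n_items, p_chance, alternative='two-sided')
print(result.pvalue < 0.05)  # True if "selection was chance" can be rejected at alpha = 0.05
```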
Figure 7.
The figure shows the maximum number of channels used during the testing sessions. Each circle represents a channel used to deliver the processed neural recordings. The frequency of each channel corresponds to the characteristic frequency (CF) of the originally recorded neural single unit. A channel is defined here as an electrode contact, with the most basal contact corresponding to electrode 1 and the most apical contact corresponding to electrode 16. For testing sessions 1-4, the frequencies, from lowest to highest, were played to electrodes 1, 2, 3, 4, 6, 7, 8, 10, 11, 12, 13, and 14. For words that required fewer than 12 channels, the most apical (i.e., highest-frequency) channels were removed. For testing sessions 5 to 9, the frequencies were played to electrodes 1, 2, 3, 4, 6, 7, 8, 10, 11, and 12; again, for words requiring fewer channels, the most apical channels were removed. For session 10, all words were played across only six channels, corresponding to electrodes 1, 6, 7, 11, 13, and 15. Finally, for the remaining testing sessions, 11-15, all words were played across 15 channels corresponding to electrodes 2 to 16. The final frequency map, used for testing in sessions 10 to 15, was closest to the standard AB patient map used in our subjects' current CIs.
Data availability
The data generated or analyzed during this study are included in this published article (and its Supplementary Information files) and are also available from the corresponding author on reasonable request.
"Biology"
] |
Subcellular localization of UDP-GlcNAc, UDP-Gal and SLC35B4 transporters
The mechanisms of transport and distribution of nucleotide sugars in the cell remain unclear. In an attempt to further characterize nucleotide sugar transporters (NSTs), we determined the subcellular localization of overexpressed epitope-tagged canine UDP-GlcNAc transporter, human UDP-Gal transporter splice variants (UGT1 and UGT2), and human SLC35B4 transporter splice variants (longer and shorter version) by indirect immunofluorescence using an experimental model of MDCK wild-type and MDCK-RCAr mutant cells. Our studies confirmed that the UDP-GlcNAc transporter was localized to the Golgi apparatus only and its localization was independent of the presence of endogenous UDP-Gal transporter. After overexpression of UGT1, the protein colocalized with the Golgi marker only. When UGT2 was overexpressed, the protein colocalized with the endoplasmic reticulum (ER) marker only. When UGT1 and UGT2 were overexpressed in parallel, UGT1 colocalized with the ER and Golgi markers and UGT2 with the ER marker only. This suggests that localization of the UDP-Gal transporter may depend on the presence of the partner splice variant. Our data suggest that proteins involved in nucleotide sugar transport may form heterodimeric complexes in the membrane, exhibiting different localization which depends on interacting protein partners. In contrast to previously published data, both splice variants of the SLC35B4 transporter were localized to the ER, independently of the presence of endogenous UDP-Gal transporter.
INTRODUCTION
The glycan moiety of glycoproteins is synthesized and modified by glycosyltransferases located in the lumen of the endoplasmic reticulum (ER) and Golgi apparatus. The substrates required by these enzymes are sugars activated by the addition of a nucleoside mono- or diphosphate (UDP, GDP, or CMP). Nucleotide sugars are synthesized in the cytosol (Coates et al., 1980), except for CMP-sialic acid (CMP-Sia), which is synthesized in the nucleus (Munster et al., 1989). To be available for glycosyltransferases, nucleotide sugars must be transported into the ER or Golgi apparatus by nucleotide sugar transporters (NSTs) (for reviews see Hirschberg et al., 1998; Gerardy-Schahn et al., 2001), hydrophobic transmembrane proteins with a molecular mass of 30-45 kDa. Most predictions determine an even number of spans, which results in the N- and C-termini being directed to the cytosolic side of the membrane, but the membrane topology has been experimentally determined for the murine CMP-Sia transporter only (Eckhardt et al., 1999). NSTs function as dimers (Eckhardt et al., 1999; Puglielli & Hirschberg, 1999; Puglielli et al., 1999; Gao & Dean, 2000) or higher oligomers (Hong et al., 2000). It has been proposed that they act as antiporters, exchanging the nucleotide sugar for the corresponding nucleoside monophosphate, which is a product of the glycosylation reaction (Hirschberg et al., 1998; Gerardy-Schahn et al., 2001).
The first characterized NSTs were specific for the translocation of a single nucleotide sugar (for review see Hirschberg et al., 1998; Gerardy-Schahn et al., 2001). Recently, multisubstrate transporters of nucleotide sugars have been described in several organisms, including humans (Muraoka et al., 2001; Suda et al., 2004; Ashikov et al., 2005). Although NSTs have been mainly identified in the Golgi apparatus, those located in the ER have also been characterized, e.g., the UDP-N-acetylglucosamine (UDP-GlcNAc) transporter of Saccharomyces cerevisiae (Roy et al., 2000) or the UDP-galactose (UDP-Gal) transporters of Schizosaccharomyces pombe and S. cerevisiae (Nakanishi et al., 2001).
Although the biosynthesis of nucleotide sugars is well understood, the mechanisms of their transport and distribution in the cell remain unclear. In an attempt to further characterize NSTs, we determined the subcellular localization of overexpressed epitope-tagged canine UDP-GlcNAc transporter, human UDP-Gal transporter splice variants (UGT1 and UGT2), and human SLC35B4 transporter splice variants (longer and shorter versions) by indirect immunofluorescence using an experimental model of MDCK wild-type and MDCK-RCA r mutant cells.
Construction of mammalian expression plasmids.
Open reading frames (ORFs) of the canine UDP-GlcNAc transporter, human UDP-Gal transporter splice variants (UGT1 and UGT2), and human SLC35B4 transporter splice variants (long and short) with appropriate restriction sites at both ends were amplified using cDNA synthesized from 5 µg of total RNA as a template. RNA was isolated from 3-5 × 10 6 MDCK cells or 3-5 × 10 6 HL-60 cells using the NucleoSpin RNA II Kit (Macherey-Nagel). The concentration of purified RNA was determined spectrophotometrically at 260 nm. The quality of purified RNA was examined using an Agilent 2100 Bioanalyzer equipped with an RNA Chip (Agilent Technologies). For the reverse transcription reaction, the ThermoScript First-Strand Synthesis Kit and oligo dT (20) were used as recommended by the manufacturer (Invitrogen). Constructs containing the FLAG epitope at the N-terminus were prepared by ligation of sequences encoding the respective transporters (Table 1) into 3×FLAG-myc-CMV-26 (Sigma). For the pVitro1-neo and pSelect plasmids (InvivoGen), an adaptor encoding the 6His-HA epitope was ligated. The adaptor sequence was prepared from sense (5′-ACCATGGCACATCACCACCACCATCACGCATCTTACCCATACGACGTACCAGACTACGCA-3′) and antisense (5′-TGCGTAGTCTGGTACGTCGTATGGGTAAGATGCGTGATGGTGGTGGTGATGTGCCATGGT-3′) oligonucleotides, purified by denaturing polyacrylamide gel electrophoresis and annealed at an initial temperature of 80 ºC with slow cooling (1 ºC per min) to 35 ºC in 0.5 M NaCl. The double-stranded adaptor was phosphorylated with T4 kinase according to the manufacturer's recommendations (Fermentas). The original plasmids were digested with BamHI (for adaptor ligation to pSelect) or Kpn2I (for adaptor ligation to pVitro, MCS1). Sticky ends of the BamHI or Kpn2I linear products were filled with Klenow polymerase according to the manufacturer's instructions (Fermentas). All linear plasmids were dephosphorylated using calf intestine alkaline phosphatase (Fermentas). Phosphorylated adaptors were ligated to the linear pSelect or pVitro plasmids using the Rapid DNA Ligation Kit (Fermentas). The resulting plasmids, containing a 6His-HA epitope at the start of each polylinker, were used for ligation of sequences encoding the respective transporters. Sequences of primers for amplification of the ligated ORFs are available upon request. All ligations were performed using the Rapid DNA Ligation Kit. Additional sequences attached to the N-terminus of the analyzed proteins contained the respective epitopes and amino-acid residues resulting from the addition of sequences for the respective restriction enzymes (3×FLAG: MDYKDHDGDYKDHDIDYKDDDDKL; 6His-HA: MAHHHHHHASYPYDVPDYAPEYTDPN). All plasmids constructed in this study are listed in Table 1.
Subcellular fractionation, Western blotting, and lectin reactivity. Subcellular fractionation was performed using a discontinuous gradient as described by Balch et al. (1984). After the final ultracentrifugation, 0.1-ml fractions were collected. Proteins present in the respective fractions were separated by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS/PAGE) in 8% gels and transferred onto nitrocellulose membranes (Whatman). For the detection of the HA epitope, membranes were blotted with a 1:1000 dilution of mouse anti-HA antibody conjugated with horseradish peroxidase (HRP, Roche). For the detection of the FLAG epitope, membranes were blotted with a 1:1000 dilution of mouse anti-FLAG monoclonal antibody (Sigma-Aldrich), followed by incubation with a 1:10,000 dilution of HRP-conjugated goat anti-mouse antibody (Promega). For the detection of ER or Golgi markers, rabbit antibody specific for calnexin (Abcam) or GM130 (Sigma-Aldrich) was used at a 1:2000 or 1:5000 dilution, respectively, followed by incubation with a 1:10,000 dilution of HRP-conjugated goat anti-rabbit antibody. Immunoreactive bands were visualized using the Western Lightning Chemiluminescence Reagent Plus system (Perkin Elmer).
For lectin reactivity, cells adapted to serum-free culture conditions were collected and lysed using Complete Lysis-M reagent (Roche) supplemented with protease inhibitor cocktail and 1 mM EDTA according to the manufacturer's instructions. Aliquots containing 20 µg of total protein were separated by SDS/PAGE and transferred onto nitrocellulose membranes. For the detection of glycans bound to glycoproteins, a lectin specific for terminal N-acetylglucosamine present in both N- and O-glycans (GSL II) was used. After blocking with Carbo-Free Blocking Solution (Vector Laboratories), membranes were incubated with biotinylated lectin (Vector Laboratories) in 50 mM Tris/HCl, pH 7.5, containing 150 mM NaCl (TBS), 0.2% Tween-20, 1 mM Mg 2+ , 1 mM Ca 2+ and 1 mM Mn 2+ . Lectin bound to specific glycans was subsequently detected using alkaline phosphatase-conjugated avidin D (Vector Laboratories) and visualized with NBT/BCIP solution (Roche) according to the manufacturer's instructions.
Nucleotide sugar transport assay. The Golgi fraction derived from MDCK, MDCK-RCA r or MDCK-RCA r cells overexpressing UDP-Gal transporter splice variants was isolated as described above. UDP-Gal transport into the Golgi vesicles was determined according to the procedure described by Sturla et al. (2001) with modifications. Assays were performed using Golgi vesicles (200 µg of protein) suspended in 10 mM Tris/HCl buffer, pH 7.4, containing 0.14 M KCl, 1 mM MgCl 2 , 0.25 M sucrose (STKM), supplemented with 0.5 mM 2,3-dimercaptopropanol. Reactions were carried out for 10 min at 30 °C in 0.2-ml samples containing 20 µM cold UDP-Gal (Sigma-Aldrich) and 5 µCi tritium-labeled UDP-Gal (American Radionucletide Chemicals, 20 Ci/nmol). Reactions were then stopped with 1 ml of ice-cold STKM buffer, placed immediately on ice, and centrifuged at 60,000 × g for 15 min at 4 °C. Pellets were washed twice with STKM under the same conditions and dissolved in 0.6 ml of 1 M NaOH. Golgi lysates were neutralized with 0.2 ml of 4 M HCl and mixed with 20 ml of Rotiszint Eco Plus scintillation liquid (Roth). A Beckman LS 6500 scintillation counter was used to measure the radioactivity of UDP-Gal transported into the Golgi vesicles.
Immunofluorescence microscopy. MDCK wild-type and MDCK-RCA r mutant cells overexpressing the respective transporters were grown for 24 h using the Lab-Tek Chamber Slide™ System (Nalgene Nunc Int.). Cells were fixed with 4% paraformaldehyde in PBS for 20 min at room temperature, permeabilized with 0.1% Triton X-100 in PBS for 3 min, and non-specific binding sites were blocked with 10% normal goat serum for 30 min. After blocking, cells were incubated overnight with a 1:100 dilution of mouse monoclonal anti-HA antibody (Abcam), a 1:500 dilution of rabbit polyclonal anti-HA antibody (Abcam), or a 1:100 dilution of mouse monoclonal anti-FLAG antibody (Sigma-Aldrich), combined with a 1:100 dilution of rabbit monoclonal antibody against the GM130 Golgi protein (Abcam) or a 1:250 dilution of rabbit polyclonal antibody against ER calnexin (Abcam). After washing with PBS, cells were incubated for 1 h with a 1:100 dilution of goat anti-rabbit Cy5-conjugated antibody and/or goat anti-mouse Cy2-conjugated antibody (Abcam). Cell nuclei were counterstained with Hoechst 33342 dye (Sigma-Aldrich). After washing with PBS, slides were mounted onto glass coverslips using Dako Fluorescence Mounting Medium (Dako) and examined using a ZEISS LSM 510 confocal microscope.
RESULTS AND DISCUSSION
Identification of the localization of individual NSTs may be helpful in clarifying their biological role in the glycosylation of macromolecules. Therefore, in this study, an experimental model of Madin-Darby canine kidney (MDCK) wild-type cells and MDCK mutant cells resistant to Ricinus communis agglutinin (MDCK-RCA r ) was used to further characterize selected NSTs. The subcellular localization of the overexpressed epitope-tagged NSTs was determined by indirect immunofluorescence. All sequences of interest were labeled with a specific tag attached at the N-terminus (FLAG or HA fusions), since the N-terminus is not required for either ER export or Golgi localization of NSTs (Zhao et al., 2006). To detect marker proteins specific for the ER or Golgi apparatus, antibodies directed against calnexin or GM130 were used, respectively. Unlike wild-type MDCK cells, MDCK-RCA r mutant cells do not express functional UGT1 and UGT2 splice variants of the UDP-Gal transporter, but they express a short version of UGT (Olczak & Guillen, 2006; Maszczak-Seneczko et al., 2011). Transport of UDP-Gal into the Golgi vesicles in the mutant cells is residual (about 2%) (Brandli et al., 1988). The mutant cells are enriched in glucosylceramide and cell surface glycoconjugates possessing terminal N-acetylglucosamine and have significantly lower amounts of sialic acid attached to glycoproteins and glycosphingolipids. In addition, they possess a still unidentified defect in the keratan sulfate biosynthesis pathway (Toma et al., 1996) which is not corrected by complementation with the UDP-Gal transporter (Maszczak-Seneczko et al., 2011). To assess the functional activity of the overexpressed NSTs, we analyzed phenotypic correction using GSL II (Griffonia simplicifolia lectin II) and determined the transport of UDP-Gal into Golgi vesicles derived from wild-type MDCK, mutant MDCK-RCA r and mutant cells overexpressing UDP-Gal transporter splice variants. This confirmed the functionality of this transporter (Fig. 1 and data not shown).
Based on the results of the UDP-Gal transporter analysis, our preliminary experiments with the UDP-GlcNAc transporter (unpublished), and the high homology of the transporters examined in this study to one another, we assume that the UDP-GlcNAc and SLC35B4 transporters are also functional and properly localized.
UDP-GlcNAc transporter
NSTs transporting UDP-GlcNAc selectively (Abeijon et al., 1996; Guillen et al., 1998; Ishida et al., 1999a) or multi-specific NSTs (Suda et al., 2004; Cipollo et al., 2004) have been identified. Recently, it has been reported that a point mutation in a UDP-GlcNAc transporter causes complex vertebral malformation (CVM) in animals (Thomsen et al., 2006). However, a detailed analysis of this transporter is difficult since mammalian mutant cells defective in this activity have not been isolated. An earlier immunofluorescence microscopy analysis of the human UDP-GlcNAc transporter overexpressed in CHO cells demonstrated its localization in the Golgi membrane (Ishida et al., 1999a). In this study we found that the canine UDP-GlcNAc transporter colocalizes with a Golgi marker in both MDCK wild-type (not shown) and MDCK-RCA r mutant cells (Fig. 2). Based on these results it seems that the transporter is localized exclusively to the Golgi apparatus and that this localization is not affected by the presence of the endogenous UDP-Gal transporter.
It has been demonstrated that translation of the longer mRNA results in the synthesis of UGT1 possessing the C-terminal sequence SVLVK, whereas translation of the shorter mRNA generates UGT2 possessing the LLTKVKGS sequence. Kabuss et al. (2005) reported that the dilysine motif functions as an ER retention signal and directs human and hamster UGT2 also to the ER, resulting in dual localization to both the ER and Golgi apparatus. For the murine CMP-Sia transporter only, it has been shown that a C-terminal IIGV motif is responsible for its Golgi localization (Zhao et al., 2006). However, it has been suggested that not all C-terminal KKXX and KXKXX sequences efficiently retain proteins in the ER (Itin et al., 1995; Andersson et al., 1999; Zerangue et al., 2001). The evidence that the CMP-Sia transporter carrying the KVKGS motif also localized to the ER and Golgi (Kabuss et al., 2005) leaves open the question of whether this is a characteristic feature of the nucleotide sugar transporter or of the dilysine motif. In this study, we found that after overexpression of UGT1 in MDCK-RCA r mutant cells, the protein colocalized with a Golgi marker only (Fig. 3), which is in agreement with published data (Yoshioka et al., 1997; Kabuss et al., 2005). When UGT2 was overexpressed, the protein colocalized with an ER marker only (Fig. 4). In contrast to published data (Yoshioka et al., 1997; Kabuss et al., 2005), when UGT1 and UGT2 were overexpressed in parallel, UGT1 colocalized with the ER and Golgi markers and UGT2 with the ER marker only (Figs. 5 and 6). This is surprising, because UGT2 may correct, at least in part, the N-glycosylation defect in CHO-Lec8 and MDCK-RCA r mutant cells (Oelmann et al., 2001; Maszczak-Seneczko et al., 2011). The process of N-glycan galactosylation occurs in the Golgi apparatus, and one may suspect that UGT2 is transferred at very low, not easily detectable levels into this organelle in a complex with other proteins, e.g., galactosyltransferases.
It has been previously shown that, although UGT1 localized exclusively to the Golgi apparatus (Yoshioka et al., 1997; Kabuss et al., 2005), it could be detected in the ER when coexpressed with ceramide galactosyltransferase 1 (cer-GalT 1) (Sprong et al., 1998), which harbors a C-terminal dilysine motif and localizes to the ER. Sprong et al. (1998) reported that the ER-resident cer-GalT 1 is inactive if expressed in CHO-Lec8 cells, which exhibit a genetic defect in the UDP-Gal transporter. However, the lack of functionality could be complemented by cotransfecting the cells with UDP-Gal transporter cDNA (Sprong et al., 1998). It has been suggested that cer-GalT 1 is able to make physical contact with UGT and thus can retain it in the ER (Sprong et al., 2003). Based on our results it is likely that another protein partner, such as a splice variant of the same transporter, may also be responsible for ER localization of UGT1.
SLC35B4 transporter
Human NSTs exhibiting dual activity and transporting UDP-GlcNAc and a second nucleotide sugar have been characterized by several laboratories. It has been demonstrated that vesicles from yeast cells expressing the human SLC35B4 gene showed specific uptake of UDP-GlcNAc and UDP-xylose (UDP-Xyl) (Ashikov et al., 2005), whereas the yeast homologue identified by Roy et al. (2000) transported UDP-GlcNAc alone. In the case of the human SLC35B4 gene, two splice variants, a longer version (encoding a protein of 331 amino acids) (Ashikov et al., 2005; Kobayashi et al., 2006) and a shorter version (encoding a protein of 231 amino acids) (Kobayashi et al., 2006), have been reported. Microsomes from V79 cells (Chinese hamster lung fibroblasts) overexpressing both splice variants of the SLC35B4 transporter showed specific uptake of UDP-glucuronic acid (UDP-GlcA) only after preloading the microsomes with UDP-GlcNAc (Kobayashi et al., 2006). Both the yeast UDP-GlcNAc transporter and the longer version of the human UDP-GlcNAc/UDP-GlcA transporter have a C-terminal dilysine motif, potentially responsible for ER retention. However, the human transporter (longer splice variant) has been shown to localize to the Golgi apparatus in CHO cells (Ashikov et al., 2005). In contrast, homologous proteins from yeast and Drosophila have been found in the ER (Roy et al., 2000; Ishikawa et al., 2010). The localization of the shorter splice variant has not been examined. Our studies demonstrated that both stably overexpressed SLC35B4 splice variants colocalized with the ER marker only, in MDCK wild-type (not shown) and MDCK-RCA r mutant (Figs. 7 and 8) cells, demonstrating a different localization compared with the data obtained by Ashikov et al. (2005). Previously published data showing that SLC35B4 is localized in the Golgi apparatus suggested its involvement in delivering UDP-Xyl and UDP-GlcA for, e.g., proteoglycan synthesis. Our results demonstrating localization of SLC35B4 in the ER argue against this hypothesis, since the glycosaminoglycan moieties of proteoglycans are not synthesized in this organelle.
CONCLUSIONS
Elucidation of the differences in localization of NSTs may contribute to the understanding of the role of these transporters in cellular glycosylation. Our studies confirmed that the UDP-GlcNAc transporter is localized in the Golgi apparatus and that its localization is independent of the presence of the endogenous UDP-Gal transporter. In the case of the UDP-Gal transporter, its localization may depend on the presence of the partner splice variant (this study) or a glycosyltransferase (Sprong et al., 1998; 2003). This suggests that proteins involved in these interactions could form heterodimeric complexes in the membrane, exhibiting a different localization compared with single splice variants. Both splice variants of the SLC35B4 transporter, when overexpressed singly, are found in the ER. Based on literature data (Sprong et al., 1998; 2003) and the results of this study, we cannot exclude the possibility that localization of NSTs may depend not only on a C-terminal ER/Golgi retention signal, but also on the presence of a partner protein such as a glycosyltransferase, other NST/NSTs, or a splice variant of the same transporter. In conclusion, our data suggest that the localization of NSTs may be governed by complex mechanisms.
Figure 1.
Functional activity of the UDP-Gal transporter. (A) Lectin staining of glycoproteins from wild-type MDCK, mutant MDCK-RCA r , and mutant cells overexpressing UDP-Gal transporter splice variants UGT1 or UGT2. Phenotypic correction was analyzed using GSL II (Griffonia simplicifolia lectin II), specific for terminal N-acetylglucosamine present in both N- and O-glycans. Results of one of three experiments with a similar pattern are shown. (B) UDP-Gal transport into Golgi vesicles derived from wild-type MDCK, mutant MDCK-RCA r , and mutant cells overexpressing UDP-Gal transporter splice variant UGT1. Results are shown as mean ± standard deviation from three independent experiments performed in duplicate.
Figure 3.
Subcellular localization of the UGT1 splice variant of the UDP-Gal transporter in MDCK-RCA r cells by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A and D) Reactivity with HA-specific antibodies (green, Cy2); (B) reactivity with Golgi marker (GM130) antibodies (red, Cy5); (E) reactivity with ER marker (calnexin) antibodies (red, Cy5); (C) overlay of A and B; (F) overlay of D and E. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
Figure 4.
Subcellular localization of the UGT2 splice variant of the UDP-Gal transporter in MDCK-RCA r cells by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A and D) Reactivity with FLAG-specific antibodies (green, Cy2); (B) reactivity with Golgi marker (GM130) antibodies (red, Cy5); (E) reactivity with ER marker (calnexin) antibodies (red, Cy5); (C) overlay of A and B; (F) overlay of D and E. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
Figure 5.
Subcellular localization of the UGT2 splice variant of the UDP-Gal transporter in MDCK-RCA r cells also overexpressing the UGT1 splice variant, by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A) Reactivity with FLAG-specific antibodies (green, Cy2); (B) reactivity with Golgi marker (GM130) antibodies (red, Cy5); (C) overlay of A and B. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
Figure 2.
Subcellular localization of the UDP-GlcNAc transporter in MDCK-RCA r cells by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A and D) Reactivity with HA-specific antibodies (green, Cy2); (B) reactivity with Golgi marker (GM130) antibodies (red, Cy5); (E) reactivity with ER marker (calnexin) antibodies (red, Cy5); (C) overlay of A and B; (F) overlay of D and E. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
Figure 6.
Subcellular localization of the UGT1 splice variant of the UDP-Gal transporter in MDCK-RCA r cells also overexpressing the UGT2 splice variant, by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A and D) Reactivity with HA-specific antibodies (A: red, Cy5; D: green, Cy2); (B) reactivity with FLAG-specific antibodies (green, Cy2); (E) reactivity with ER marker (calnexin) antibodies (red, Cy5); (C) overlay of A and B; (F) overlay of D and E. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
Figure 7.
Subcellular localization of the longer splice variant of the SLC35B4 transporter in MDCK-RCA r cells by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A and D) Reactivity with FLAG-specific antibodies (green, Cy2); (B) reactivity with Golgi marker (GM130) antibodies (red, Cy5); (E) reactivity with ER marker (calnexin) antibodies (red, Cy5); (C) overlay of A and B; (F) overlay of D and E. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
Figure 8.
Subcellular localization of the shorter splice variant of the SLC35B4 transporter in MDCK-RCA r cells by indirect immunofluorescence. MDCK-RCA r cells were stably transfected with expression constructs, cultured, and treated with antibodies as described in Materials and Methods. (A and D) Reactivity with FLAG-specific antibodies (green, Cy2); (B) reactivity with Golgi marker (GM130) antibodies (red, Cy5); (E) reactivity with ER marker (calnexin) antibodies (red, Cy5); (C) overlay of A and B; (F) overlay of D and E. Cell nuclei were counterstained with Hoechst 33342 dye. Bar, 20 µm.
"Biology"
] |
Power Line Charging Mechanism for Drones
The use of multirotor drones has increased dramatically in the last decade. These days, quadcopters and Vertical Takeoff and Landing (VTOL) drones can be found in many applications such as search and rescue, inspection, commercial photography, intelligence, sports, and recreation. One of the major drawbacks of electric multirotor drones is their limited flight time. Commercial drones commonly have about 20–40 min of flight time. The short flight time limits the overall usability of drones in homeland security applications where long-duration performance is required. In this paper, we present a new concept of a “power-line-charging drone”, the idea being to equip existing drones with a robotic mechanism and an onboard charger in order to allow them to land safely on power lines and then charge from the existing 100–250 V AC (50–60 Hz). This research presents several possible conceptual models for power line charging. All suggested solutions were constructed and submitted to a field experiment. Finally, the paper focuses on the optimal solution and presents the performance and possible future development of such power-line-charging drones.
Introduction
In this work, we present a complete proof of concept of a robotic mechanism that allows a VTOL drone to be charged from power lines. Motivated by the demanding challenge of constructing a long-duration multirotor platform, we designed an "add-on" mechanism that allows a commercial drone to safely land on power lines, perform a short-term charging loop (in 1-2 C), and finally, take off safely from the power line. Figure 1 represents the overall concept of the suggested power line charging mechanism for drones.
Related Works
The topic of maximizing the flight time of drones and UAVs has attracted the attention of many researchers; see [1,2]. The inspection of power lines is commonly performed using UAVs [3,4]. In order to be able to closely and safely inspect wires and power lines, there is a need to be able to detect, track, and map them in real time [5,6]. Several research works suggested wireless charging from power lines [7][8][9]. The use of a swarm of drones as a dynamic network infrastructure faces a related energy optimization challenge; see [10] for a recent paper regarding the main battery charging (energy harvesting) methods for such swarms. Recently, the concept of "drone-in-a-box" has become popular in many commercial and homeland security applications [11,12]. Several companies such as Skycharge [13], Heisha [14], and Flytbase [15] are offering after-market solutions for Commercial Off-The-Shelf (COTS) drones, allowing them to perform accurate landing on a charging pad. Alternatively, the concept of automatic battery swapping for drones has been suggested by a few researchers and companies [15][16][17]. The concept of "drone delivery" has motivated researchers to model optimization problems related to facility location [18] and vehicle routing [19]. In such cases, the goal is to locate a set of charging stations or battery swapping stations in a way that optimizes delivery performance (commonly time or cost) [20][21][22]. Finally, the concept of energy harvesting for drones has attracted the attention of many researchers. The use of solar panels on a drone may allow some additional flight time [23,24], but this is mainly applicable to energy-efficient UAVs (i.e., VTOL drones are energy-inefficient by nature). Other methods of "in-air" charging include high-power laser (or mm-wave) long-range wireless charging [25]; however, such methods require the drone to be within the Line-Of-Sight (LOS) of the charging station and raise major safety issues, so they are currently impractical for COTS drones. In this paper, we address the challenge of charging a drone on site, without any "charging station". Moreover, unlike most energy-harvesting methods, which are usually slow and require a long charging period, in this paper, we focus on the use case of "in-mission rapid charging".
Requirements
In general, a charging system for drones should satisfy the following requirements:
• Light weight: The overall flight time on a single charge should remain as close as possible to the original performance of the drone (without the charging mechanism), e.g., at least 85% of the original performance of the platform;
• Fast charging: The platform should allow a 2:1 charging vs. flying ratio (e.g., 30 min of charging should allow at least 15 min of extra flight time);
• Rapid landing and takeoff: In order to maximize the operational time, the overall landing and takeoff time should be less than 1 min;
• Safety: The charging tasks (landing, charging, and takeoff) should be safe and efficient, allowing a common operator to perform a complete charging process with a success probability greater than 99%;
• Remote operation: The operator should be able to perform the charging maneuver via First-Person View (FPV) control (no line-of-sight) from a remote location, which might be a few kilometers away;
• Standard COTS platform: The charging platform should be able to be integrated with standard commercial drones.
Motivation
This research was motivated by the challenge of maximizing the operational performance of tactical drones. We were mainly motivated by the use case of wildfires, where drones are commonly used at the tactical level to help firefighters gain real-time information regarding the risks, changes in the fire, and post-fire analysis [26][27][28]. In the case of severe fires, a "risky" procedure such as power line charging might be considered acceptable, in particular if such an operation reduces the risk for firefighters. The use case of fighting wildfires often requires the continuous inspection of a wide region; thus, it is important to maximize the drone's overall operational time. Moreover, it might be desired to allow the drone to land on a spot with good visibility, allowing it to inspect the region with very low energy consumption (telemetry transmission only). We observed that wires are often suitable spots for such landing and guarding applications, and in the case of low-voltage power lines, they can also serve as a charging source. This paper focuses on small commercial drones, which are widely used for such applications. While wireless methods for charging drones from high-voltage power lines have been suggested by several researchers [7][8][9], these require additional, relatively heavy (onboard) components, in particular a coil. Moreover, they require the drone to maintain flight (hovering) during the charging process, which requires additional charging power and very accurate hovering capabilities (above the power lines). For recent surveys on wireless-charging methods, see [7,29]. We were unable to design a reliable power-line wireless-charging mechanism for small drones; therefore, we limited the scope of this paper to direct (contact) charging of the drone from low-voltage power lines. We constructed the charging system and tested it in field experiments.
Our Contribution
This paper focuses on the challenge of charging a drone from low-voltage power lines using direct contact. The research presents several design concepts and discusses their performance in real-world experiments. This paper's novelty is two-fold: (i) the new concept of "hook landing", which allows basically any drone to land safely and efficiently on practically any horizontal wire (and take off from it later on); (ii) the new concept of robotic measuring tape, which allows the drone that lands on the wire to approach and contact another wire (power line) in its surroundings (e.g., up to 1 m away), then close the circuit for charging. Both concepts were designed and installed on an existing COTS drone (DJI's Matrice 100). The drone was able to safely land on a single wire, connect the robotic "meter arm" to the other wire, and perform fast charging, resulting in 15 min of extra flight time using 15 min of charging. To the best of our knowledge, the "hook landing", the 2DoF robotic measuring tape for charging, and the 1:1 ratio for charging/extra flight time are all new in the scope of commercial COTS drones and UAVs.
Onboard Charging Concept for Drones
In this section, we present the design concept of the onboard charger (for drones). In simple terms, one can take the drone's regular charger and locate it on the drone. For that, we needed the following changes to the drone:
• We changed the AC socket: we connected the two wires of the charger to the pads that contacted the power lines (one to the ground and the other to the line);
• We located the charger onboard the drone;
• We connected the charger to the drone battery (or batteries).
In fact, the charger had two main components: (i) an AC to DC transformer; (ii) a DC battery charger. For simplicity and for safety testing, we started our design with a low-voltage DC charger (mainly 12-24 V). Figure 2 presents the very basic concepts of the modified drone.
Recently, new mini drones such as DJI's Mavic mini, mini2, and miniSE have allowed direct 5 V charging capabilities. Thus, one can modify one side of a USB charging cable (two wires) to connect to each of the legs, while the other side of the cable can be permanently connected to the USB charging port. This relatively simple modification allows cost-effective "drone-in-a-box" solutions. The need to add an AC to DC transformer and the nature of unisolated low-voltage (100-250 V) power lines, which are commonly at least 40 cm apart, limit the use of microdrones (with weight below 1-2 kg). Therefore, we focused on relatively large drones of 2-10 kg that are applicable to real-world power line charging.
Power Line Landing Mechanisms
In this section, we cover the topic of landing on wires (power lines). The first wire-landing method is the natural vertical landing approach, which basically uses two parallel wires as a "landing pad"; this requires adjustable skids (see Figures 4 and 5). Next, we present a rotation landing, which is based on a "T-shaped" drone (see Figure 6) that approaches the wires from below; once the bar is above the two wires, it rotates 90° and lands. Finally, the concept of "hook landing" is presented; this method uses a vertical pole with a "hook" to land on a single wire with a horizontal approach. This method is by far the most efficient, and therefore, it was chosen to be implemented in the advanced model.
Vertical Landing
A natural way of landing on wires is to try to approach the wires from above (vertical landing) and use some kind of skid to safely rest on the two wires (see Figures 4 and 5). As with all the other methods of landing on wires, the drone uses GPS (GNSS) to perform a "position hold", and the operator aligns the drone above the center of the two wires, then slowly reduces the drone's height until it touches and lands on the two wires.
Rotation Landing
In this landing method (on wires), the drone has a "T-shaped" structure. The drone approaches the power lines from underneath. First, the drone rotates so that the "T shape" will be parallel with the two wires (in the center below them). Then, the drone elevates itself until the "T" is slightly above the two wires (the drone frame is below the wires). Finally, the drone rotates 90° and lands on the wires. Figure 6 demonstrates such a landing. Note that the takeoff process is performed in the reverse order.
Figure 6. Rotation landing: the drone approaches the wires from underneath, and the "T-shaped" structure is aligned with the two wires (in between the wires). (Left) The drone's FPV camera is located on one of the sides of the "T-shaped" structure (marked in pink). (Right) The experiment setting: a very basic structure to practice FPV-based "rotation landing".
Hook Landing Drone
The first two power line landing methods have the following major drawbacks: (i) Vertical landing requires two leveled wires, with some fixed distance. This is often not the case for suburban low-voltage power line infrastructure, which is commonly vertical (or simply has unleveled wires). (ii) Rotation landing requires clear access from below the wires, which are commonly blocked or simply vertical. In general, both landing methods depend on the relative elevation and distance between the two wires, which rarely have any standards, thus limiting the vertical and rotation landing methods to be specifically adjusted to a known scenario.
The "hook landing" method uses a vertical pole with a "hook" at the upper side of the pole. In order to land on a wire (a single one), the drone flies slightly below the wire, while its "hook" is slightly above the wire. Once the pole touches the wire, the drone reduces the motors' throttle, allowing the hook to hold the wire. Figure 7 shows the "hook landing" of a drone on a wire. Figure 8 shows the "hook takeoff", which can be seen as the reverse order of the hook landing: the drone starts the motors; it elevates its hook above the wire and then flies back until it is at a safe distance from the wires.
Improved Charging Drone
In this section, we present the advanced model for the landing and charging mechanism. Considering the mentioned requirements and the drawbacks of the first two concepts (vertical landing and rotation landing), we designed a solution that first lands on a single wire (hook landing) and only then uses a novel robotic mechanism to connect to the other power line. The mechanism is based on a motorized measuring tape with a gimbal- and video-based control loop (see Figures 9-11). The charging system also contains an onboard charger (converting 100-250 V AC current to 24 V DC) and several safety mechanisms and sensors to allow a safe charging with real-time telemetry for current, heat, and battery charging state. The drone was also equipped with a wide-angle upward-facing camera, allowing the operator to position the hook of the drone on the wire and then operate the robotic meter toward the other wire.
In order to simplify the charging system while maintaining a high level of safety, we used a drone with "smart batteries". Such batteries were equipped with an internal Battery Management System (BMS) with several safety features. In simple terms, using smart batteries simplified the interface of the battery to the two wires only (ground and power) to which both the drone and the onboard charger could connect. Finally, we looked for a COTS drone platform that was relatively suitable for the modifications, in particular large enough to add a pole (with the robotic meter arm, charger, and FPV camera). Given all the above requirements, the suggested platform was implemented on DJI's Matrice 100 drone, which is powered by one or two 6S 4.5-5.7 Ah (23 V) smart batteries and has several payload configurations. Matrice 100 commonly weighs 2.7-3.3 kg in a "ready-to-fly" configuration, depending on the battery and the payload. The overall charging mechanism weighed less than 270 g, which was about 7-10% of the drone's weight (including the onboard charger, wiring, and all the needed mechanisms to perform a safe and efficient charging from the power lines). Thus, even with the two batteries and advanced payload, the drone could support the additional weight of the power line charging mechanism without exceeding the drone's maximum takeoff weight, which was 3.6 kg. The integrated mechanism included two wide-angle FPV cameras and a long-range two-way remote control with complete charging telemetry. The suggested platform was tested in several field experiments, including: winds up to 12 kn, a flight elevation of up to 3000 ft above sea level, and a total drone weight of 2.9-3.5 kg. We also tested the power line charging mechanism in lab experiments with a 0.5-2.0 C charging rate, air temperatures of up to 40 °C, and hook hanging "swing tests" in (artificial) winds up to 20 kn. The hook mechanism was able to hold the drone safely on a single cable even in high winds with very little "swing" effect; this was due to the nonelastic (static) nature of metal power lines and the relatively short "hook pole". However, the charging mechanism was only tested during winds up to 7 kn, due to safety regulations. The modified drone passed all the tests, satisfying the requirements (mentioned above), allowing the platform to charge from 25% to 75% capacity in about 15 min, thus affording the platform an additional 15 min of flight time. To the best of our knowledge, this is the first work that has shown a 1:1 field charging vs. flight time for commercial drones.
Hook landing: testing the ability to guide the robotic meter to the other wire. The image was taken from a real charging experiment using a 220 V AC generator. Note that the first wire (to which the hook was connected) was lower than the second one; this was not an issue as the robotic meter had a "pitch" range of 145°.
Results
In this section, we present the main experimental results of the power-line-charging drone. We start by presenting the safety considerations, as power line landing can be extremely risky and in no way should be done without the proper experimental protocols. We then present the results regarding the actual landing and takeoff processes as performed by the "hook drone", and we elaborate on three options: the FPV, secondary drone control, and autonomous detection and landing using wire detection and mapping.
Safety Considerations: Power Line Landing and Battery Charging
In general, drones should not land on power lines. This paper presents a research concept that was implemented and tested in a controlled testing facility. In this subsection, we present a few safety issues that we found to be helpful in terms of performing such experiments in the safest way:
• The components should be tested independently: (i) low-voltage (e.g., 24 V) static charging (no landing); (ii) wire landing: we performed many safe landings on a disconnected pair of wires;
• The flying of the drones should be in a designated area, in which the "power lines" model is located; one should not attempt to perform an actual landing of a drone on real (active) power lines;
• Lithium batteries are explosive (e.g., see Figure 12), and dangerous situations can occur, so it was necessary to make sure all the experiments were performed in the proper settings. Figure 12 shows the result of a 1 kg lithium polymer battery explosion. During a lab charging (and discharging) experiment with the 24 V DC charger, one of the two 6S batteries overheated (most probably due to a manufacturing defect), and at some point, the defective battery exploded, causing a fire and a "total loss" of the drone and surrounding electronics. Due to safety regulations, there was no one near the faulty battery (the drone in the charging process), resulting in just minor damages to the gear and no harm to any person;
• Smart batteries with inner protection (BMS) should be used [25,30]. Charging without such protection can lead to lithium battery overheating or overcharging events, which may cause battery explosions, as shown in Figure 12. We made sure that each battery had the following protection measures (see the monitoring sketch after this list): (i) overcharge/overdischarge voltage cutoff; (ii) overheating: this was extremely important during charging experiments; (iii) maximum current;
• In a full field experiment, the generator should have a Residual-Current Device (RCD) that is both sensitive and has a fast cutoff (response) time (see Figure 13 for the basic field experiment setting).
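The protection measures listed above can be combined into a simple software monitor. The following Python sketch is purely illustrative: the threshold values and the read_telemetry/cut_charging functions are hypothetical placeholders and are not part of the actual onboard charger firmware.

```python
# Illustrative sketch of a charging safety monitor (hypothetical thresholds and I/O functions).
import time

# Assumed cutoff values for a 6S lithium battery; real BMS limits may differ.
MAX_CELL_VOLTAGE = 4.2      # V, overcharge cutoff per cell
MIN_CELL_VOLTAGE = 3.0      # V, overdischarge cutoff per cell
MAX_TEMPERATURE_C = 60.0    # °C, overheating cutoff during charging
MAX_CURRENT_A = 10.0        # A, maximum charging current

def is_safe(telemetry: dict) -> bool:
    """Return False if any protection limit is violated."""
    if max(telemetry["cell_voltages"]) > MAX_CELL_VOLTAGE:
        return False
    if min(telemetry["cell_voltages"]) < MIN_CELL_VOLTAGE:
        return False
    if telemetry["temperature_c"] > MAX_TEMPERATURE_C:
        return False
    if telemetry["current_a"] > MAX_CURRENT_A:
        return False
    return True

def monitor_charging(read_telemetry, cut_charging, period_s: float = 1.0):
    """Poll telemetry and cut the charger as soon as a limit is exceeded."""
    while True:
        telemetry = read_telemetry()      # hypothetical onboard telemetry source
        if not is_safe(telemetry):
            cut_charging()                # hypothetical relay/charger disable
            break
        if telemetry["state_of_charge"] >= 0.75:
            cut_charging()                # stop at the target charge level
            break
        time.sleep(period_s)
```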
Power Line Charging for Drones: Performance Evaluation
In order to evaluate the performance of the power line charging, we considered a relatively large drone with a minimum weight of 5 kg and a maximum weight of 10 kg; its flight time ranged from 33 min down to 15 min, respectively. Such a (large) drone allowed us to install several parallel chargers and present the drone's performance with respect to the number of onboard chargers. The correlation between the drone weight and its flight time is shown in Figure 14 (left); note that the graph is relatively "close" to linear. Figure 14 (right) presents the expected performance reduction with any additional charger. Overall, each charger reduced the flight time by about 4%, which was considered acceptable. Figure 15 (left) presents the power consumption of the drone (Watts) with respect to its weight; again, one can see that the graph is relatively linear. Finally, in Figure 15 (right), the significant difference between charging the battery to 100% or to 70% is shown. The graph also presents the benefit of having all (three) chargers, allowing a speedy charge of about 45 min from 10% to 70% charge, which would allow an extra flight time of about half an hour.
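As a rough cross-check of the numbers above, the approximately linear relation between weight, power consumption, and flight time can be captured with a simple model. The coefficients in the sketch below are assumptions chosen to roughly reproduce the 33 min (5 kg) and 15 min (10 kg) flight times quoted above; they are not measured values from our experiments.

```python
# Back-of-the-envelope model of flight time vs. weight and of charging vs. extra flight time.
# All coefficients are illustrative assumptions, not measured values.

HOVER_POWER_W_PER_KG = 110.0   # assumed specific hover power for a multirotor
BATTERY_ENERGY_WH = 300.0      # assumed usable onboard battery energy

def flight_time_min(weight_kg: float) -> float:
    """Approximate flight time assuming hover power grows linearly with weight."""
    hover_power_w = HOVER_POWER_W_PER_KG * weight_kg
    return 60.0 * BATTERY_ENERGY_WH / hover_power_w

def extra_flight_min(charge_minutes: float, charger_power_w: float, weight_kg: float) -> float:
    """Extra flight time gained from a given charging duration."""
    energy_added_wh = charger_power_w * charge_minutes / 60.0
    hover_power_w = HOVER_POWER_W_PER_KG * weight_kg
    return 60.0 * energy_added_wh / hover_power_w

if __name__ == "__main__":
    for w in (5.0, 7.5, 10.0):
        print(f"{w:4.1f} kg -> ~{flight_time_min(w):4.1f} min flight time")
    # A ~1:1 charging/extra-flight ratio requires charger power comparable to hover power.
    print(f"15 min charge at 330 W on a 3 kg drone -> ~{extra_flight_min(15, 330, 3.0):.1f} min extra")
```

The last line illustrates the point behind the 1:1 result reported above: the onboard charger must deliver roughly as much power as the drone consumes in hover.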
Power Line Landing Using Analog FPV Systems
The FPV is a popular method for controlling racing drones by human operators. Unlike LOS control in which the drone operator sees the drone and controls it accordingly, in the FPV, the operator sees what the drone cameras see and controls it accordingly. We found the FPV to be an efficient and safe method for controlling a drone while landing on power lines (or taking off from them). Combined with the "GPS position hold" mode, the FPV controlling method allowed us to test each of the three landing methods (vertical, rotation, hook) for over 50 successful landings on the power lines (without any "crashes"). Yet, the "hook landing" method was the fastest, allowing an experienced FPV operator to land the drone on a wire in less than 10 s (the same holds for the takeoff process). Moreover, the "hook landing" was performed on a single wire, allowing the robotic meter arm to reach the other wire in almost any configuration (as long as the wires were within 1 m apart), while for the other two methods (vertical and rotation), the distance between the two wires needed to fit the drone skids or "T-shaped" structures.
Two Drones' Guided Autonomous Landing: Negative Results
We found that in some cases, the onboard cameras could not always capture the whole picture. Therefore, we developed the following (complicated) guided landing method: (i) The first drone detects a landing spot and lands (or hovers) in a nearby location such that it can properly view the power lines in the landing position; (ii) then, the second drone approaches the lines and is guided for "hook landing" by the operator based on the video as transmitted by the first drone; (iii) once the second drone has landed safely, the first drone may guide another drone for wire landing. After the second drone has completed the charging process, the first drone can also guide its takeoff stage. Figure 16 presents the overall concept of guided landing (two drones). Note that both images of Figure 16 are the actual view of the guiding drone, at a low resolution (360p), as all the computations were performed "onboard" the guiding drone and needed to support at least 30 fps (preferably 60 fps).
Long-Range Power Line Landing
Recent improvements in long-range digital video transmitters such as DJI's digital FPV system allow long-range and low-delay video transmission: a 4 km range with a 28 ms delay at 720p. We observed that in order to perform a safe FPV landing on the power lines, a delay of 100 ms was acceptable; therefore, one can use longer video transmission configurations, e.g., DJI's OcuSync 3.0 has a range of up to 10 km (LOS conditions). Moreover, the "HereLink" digital video transmission system has a range of up to 20 km using a stack antenna. We were able to extend this range to up to 25 km using a 14 dBi patch antenna. Finally, recent commercial drones such as Parrot's Anafi AI have a built-in 4G (LTE) modem, allowing an operator to practically fly the drone at any range, as long as the drone has cellular coverage. Based on these advantages of long-range video transmission and due to safety regulations, we did not perform an autonomous landing on the power lines, limiting the landing to a human FPV operator. It should be noted that the expected video transmission range would often exceed the drone's flying range. Yet, the challenge of autonomously and safely landing a drone on a power line is surely a research topic we will consider as a future work.
Conclusions and Future Work
This paper focused on the concepts of charging drones (VTOLs) on power lines by landing on them, charging at a low voltage (100-250 V unisolated), and finally, taking off (safely) from the power lines. The main results of the paper were: (i) the concept of "hook landing"; (ii) the concept of a robotic meter arm. Combined, the two concepts allowed us to land on a single wire efficiently and safely; moreover, this allowed us to separate the landing and the charging processes. The suggested method was implemented and tested on a few drone platforms. To the best of our knowledge, the charging performance of our suggested system on the Matrice 100 drone was outstanding, allowing a 1:1 ratio between charging and extra flight time. Landing a drone on a power line is risky, and there is a tendency to bury power lines [31]; however, the significant cost of such infrastructure improvements limits this tendency; thus, unisolated low-voltage power lines will probably remain common in a wide range of regions. We conjecture that in some cases, the use of a power-line-charging drone can in fact reduce the risk to firefighters and therefore might be considered. For future work, we plan to add an autonomous landing capability to the power-line-charging drone, the idea being to detect the wires (using vertical stereo cameras) and then to run a control algorithm based on the drone's position relative to the wires. Finally, we plan to design a safe charging station, which will use 24 V DC wires and allow a set of commercial drones to perform a safe and efficient charging.
"Engineering",
"Computer Science"
] |
Temporal evolution of oscillating coronal loops
Context. Transverse oscillations of coronal structures are currently intensively studied to explore the associated magnetohydrodynamic wave physics and perform seismology of the local medium. Aims. We make a first attempt to measure the thermodynamic evolution of a sample of coronal loops that undergo decaying kink oscillations in response to an eruption in the corresponding active region. Methods. Using data from the six coronal wavelengths of SDO/AIA, we performed a differential emission measure (DEM) analysis of 15 coronal loops before, during, and after the eruption and oscillation. Results. We find that the emission measure, temperature, and width of the DEM distribution undergo significant variations on time scales relevant for the study of transverse oscillations. There are no clear collective trends of increases or decreases for the parameters we analysed. The strongest variations of the parameters occur during the initial perturbation of the loops, and the influence of background structures may also account for much of this variation. Conclusions. The DEM analysis of oscillating coronal loops in erupting active regions shows evidence of evolution on time scales important for the study of the oscillations. Further work is needed to separate the various observational and physical mechanisms that may be responsible for the variations in temperature, DEM distribution width, and total emission measure.
Introduction
Transverse oscillations of coronal loops in erupting active regions have been intensively studied in recent decades. A recent and comprehensive review can be found in Aschwanden (2019). These waves are considered to be global magnetohydrodynamic (MHD) kink eigenmodes, most commonly detected as the fundamental mode, with an antinode in the vicinity of the loop apex. The relation between the period of oscillation and the estimated loop length in Goddard et al. (2016) and Nechaeva et al. (2019) confirmed this interpretation.
Much of the interest in these oscillations stems from their seismological potential (e.g. Liu & Ofman 2014) and the opportunity for detailed study of the associated MHD wave theory. Recent examples of studies that attempted seismology include Guo et al. (2015); Pascoe et al. (2016a) and Arregui et al. (2019). A further development is the detection of a decayless low-amplitude regime that is not associated with a flare or eruption (Nisticò et al. 2013a;Anfinogentov et al. 2015). Anfinogentov & Nakariakov (2019) demonstrated that these waves may be used for seismology of quiet active regions. We also note that examples of apparently undamped, or even growing, high-amplitude oscillations associated with eruptive events have been reported (e.g. Wang et al. 2012).
As the observations and theory of these oscillations become increasingly complex, the uncertainties about the nature of the loops themselves become critical. Reale (2014) recently reviewed loop observation and modelling. There are different categories of loops, but for this study, only long, warm (1-2 MK), and non-flaring loops are important. Individual loop threads observed with the Atmospheric Imaging Assembly (AIA) are normally part of larger bundles of similarly oriented threads. When such threads are analysed with higher resolution instruments (e.g. Peter et al. 2013; Aschwanden & Peter 2017), significantly increased fine structuring is not typically seen. However, the presence of further sub-resolution structuring (and the associated filling factor) of these individual apparently resolved threads is still a subject of debate, and is not considered here. Kucera et al. (2019) recently confirmed that these long, warm loops show little evidence of expansion with height, nor do they show cross-sectional asymmetry, despite the expected expansion of the magnetic field with height. Brooks (2019) studied the emission around warm loops, finding that these loops are generally over-dense and largely isothermal, and are embedded in diffuse multi-thermal plasma with a peak temperature similar to that of the loops themselves. However, many studies have shown that the cross-field temperature structure of active region loops varies from approximately isothermal to clearly multi-thermal (e.g. Schmelz et al. 2011, 2014, 2016). Such individual loop threads are expected to be nonequilibrium structures, with lifetimes from tens of minutes to several hours predicted by thermodynamic analysis. In whole loop bundles, observations of thermal cycles are common (see Froment et al. 2019, and references therein), with periods from 2-16 hours. Additionally, various plasma flows can be detected in a loop, which may result in a constant evolution. Su et al. (2018) reported density variations of the oscillating loop and related them to the period variation of the kink mode. Loop oscillation studies typically observe loops for about an hour, and a visible evolution of the intensity and structure is frequently observed (see example time-distance (TD) maps in Goddard et al. 2016; Nechaeva et al. 2019), and it is becoming increasingly important to analyse this evolution in greater detail.
Numerical simulations of transverse oscillations in radiatively cooling coronal loops were performed in Magyar et al. (2015), where a 20% difference in amplitude after a few oscillation cycles was found. Related analytical studies include those of Al-Ghafri & Erdélyi (2013); Ruderman et al. (2017) and Arregui et al. (2019). The role of the Kelvin-Helmholtz instability (KHI) during loop oscillations has been investigated by means of numerical simulation (e.g. Terradas et al. 2008; Antolin et al. 2017; Afanasyev et al. 2019) and can lead to the evolution of the density, temperature, and other observables (e.g. Antolin et al. 2016; Goddard et al. 2018).
The aim of our study is to observationally explore the variation of the loop emission measure (and density by proxy), temperature, and width of the temperature distribution before, during, and after undergoing displacements and oscillations. In Section 2 the data and processing are described, in Section 3 case studies are presented, collective results are reported in Section 4, and a discussion and conclusion are given in Section 5.
Event selection
Kink oscillation events were selected from the catalogue given in Nechaeva et al. (2019), which were observed by the AIA on board the Solar Dynamics Observatory (SDO) (Lemen et al. 2012) from 2010-2018. The selection was based on the requirement for an isolated loop thread to be visible for the entirety of an oscillation (until the main oscillation has damped), and for a significant period before and after the oscillation (of the order of the damping time). A further requirement was that the loop position must be able to be tracked by Gaussian fitting of the transverse intensity profile to avoid manual tracking as in Goddard et al. (2016) and Nechaeva et al. (2019), or more complex tracking procedures.
Linear slits perpendicular to the loop axis were used for each event at 171 Å to create a series of TD maps, as described in Nechaeva et al. (2019), and the clearest TD map was confirmed to meet the above criteria. This procedure resulted in just 15 events for further analysis. The next two subsections outline the procedure we performed for each oscillation event.
We note that our definition of the oscillating loop is an individually distinguishable, apparently monolithic 'thread' seen at 171 Å, that is, a warm loop (∼ 1 MK) with a lifetime of about an hour or longer. This is in contrast to long-lived loop systems, which may be analysed for tens of hours (e.g. Auchère et al. 2016; Froment et al. 2019).
Our analysis is limited to a few pixels around the apparent loop centre determined in the 171 Å images to attempt a first search for any evolutionary signatures in an ensemble of oscillating loops. We do not attempt to characterise the transverse temperature or density profiles of the structures (e.g. Pascoe et al. 2017;Goddard et al. 2018). This would require a careful treatment of the background in all wavelengths, which may be a natural extension of the work in the future.
Initial processing
Data for all coronal filters of AIA (94, 131, 171, 193, 211, and 335 Å) were downloaded for a time window before and after the reported oscillation start time. This was done using the Solar-SoftWare (SSW) function ssw_cutout_service, and included frames with the AEC = 1 flag to retain the 12-second cadence for each wavelength. The data cubes for each wavelength were then processed with read_sdo and aia_prep. For the chosen slit position, TD maps were created at each wavelength (coaligned based on the header information), and averaged over a 5-pixel width perpendicular to the slit to reduce noise. The selected events are listed in Table 1. All analysed loops were offlimb, except for event 29, loop 2.
An oscillation time-series was then obtained by fitting the transverse intensity profile of the loop at 171 Å with a Gaussian plus a second-order polynomial at each time, taking the peak position as the loop centre, as in Pascoe et al. (2016b). This is taken to be the loop centre position at all other wavelengths, regardless of whether the loop is visible. The top panels of Fig. 1 show examples of a loop-centre track overplotted on the 171 Å TD maps (the rest are presented in Appendix A). Detrended TD maps at each wavelength were then created by extracting ±14 pixels about the loop centre position (29 pixels in total). These detrended TD maps were smoothed in time with a boxcar of 15 time steps (3 minutes) to reduce noise because the DEM analysis is sensitive to small changes in the channel intensities (and their errors). The errors on the smoothed TD map intensities were estimated using the SSW routine aia_bp_estimate_error. A limited form of background subtraction was then performed for each detrended and smoothed TD map by subtracting the minimum intensity at each time. This was performed separately for each wavelength and removed some of the influence of the background intensity and its variation over time. The middle and bottom panels of Fig. 1 show two examples of detrended and smoothed TD maps at 171 Å. Averaging to obtain time-series of all parameters was performed within ± 2 pixels about the central pixel.
Because different slit positions were sometimes used with respect to the catalogue, the oscillation period (p osc ) and exponential damping time (τ d ) were estimated again for each event (following Goddard et al. 2016), and are given in Table 1. Figure 2 shows histograms and the mean values of the loop length, period, and damping time. A more detailed treatment of the damping profiles is not required (e.g. Pascoe et al. 2016b; Morton & Mooroogen 2016) because the damping time is just used to define consistent time chunks for the further collective analysis of all events. The time of the initial perturbation was determined (t 0 ), and three temporal windows were then defined relative to t 0 in units of the estimated damping time. These three time intervals were found to define the pre-oscillation (P1), oscillation (P2), and post-oscillation (P3) phases for most events. In some cases, marked by an asterisk in Table 1, intervals of just one damping time were used. When these intervals extended slightly beyond the limits of the TD map, this resulted in correspondingly shorter P1 or P3 averaging intervals.
Differential emission measure analysis
The light recorded by extreme ultraviolet (EUV) telescopes (e.g. STEREO/EUVI and SDO/AIA) is sensitive to the square of the electron plasma density (n e ) distributed along the line of sight (LOS, z). The amount of plasma along the LOS is defined as the emission measure,
EM = ∫ n e ² dz. (1)
Because the plasma integrated along the LOS can be at different temperatures, it is useful to define the differential emission measure dEM/dT (more commonly indicated as DEM). Therefore, the associated emission measure, EM, is given as an integral of DEM over the temperature, T, as
EM = ∫ DEM(T) dT. (2)
The temperature-response function (K) of the instrument for a given waveband (λ) filters the observed plasma in a certain temperature interval. Therefore, the associated flux or intensity is given by
I λ = ∫ K λ (T) DEM(T) dT. (3)
Table 1. Fifteen loop oscillation events chosen for analysis (columns: event ID, loop ID, loop length, oscillation period, and damping time). The first two columns refer to the event and loop IDs given in Nechaeva et al. (2019), and the loop length is taken from the catalogue. The period and damping time are determined by the same method as in Goddard et al. (2016), using the present data. Events with asterisks note cases where temporal windows of just one damping time were used in the further analysis.
The response functions of EUV imagers have relatively narrowband peaks, but include spectral contributions from a wide range of temperatures (e.g. Lemen et al. 2012). Moreover, only a limited set of observations in different EUV wavebands are available (e.g. the six channels of AIA), and the inversion of the corresponding intensities into a DEM distribution constitutes an ill-posed problem (e.g. Hannah & Kontar 2012). Several algorithms have been developed for the determination of the DEM distribution of coronal structures observed in EUV and X-ray wavebands. For example, Aschwanden et al. (2008) forward-modelled the DEM distribution with Gaussian functions. Other methods are based on regularisation to avoid calculating negative DEMs (Hannah & Kontar 2012; Plowman et al. 2013), or they are based on the concept of sparsity (Cheung et al. 2015). Recently, multiple methods were compared in Morgan & Pickering (2019). For this study we adopted the DEM analysis for the AIA images developed by Hannah & Kontar (2012). The code is available online.
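For reference, the discretised forward relation between a DEM distribution and the observed channel intensities (the quantity any inversion scheme tries to reproduce, cf. Eqs. 2 and 3) can be written in a few lines. The sketch below assumes the DEM is defined on discrete temperature bins and that the instrument response functions have been interpolated onto the same bins; it is a schematic of the forward problem only, not of the Hannah & Kontar (2012) inversion.

```python
# Schematic forward model: synthetic channel intensities from a binned DEM distribution.
# dem[k] has units of cm^-5 K^-1, dT[k] is the bin width in K, and response[ch][k] is the
# instrument temperature response K_lambda(T_k) interpolated onto the same temperature bins.
import numpy as np

def synthetic_intensities(dem, dT, response):
    """Return I_lambda = sum_k K_lambda(T_k) * DEM(T_k) * dT_k for each channel."""
    dem = np.asarray(dem)
    dT = np.asarray(dT)
    return {ch: float(np.sum(np.asarray(K) * dem * dT)) for ch, K in response.items()}

def total_emission_measure(dem, dT):
    """EM = sum_k DEM(T_k) * dT_k (the integral of the DEM over temperature)."""
    return float(np.sum(np.asarray(dem) * np.asarray(dT)))
```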
The DEM code was run over the detrended and background-subtracted intensity TD maps at the six coronal EUV channels of AIA (we excluded the 304 Å wavelength). These maps were given as input, along with the error estimates (obtained from aia_bp_estimate_error) and the response functions at the time of observation (determined using aia_get_response). The DEM code returns a series of DEM maps calculated over user-defined temperature intervals. We chose a temperature range spanning 0.6 to 10 MK, defined over 95 equispaced bins of 0.1 MK.
The left panels in Figure 3 show the DEM maps for event 28, loop 1 at the temperature intervals centred on 0.7, 1.1, and 2.1 MK. We averaged the DEMs over the indicated central pixels for each time and built a DEM distribution as a function of the temperature. An example is given in the top right panel of Fig. 3, where we show the DEM distribution for a specific time (t=10 min). Most of the DEM contribution is located in the temperature range 0.8 -1.5 MK, which is the typical temperature range for non-flaring warm coronal loops.
The typical shape of the DEM distributions is not purely Gaussian. The obtained distributions are often asymmetric, and they are sometimes double-peaked, probably due to the presence of multiple structures along the LOS. Therefore the temperature of the DEM peak is not a good observable for the characteristic temperature of the DEM distribution and does not represent its average. We consider the median temperature of the DEM distribution as a good observable. This was calculated by considering the cumulative distribution function (CDF) of the DEM distribution for any time as
CDF(T j ) = (1/EM) Σ k≤j DEM k ∆T k , (4)
with EM = Σ k DEM k ∆T k . Equation 4 has the same meaning as a probability, and the sum over all the temperature bins returns 1, that is, the normalised total area of the DEM distribution. The median of the temperature divides the distribution into two sectors of equal area. Therefore we calculated the median temperature by linear interpolation as the value T med where CDF(T med ) = 0.5 (we refer to it as T below). The standard deviation is defined as the temperature interval around the median, with a probability of 0.68. To calculate the extrema of this interval, we therefore determined the minimum and maximum temperatures as CDF(T min ) = 0.16 and CDF(T max ) = 0.84. Because of the asymmetry of the distribution, the values T min,max are in general not equidistant from T med . The time variation of the standard deviation is related to the changes in the broadness of the DEM distribution, which has important physical implications. We therefore considered another observable in our analysis: the width of the DEM distribution as W DEM = |T max − T min |, along with the total emission measure, EM, and median temperature, T. These definitions are shown in the right panels of Fig. 3. From the DEM maps we created time series of EM, T and W DEM averaged within the central 5 pixels for each time. Examples of the final time series are shown in the bottom three rows of Fig. 4, which are normalised by the initial values.
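The median-temperature and width definitions above translate directly into code. The following is a minimal sketch for a single DEM distribution defined on discrete temperature bins; the toy Gaussian DEM at the end is only for illustration and the variable names are ours.

```python
# Compute EM, the median temperature T_med (CDF = 0.5), and the DEM width
# W_DEM = |T_max - T_min| with CDF(T_min) = 0.16 and CDF(T_max) = 0.84.
import numpy as np

def dem_summary(temps, dem, dT):
    """temps: bin-centre temperatures [MK]; dem: DEM per bin; dT: bin widths [MK]."""
    temps, dem, dT = map(np.asarray, (temps, dem, dT))
    em = np.sum(dem * dT)                       # total emission measure
    cdf = np.cumsum(dem * dT) / em              # normalised cumulative distribution
    # Linear interpolation of the temperature at the requested CDF levels.
    t_min, t_med, t_max = np.interp([0.16, 0.5, 0.84], cdf, temps)
    return em, t_med, abs(t_max - t_min)

# Example with a grid similar to that used in the paper (0.6-10 MK in 0.1 MK bins):
temps = np.arange(0.6, 10.0, 0.1)
dT = np.full_like(temps, 0.1)
dem = np.exp(-0.5 * ((temps - 1.0) / 0.25) ** 2)   # toy, roughly isothermal ~1 MK loop
em, t_med, w_dem = dem_summary(temps, dem, dT)
```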
Case studies
Two case studies are now presented to show the analysis of the TD maps and the observables obtained from the DEM analysis. This also allows for qualitative discussion of the trends for the emission measure, the temperature, and the broadness of the DEM distribution. The first case study is event 28, loop 1; plots related to the DEM decomposition of this event are shown as an example in Fig. 3. The DEM map for 1.05-1.15 MK clearly shows the loop plasma seen in the detrended 171 Å TD map. The cooler and hotter DEM maps show the appearance of other structures that are co-spatial with the loop plasma at about minute 30, highlighting the difficulty and limitations of the analysis. Additionally, the loop is seen as a reduction in the DEM in the higher temperature bin. In the top left panel of Fig. 4 the intensity time series for each wavelength are plotted (after secondary smoothing with a boxcar of 10 minutes for clarity), normalised by the starting values. The remaining three left panels show the EM, T, and W DEM normalised to the starting values.
The second dashed line in all plots represents the onset of the initial perturbation of the loop (t 0 ), the start of P2. Most of the damping then occurs during P2. Some residual oscillation plus the decayless oscillations are visible during P3. For this event, the loop remains clearly visible for a significant duration after P3. This is not typical in the 15 events, however, and so the collective analysis is restricted to the three time intervals defined based on the estimated damping time.
The intensity at 171 Å increases over the first 10 minutes and reaches a plateau before it decreases after the loop is perturbed by an eruption. The T time series shows that this corresponds to an ≈ 20% increase in the measured temperature. W DEM also increases slightly later, but began to increase from the start of the observation. The EM time series is dominated by a more gradual rise and fall, which is due to the plasma, which appears at cooler and hotter temperatures. Nisticò et al. (2013b) described an expansion or rise of the loops just before the eruption, which may be a signature of gradual evolution occurring in the loop. The intensity peaks at 94 and 131 Å that occur after the main part of the oscillation are due to the passing through the slit of multithermal plasma associated with the eruption. Such a transient feature cannot be easily removed from the analysis.
During the oscillation, a modulation of EM with approximately p osc is present, which is likely due to a variation in column depth, but might be a combination of multiple factors (e.g. Yuan & Van Doorsselaere 2016b). Modulation of the temperature can also be seen, which might be due to higher temperature structure in the background that the loop may periodically pass over.
After P3, EM and T remain almost constant, therefore the loop and surrounding environment appear to have reached a new equilibrium following the eruption and oscillation. However, all parameters varied strongly before P1. It is almost impossible to separate the components that are due to the eruption and flare, and those due to the oscillation. However, we can probably rule out an overall heating or cooling cycle of the loop because the DEM parameters and intensities are almost constant after P3.
Event 37, loop 4
This loop oscillation was previously analysed in a seismological manner as loop 1 in Pascoe et al. (2016a), where a context image of the loop and active region is shown. The TD maps are shown in the right panel of Fig. 1. Here we also see some residual oscillation during P3. However, for these two events, the chosen time intervals adequately represent the phases: prior to the large amplitude oscillations (and eruption), during the oscillation (post eruption), and after the main oscillation phase.
In contrast to the previous event, no jumps in the intensity at 94 and 131 Å during P2 (following the eruption) are observed. The intensity at 171 Å increases gradually throughout the observations before it drops again after P3. The brightening of the oscillating loop is clearly seen in the original TD maps. The EM time series follows this trend, meaning that the quantity of lowtemperature plasma increases. It is unclear, however, whether this is due to the increasing loop density or to movement of the second loop nearby in the 171 Å TD maps. The T and W DEM time series show a gradual evolution that begins well before the eruption, and so the increase in the EM is likely to be due to evolution of the loop resulting from general evolution of the active region during the eruptive event, rather than directly due to the oscillation.
Collective analysis
We now present a collective analysis of the behaviour of EM, T, and W DEM averaged during phases P1, P2, and P3, defined using the corresponding damping time. Additionally, we consider these parameters averaged over the corresponding oscillation period (p osc ), from one period before the initial perturbation until ten periods after it.
Table 2. Average parameters during the time intervals P1, P2, and P3 over all events. The top three rows show the average value of the parameters, as seen in Fig. 5 (the values for some events lie outside of the plotted ranges). The middle three rows give the average percentage change with respect to the values in P1. The bottom three rows are the same as the middle three, but the average unsigned percentage changes are given. All uncertainties are the standard error on the associated average.
Averaging in three time intervals
For each event the time intervals P1, P2, and P3 were calculated as described in Section 2.2. All 171 Å TD maps are shown in Appendix A with the intervals overplotted. The parameters EM, T, and W DEM were then averaged during these intervals for each event. Figure 5 shows histograms of these values, with the mean values and associated standard error indicated. These mean values over the 15 events are also given in the top three rows of Table 2. Next we studied the percentage variations of the parameters with respect to the value in P1, which allows for a collective analysis of all the events. For example, ∆T 2 = 100 × (T 2 − T 1 )/T 1 gives the percentage change of the temperature from P1 to P2. The mean was then taken over all events, giving ∆T 2 , or |∆T 2 | when we take the mean of the unsigned variations.
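A compact illustration of this phase averaging and percentage-change computation is given below, with synthetic numbers in place of the measured phase averages; it is not the original analysis code.

```python
# Signed and unsigned percentage changes of a phase-averaged parameter from P1 to P2/P3,
# and their means (with standard errors) over all events.
import numpy as np

# Toy phase-averaged values: rows are the 15 events, columns are (P1, P2, P3).
rng = np.random.default_rng(0)
values = 1.0 + 0.1 * rng.standard_normal((15, 3))    # e.g. median temperature in MK

p1 = values[:, [0]]
pct_change = 100.0 * (values[:, 1:] - p1) / p1        # ΔX2, ΔX3 per event (signed)

signed_mean = pct_change.mean(axis=0)                 # <ΔX2>, <ΔX3>
unsigned_mean = np.abs(pct_change).mean(axis=0)       # <|ΔX2|>, <|ΔX3|>
standard_error = pct_change.std(axis=0, ddof=1) / np.sqrt(pct_change.shape[0])
```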
The individual values for each event are plotted as histograms in Fig. 6, with the mean values indicated. There is a clear tendency towards large increases in the EM for some events, reflected in the mean percentage variations of 110% and 130 %. This is likely due to cases where some part of the eruptive plasma (or other perturbed loop) passes through the observational slit. There are no clear trends towards positive or negative variations for T or W DEM . The mean variations for P2 and P3 with respect to P1 are also given in rows 3-6 of Table 2.
The mean unsigned variations are given in the bottom three rows of the table; we do not present the associated histograms. There is a ≈ 150%, 15%, and ≈ 30% variation from P1 to P2 in the EM, T, and W DEM , respectively, with little further variation between P2 and P3. This indicates that the eruption and initial perturbation of the loop account for most of this variation, or the first few cycles of oscillation.
Averaging over oscillation cycles
To specifically investigate changes over shorter timescales we then averaged the time series over the oscillation period, p osc . This was done from one period before t 0 (−1p osc ) until ten periods after (10 p osc ) for each event, defining the percentage variation with respect to the value at −1p osc . The mean value over all events was then computed for each time step, again defining the standard error, so that we have ∆EM , ∆T , and ∆W DEM as a function of time. For many events, 10 p osc extends beyond the data range, so that later time steps are averages over fewer events. This caveat should be recalled when the trends beyond 3-4 p osc are discussed. We also note that in most cases, the oscillation has largely damped after a few periods, but we continued the period-averaged time series where possible regardless of this.
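The period-averaging procedure can be sketched as follows: the time series of a parameter is binned into windows of one oscillation period around t 0 , and each window is expressed as an unsigned percentage change relative to the window one period before the perturbation. The function name and the handling of empty windows below are ours.

```python
# Average a parameter time series over windows of one oscillation period and express
# each window as an unsigned percentage change relative to the pre-perturbation window.
import numpy as np

def period_averaged_variation(time, values, t0, p_osc, n_after=10):
    """time, values: 1D arrays; t0: perturbation time; p_osc: oscillation period."""
    edges = t0 + p_osc * np.arange(-1, n_after + 1)   # window edges from -1 p_osc to +10 p_osc
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (time >= lo) & (time < hi)
        means.append(np.mean(values[mask]) if mask.any() else np.nan)
    means = np.array(means)
    reference = means[0]                               # value in the window at -1 p_osc
    return 100.0 * np.abs(means - reference) / reference
```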
The time series averaged over all events are shown in Fig. 7. Zero represents 1 p osc immediately following the initial perturbation of the loop (t 0 ). The signed variations are not presented because they are largely within the error of zero. All three parameters undergo some variations from −1p osc to 0, ≈ 80%, 7%, and 20 % for |∆EM| , |∆T | , and |∆W DEM | , respectively. This corresponds to changes during the initial displacement of the loop. Over the next few oscillation periods, there is little further change in the average variation with respect to the starting values. |∆EM| shows a peak between 4-8 p osc , supporting the other indications that the EM generally shows a strong variation some time after the eruption and initial perturbation of the loop, which is not seen in the other parameters.
In summary, we find that EM, T, and W DEM all vary at the time of the initial loop displacement, but there is no tendency towards an increase or decrease across all events. Therefore the variation from P1 to P2 in the previous section is likely due to the displacement of the loop, rather than the oscillation. The EM time series also shows large variations after a few oscillation periods.
Discussion
Our sample of oscillating coronal loops has an average loop length of 300 Mm, a kink mode period of 9 minutes, and an exponential damping time of 18 minutes. These are in line with the typical values of the distributions in Nechaeva et al. (2019). The oscillating loops (or individual loop threads) were selected based on observations at 171 Å and are typically not prominent at other wavelengths. They are therefore typically only well seen in the DEM maps at temperatures of about 0.7-1.2 MK, with some exceptions. The mean temperature value is 1 MK. The mean values of EM and W DEM are 0.7×10²⁶ cm⁻⁵ and 0.6 MK, respectively. It is important to reiterate that we only calculated these parameters for the central 5 pixels of the TD maps. Therefore the measurements are largely comprised of emission from the loop core, the position of which was determined at 171 Å.
Fig. 7. Time series of the unsigned percentage variations of EM, T, and W DEM averaged over the oscillation period, and then over all events. The percentage change is with respect to the starting value, at −1 p osc (before the initial perturbation). Zero represents the first oscillation period immediately following the initial perturbation of the loop (t 0 ). The time series extends until 10 p osc after t 0 .
Two case studies were presented in Section 3. For both we observed changes in the loop parameters before the time interval of interest as well as during it. The first example was dominated by additional plasma from the eruption seen at 94 and 131 Å, whereas the second did not show any sharp changes due to the eruption. These two situations were present in the 15 events we analysed. We see evidence for periodic modulation of the parameters during the oscillation in some cases, and further in-depth study may help characterise to what extent this is due to the variation of the column depth, background structures, or other effects (e.g. Cooper et al. 2003; Yuan & Van Doorsselaere 2016a,b). Loop width and intensity variations associated with the oscillation have been noted in other observational studies (e.g. Aschwanden & Schrijver 2011; Wang et al. 2012), and a relation to density variations is postulated. In the former observation, it was noted that the measured intensity modulation would require an LOS angle change, which is incompatible with the observed oscillation, but this does not rule out the influence of other structures along the LOS. We did not verify this here, and it requires a careful event-by-event analysis.
There are no significant shifts in the mean values of the emission measure, temperature, or DEM width over P1-P3; additionally, there are no clear trends in the signed percentage changes to positive or negative variations, except for the emission measure. On average, we detect ≈ 150%, 15%, and 30% unsigned variations from P1 to P2 in EM, T, and W DEM , respectively. This does not significantly change for P3. Based on this, we conclude that the main detected changes in emission measure (and density by proxy) and temperature occur following the initial perturbation. Shorter timescale variations were analysed by averaging the time series over the oscillation period for each event. It is found that the parameters undergo large average changes during the eruption, emphasising the above results and further localising the main variations to the initial perturbation. However, the mean unsigned variation of the emission measure shows a peak after a few oscillation cycles as a result of the cases where additional plasma from the eruption, or neighbouring loops, passes the observational slit. This is linked to the intensity peaks at 94 and 131 Å shown in Fig. 4, which appear to be related to multithermal plasma from the eruption and flare in the vicinity of the TD map slit. This is apparent for several of the studied events and may therefore support this conclusion. This may still be related to a change in the background conditions, and may be important for the analysis of the loops and their oscillations.
Aschwanden (2019) pointed out that coronal loops at temperatures of 1-2 MK are expected to undergo efficient radiative cooling. Consequently, the observed lifetime of a loop seen at 171 or 193 Å should be 10-20 min in the absence of heating, but the apparently monolithic loop threads observed here generally last one to two hours, although they do visibly evolve in this time (see Appendix A). Further analysis is required to determine to what extent the detected evolution is due to general thermal cycles of the loop bundles (as mentioned in the introduction) or to thermodynamic changes in the active region due to the eruption.
Evidence for KHI?
One motivation for this work was to make a first step in searching for evidence of KHI occurring for oscillating coronal loops. Signatures such as a varying intensity ratio between hotter and cooler channels, the appearance of substructure, and a broadening of the loops' DEM have been predicted (Antolin et al. 2017; Goddard et al. 2018; Van Doorsselaere et al. 2018).
The mean value of W DEM is measured to increase in P2 and P3, but the values are all within the error. In Table 2 and Fig. 7 we show that the emission measure, temperature, and DEM width do vary during the oscillation, but the case studies showed that this is likely part of a longer timescale evolution. Sharper changes also occur at and during the initial displacement of the loop. Terradas et al. (2018) showed that for loops with large inhomogeneous layers, which seem to be more common, the effect of twist may be important in suppressing the development of shear instabilities at the loop boundary. An initial state with a large inhomogeneous layer also delays the onset of KHI, and these effects may help explain the lack of hard observational evidence found here. However, we emphasise that we only analysed the centre of loops as seen at 171 Å, and might therefore miss predicted changes at the loop edges or trends in loops that are better seen at other AIA wavelengths. Moreover, the TD maps in Figs. A.1 and A.2 show the generally large variability of the loop structure and intensity over the timescales of the oscillation. This should be the subject of a more detailed study to determine whether it results from non-linear processes.
Implications for seismology
Multiple theoretical studies have analysed the effect of the varying loop structure on the oscillations they exhibit. If the loops vary strongly during the oscillation, for example in density, then it is clear that this will affect the analysis of the oscillation and the seismology, which assume that the loops are in a steady state. Making a first attempt at quantifying the evolution of oscillating loops observationally was another aim of this study.
Details of the detected variation have been discussed extensively above. To study the oscillations, the variation of the loop after the initial perturbation is most important. During a time period following the eruption that exceeds ten oscillation periods, we find that the EM, T, and W DEM typically vary by about 100%, 10%, and 35%, respectively, compared to the values prior to the perturbation. However, this may still be strongly influenced by the limited background subtraction.
Variation of the loop interior density would alter the associated Alfvén speed and therefore the kink speed and observed period of oscillation. This potential source of uncertainty would already be accounted for by the error on the estimated period, which should reflect its variation, and therefore by the error on any seismologically determined parameters. However, this ignores the interplay between density, temperature, and magnetic field in maintaining pressure balance across the loop. Additionally, the transverse structure of the loop may evolve, which determines the damping behaviour through resonant absorption. Here, we note that the measured loop properties evolve on timescales that are important for the oscillations, and this should be the subject of further investigation.
Limitations
Our preliminary analysis has various limitations that may need to be addressed in further work. For example, we did not analyse the evolution of the background plasma, although the density contrast is also important in the context of waves and seismology. We also did not attempt to model the transverse density profiles of the loops, as performed in Pascoe et al. (2017) and Goddard et al. (2017, 2018). This would require careful treatment of the background in all wavelengths and introduce the further complexity of applying it to all structures over time. It may, however, be a natural extension of this work in the future.
Our results can clearly be heavily influenced by the multiple overlapping structures with different temperatures (see Fig. 3 for an example). These structures often do not appear to be part of the same monolithic loop, but rather separate structures at different temperatures. Active regions appear more populated at higher temperatures, and this may mask the actual higher-temperature component of the loop of interest (Schmelz et al. 2014, 2016). Recently, efforts have increased to improve DEM analysis codes (e.g. Morgan & Pickering 2019) and forward-modelling approaches (e.g. Pascoe et al. 2019). These techniques should be considered for future studies, with multi-wavelength analysis at multiple positions along the oscillating loop.
Conclusions
Following a preliminary analysis, we find that the emission measure, temperature, and DEM distribution width of coronal loops undergo significant variation on timescales relevant for the study of their oscillation. This is expected because of the variability seen in the 171 Å TD maps. There is no clear trend towards increases or decreases, however, and so the mean values of the distributions do not change significantly. The emission measure seems to be the most sensitive parameter to the plasma from the eruption and often shows significant variation. Furthermore, we have shown that most of the variation that is not due to eruptive plasma passing through the slit occurs at the time of the initial perturbation of the loop, likely related to the change in the column depth, the background structures, or genuine perturbation of the thermodynamic equilibrium. Through this publication we indicate a sample of 15 high-quality oscillation events and their characterising parameters for further analysis by the community. | 8,562.4 | 2020-04-30T00:00:00.000 | [
"Physics",
"Environmental Science",
"Geology"
] |
CNN–Aided Optical Fiber Distributed Acoustic Sensing for Early Detection of Red Palm Weevil: A Field Experiment
Red palm weevil (RPW) is a harmful pest that destroys many date, coconut, and oil palm plantations worldwide. It is not difficult to apply curative methods to trees infested with RPW; however, the early detection of RPW remains a major challenge, especially on large farms. In a controlled environment and an outdoor farm, we report on the integration of optical fiber distributed acoustic sensing (DAS) and machine learning (ML) for the early detection of true weevil larvae less than three weeks old. Specifically, temporal and spectral data recorded with the DAS system and processed by applying a 100–800 Hz filter are used to train convolutional neural network (CNN) models, which distinguish between “infested” and “healthy” signals with a classification accuracy of ∼97%. In addition, a strict ML-based classification approach is introduced to improve the false alarm performance metric of the system by ∼20%. In a controlled environment experiment, we find the highest infestation alarm counts of infested and healthy trees to be 1131 and 22, respectively, highlighting our system’s ability to distinguish between infested and healthy trees. On an outdoor farm, in contrast, the acoustic noise produced by wind is a major source of false alarm generation in our system. The best performance of our sensor is obtained when wind speeds are less than 9 mph. In a representative outdoor experiment with wind speeds below 9 mph, the highest infestation alarm counts of infested and healthy trees are recorded to be 1622 and 94, respectively.
Introduction
The red palm weevil (RPW) Rhynchophorus ferrugineus (Olivier) is one of the world's major invasive pest species and attacks date, coconut, ornamental, and oil palms in a variety of agricultural ecosystems worldwide [1,2]. In the past four decades, the RPW has spread rapidly and has been detected in more than 60 countries in the Mediterranean, North Africa, the Middle East, and parts of the Caribbean and Central America [1,3]. This plague has a significant social and economic impact on the date palm industry and the livelihoods of farmers in the affected areas [4,5]. The RPW causes economic losses estimated at millions of USD annually, whether through lost production or pest control costs. In Italy, Spain, and France, for example, the cost of RPW control and the associated losses are expected to reach about $235 million by 2023 unless a strict containment program is implemented [1].
Treatment of RPW-infested trees by chemical injection [6], for example, is a straightforward and effective method; however, the detection of the RPW threat at an early stage is challenging. Since the RPW larvae feed internally in tree trunks, they are difficult to detect in palm groves before the tree shows visible signs of distress in a well-advanced infestation stage, when the tree is difficult to save by treatment [7]. In the literature, sniffing dogs [8], an electronic nose [9], X-ray-based tomography [10], and thermal imaging [11] show promising results for the early detection of RPW; however, they lack feasibility on large farms due to their slow scanning processes. For large-scale implementation, in contrast, the most promising early detection methods rely on acoustic sensors that identify the gnawing sounds of RPW larvae while they are chewing on the core of a palm trunk [12–14]. Current acoustic detection methods implant acoustic probes into individual tree trunks and construct a wireless network to communicate with the sensors [13]. Existing acoustic detection methods suffer from the following drawbacks: (1) assigning an acoustic probe to each tree is not cost-effective, especially for a large farm with hundreds of trees; (2) the detection provides point sensing at the location where the acoustic probe is inserted, so the sensor cannot monitor the entire tree trunk with the same sensitivity; and (3) acoustic probes are invasive and may damage trees or create nests for insects.
For the purpose of early detection of RPW, we recently introduced the use of an optical fiber distributed acoustic sensor (DAS), designed using the phase-sensitive optical time-domain reflectometer (Φ-OTDR) [12,15,16]. The original approach is described in [12], where, starting at a DAS interrogation unit, a single optical fiber cable is extended and wound non-invasively around tree trunks, making it possible to monitor a vast farm in a short time. Compared to the point sensing offered by acoustic probes, the optical fiber DAS can provide distributed monitoring of many trees and also along the trunk of each tree. However, in [12], the distinction between healthy and infested trees is based on a simple signal processing method (signal-to-noise ratio (SNR) measurement), which is difficult to rely on in an outdoor farm with different noise sources. Thus, in [15], we then presented the use of neural network-based machine learning (ML) algorithms as powerful tools for classifying healthy and infested trees using the data recorded by an optical fiber DAS. However, the latter work was carried out in a laboratory environment using an artificial sound of RPW larvae produced by a loudspeaker implanted within a tree. Finally, in [16], we extended our aforementioned work to use the ML-assisted optical fiber DAS to detect true weevil larvae in a well-controlled environment.
Here, we substantially extend our aforementioned work to use a convolutional neural network (CNN)-aided optical fiber DAS to recognize healthy and truly RPW-infested trees in an outdoor farm. The overall sensing approach is presented in Figure 1, where the optical fiber DAS unit records and processes acoustic signals from individual trees on the farm. Then, the processed data are passed to the trained CNN model that distinguishes healthy and infested trees. Training, validation, and testing of the CNN model are performed using temporal/spectral "infested" acoustic signals (from trees infested with 2-3-week-old RPW larvae) and "healthy" signals (from healthy trees placed in calm or noisy environments). Additionally, we discuss the limitations of using the designed sensor outdoors. To the best of our knowledge, no such deployment of ML-assisted optical fiber DAS for RPW detection in an outdoor farm has been previously conducted. Integrating ML with optical fiber DAS to detect the true sound of RPW larvae, especially in outdoor farms, would be very useful for controlling the spread of RPW infestation, and this work adds an important step toward designing a practical RPW detection sensor.
Experimental Setup
The Φ-OTDR-based optical fiber DAS used for the detection of RPW is schematically shown in Figure 2a [17], where a narrow linewidth laser produces continuous-wave (CW) light with a 1550-nm wavelength, a 40-mW optical power, and a 100-Hz linewidth. Using an acousto-optic modulator (AOM), the CW light is modulated into optical pulses of a 5-kHz repetition rate and a 50-ns width (corresponding to a ∼5-m spatial resolution). Next, the optical pulses are amplified with an erbium-doped fiber amplifier (EDFA) and then injected through a circulator into a standard single-mode fiber (SMF) of a ∼1-km length. The SMF is extended throughout the farm, and we loop a ∼5-m fiber section around each tree trunk. We further add a layer of plastic wrap over the fiber section to reinforce the fiber attachment to the tree and to mitigate the impact of the environmental acoustic noise. The backscattered Rayleigh signal from the SMF is directed via the circulator toward another EDFA for power amplification, and the amplified spontaneous emission (ASE) noise of the EDFA is discarded using a fiber Bragg grating (FBG). Finally, the filtered Rayleigh signal is detected by a photodetector (PD) and sampled by a digitizer. The design of the optical fiber DAS system used here is conventional and was initially described in [18,19]. However, the combination of the DAS system with ML for the early detection of RPW outdoors is new and significantly beneficial. Figure 2b shows an example of a Rayleigh trace recorded along the ∼1-km SMF. The high-power signal found at the beginning of the SMF is common and is caused by the Fresnel reflection from the front facet of the SMF. In ideal scenarios when the refractive index is unperturbed along the optical fiber, the subsequent temporal Rayleigh traces along the fiber should be identical [17,20]. Thus, the differential signal between the subsequent temporal Rayleigh traces and an initial reference one should ideally be zero along the entire fiber. When weevil larvae are chewing on a tree trunk, their eating sound perturbs the refractive index of the SMF, which alters the Rayleigh intensity only at the site of the infested tree. Applying the normalized differential method [21] and the fast Fourier transform (FFT) to the subsequent Rayleigh traces, the temporal and spectral acoustic signals along the optical fiber can be calculated, respectively.
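As a rough illustration of this processing chain (ours, not the authors' implementation; the exact normalized differential method is given in [21]), the sketch below differences Rayleigh traces against a reference trace, normalizes by the reference intensity, and applies the FFT along the pulse axis:

```python
import numpy as np

def acoustic_signals(rayleigh_traces, reference, fs=5000):
    """Recover temporal and spectral acoustic signals from Rayleigh traces.

    rayleigh_traces: 2D array (n_pulses, n_spatial_points) of trace intensities
    reference:       1D array (n_spatial_points,), an initial reference trace
    fs:              pulse repetition rate in Hz (5 kHz in this work)
    """
    # Difference against the reference trace and normalize by the reference
    # intensity, so quiet fiber sections stay near zero.
    diff = (rayleigh_traces - reference) / (reference + 1e-12)

    # Temporal acoustic signal: one column per spatial point along the fiber.
    temporal = diff

    # Spectral acoustic signal: FFT magnitude along the time (pulse) axis.
    spectrum = np.abs(np.fft.rfft(temporal, axis=0))
    freqs = np.fft.rfftfreq(temporal.shape[0], d=1.0 / fs)
    return temporal, spectrum, freqs
```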
Classifying "Infested" and "Healthy" Acoustic Signals Using CNNs
In general, neural networks can provide high efficiency in image classification [22]. Recently, additional advanced methods such as integrating principal component analysis (PCA) and local binary pattern (LBP) [23], and mathematical morphology spectrum entropy [24] are used to improve the accuracy and generalization ability of hyperspectral image classification and signal feature extraction, respectively. It was found that CNN architectures can handle a large amount of data, similar to that produced by the optical fiber DAS, and at the same time can reveal patterns associated with the larvae eating sound [15]. In this section, we compare the efficiencies of classifying "infested" and "healthy" acoustic signals when using the DAS temporal and spectral data as separate inputs to CNN architectures. To reduce the sensor false alarm rate, in addition, we present an approach for integrating the classification results generated when using the temporal and spectral data.
In terms of data organization and labeling for the CNN architectures, the spatial sampling of the digitizer used is ∼0.5 m and we wind a ∼5-m fiber section around each tree trunk; thus, the fiber around each tree trunk is represented by 10 spatial points. For each spatial point on the tree trunk, a digitizer reading lasts for a 100-ms period, which is 500 temporal measurements because the pulse repetition rate is 5 kHz. Since CNNs have been proven to be highly effective in classifying images [22], we organize the temporal data into a 2D matrix (10 spatial points × 500 temporal measurements). Similarly, the spectral data are organized as a (10 spatial points × 250 spectral components) 2D matrix, obtained by applying the FFT to the temporal data of each spatial point.
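To make the data organization concrete, the following minimal sketch (ours, not the authors' code) shapes one 100-ms DAS reading of a tree into the temporal and spectral CNN input matrices; using the FFT magnitude and dropping the DC bin to obtain exactly 250 components are assumptions on our part.

```python
import numpy as np

def build_examples(temporal_block):
    """Shape one 100-ms DAS reading of a tree into CNN input matrices.

    temporal_block: array of shape (10, 500) -- 10 spatial points on the
    trunk x 500 samples (100 ms at a 5-kHz pulse repetition rate).
    Returns the temporal image (10 x 500) and the spectral image (10 x 250).
    """
    assert temporal_block.shape == (10, 500)
    temporal_image = temporal_block
    # FFT of each spatial point's 500-sample window; keep 250 magnitude
    # components (dropping the DC bin is our assumption).
    spectrum = np.abs(np.fft.rfft(temporal_block, axis=1))  # shape (10, 251)
    spectral_image = spectrum[:, 1:251]                      # shape (10, 250)
    return temporal_image, spectral_image
```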
During the CNN training process, we rely on supervised learning such that the data are labeled based on the tree condition (infested or healthy) and the SNR value of the temporal acoustic signal at the location of the tree. The "infested" data are recorded from six artificially infested trees with weevil larvae less than three weeks old (Figure 3a), which is considered to be an early stage of infestation [12]. A detailed description of the artificial infestation process and age control of weevil larvae is provided in [12]. To ensure that the recorded acoustic "infested" signals are caused by the larvae, we place the artificially infested trees in a well-controlled environment so that the trees are not exposed to major acoustic noise such as that produced by outdoor wind [15]. Under these conditions for the infested trees, if the SNR is greater than 2 dB (the minimum acceptable SNR for optical fiber DAS [21]), we label and record the signal as "infested". On the other hand, the "healthy" data are collected from 10 healthy trees, of which six are on an outdoor farm that includes typical sources of acoustic noise produced by wind, birds, humans, etc., while the other four are in the above-mentioned controlled environment. We divide the "healthy" data into "calm" and "noisy" signals, where the SNR is <2 dB and >2 dB, respectively.
In total for the CNN architecture associated with the temporal/spectral data, we record 18,000 examples of the "infested" signals and another 18,000 examples (9000 "calm" and 9000 "noisy") of the "healthy" signals. To evaluate the performance of the CNN architectures, the recorded temporal/spectral examples are split as 60% (21,600 examples) training, 20% (7200 examples) validation, and 20% (7200 examples) testing datasets. All of the examples are processed by applying a [100-800 Hz] band-pass filter. This filter mitigates the environmental acoustic noise that typically has low frequencies, less than 100 Hz, and discards the high-frequency (larger than 800 Hz) noise produced by the electronic/optical components in the DAS system, without affecting the dominant weevil larvae acoustic frequencies [12,15]. Figure 3b,c shows representative examples of the input images for the CNN models when using the "infested", "calm", and "noisy" temporal data and their corresponding spectral images, respectively.
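For illustration, a band-pass pre-filter of this kind could be implemented as follows; the filter family (Butterworth), the order, and the use of zero-phase filtering are our assumptions, since the paper does not specify the filter design:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_100_800(x, fs=5000, order=4):
    """[100-800 Hz] band-pass pre-filter applied to each example.
    Filter family, order, and zero-phase filtering are assumptions."""
    nyq = 0.5 * fs
    b, a = butter(order, [100.0 / nyq, 800.0 / nyq], btype="band")
    return filtfilt(b, a, x, axis=-1)  # filter along the time axis
```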
Figure 4a shows the architecture of the CNN model used to handle the temporal (spectral) input data. The CNN architecture includes an input layer, two pairs of convolutional and max pooling layers, a flattened layer, a fully-connected layer, and an output layer. The first convolutional layer has the ReLU activation function and comprises 16 (32) filters of a 3 × 50 (3 × 5) size and a 1 × 1 (1 × 1) stride, while the first max pooling layer has a 2 × 2 (2 × 2) pool size. The second convolutional layer also has the ReLU activation function and includes 32 (32) filters of a 3 × 3 (3 × 3) size and a 1 × 1 (1 × 1) stride, while the second max pooling layer has a 2 × 2 (2 × 2) pool size. Following the flattened layer, the fully-connected layer has the ReLU activation function and includes 50 (50) nodes. Finally, the output layer of the CNN contains a single node with a sigmoid activation function for binary classification ("infested" or "healthy" signal). The adopted CNN model, shown in Figure 4a, contains many structural configuration features and parameters for the training process. The structure and parameter settings are very flexible, and there is no universal rule across different tasks. We follow standard practice and use the classification accuracy as the primary evaluation criterion, trying different parameters repeatedly until the performance stops improving. For instance, regarding the number of convolutional/pooling stages in the model, we start with one pair of convolutional and max pooling layers and increase it gradually. We find that two pairs clearly provide higher accuracy than one pair, while more pairs add computation time with no further performance gain; thus, we finally use two pairs of convolutional and max pooling layers. Some key parameters, such as the convolution window size and sliding stride, are limited by the input image size and determined by repeated trials. Moreover, we keep the model's default values for parameters that do not affect the performance. Figure 4b,d show the evolution of the training/validation accuracy and loss with the epoch when the temporal and spectral data are used, respectively. At the end of the training cycles, validation accuracy values of 96.97% and 96.78% are obtained for the temporal and spectral data, respectively. Following the training and validation processes, we use the testing datasets to evaluate the performance of the two CNN models. The confusion matrices obtained using the temporal and spectral data are shown in Figure 4c,e, respectively. Classification accuracies of 97.0% and 97.1% are obtained using the CNN models of the temporal and spectral data, respectively. The results of the confusion matrices in this contrast experiment confirm the effectiveness of the CNN models in distinguishing between the "infested" and "healthy" signals.
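The described architecture can be sketched in Keras as follows; the padding mode, optimizer, and loss are assumptions not given in the text, and the `build_cnn` helper is ours:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape, n_filters1, kernel1):
    """Two conv/max-pool pairs, flatten, dense(50), sigmoid output.
    Padding, optimizer, and loss are assumptions (not stated in the paper)."""
    model = keras.Sequential([
        layers.Conv2D(n_filters1, kernel1, strides=(1, 1), activation="relu",
                      input_shape=(*input_shape, 1)),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(32, (3, 3), strides=(1, 1), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(50, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # "infested" vs "healthy"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Temporal model: 10 x 500 inputs; spectral model: 10 x 250 inputs.
temporal_cnn = build_cnn((10, 500), n_filters1=16, kernel1=(3, 50))
spectral_cnn = build_cnn((10, 250), n_filters1=32, kernel1=(3, 5))
```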
The FalseAlarm (false infested, or false positive) rate is a critical performance metric of the CNN models that should be decreased in our experiments, to avoid removing or treating a healthy tree because of sensor false alarms. Given the false positives FP and the true positives TP in a confusion matrix, the FalseAlarm is expressed as FalseAlarm = FP/(TP + FP) [25]. Using the results of the confusion matrices in Figure 4c,e, the FalseAlarm values are 3.64% and 3.56% for the CNN models of the temporal and spectral data, respectively. To reduce the FalseAlarm, we propose integrating the classification results of the two CNN models such that a temporal example and its corresponding spectral example are marked as "infested" if and only if both CNN models produce "infestation" classification results. In other words, if a temporal example is classified as "infested" by the temporal CNN model while its corresponding spectral example is classified as "healthy" by the spectral CNN model, then we classify this overall example as "healthy". By adopting this approach, the sensor FalseAlarm is decreased to 2.82%. Compared with the original 3.64% and 3.56% FalseAlarm values of the temporal and spectral data, the new 2.82% FalseAlarm obtained with this strict decision-making method represents improvements of 22.5% and 20.8%, respectively. Consequently, we apply the introduced merged classification approach to count the infestation alarms when classifying the infested and healthy trees in the subsequent section.
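A minimal sketch of the FalseAlarm metric and the strict merged decision rule follows; the 0.5 probability threshold and the function names are our assumptions:

```python
import numpy as np

def false_alarm_rate(tp, fp):
    """FalseAlarm = FP / (TP + FP), as defined in the text."""
    return fp / (tp + fp)

def merged_decision(p_temporal, p_spectral, threshold=0.5):
    """Strict merged classification: an example is counted as an infestation
    alarm only if BOTH CNN models classify it as "infested"."""
    return np.logical_and(p_temporal >= threshold, p_spectral >= threshold)
```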
Classifying Infested and Healthy Trees Using CNNs
In this section, we use the aforementioned merged classification approach with the trained CNN models to distinguish between infested and healthy trees. In other words, in an experiment involving infested and healthy trees, we record an equal number of data examples from the individual trees and pass them to the CNN models with the merged classification approach to count the number of infestation alarms for each tree. These experiments are carried out with trees located in a controlled environment and on an outdoor farm.
We focus first on the controlled environment experiments, in which the trees are located in a closed room with windows, so the trees may be exposed to mild acoustic noise produced by birds flying around the room and/or humans inside the room. We arrange two different experiments (Exp. 1 and Exp. 2) in the controlled environment such that each experiment involves four trees (two infested and two healthy). Part of the data collected in Exp. 1 are used to train the CNN models; however, the trees and data of Exp. 2 are never included in training the CNN models. This experimental design is important for investigating the generalization of the trained CNN models. The infested trees in Exp. 1 and Exp. 2 include larvae less than three weeks old, which is controlled during the artificial infestation process [12]. The height range of the infested and healthy trees placed in the controlled environment is 1-1.5 m. Figure 5a shows an example of a tree used in the experiments, with the optical fiber wrapped around it and a plastic wrap added as an outer layer over the fiber and the tree. For each tree in Exp. 1 (Exp. 2), we record 129,761 (144,755) temporal images along with an identical number of corresponding spectral images. As Figure 5b,c show, the merged classification approach generalizes and can efficiently distinguish between the infested and healthy trees in the two experiments by providing obvious contrasts in the number of alarms between the infested and healthy trees. Thus, these contrast experiments demonstrate the efficacy of the reported method for identifying the infested and healthy trees in the designed controlled environment.
To investigate the impact of the wind speed on the performance of our sensor, we further carry out Exp. 4 and Exp. 5 at different wind speed ranges. In particular, Exp. 4 is carried out in the "light air" and "light breeze" conditions, where 16,694 data examples per tree are recorded when the wind speed is within a [3,5] mph range. In contrast, we collect 22,763 data examples per tree for Exp. 5 in the "gentle breeze" and "moderate breeze" conditions, where the wind speed is within a [9,14] mph range. In Exp. 4, when the wind speed is relatively low, the system performs outstandingly and perfectly discriminates between the infested and healthy trees (Figure 6d). As the wind speed increases to the range of Exp. 5, the performance of the sensing system degrades. These results are in good agreement with our findings in [15]; wind is the main source of noise in our system, compared to the acoustic noise of birds and humans, which is greatly attenuated when propagating through the air before reaching the fiber [26]. Thus, these contrasting experiments conducted outdoors show that the best performance of our sensor can be obtained when wind speeds are less than 9 mph.
Discussion
In the experiments conducted in the controlled environment and the outdoor farm, one can observe that the infestation alarm count for an infested tree is much lower than the tree's total number of recorded examples. This is attributed to the fact that the larvae may not continuously produce sound and/or their sound is sometimes not strong enough to be picked up by the optical fiber. Thus, it is important to differentiate between classifying "infested" and "healthy" acoustic signals, presented in Section 3, and classifying infested and healthy trees, described in Section 4. For the acoustic signals, it is straightforward to calculate the FalseAlarm values because the data size and class are known. However, for the real scenario of classifying trees, the FalseAlarm cannot be calculated because even an infested tree produces both "infested" and "healthy" signals. Thus, we rely on counting the infestation alarms to distinguish between the healthy and infested trees. Considering the practical application of the sensor, we can select a few healthy trees as references and, based on their maximum infestation false alarm count, set an appropriate infestation alarm count threshold above which a tree is declared infested.
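As a sketch of this thresholding practice (the helper names, the illustrative reference counts, and the 20% safety margin are our assumptions, not values from the paper):

```python
def infestation_threshold(reference_healthy_counts, margin=1.2):
    """Set an alarm-count threshold from a few reference healthy trees.
    The 20% safety margin is an assumption."""
    return max(reference_healthy_counts) * margin

def classify_tree(alarm_count, threshold):
    return "infested" if alarm_count > threshold else "healthy"

# Illustrative counts of the same order as those reported outdoors:
thr = infestation_threshold([30, 60, 94])
print(classify_tree(1622, thr), classify_tree(94, thr))  # infested healthy
```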
We also compare our optical fiber DAS and CNN method with existing RPW detection technologies. Table 1 summarizes the comparison results. We can observe that acoustic detection methods have attracted the most research interest for RPW detection in recent years. Among all methods based on acoustic sensors, our technique based on DAS with a CNN algorithm demonstrates advantages in most aspects of concern, including high detection accuracy, 24/7 unattended monitoring, early detection capability, low cost for large-scale applications, and moderate computational complexity. However, our sensor suffers from performance degradation outdoors at high wind speeds, which will require further investigation and improvement. Thus, we believe that our DAS-based method is worthy of implementation in large-scale practical applications.
To sum up, this work aims to use optical fiber DAS to monitor RPW infestation in outdoor date plantations. The acoustic data recorded by the optical fiber DAS are passed to a trained CNN model to decide whether the acoustic signal is "infested" or "healthy". For each tree, the infestation alarm count produced by the CNN model can be used to decide whether the tree is infested or healthy. The significance of this work is to pave the way for future experiments, as we plan to use our sensor to detect RPW in naturally infested trees. However, this may require challenging arrangements, as it is difficult to find a tree at an early stage of infestation because the tree only shows signs of visual distress at a very advanced stage of infestation. In addition, we will consider improving the overall performance of the CNN model by training it further on diverse data, so as to increase the contrast between the infestation alarm counts of infested and healthy trees.
Table 1 (excerpt): comparison of RPW detection methods (method; signal processing; invasiveness; performance; notes):
- (partial row) detection of a small number of larvae with a simple signal processing method (low contrast between infested and non-infested sound).
- An acoustic device (acoustic probe and headphone set), 2010 [14]; bandpass filtering, amplification; invasive; 97% accuracy; simple and portable hardware (manual identification with four detection positions needed).
- A radiography system (X-ray technology), 2012 [30]; visual detection based on X-ray photos; not invasive; observable larvae on the photos; simple and visual operation (difficult for large-scale applications).
- An acoustic sensor (audio probe), 2013 [13]; filtering; all used methods are non-invasive, with a detailed comparison (accuracy needs to be further improved).
- An IoT system (commercial accelerometer sensor), 2020 [33]; FFT, estimation of power spectral density (PSD), peaks average difference (PAD) analysis; invasive; observable signature of the infestation; simple hardware with a network connection (low sensitivity and contrast).
Conclusions
We report on the integration of optical fiber DAS and CNN for the early detection of RPW in large farms. The temporal and spectral acoustic signals recorded by the optical fiber DAS are used to train CNN models, which classify the "infested" and "healthy" signals with accuracy values of 97.0% and 97.1%, respectively. Merging the classification results of the temporal and spectral CNN models reduces the FalseAlarm performance metric of the sensor by ∼20%. Our sensor succeeds in recognizing the infested and healthy trees in a controlled environment and on an outdoor farm, with high efficiency when outdoor wind speeds are less than 9 mph. The main advantage of the reported sensor, compared to other current technologies, is that it can provide 24/7 monitoring while offering wide coverage of the farming area using only a single optical fiber cable. In contrast, the performance of the reported sensor still requires improvement when working outdoors at high wind speeds. | 5,866 | 2022-08-29T00:00:00.000 | [
"Computer Science"
] |
A Condition Monitoring Strategy of Looms Based on DSmT Theory and Genetic Multiobjective Optimization Improves the Rough Set Method
In order to improve intelligent monitoring and condition analysis of textile machinery, solutions are proposed to address the incomplete monitoring positions, insufficient decision accuracy, and limited uncertainty reasoning and generalization of current loom monitoring systems. Firstly, a model of the weaving machine spindle dynamics was constructed, and the types and sources of monitoring data were specified. Secondly, an improved rough set method is proposed for processing the collected loom attribute data. A genetic multi-objective optimization method based on a genetic algorithm is proposed to address the problem of too many reduction results from the rough set method and to improve the monitoring system's reliability. In order to solve the problem that new objects do not have unique matching rules in the constructed rule base, a fusion with Dezert-Smarandache Theory (DSmT) for uncertainty inference is proposed, which increases the distinguishability of decision support probabilities. Experiments show that the improved rough set method based on DSmT and genetic multi-objective optimization has higher classification accuracy and better recognition than the traditional rough set method for weaving machine condition monitoring.
I. INTRODUCTION
Condition monitoring and identification of textile machinery is the basis for the digitalization and intelligence of textile machinery. With the continuous development of textile machinery functions and structures, traditional equipment condition monitoring technology cannot meet the needs of intelligent weaving equipment. Therefore, studying a dedicated loom condition monitoring method is an inevitable choice and an essential direction for the current development of textile machinery. Equipment maintenance technology has gone through four stages: post-event maintenance, preventive planned maintenance, maintenance considering economic goals, and condition-based maintenance. The innovation of each generation of equipment maintenance technology is, in fact, an innovation of equipment condition monitoring technology. With the development of deep learning and multi-source data fusion, current equipment condition monitoring technology is no longer limited to equipment maintenance. It draws on the hidden state information inside the equipment to realize intelligent manufacturing applications such as intelligent fault warning, intelligent efficiency optimization, and CPS model mapping of equipment.
A. DEVELOPMENT STATUS OF LOOM MONITORING SYSTEM
DORNIER's loom control system adopts industrial interconnection technology, which can precisely control and monitor the critical quality parameters of the loom. The Picanol BlueBox, an electronic platform for looms from Picanol, Belgium, integrates on-board diagnostic systems, intelligent production efficiency optimization systems, user interfaces, device browsers, and remote fault diagnosis systems. It better realizes the loom's innovative management and online fault diagnosis [1]. Vladimir et al. built a fabric visual inspection system with a web camera and a microcomputer and designed a feasible scheme for automatic online monitoring of fabric quality [2]. P. P. J designed a shuttle loom control module with a peripheral interface and microcontroller, which can monitor and measure errors in warp and weft [3]. Dong and Shi adopted the redundant structure of dual-bus and dual-region processors to realize the self-diagnosis of the loom system, the automatic loading of working parameters, and the automatic shielding and recovery of loom interface faults. This system has high reliability and maintainability, but its degree of intelligence is insufficient [4]. Sun et al. used a fault tree to judge the real-time data, which improved the data transmission and processing performance of the real-time information detection system of the loom [5]. The above works have designed control and monitoring systems for the loom, but these systems are insufficiently intelligent. These studies do not address data fusion and feature extraction of loom attributes, loom status recognition, or loom data sharing. In order to realize the condition monitoring of the loom, it is necessary to analyze the entire mechanical structure of the loom, establish a dynamic model of the core components of the loom, and determine the data source and data type of the loom monitoring data according to the model. It is also necessary to analyze the large amount of collected data by combining intelligent condition monitoring methods and fault diagnosis methods to support the system in making correct decisions.
B. DEVELOPMENT STATUS OF CONDITION MONITORING AND FAULT DETECTION TECHNOLOGY
A modern condition monitoring system typically includes data acquisition and analysis, diagnosis and output, and data transmission and communication [6]-[10]. Existing condition monitoring systems typically replace single-sensor signal monitoring with multisensor data fusion monitoring. Caporuscio et al. analyzed troubleshooting solutions for many heterogeneous connected devices and summarized the evaluation criteria for status detection and fault diagnosis systems from different perspectives [11]. Reference [12] used multiple sensor signals to construct a health monitoring and fault diagnosis system for a hydromechanical transmission. Experiments proved that the condition monitoring method based on multisensor data fusion has lower hysteresis and more accurate condition recognition than the traditional ferrography analysis and vibration monitoring methods. In power systems, condition monitoring technology has matured in many applications, and there is a separate online condition monitoring method for each device in the system [13]-[15]. It can be seen that condition monitoring technology is highly application-specific, so the condition monitoring of different manufacturing equipment needs to be studied in a targeted manner; moreover, the monitoring points and data volume of the condition monitoring task are massive, so a more efficient data processing method is needed, and the requirements for data sets and data sharing are higher [16], [17].
In the field of condition monitoring systems, the research on algorithms for mining deep hidden information combined with deep learning methods has become a hot topic. Reference [18] proposed using the fuzzy threshold method to classify the state data of wind turbines and then using the fuzzy neural network (FNN) and fast Fourier transform to evaluate and make decisions about monitoring data. Reference [19] proposed using likelihood learning to process the warpage feature of the wind turbine gearbox vibration signal. These cases show that algorithms such as deep learning can effectively mine deeply hidden information in state data but are strongly dependent on the dataset.
How to quickly obtain high-quality condition monitoring data sets and improve the generalization performance of control decisions is a practical problem that needs to be faced. Chen constructed a wavelet neural network algorithm based on the niche genetic algorithm for fault diagnosis of rapier loom bearings [20]. Zhang et al. proposed a grid fault localization and fault type identification method that combines Variational Mode Decomposition (VMD) and Convolutional Neural Networks (CNN) [21]. Zhang et al. proposed using a multi-group quantum genetic algorithm (MQGA) to optimize the combination of quality factor parameters for fault detection of aero-engine bearings [22]. Zhang et al. proposed using a deep convolution generative adversarial network (DCGAN) to generate virtual fault data and expand the training samples, and then using a residual connected convolutional neural network (RCCNN) model to extract features from and classify stator current data to achieve fault diagnosis [23]. The above works have proposed different fault diagnosis and condition monitoring methods. However, these works handle uncertain information from complex data sources only moderately well. They are not well suited to the loom system, in which the data of each mechanism are relatively independent and a consistent interpretation and description of the equipment status cannot be obtained. In order to improve the processing of multi-source uncertain information, it is necessary to introduce multisource data fusion techniques, such as rough set theory, grey system theory, Bayesian reasoning, etc.
Rough set theory (RST) has been successfully applied to machine learning, decision analysis, process control, approximate reasoning, pattern recognition, data mining and other intelligent information processing fields. Zhang et al. discussed the methods and applications of multisource information fusion based on rough set theory, reporting that rough sets are suitable for classifying noisy, inaccurate, or incomplete data [24]. RST does not rely on prior information. It can make full use of available information to identify basic knowledge and rules of target problems from sample data to keep the classification ability of the information system unchanged [25]. Muralidharan et al. proposed using a discrete wavelet transform (DWT) to calculate wavelet features from vibration signals and then using rough set generation rules to achieve fault diagnosis of single-body centrifugal pumps [26]. Wang et al. proposed a fault diagnosis scheme combining rough set theory and fuzzy logic [27].
These examples show that using the rough set method to process data can extract effective decision-making rules from data with complex sources and ambiguous relationships. However, the rules extracted by the simple rough set method may fail to match some new objects. Therefore, it is necessary to integrate other suitable algorithms to improve the accuracy and adaptability of the constructed decision rule base. The steps of the rough set method include attribute reduction, rule extraction, and rule matching. Some scholars propose integrating other algorithms to improve the results of these steps and thereby improve decision-making with the rough set method. For example, Suo et al. proposed using a data-driven loss function matrix to design a neighbourhood decision-theoretic rough set model to improve the results of attribute reduction [28]. Rajeswari et al. proposed using a genetic algorithm (GA) and a rough set-based method to select the best input features to improve the reduction efficiency [29]. Hu et al. introduce different weights into neighbourhood relations and construct a Weighted Neighborhood Rough Set (WNRS) model based on weighted neighbourhood relations. The experimental results show that WNRS has higher classification accuracy and compression ratio [30]. Xie et al. propose a new weighted neighbourhood probabilistic rough set model based on data binning [31]. Although the above work improves the rough set method, it is not entirely suitable for processing loom attribute data. The state data of the loom are continuous in the time domain and cannot be processed directly by the rough set method. In addition, the above work mainly improves the attribute reduction step of the rough set method, while the rule extraction and rule matching steps are less improved. When the rough set method processes the loom state data, there are many reduction results, and a new object may not match a unique rule in the rule base. In order to solve these two problems, this paper proposes to use the genetic algorithm to improve the attribute reduction step of the rough set method, so as to improve the classification accuracy of decision-making, and to use Dezert-Smarandache Theory (DSmT) for uncertainty reasoning, so as to increase the discrimination degree of the decision support probability.
To sum up, the main contributions of this paper are as follows.
1) According to the analysis of the loom mechanical equipment and the weaving principle, the dynamic model of the main shaft of the rapier loom is established, and the monitoring data collection positions and types for loom condition monitoring are clarified through the model. In addition, this paper proposes that the loom state analysis can be based on the warp tension of the loom as well as the commonly used vibration data.
2) Aiming at the problem of many reduction results when the rough set method is directly applied to loom attribute data processing, this paper proposes a genetic multiobjective optimization method fused with the genetic algorithm. The method uses the genetic algorithm to improve the reduction step in the rough set method, so as to optimize the reduction result and improve the classification accuracy of the system decision.
3) Aiming at the problem of new objects matching multiple rules in the rough set method, this paper proposes to use DSmT theory combined with rough set theory to carry out uncertainty reasoning and increase the distinguishability of the decision support probability.
In order to verify the feasibility and effect of the method proposed in this paper, step-by-step comparison experiments and a system experiment on an actual rapier loom are carried out. The resulting loom condition monitoring mapping network achieves a better identification effect and improves the performance of the loom condition monitoring system.
II. DYNAMIC MODEL OF LOOM SPINDLE
In this section, the structure and weaving process of the loom are analyzed, the dynamic model of the loom is constructed, and the monitoring data source of the condition monitoring system is determined accordingly.
A. WEAVING PRINCIPLE OF LOOM
A rapier loom can be divided into five significant mechanisms: opening, weft insertion, beating-up, let-off and coiling. The mechanical structure of the rapier loom is shown in Fig. 1. Different angles of the weaving machine spindle correspond to different weaving process processes, which is the core of the weaving machine operation. Fig. 2 shows a table of the spindle position angles of the weaving machine when performing different processes. The mechanical transmission relationship of the rapier loom spindle is shown in Fig. 3. The primary motor transmits the power to the weaving machine spindle through the belt pulley, brake clutch system, and reduction gear to drive the weaving machine spindle movement. The primary shaft drives the wefting cam to complete the wefting process.
B. SPINDLE DYNAMICS MODEL
The main shaft of the loom is formed by a series of mass elements, beam elements, and support elements connected one after another in the form of a chain-like structure. It is convenient and practical to use the transfer matrix method to analyze and calculate the vibration system of this form [32]. The dynamic model diagram of the spindle of the rapier loom is shown in Fig. 4. As shown in Fig. 4, the loom spindle system is decomposed into several relatively simple subsystems with a single degree of freedom. The state vectors represent the generalized forces and generalized displacements on the connected end faces of each subsystem, and the transfer matrix is used to represent the relationship from the input end of the system to the output end. The primary shaft system of the loom is calculated according to the concentrated mass point, beam and support point separately.
When the system performs simple harmonic motion, since m i is a rigid body, its left and right lateral displacements y are equal, and the rotation angles θ on its left and right sides are also equal. Because m i is a large-volume part whose rotational inertia cannot be neglected, the bending moment transfer across the subsystem includes the rotary-inertia term, and the shear force transfer between the left and right sides includes the inertia force of the mass. In these relations, the R and L superscripts indicate the location of the parameters (i.e., the right or left side of the subsystem), M is the bending moment received by the subsystem, J is its moment of inertia, ω is the angular frequency of the system motion, and Q is the shear force experienced by the subsystem. Collecting these four relations, the state transfer between the left and right sides of m i can be written in matrix form (Eq. (5)) and abbreviated using [P] i , the point transfer matrix of the concentrated mass point m i .
Beams are denoted by l i . Separating l i from the system and neglecting the mass of the beam itself, the bending moment transfer along the beam follows from a balance of forces (Eq. (7)), and the shear force is transferred unchanged along the massless segment (Eq. (8)). The force analysis of beam l i takes the left end point of the beam as the coordinate origin, with the radial direction of the beam as the x direction and the tangential direction as the y direction. From the mechanics of materials, where E is the elastic modulus of the beam and I is the moment of inertia of the beam cross-section, together with the geometric relations of the deflection curve, the rotation angle and lateral displacement transfer relationships between the left and right ends of the beam are obtained (Eqs. (11) and (12)). Expressing the left-end rotation angle and lateral displacement of the beam in terms of the state at the right end point of the adjacent subsystem and substituting into Eqs. (11) and (12) gives Eqs. (13) and (14). Equations (7), (8), (13), and (14) can then be written in matrix form (Eq. (15)) and abbreviated using [F] i , the field transfer matrix of the massless beam segment l i . Using Eq. (5) for ideal rigid points and Eq. (15) for ideal massless beams, the state vector relationship of each point from the left end to the right end of the system can be established.
Assuming that the equivalent stiffness of the support point m i is k, the equivalent viscous damping coefficient is ζ, and the concentrated mass of the support is m i , the displacement, bending moment, and shear force relationships between the left and right sides of the support point can be written down in the same way and collected into a matrix transfer relationship of the state vectors across the support, which is abbreviated using [T] i , the transfer matrix of the state vectors on both sides of the support point.
According to the structure of the main shaft of the rapier loom and the working principle of the loom, using the parameters listed in Table 1, the dynamic model of the main shaft of the rapier loom is established.
Thus, the input from one end of the primary shaft can be transmitted to any position of the primary shaft, and the natural frequencies and principal vibration shapes of the main shaft of the loom can be calculated.
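For readers who want to reproduce such a calculation numerically, the sketch below assembles standard point and field transfer matrices for a lumped-mass shaft model. The matrix entries and sign conventions follow common textbook formulations of the transfer matrix method and are assumptions that may differ from the paper's exact equations; support-point stiffness and damping terms are omitted for brevity.

```python
import numpy as np

def point_matrix(m, J, omega):
    """Point transfer matrix of a lumped mass (state vector [y, theta, M, Q]).
    Textbook form; the sign convention is an assumption."""
    P = np.eye(4)
    P[2, 1] = -J * omega**2   # rotary-inertia contribution to the moment
    P[3, 0] = m * omega**2    # inertia-force contribution to the shear
    return P

def field_matrix(l, E, I):
    """Field transfer matrix of a massless elastic beam segment of length l."""
    F = np.eye(4)
    F[0, 1] = l
    F[0, 2] = l**2 / (2 * E * I)
    F[0, 3] = l**3 / (6 * E * I)
    F[1, 2] = l / (E * I)
    F[1, 3] = l**2 / (2 * E * I)
    F[2, 3] = l
    return F

def overall_matrix(elements, omega):
    """Chain point ("mass") and field ("beam") matrices from left to right.
    elements: list like [("mass", (m, J)), ("beam", (l, E, I)), ...]."""
    T = np.eye(4)
    for kind, params in elements:
        T = (point_matrix(*params, omega) if kind == "mass"
             else field_matrix(*params)) @ T
    return T
```

The natural frequencies can then be estimated by scanning ω and locating the values at which the boundary-condition determinant of the overall matrix changes sign.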
III. DATA PROCESSING AND FUSION OF THE STATE DETECTION SYSTEM
Because many of the data collected on the loom are incomplete and uncertain, these data cannot be directly used for decision-making on the loom status [33], [34]. The collected data must be preprocessed before the data are fused to construct a rough set decision table.
A. LOOM STATE FEATURE DESCRIPTION AND DATA PREPROCESSING
The fluctuation characteristics of the warp tension of the loom can describe the working state of the opening mechanism, the beating-up mechanism, and the roll feeding system. The vibration of the loom's mechanical body can systematically reflect the overall operating state of the equipment. Therefore, this system processes the warp tension fluctuation data and the machine body vibration data as source data, from which a decision table is constructed.
1) CHARACTERISTIC DESCRIPTION OF THE WARP TENSION FLUCTUATION OF LOOMS
The warp yarn tension of the loom sampled by the sensor is available only as discrete values y_i = f(t_i) (i = 0, 1, ..., n) at a series of time points t_i in the corresponding time interval. In order to evaluate the fluctuation characteristics of the warp tension, it is necessary to estimate the function value at any time node. This system selects the cubic spline interpolation method to obtain an approximating function for the warp tension fluctuation curve of the loom. Because all the weaving actions of the loom are performed based on the spindle position angle, the spindle position angle is selected as the independent variable of the warp tension fluctuation function of the loom, and its domain is [0°, 360°).
The interpolation curve of the warp tension fluctuation composed of the warp tension of the loom in one cycle is shown in Fig. 5, which indicates that the interpolation effect of 49 interpolation nodes is better than that of 24 interpolation nodes. Also, the cubic spline interpolation curve using 49 interpolation nodes can more accurately describe the measured fluctuation of warp tension on the loom and the nature of the curve. Therefore, in the onboard condition monitoring system of the loom, six attributes of the three extreme point positions of the 49-point interpolation curve, the relative stability interval of the tension, and the mean value of the tension sampling points are selected to describe the warp tension fluctuation of the loom.
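A minimal sketch of this interpolation step with SciPy follows; the periodic boundary condition and the synthetic node values are our assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tension_curve(angles_deg, tension, n_query=360):
    """Interpolate the warp tension over one weaving cycle.
    The periodic boundary condition is assumed, motivated by the
    spindle-angle domain [0, 360) repeating every cycle."""
    cs = CubicSpline(angles_deg, tension, bc_type="periodic")
    query = np.linspace(0.0, 360.0, n_query, endpoint=False)
    return query, cs(query)

# Example with 49 interpolation nodes over one cycle (synthetic values).
nodes = np.linspace(0.0, 360.0, 49)
values = np.sin(np.radians(nodes)) + 0.1 * np.cos(3 * np.radians(nodes))
values[-1] = values[0]          # a periodic spline needs matching endpoints
angles, fitted = tension_curve(nodes, values)
```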
2) CHARACTERISTIC DESCRIPTION OF THE VIBRATION SIGNAL OF THE LOOM SPINDLE
From the analysis of the weaving principle of the loom above, we can see that the main shaft of the loom is the power source of the weaving mechanism, and its vibration can reflect the condition of the entire loom. The vibration of the main shaft of the loom can be obtained by measuring the vibration of the journals at both ends of the main shaft. The sensor used in this paper is an IEPE acceleration sensor. The physical diagram of the journals and the sensors at both ends is shown in Fig. 6. The operating speed of the main shaft is generally below 10 r/s, that is, below 10 Hz; therefore, the fundamental frequency of vibration of the main shaft of the loom is below 10 Hz. The rotational frequency of the spindle is called the spindle fundamental frequency, denoted X. Taking the spindle fundamental frequency X as the unit, the vibration generated by the spindle contains multiple harmonics in addition to the 1X component, and loom faults are often reflected in these harmonics. Below 1X is the sub-low-frequency region, 1X to 10X is the low-frequency region, and above 10X is the high-frequency region. If the amplitude in the sub-low-frequency region is high, the fault types include oil whirl and so on. If the amplitude in the low-frequency region is high, the fault types are unbalance, misalignment, looseness, etc. If the amplitude in the high-frequency region is high, the gears and bearings are often faulty, with wear, cracks, etc. In this paper, the vibration signal of the main shaft is divided into six frequency-band attributes according to amplitude: 0∼0.4X, 0.4∼0.6X, 0.6∼1X, 1X, 2X, and greater than 2X. Since there are two measurement points for the spindle vibration, the left journal and the right journal, there are 12 attributes describing the spindle vibration signal. The vibration attribute table of the main shaft of the loom is shown in Table 2.
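As an illustration (ours, not the authors' code), the band amplitudes could be extracted from an FFT spectrum as follows; the exact handling of the 1X and 2X bins and the use of the peak amplitude per band are assumptions.

```python
import numpy as np

def band_features(spectrum, freqs, X):
    """Summarize a journal vibration spectrum into the six amplitude
    attributes defined above, relative to the spindle fundamental frequency X."""
    bands = [(0.0, 0.4), (0.4, 0.6), (0.6, 1.0)]
    feats = [spectrum[(freqs >= lo * X) & (freqs < hi * X)].max(initial=0.0)
             for lo, hi in bands]
    feats.append(spectrum[np.argmin(np.abs(freqs - X))])      # amplitude at 1X
    feats.append(spectrum[np.argmin(np.abs(freqs - 2 * X))])   # amplitude at 2X
    feats.append(spectrum[freqs > 2 * X].max(initial=0.0))     # above 2X
    return np.array(feats)   # one 6-element feature vector per journal
```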
1) CONSTRUCTION OF THE ROUGH SET DECISION TABLE
Assuming that the loom state feature monitoring system contains five first-generation attribute items, denoted A = {a_1, a_2, a_3, a_4, a_5}, the value-domain set can be denoted V = {V_1, V_2, V_3, V_4, V_5}, where V_i is the set of values of attribute a_i, i = 1, 2, ..., 5. The function f: a_i → V_i represents the relationship between attribute a_i and its values. The existing state instance set U is collected according to the first-generation attribute items. According to rough set theory, the loom state monitoring information system can be expressed as IS = ⟨U, A, V, f⟩.
Considering the identification of the spindle wear degree as an example, we let the attribute a 1 represent the wear degree of the loom spindle, where V 1 = {severe, slight, normal}; the attribute a 2 represents the measured value of the weft yarn tension in a certain weaving cycle, which is divided into five levels as V 2 = {2, 1, 0, −1, −2}; attribute a 3 represents the weaving performance of the loom at a certain stage, and the weaving performance of the loom is divided into four types, where V 3 = {excellent, good, medium, poor}; and attributes a 4 and a 5 are other parameter attributes of the loom. The resulting decision table is shown in Table 3. For the loom parameter monitoring problem, the loom weft tension monitoring decision table is constructed as shown in Table 4.
2) REDUCTION OF ROUGH SET DECISION TABLES
After the rough set decision table is constructed, attribute reduction of the information system can be carried out. The core idea of attribute reduction is to remove redundant attributes and obtain attribute reduction subsets according to attribute dependence or importance, without affecting the classification and decision-making ability of the decision information table. Attribute reduction preserves the classification ability of the information system while simplifying its expression and improving its operational performance. This paper adopts a heuristic attribute reduction algorithm based on attribute importance. The algorithm first finds the core of the decision system, then calculates the importance of each conditional attribute and adds attributes to the attribute set until the set is a reduct. In the information system IS = ⟨U, A, V, f⟩, if the indiscernibility relation induced by attribute set A is equal to that induced by the subset A − {a}, that is, IND(A) = IND(A − {a}), then attribute a is a redundant attribute of A, and the process of eliminating redundant attributes from the attribute set is called knowledge reduction. An information system can have multiple attribute reductions, and the intersection of all reductions is called the core of the information system.
In the onboard condition monitoring system of the loom, some attributes contribute little to the loom state decision and can even lead to wrong decisions; such irrelevant and misleading attributes must be eliminated. In the information system IS = ⟨U, C, D, V, f⟩, the importance of attribute a relative to the decision attribute set D, based on the conditional attribute set B, is measured by the significance SIGr(a, B, D), which describes the contribution of attribute a to decision attribute set D given B: the larger the value, the more important attribute a is to decision-making. The flow of the heuristic attribute reduction algorithm based on attribute importance is shown in Fig. 8. Applying the reduction algorithm to decision Table 3 gives the reduction results: 1) attribute set C a11 = {a 3 , a 4 , a 5 }; 2) attribute set C a12 = {a 2 , a 4 }; 3) attribute set C a13 = {a 2 , a 3 , a 5 }. Similarly, the reduction results of decision Table 4 are: 1) attribute set C a21 = {a 1 , a 3 , a 5 }; 2) attribute set C a22 = {a 1 , a 4 }; 3) attribute set C a23 = {a 3 , a 4 , a 5 }.
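A minimal sketch of the dependency-based significance measure and the greedy, importance-driven reduction is given below; the toy decision table, the attribute indices, and the stopping rule (stop once the reduct reaches the dependency of the full attribute set) are illustrative choices, not the paper's data.

```python
# Minimal sketch of dependency-based attribute significance and greedy reduction.
from itertools import groupby

# Each object: (condition attribute values a1..a4, decision d) -- invented data.
U = [
    ((1, 0, 2, 1), 0),
    ((1, 1, 2, 0), 1),
    ((0, 1, 1, 0), 1),
    ((0, 0, 1, 1), 0),
    ((1, 0, 1, 1), 0),
]

def partition(objs, attrs):
    """Equivalence classes of the indiscernibility relation IND(attrs)."""
    key = lambda o: tuple(o[0][a] for a in attrs)
    objs = sorted(objs, key=key)
    return [list(g) for _, g in groupby(objs, key=key)]

def gamma(objs, attrs):
    """Dependency of the decision on `attrs`: |POS_attrs(D)| / |U|."""
    pos = sum(len(block) for block in partition(objs, attrs)
              if len({d for _, d in block}) == 1)
    return pos / len(objs)

def significance(a, B, objs):
    """SIG(a, B, D) = gamma(B with a) - gamma(B)."""
    return gamma(objs, B | {a}) - gamma(objs, B)

# Greedy forward selection: repeatedly add the most significant attribute until
# the reduct reaches the dependency of the full attribute set.
all_attrs = {0, 1, 2, 3}
reduct = set()
while gamma(U, reduct) < gamma(U, all_attrs):
    best = max(all_attrs - reduct, key=lambda a: significance(a, reduct, U))
    reduct.add(best)
print("reduct:", sorted(reduct), "gamma:", gamma(U, reduct))
```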
C. MACHINE LEARNING GENERALIZATION PERFORMANCE CONTROL
Because an industrial site where data are collected is typically noisy, this decision-making algorithm may lead to overfitting.
In order to ensure more reliable state monitoring results, the reduction results of the rough set method need to be improved. According to the structural risk minimization (SRM) principle, reducing the structural risk of the rough set method requires minimizing the empirical risk and the confidence range at the same time. However, empirical risk minimization and confidence range minimization are often contradictory. Therefore, structural risk control of the rough set method is a typical multiobjective optimization problem with multiple relatively optimal solutions: each solution can only guarantee that one or a few objectives of greatest concern are optimal without degrading the others.
1) EMPIRICAL RISK CONTROL ALGORITHM BASED ON GENETIC MULTIOBJECTIVE OPTIMIZATION
A genetic algorithm is a computational model of an optimization problem that simulates the process of natural evolution. It has been widely used in machine learning, neural networks, optimal control and other fields, and it offers a flexible search process, a wide application range, global optimization ability and an efficient optimization process. In this paper, the genetic multi-objective optimization method is used to optimize the reduction step of the rough set method. The application steps of the genetic algorithm can be summarized as follows:
1) Determination of the initial population: the candidate solutions in the feasible domain are encoded, a subset is randomly selected as the first-generation population, and the fitness of each encoding is calculated.
2) Selection operation: as in natural evolution, the selection mechanism of the genetic algorithm should ensure that individuals with high fitness are more likely to be selected; the probability of an individual being selected is proportional to its fitness.
3) Crossover and mutation operation: the crossover operator randomly exchanges some bits of two individuals of the population to generate the next generation of individuals, and the mutation operator randomly reverses a specific bit in an individual. Crossover and mutation are applied together to avoid prematurely trapping the algorithm in a local optimum.
4) Optimal population output: selection, crossover and mutation are repeated until the termination condition is met, and the final generation of the population is the optimal solution set found by the algorithm.
When applying the genetic algorithm, the selection, inheritance, and mutation operations are standard, but the coding scheme and the fitness evaluation must be designed for each application. When the genetic algorithm is used to evaluate the structural risk of attribute subsets of the decision table, each conditional attribute subset is regarded as a candidate solution and encoded. The coding rule is: if the i-th conditional attribute of the decision table is included in the candidate attribute subset, the i-th bit of the code is 1; otherwise, it is 0. A toy sketch of this encoding and search loop is given below.
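The sketch assumes a five-attribute decision table; the fitness function is a stand-in (it rewards subsets containing two assumed core attributes and penalises subset size) rather than the paper's empirical-risk term, and the population size and generation count are arbitrary.

```python
# Toy sketch of a genetic search over condition-attribute subsets encoded as bits.
import random

N_ATTRS = 5          # number of condition attributes in the decision table
POP_SIZE = 20
GENERATIONS = 50

def fitness(bits):
    # Placeholder: prefer subsets containing attributes 1 and 3, penalise size.
    keeps_core = bits[1] and bits[3]
    return (2.0 if keeps_core else 0.5) - 0.1 * sum(bits)

def roulette(pop, scores):
    # Fitness-proportional (roulette-wheel) selection.
    total = sum(scores)
    r = random.uniform(0.0, total)
    acc = 0.0
    for ind, s in zip(pop, scores):
        acc += s
        if acc >= r:
            return ind
    return pop[-1]

def crossover(a, b):
    cut = random.randrange(1, N_ATTRS)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_ATTRS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scores = [fitness(ind) for ind in pop]
    pop = [mutate(crossover(roulette(pop, scores), roulette(pop, scores)))
           for _ in range(POP_SIZE)]
best = max(pop, key=fitness)
print("best attribute subset (1 = kept):", best)
```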
Fitness describes the individual's contribution to the optimization goal, and a suitable fitness evaluation mechanism is the key to the success of the genetic algorithm. When the genetic algorithm is used to control the structural risk of the rough set method, the empirical risk corresponding to each conditional attribute subset must be evaluated; the algorithm flow for obtaining the fitness of a candidate solution is shown in Fig. 9.
2) PERFORMANCE TEST OF THE GENETIC OPTIMIZATION ROUGH SET METHOD
In order to verify the effectiveness of the empirical risk control algorithm based on multiobjective genetic optimization, 10 UCI datasets were used for testing. Table 5 shows the classification accuracy of the conventional rough set method on each UCI dataset when the threshold is 0.95 and the attribute importance index parameter is 0.5, together with the classification accuracy of the algorithm using the multiobjective genetic optimization empirical risk control method.
The experimental results show that the rough set method optimized by the genetic algorithm more consistently achieves high classification accuracy across the different datasets.
3) EXAMPLE OF ATTRIBUTE REDUCTION BASED ON GENETIC MULTIOBJECTIVE OPTIMIZATION
The reduction with the smallest number of attributes is typically considered the best reduction. However, a decision table does not always have a unique reduction with the smallest number of attributes. In order to choose the best reduction result of the decision table, the empirical risk control algorithm based on multiobjective genetic optimization is used to evaluate the fitness of the reductions of Tables 3 and 4, with the results shown in the following tables. 1) The selection basis for the attribute reduction results of decision Table 3 is shown in Table 6.
2) The selection basis for the attribute reduction results of decision Table 4 is shown in Table 7. After evaluation, the best reduced attribute set of decision Table 3 is C a12 = {a 2 , a 4 }, and the best reduced attribute set of decision Table 4 is C a22 = {a 1 , a 4 }. In addition, if the risk associated with an attribute value is relatively high, the order of the decision attribute rule set can be adjusted according to the dominance order value.
IV. JUDGMENT AND DECISION-MAKING METHOD OF MONITORING SYSTEM STATUS A. EXTRACTION METHOD OF DECISION RULES BASED ON ROUGH SET THEORY
After attribute reduction is completed, decision rules can be constructed by reducing attribute sets and attribute values. After processing by the rough set rule extraction algorithm, the corresponding relationship between the condition attribute and the decision attribute of the information system is shown in Fig. 10.
For the information system IS = ⟨U, C, D, V, f⟩, let the attribute reduction result B be a proper subset of attribute set C, let U/B denote the partition of U induced by B and U/D the partition of U induced by D, and let X ∈ U/B and Y ∈ U/D. The decision rules of the information system can then be expressed as DES(X, B) ⇒ DES(Y, D), where DES(X, B) is the rule antecedent, i.e. the condition part of the rule, and DES(Y, D) is the rule consequent, i.e. the decision part. DES(X, B) is the conjunction of descriptors a = f(x, a) over a ∈ B, which are the essential elements of the class description. Due to data inconsistency and uncertainty, the relationship between rule antecedents and rule consequents may be deterministic or only possible. According to generalization decision theory, the objects with the same generalization decision value in the universe are grouped into the same set K. If the generalization decision of an object subset K satisfies |δB(x)| = 1, then all objects in K can be accurately classified at the knowledge level given by B; if |δB(x)| > 1, they cannot. The generalization decision δB(x) of object x in U with respect to attribute set B is the set of decision attribute values v i taken by the objects indiscernible from x under B. The flow of the rough set minimum rule set extraction algorithm is shown in Fig. 11. In order to obtain the best classification accuracy, a smaller rule set should be preferred when the decision rules are extracted. The following takes the spindle wear identification and the weft tension parameter monitoring as examples to extract the minimum rule set.
1) The rule set extracted from decision Table 3 is shown in Table 8.
2) The rule set extracted from decision Table 4 is shown in Table 9.
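A small sketch of the rule-extraction step described above is shown below: rows of a reduced decision table are grouped by their condition values, and each group yields a rule with a support and a confidence, where a confidence below 1 corresponds to the nondeterministic rules discussed next. The table values and attribute names are invented.

```python
# Small sketch of turning a reduced decision table into "if condition then
# decision" rules with support and confidence; the table values are invented.
from collections import Counter, defaultdict

# Reduced table: rows are (values of the reduct attributes {a2, a4}, decision).
rows = [
    ((1, 0), "normal"),
    ((1, 0), "normal"),
    ((2, 0), "slight"),
    ((2, 1), "severe"),
    ((2, 1), "slight"),   # inconsistent row -> uncertain rule
]

groups = defaultdict(Counter)
for cond, dec in rows:
    groups[cond][dec] += 1

for cond, decisions in groups.items():
    for dec, count in decisions.items():
        support = count / len(rows)
        confidence = count / sum(decisions.values())
        print(f"if (a2, a4) = {cond} then wear = {dec} "
              f"[support {support:.2f}, confidence {confidence:.2f}]")
```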
B. UNCERTAINTY REASONING
When there is a new object for rule matching, three situations may occur: (1) The new object matches only one rule.
(2) The new object matches multiple rules.
(3) The new object does not match any rule. In case (1), the decision attribute can be determined directly, and case (3) can typically be converted into case (1) or (2) using the partial rule matching method. In case (2), the most common method is to score the support of each matching rule for the new object and then select the rule with the largest support to decide the new object's decision attribute.
1) UNCERTAINTY REASONING METHOD BASED ON DSmT THEORY
The new object can infer the decision attribute value by matching the rules in the rule base. However, when the new instance matches a nondeterministic rule with credibility lower than 1, a unique decision attribute value cannot be obtained. In order to deal with the uncertainty of rule matching, this paper incorporates the DSmT uncertainty inference method. In this application, the recognition framework of DSmT is constructed from the rules matched by the new object, U = {θ 1 , θ 2 , . . . , θ n }. The super-power set D^U generated by the union and intersection of elements in the identification frame satisfies the following conditions: 1) φ, θ 1 , θ 2 , . . . , θ n ∈ D^U.
2) If sets A and B belong to the super-power set D^U, then the intersection and union of A and B also belong to D^U; 3) no elements other than those obtained through conditions (1) and (2) belong to D^U. A mapping m: D^U → [0, 1] with m(φ) = 0 whose masses over D^U sum to 1 is called a basic probability assignment, and m(A) represents the exact degree of confidence in proposition A. The trust (belief) function on U is defined as BEL(A) = Σ_{B⊆A, B∈D^U} m(B), which represents the total trust in A. The likelihood (plausibility) function on the recognition frame U is defined as PL(A) = Σ_{B∩A≠φ, B∈D^U} m(B), where PL(A) represents the degree of plausibility of A. When three rules are initially matched, the implementation process of the DSmT-based rule uncertainty inference method is shown in Fig. 12.
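The sketch below combines two basic probability assignments over the two-element frame U = {d1, d2} with the conjunctive (DSm classic) combination rule m12(C) = Σ over A ∩ B = C of m1(A) m2(B); the mass values are invented, and the explicit intersection table only covers this small frame, so it illustrates the fusion step rather than the paper's full two-level source combination.

```python
# Minimal sketch of the conjunctive (DSm classic) combination of two basic
# probability assignments over the frame U = {d1, d2}; mass values are invented.
from collections import defaultdict

# Elements of the super-power set used here, with an explicit intersection table.
INTERSECT = {
    ("d1", "d1"): "d1", ("d2", "d2"): "d2",
    ("d1", "d2"): "d1^d2", ("d2", "d1"): "d1^d2",
    ("d1", "d1|d2"): "d1", ("d1|d2", "d1"): "d1",
    ("d2", "d1|d2"): "d2", ("d1|d2", "d2"): "d2",
    ("d1|d2", "d1|d2"): "d1|d2",
}

def intersect(a, b):
    if "d1^d2" in (a, b):
        return "d1^d2"          # d1^d2 is contained in every other element here
    return INTERSECT[(a, b)]

def combine(m1, m2):
    """m12(C) = sum over A, B with A ∩ B = C of m1(A) * m2(B)."""
    m12 = defaultdict(float)
    for a, wa in m1.items():
        for b, wb in m2.items():
            m12[intersect(a, b)] += wa * wb
    return dict(m12)

# Two evidence sources scoring the decision values of the matched rules.
m_rule_support = {"d1": 0.3, "d2": 0.6, "d1|d2": 0.1}
m_prior        = {"d1": 0.4, "d2": 0.5, "d1|d2": 0.1}
print(combine(m_rule_support, m_prior))
```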
2) EXAMPLE OF UNCERTAINTY REASONING FOR LOOM CONDITION MONITORING
Taking the uncertainty rules 6 and 8 in Table 8 as examples, the decision rule matching reasoning is carried out. 1) We let the new instance be as shown in Table 10.
2) According to the rule matching method, instance x k matches rules 6 and 8 in Table 8, and its corresponding decision attribute may be d 1 = 1 or d 2 = 2; thus, the identification framework for determining the target is U = {d 1 , d 2 }. The super-power set is then D^U = {φ, d 1 ∩ d 2 , d 1 , d 2 , d 1 ∪ d 2 }. In the information table, the proportion of support of each rule, the prior probability of the decision attribute values, and the proportion of the values of attribute a 3 are taken as evidence sources and combined. These calculations indicate that various rule evaluation methods can be integrated through fusion, which markedly increases the discrimination of the decision support probability. If a threshold is introduced, the decision value of the newly added object listed above can be determined to be d 2 .
V. TEST OF THE ON-BOARD CONDITION MONITORING SYSTEM OF LOOMS A. LOOM STATE DATA COLLECTION TEST
The onboard condition monitoring platform of the loom collects data according to the first-generation characteristics and simultaneously displays data with different sampling frequencies. Analysis of the analog data collected from the loom shows that the average spindle speed is 330.01 RPM, which is close to the actual system setting; the maximum deviation is 1.01 RPM, and the relative error is less than 0.31%, which meets the measurement requirements for the spindle speed of the loom. Similarly, the relative errors of the other analog acquisition parameters of the system are calculated and are all below 0.35%.
The set values of the loom parameters and the calculated relative errors are shown in Table 11. According to the attribute data in the table, the sampling results for each attribute item of the monitoring system faithfully reflect the actual state of the loom parameters. The system also realizes real-time synchronization of the loom state characteristic data, providing data support for state reasoning.
B. RECOGNITION RATE TEST OF THE ONBOARD CONDITION MONITORING SYSTEM OF LOOMS
In order to test the effect of the proposed onboard condition monitoring method for looms, taking the identification of main shaft wear as an example, the system was installed on two looms of the same model with different degrees of wear. State data were collected from each loom for 1000 seconds. The resulting 2000 data instances were randomly divided into ten groups of 200 instances each. Table 12 compares the recognition rates of the decision network constructed by the traditional rough set method and the decision network constructed by the improved rough set method.
These spindle wear recognition tests show that the average recognition rate of the loom state mapping network constructed by the traditional rough set method is 99.955%; after the improvement, the average recognition rate reaches 99.998%. Compared with the traditional rough set method, the rough set method improved by genetic multiobjective optimization and DSmT achieves a better recognition effect for loom state decisions.
VI. CONCLUSION
This paper presents a decision method for condition monitoring of high-speed rapier loom systems, mainly improving the attribute reduction step and the decision matching step of the traditional rough set method. A spindle dynamics model is developed for the spindle, the core of the weaving process, and a table of weaving machine spindle parameters is summarised to identify the sources of weaving machine monitoring data. The addition of warp vibration monitoring is proposed to improve the decision accuracy of the monitoring system and compensate for the lack of data sources in conventional weaving machine monitoring. The data are analyzed using the rough set method. For discontinuous data types in the time domain, the data are first pre-processed using interpolation methods, which give a complete picture of the warp tension fluctuations. For the problem of complex loom attribute data and the unclear association between data content and decision making, the rough set method is used for processing. Experiments show that the loom state decision rules can be effectively extracted from the loom attribute data using the rough set method. However, the pure rough set method suffers from weak generalization and from uncertainty when matching new objects against rules. In order to improve the generalization performance of machine learning, this paper proposes a genetic multi-objective optimization method that incorporates a genetic algorithm into the rough set method, in which each conditional attribute subset of the decision table is treated as a candidate solution and encoded. The genetic algorithm is applied to evaluate the structural risk of the attribute subsets of the decision table in order to construct a weaving machine state decision rule base with a good recognition effect and high decision accuracy. In order to reduce the uncertainty of matching rules for new objects, this paper incorporates the DSmT uncertainty inference method into the rough set method, assigning basic probabilities to the potentially matching rules and combining them with a two-level source combination so as to derive the decision with the highest accuracy; experiments show that this method increases the discrimination of the decision support probabilities. Finally, the condition monitoring system is tested on an actual loom. The tests show that the average recognition rate for the spindle wear problem is 99.955% with the loom state mapping network constructed by the traditional rough set method; after the rough set method is improved with the genetic algorithm and DSmT theory, the average recognition rate of spindle wear identification reaches 99.998%. Compared with the traditional rough set method, the improved rough set method achieves better recognition results for the weaving machine monitoring mapping network. Overall, this paper proposes a series of improvements and implementations for condition monitoring of high-speed rapier weaving machines, which will hopefully provide a useful reference for researchers in this field.
WEILING LIU received the bachelor's and master's degrees in mechanical engineering from the Hebei University of Technology, in 1995 and 1998, respectively, and the Ph.D. degree from Tianjin University, in 2006.
From 2009 to 2011, she conducted postdoctoral research with Bao Jingling at the Tianjin Academy of Environmental Sciences. She is currently an Associate Professor with the Hebei University of Technology. Her main research interests include artificial intelligence and automation control.
PING LIU received the graduate degree from the Hebei University of Technology, in 2020, where she is currently pursuing the master's degree. Her research interests include intelligent control and health management.
XIAOLIANG WANG is currently pursuing the master's degree in electronic information with the Hebei University of Technology. His research interests include embedded technology and intelligent control. | 10,137.4 | 2022-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Abelian combinatorial gauge symmetry
Combinatorial gauge symmetry is a principle that allows us to construct lattice gauge theories with two key and distinguishing properties: a) only one- and two-body interactions are needed; and b) the symmetry is exact rather than emergent in an effective or perturbative limit. The ground state exhibits topological order for a range of parameters. This paper is a generalization of the construction to any finite Abelian group. In addition to the general mathematical construction, we present a physical implementation in superconducting wire arrays, which offers a route to the experimental realization of lattice gauge theories with static Hamiltonians.
Introduction
Physically realizing topologically ordered [1] gapped quantum spin liquids in the laboratory remains a problem at the frontier of our knowledge in both quantum materials and quantum information sciences. Control over such states would constitute a major step forward in building stable quantum memories, material simulation, and ultimately scalable quantum computation. Over the past decades, major theoretical advances have been made by formulating models within the framework of lattice gauge theories [2], for example the well-known ℤ_2 toric code [3]. Recently, ℤ_2 quantum spin liquids have been demonstrated by preparing a state resembling the quantum ground state of the static system and its elementary excitations as non-equilibrium states in quantum simulators [4,5]. However, as these states are prepared dynamically, they do not exist in the absence of an external drive or modulation, and thus they do not possess the topological stability of the ground states of static Hamiltonians.
A set of proposals utilizing the "combinatorial gauge symmetry" (CGS) construction charts a potential alternative path to overcoming these obstacles [22,23]. CGS is a framework for constructing Hamiltonians out of realistic two-body interactions in which the gauge symmetry is exact and non-perturbative over the full range of parameters. As such, there is hope that a suitable range of parameters can be found where the topological phase is sufficiently gapped to be observable and stable. Earlier work has centered on special cases of small Abelian groups (ℤ_2 and ℤ_3). By generalizing the framework to all Abelian groups we further enlarge the scope of possible implementations and parameters where one can look for such states. The mathematical construction developed in this paper to handle the generalization is interesting in its own right.
From a mathematical point of view, central to the construction is a classification of two-body interactions that can be written as matrices W ij coupling two quantum variables indexed by i and j (e.g., spins or Josephson phases). The allowed sets of interactions W must possess several properties in order to satisfy the requirements of a lattice gauge theory exactly. This paper is a systematic exposition of these general requirements.
As a high-level preview, the gauge symmetry arises from automorphisms of the form W = L^⊤ W R, where the matrices L and R must be monomial matrices in order to preserve the commutation relations of the underlying quantum variables. Further, because of the geometry of the lattice configuration, R will be required to be diagonal, but L will not. Effectively, the problem of classifying the allowed W matrices reduces to a study of how group actions relate to permutations of invariant sets.
The outline of the paper is as follows. First, in Sec. 2 we introduce a simple example of a model with combinatorial gauge symmetry corresponding to a ℤ_2 gauge theory on the triangular lattice to illustrate key elements of our construction. We summarize the main results of this paper in Sec. 3. In Sec. 4 we develop in detail the general construction for any Abelian gauge group, as well as modifications to the models in specific cases, and we provide illustrations of these in Sec. 5. Finally, we discuss experimental implementations in Sec. 6.
A motivating example
In this section we consider a ℤ_2 gauge theory on a triangular lattice with only two-body interactions. Before that, it should be made clear what kind of gauge symmetry such a theory possesses. In an ordinary ℤ_2 gauge theory, e.g., the classical toric code model, the degrees of freedom are located on the links of the lattice and the Hamiltonian is H = −λ Σ_p B_p, where the plaquette operator B_p is the product of σ^z operators acting on the links bounding the plaquette p. In the full toric code model, this Hamiltonian has gauge symmetries generated by star operators, that is, products of σ^x operators along a path through the dual lattice that forms a domain wall around a subset of vertices of the lattice. At the level of individual degrees of freedom, such symmetries act by the transformation σ^z → σ^x σ^z σ^x = −σ^z. Thus, whenever the support of a plaquette operator intersects with that of a gauge transformation, the degrees of freedom involved will be affected by the gauge transformation, but the plaquette operator will be invariant overall. (See Fig. 1(a).) Since in the Abelian case such a symmetry operation is always realized by a phase factor, we may represent its effect on a plaquette term by a diagonal matrix multiplying the vector of operators contained in this term. For example, if we collect the z-components of the three spins around a triangular plaquette into a vector σ^z ≡ (σ^z_1, σ^z_2, σ^z_3)^⊤, then the symmetry that flips the first two spins acts as σ^z → R σ^z, where R is the diagonal matrix diag(−1, −1, 1). Under the action of R the product σ^z_1 σ^z_2 σ^z_3 is unchanged, because the phase factors introduced by the two spin flips multiply to one.
Having understood the action of the gauge symmetry on individual degrees of freedom and on gauge-invariant operators, we now construct a two-body Hamiltonian that possesses the same gauge symmetry. To this end, in addition to the gauge spin operators σ^z as before, we introduce matter spin operators µ^z associated with each plaquette term of the pure gauge theory. More precisely, in this example we introduce four matter spins µ^z_a, a = 1, 2, 3, 4, which we also collect into a vector µ^z ≡ (µ^z_1, µ^z_2, µ^z_3, µ^z_4)^⊤. We would like the Hamiltonian to (i) have only one- and two-body interactions (here of the ZZ type) and (ii) possess a ℤ_2 gauge symmetry when acting on the gauge spins. Condition (i) is consistent with an interaction term of the form −J Σ_{a=1}^{4} Σ_{i=1}^{3} W_{ai} µ^z_a σ^z_i. Using the vector notation, this interaction can be written compactly as (µ^z)^⊤ W σ^z. As for condition (ii), when the gauge symmetry acts on the σ_i's via σ^z → R σ^z, the interaction is transformed into (µ^z)^⊤ W R σ^z, which is equivalent to multiplying the coupling matrix W on the right by R. If there exists a matrix L that we can multiply on the left of W so that L^⊤ W R = W, then L is the action of the gauge symmetry on µ^z, and the transformation (σ^z, µ^z) → (R σ^z, L µ^z) leaves the system invariant. The L and R matrices arise from gauge transformations, which ultimately come from conjugating spin operators by some symmetry operator, so the transformation represented by L should preserve the commutation relations between the spin operators. Therefore, we restrict our considerations to L's that permute the spins or flip µ^z's. In other words, the matrix L should be a monomial matrix with entries ±1.
In the triangular lattice example, we can construct an interaction that satisfies all of the above conditions with four matter spins. The coefficient matrix W, given in Eq. (4), is the 4 × 3 matrix whose rows are the four ±1 triples that multiply to +1. From the lattice geometry, the gauge transformations acting on the gauge spins around a triangular plaquette are generated by two diagonal matrices of the kind introduced above, whose superscripts denote the sites where the spins are flipped. The corresponding L matrices that leave W invariant under W → L^⊤ W R are labeled by the same superscripts as the corresponding R matrices. (See Fig. 1(c) for an illustration of the effect of R^{12} and L^{12} on the operators σ^z_i and µ^z_a.) Next we take a closer look at the properties of these matrices.
By design, W in (4) has the symmetry L^⊤ W R = W. The rows of W are the triples of ±1 that multiply to 1. These are precisely the triples that can be obtained from (+, +, +) by flipping the signs of two entries at a time. More formally, the rows of W form the orbit of (+, +, +) under the action of the group of gauge symmetries. Orbits under a group action are examples of invariant subsets, which means that if we flip the signs of any two columns of W via W → W R, i.e. act on all elements of the invariant subset by a group element, the effect is the same as permuting the rows of W, i.e. permuting the elements of the invariant subset. Therefore, for each R, there will always be a permutation matrix L that "undoes" its effect by permuting the rows back into their original positions, so that W R = (L^⊤)^{−1} W is satisfied. For example, with the pair of transformations R^{12} and L^{12}, flipping the signs of all entries in the first two columns of W has the same effect as exchanging the first row of W with the last and the second with the third. This interaction term needs to be supplemented with additional terms, so that the Hamiltonian of the entire system takes the form

H = −J Σ_p (µ^z_p)^⊤ W σ^z_p − h Σ_p Σ_{a∈p} µ^z_a − Γ Σ_p Σ_{a∈p} µ^x_a − Γ′ Σ_{i∈L} σ^x_i ,   (9)

where the subscripts p on the operators µ^{x,z}_p and σ^{x,z}_p denote the plaquette the operators belong to, the indices a ∈ p label the matter spins in plaquette p, and the indices i label the gauge spins on the set of links L of the lattice. If a large transverse field −Γ Σ_p Σ_{a∈p} µ^x_a is applied to the matter spins, it effectively integrates out the matter degrees of freedom, so that the interaction term −J Σ_p (µ^z_p)^⊤ W σ^z_p perturbatively generates a pure gauge theory. The lowest-order term in perturbation theory will be of the form Σ_p ∏_{i∈p} σ^z_i. In addition, a star term can also be generated perturbatively through the application of the transverse field −Γ′ σ^x_i on the gauge spins, thus fully reproducing the pure gauge theory we started with. We stress that the gauge symmetry of the system described by Eq. (9) is preserved throughout, not merely emergent in a perturbative limit in which we justify integrating out the matter fields, meaning that a topological phase may exist within a large region of the parameter space of couplings J, h, Γ and Γ′.
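The following sketch reconstructs a coupling matrix of the kind just described (rows given by the ±1 triples with product +1; the specific row ordering is a choice made here, not taken from Eq. (4)) and verifies by brute force that the column sign flip R^{12} is undone by a row permutation L, i.e. L^⊤ W R^{12} = W.

```python
# Sketch: build a 4 x 3 coupling matrix from the orbit of (+1, +1, +1) under
# pairwise sign flips and find the permutation L that undoes R^12.
import numpy as np
from itertools import product, permutations

rows = [r for r in product([1, -1], repeat=3) if np.prod(r) == 1]
W = np.array(rows)                      # 4 x 3, first row (+1, +1, +1)

R12 = np.diag([-1, -1, 1])              # gauge transformation flipping spins 1, 2
WR = W @ R12

# Search for the permutation matrix L with L.T @ W @ R12 == W, i.e. WR is W
# with its rows permuted.
for perm in permutations(range(4)):
    L = np.zeros((4, 4), dtype=int)
    L[list(perm), range(4)] = 1
    if np.array_equal(L.T @ WR, W):
        print("row permutation undoing R12:", perm)
        break
```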
For this system to be equivalent to a ℤ_2 gauge theory whose ground state is a spin-liquid state, having the gauge symmetry alone is not sufficient. We would like the ground state in the absence of the σ^x term to be an eigenstate of the plaquette operators B_p = ∏_{i∈p} σ^z_i with eigenvalue 1, because this enforces a zero-flux condition that corresponds to the confined phase of the pure gauge theory. However, in our motivating example on the triangular lattice, the B_p = 1 state is always degenerate with the B_p = −1 state, because a global spin flip of both matter and gauge spins commutes with the Hamiltonian but anticommutes with B_p. In order to split this degeneracy, we apply a field −h Σ_p Σ_{a∈p} µ^z_a to the matter spins, with the sign of h chosen so that the B_p = 1 sector is preferred. Without this term, the ground state manifold is spanned by the flux-0 state {σ, µ} = {(+, +, +), (+, −, −, −)} and its gauge symmetry partners, along with the flux-1 state {(−, −, −), (−, +, +, +)} and its symmetry partners. Since these two sets of states have distinct matter spin magnetizations (indeed, the CGS preserves the magnetization of the matter spins because it acts as a permutation on them), we are able to distinguish the flux states by applying a uniform field on the matter spins.
In the above example we outlined how to construct a combinatorial gauge theory whose ground state has topological order. In the following sections we will generalize these ideas.
Summary of results
The construction presented in the previous section can be repeated in full generality for all finite Abelian gauge groups by considering cyclic gauge groups. This is because any finite Abelian group G can be decomposed into a direct product of cyclic groups ∏_i ℤ_{k_i}. Focusing on the cyclic group ℤ_k, we work with operators σ^Z and σ^X acting on a ℤ_k clock variable, which satisfy the relation σ^X σ^Z = e^{2πi/k} σ^Z σ^X. Schematically, the one- and two-body Hamiltonian for such a combinatorial gauge theory takes the form of Eq. (11), where we have extended the vector notation of the example of the previous section to write the ZZ term in the first line of Eq. (11) by collecting all q gauge spins and m matter spins on each plaquette into vectors σ^Z_p and µ^Z_p; this first line of (11) is the gauge-invariant interaction term. A gauge transformation is represented by a pair of matrices L_p and R_p acting on the vectors of operators associated with each plaquette, i.e. σ^{Z,X}_p → R_p σ^{Z,X}_p and µ^{Z,X}_p → L_p µ^{Z,X}_p. The diagonal R_p matrices are determined by the behavior of the pure gauge theory under gauge transformations, and the L_p matrices are monomial and constructed according to the requirement L^⊤_p W R_p = W. This last requirement ensures the gauge invariance of the interaction term. It is possible to construct the coupling matrix W for any gauge group. Moreover, by exploiting internal symmetries of the matter degrees of freedom, the number of matter variables, or equivalently the size of the matrix W, can be reduced. A CGS Hamiltonian on a lattice with coordination number q will have at least q parameters that can be tuned to adjust the spectrum, so that the correct flux sector becomes the ground state. The transverse field −Γ Σ_p Σ_{a∈p} µ^X_a induces dynamics in the matter degrees of freedom. When this term is strong (Γ ≫ J), the matter degrees of freedom are integrated out, and the effective Hamiltonian to lowest order in perturbation theory is exactly the plaquette term of the pure gauge theory, since that is the lowest-order term in perturbation theory that respects the gauge symmetry. If we then apply a weak transverse field −Γ′ Σ_{i∈L} σ^X_i, the generators of gauge transformations, that is, the star terms of the pure gauge theory, will be produced. Note that as long as the transverse field on the matter spins is uniform within each plaquette, the gauge symmetry is preserved. Thus the Hamiltonian (11) always possesses an exact gauge symmetry. This means that even though the effective Hamiltonian only becomes exactly equal to the pure gauge theory in the Γ → ∞ limit, we should expect the existence of a topological phase that can be connected adiabatically to the confined phase of the pure gauge theory even at finite values of Γ. Indeed, in the case of a ℤ_2 gauge theory on a square lattice, it has been shown that such a quantum spin liquid phase exists in this limit [24]. This provides evidence that under the Γ′ ≪ J ≪ Γ energy hierarchy, the topological phase is robust. Since the gap is finite, as long as symmetry-breaking perturbations are small compared to this gap, the splitting between the topologically degenerate ground states, or indeed within any flux sector, is exponentially small in the system size, so the topological phase should be preserved. The presence of the term −h Σ_p Σ_{a∈p} µ^Z_a in Eq. (11) is required to break accidental degeneracies when certain gauge-violating transformations leave the interaction term invariant. This is another measure to ensure the energetic separation of different flux sectors.
Combinatorial gauge symmetry
Suppose we want to construct a gauge-invariant two-body interaction whose behavior under gauge transformations mirrors that of a pure gauge theory built from a sum of charge (star) operators A_s and a sum of flux (plaquette) operators B_p, where the first group of terms represents the charges, i.e., the generators of gauge transformations defined on a star labeled by s, and the second group enforces the constraint that the ground state is flux-free on each plaquette labeled by p. To describe this behavior algebraically, we need three algebraic constructs. First, the gauge fields can be represented by variables σ that take values in the gauge group G, and this group naturally acts on these variables. When an element of the gauge group g ∈ G acts on a gauge variable σ, we write it schematically as g • σ.
On the entire lattice, the total Hilbert space is a tensor product of all the local gauge variables ⊗_i σ_i, and it is acted on by the direct product of the gauge groups ∏_i G_i, where the factor G_i acts only on the variable σ_i. Not all elements of this direct product are valid gauge transformations, but only the ones that are generated by the charge operators A_s. We call this subgroup G the group of gauge transformations. An operator is gauge invariant if it is invariant under the action of this group. Nevertheless, in order to test if a local operator O is gauge invariant, we need not test it against all possible gauge transformations, but only the ones that act nontrivially on the support S of O. This means that we are in fact interested in the quotient of G under the relation ∼ that identifies two gauge transformations T = ∏_i g_i and T′ = ∏_i g′_i whenever they act identically on S. Now we assume that in the pure gauge theory, the flux operator B_p is a product of q operators σ_1, . . ., σ_q. However, since our aim is to construct a two-body interaction term involving a set of auxiliary, or "matter", degrees of freedom µ_1, . . ., µ_m, the resulting operator should not be a product of all relevant operators. The most natural form is a quadratic direct coupling between the gauge and matter variables, Σ_{i,a} µ_a W_{ai} σ_i. Thus we write the gauge and matter degrees of freedom as vectors of operators σ and µ, so that the action of a gauge transformation T ∈ G_p on σ can be represented by a diagonal matrix R, i.e. σ → R σ, and its action on the matter variables by a matrix L, i.e. µ → L µ. We now say that the term Σ_{i,a} µ_a W_{ai} σ_i = µ^⊤ W σ has a combinatorial gauge symmetry that reproduces the gauge symmetry of the flux operator B_p of the pure gauge theory H_pure = −λ Σ_p B_p if, for every gauge transformation T in the local group of gauge transformations G_p of the flux operator B_p = ∏_{i∈p} σ_i that acts on σ_p via the matrix R, we can find a matrix L acting on the vector of matter variables µ such that (L µ)^⊤ W (R σ) = µ^⊤ W σ, or L^⊤ W R = W. The symmetry L of the matter variables should be a physical symmetry of the corresponding physical degrees of freedom.
The overall invariance of µ^⊤ W σ is equivalent to the invariance of W under multiplication by R on the right and by a corresponding L on the left. Since an R matrix multiplies W on the right, it can be thought of as the same transformation applied to each row of W. Thus if the rows of W form an invariant set under all R-transformations, then multiplying W on the right by a matrix R has the same effect as permuting its rows, so we can "undo" this transformation by multiplying W on the left by a permutation matrix L^⊤. This gives us the invariance condition (15), L^⊤ W R = W, and consequently the invariance of µ^⊤ W σ under the transformations µ → L µ and σ → R σ.
The number of matter degrees of freedom µ is then determined by the number of elements in this invariant set. Since invariant sets are unions of orbits, such a coupling matrix W can be constructed by choosing one or more rows of initial entries, applying all R matrix representations of transformations in G_p to obtain the orbits, and combining the results to form W. We can lay out the construction more concretely using ℤ_k as the gauge group. σ can be represented by a ℤ_k clock variable Z. The action of the gauge group is generated by the X operator, associated with a k × k cyclic permutation matrix; that is, take any generator x of the group G and represent it by the X operator, then for any group element g, which can be written as x^c for some integer c, its action σ → g • σ is represented by Z → (X^†)^c Z X^c. In particular, since X Z X^† = ζ_k Z, such an action on Z reduces to multiplication by the phase ζ_k^c = e^{2πic/k}. Thus, the action of G_p that we have represented schematically as a matrix multiplication is in fact multiplication by a diagonal R matrix whose entries are k-th roots of unity. For the flux term B_p = ∏_{i∈p} X_i, gauge invariance requires that the entries of R multiply to 1. If B_p is supported on q sites, the local group of gauge transformations G_p will be a quotient of ℤ_k^{q−1}. In many cases, G_p will be precisely ℤ_k^{q−1}, and when k is prime, G_p will always be a direct sum of fewer than q copies of ℤ_k. (Each charge operator generates a subgroup of the local group of gauge transformations isomorphic to the gauge group; these subgroups satisfy a single relation, where R_i denotes the local gauge transformation generated by the i-th charge operator. When k is prime, this relation can be used to express R_m in terms of the other generators, but if not, we may only be able to eliminate a power of R_m, resulting in a local group of gauge transformations whose direct summands may include non-trivial quotients of ℤ_k.) This includes the case of ℤ_k toric codes, but also models with much more complicated geometry, for example Haah's code, which will be discussed in the following section.
Supposing G_p is of the form ℤ_k^{q−1}, we can construct the coupling matrix W by choosing the entries of the first row and generating the other rows under the action of all R ∈ G_p. Suppose we take the first row of W to be (1, 1, . . ., 1); then the entries of W will also be k-th roots of unity, and the rows of W consist precisely of all q-tuples of k-th roots of unity that multiply to 1. There are N = k^{q−1} such combinations, so the CGS model will require k^{q−1} matter degrees of freedom. Explicitly, the entries of W_{k,q} are (W_{k,q})_{n,j} = ζ_k^{a_{n,j}}, where the a_{n,j} are integers such that Σ_{j=1}^{q} a_{n,j} ≡ 0 (mod k), or equivalently ∏_{j=1}^{q} ζ_k^{a_{n,j}} = 1, for all n. This matrix satisfies the CGS invariance condition (15). In this case a general gauge transformation acts on the gauge degrees of freedom via multiplication on the right of W_{k,q} by the diagonal matrix R = diag(ζ_k^{c_1}, . . ., ζ_k^{c_q}), where the c_j are integers satisfying ∏_{j=1}^{q} ζ_k^{c_j} = 1, or Σ_{j=1}^{q} c_j ≡ 0 (mod k), so that the product of each row of W_{k,q} is preserved. By construction, this right multiplication of W by R has the effect of permuting its rows, so that an N × N permutation matrix L exists to rearrange the rows back into their initial order, thereby satisfying the condition L^⊤ W R = W.
For a specific example, consider the k = 5, q = 3 case, that is, take ℤ_5 as the gauge group and a triangular lattice. Starting with (1, 1, 1) as the initial row of W, the remaining rows are obtained from the action of the local group of gauge transformations ℤ_5^2, whose two generators act on W by right multiplication, producing a 25-element orbit. (See Appendix B for the full form of this matrix.) In the following discussion we denote such a combinatorially symmetric coupling matrix by W_{k,q}[r_1, . . ., r_n], where r_1, . . ., r_n are representatives of the distinct orbits forming the set of its rows, so the ℤ_5 example above is W_{5,3}[(1, 1, 1)]. (When context makes it clear, we may omit some or all of the arguments of W.)
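A short sketch of the general construction for this example is given below: it enumerates the k^{q−1} exponent tuples summing to 0 mod k, builds W_{k,q} from the corresponding roots of unity, and checks that an allowed diagonal R only reorders the rows. The particular gauge transformation chosen and the row ordering are arbitrary.

```python
# Sketch of the general W_{k,q} construction, shown for k = 5, q = 3 (25 rows).
import numpy as np
from itertools import product

k, q = 5, 3
zeta = np.exp(2j * np.pi / k)

# Exponent tuples (a_1, ..., a_q) with sum = 0 (mod k) -> k^(q-1) rows.
exponents = [a for a in product(range(k), repeat=q) if sum(a) % k == 0]
W = np.array([[zeta ** aj for aj in a] for a in exponents])
print("rows of W:", W.shape[0])        # 25 = 5^2

# A gauge transformation R = diag(zeta^c_1, ..., zeta^c_q) with sum(c) = 0 (mod k).
c = (1, 2, 2)                          # 1 + 2 + 2 = 5 = 0 (mod 5)
R = np.diag([zeta ** cj for cj in c])
WR = W @ R

# Check that W @ R is just W with its rows reordered (so a permutation L exists).
def row_key(row):
    return tuple((round(z.real, 6), round(z.imag, 6)) for z in row)

assert sorted(map(row_key, WR)) == sorted(map(row_key, W))
print("W @ R is a row permutation of W: True")
```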
Reducing the number of matter spins
In general, we do not impose anything other than permutation symmetry on the matter variables, so in principle the µ's in the Hamiltonian can be any local operators. However, if the matter variables are realized as spin-1/2 or ℤ_n degrees of freedom, they may possess an internal symmetry, which generalizes the L's to monomial matrices. In some cases, this allows us to reduce the number of matter variables. To be precise, this reduction is possible whenever G_p contains a subgroup consisting of elements of the form R = g·I, which we may call "uniform" gauge transformations, forming a subgroup G^U_p of G_p. Such a uniform gauge transformation multiplies all entries by the same group element g from the right and, from the left, permutes rows of the coupling matrix that differ by an overall phase factor. We can factor out all such uniform transformations and let them act as an internal symmetry of the matter variables, or equivalently quotient G_p by G^U_p. This reduces the size of G_p, and therefore the number of matter variables, by a factor of |G^U_p|. Starting from a fully constructed CGS model with uniform gauge symmetries, we can partition the rows of W into sub-orbits consisting of rows related by a uniform gauge transformation and choose only one representative from each to construct a reduced matrix W_red. For this coupling matrix, when the action of R maps a row of W_red into a row that is not present in the reduced matrix, it is always compensated for by a non-identity entry in the L matrix. We will see examples of this below.
Accidental symmetries
A subtle issue can arise in connection with uniform gauge transformations. When the matter variables possess internal symmetries, and when such symmetries coincide with elements of the gauge group, they may combine into spurious symmetries of the Hamiltonian that are not members of the local group of gauge transformations. In the language of the coupling matrix W and the L, R matrices, this situation is described by a pair of transformations (L, R) = (g^{−1} I, g I) when g I itself is not a valid gauge transformation. For example, this can occur when the matter variables are spin-1/2 variables, the gauge group contains an element that is represented by a phase shift of −1, and the flux operator B_p contains an odd number of factors. The motivating example discussed in Section 2 falls into this case. Such a spurious symmetry collapses flux sectors that should remain distinct, so we need to break it with an additional field −h Σ_p Σ_a µ^z_{p,a} on the matter spins to remove the internal symmetry. (See Appendix C for a demonstration that applying such a field is sufficient to lift the spurious symmetry.) Note that uniform spurious symmetries arise in a situation complementary to the one discussed above, where uniform gauge transformations allow for a reduction of the coupling matrix, so breaking the internal symmetry in the former case does not come at the cost of increasing the number of matter variables.
Product gauge groups
The framework for CGS is applicable to general Abelian gauge groups, but it is sufficient to consider the more restricted case of cyclic groups, because any finite Abelian group can be decomposed into a direct product of cyclic groups. A CGS Hamiltonian whose gauge group is a direct product can be built up from CGS Hamiltonians with the factor groups as gauge groups. When the gauge group G is a direct product G_1 × G_2, the group of gauge transformations and the local groups of gauge transformations are also direct products. This means that if H_1 = −J Σ_p (µ^1_p)^⊤ W_1 σ^1_p has combinatorial gauge symmetry with respect to a G_1 gauge theory defined on a lattice, and so does H_2 = −J Σ_p (µ^2_p)^⊤ W_2 σ^2_p with respect to a G_2 gauge theory on the same lattice, we can take the tensor product of the two systems and write H = H_1 ⊗ id_2 + id_1 ⊗ H_2, where id_{1,2} are identity operators acting on the two tensor factors. This Hamiltonian will then be a G_1 × G_2 combinatorial gauge theory.
Note that even in cases where the gauge group is already cyclic, the tensor product construction can reduce the number of matter spins. For example, a ℤ_6 gauge theory on the triangular lattice can be constructed with 36 matter spins, and the number can be reduced to 12 if the matter degrees of freedom have a ℤ_3 internal symmetry. If the gauge group ℤ_6 is treated as ℤ_2 × ℤ_3, so that the gauge theory is constructed through the tensor product construction, only 13 matter spins are needed, with 4 matter spins for the ℤ_2 sector (see Section 2) and 9 for the ℤ_3 sector (see Section 5.2). The coupling matrix for the ℤ_3 sector can be further reduced (also see Section 5.2), so that in total only 7 matter degrees of freedom are required, of which 3 need to be ℤ_3 variables.
Ground state flux and tuning the spectrum of the CGS coupling term
We note that there is no fundamental constraint on the initial row(s) of the coupling matrix W, so each of these entries can be regarded as a tunable parameter, which gives us considerable freedom to adjust the spectrum of the CGS coupling term. This is fortuitous, because while the combinatorial gauge symmetry construction guarantees that any model we generate has an exact gauge symmetry, it does not guarantee that the ground state of our two-body model also corresponds to the ground state of the gauge theory we are constructing, as we saw in the case of the ℤ_2 gauge theory on a triangular lattice. To find the energy of a particular flux sector, we can fix a set of values of the σ^Z operators that satisfies the flux constraint and then (supposing that the matter variables are spin-1/2's) choose the signs of the µ^z's so that the energy is minimized. Due to the combinatorial gauge symmetry, the resulting expression only depends on the flux ∏_{i∈p} σ^Z_i and on the initial coupling values that generate W. Given any specific combinatorial gauge symmetry, these initial couplings can be tuned to find a coupling matrix that has the zero-flux sector as the lowest-energy one. This argument also shows that when the matter degrees of freedom have internal symmetries, transforming them under such symmetries does not change the spectrum of the CGS Hamiltonian. In other words, if the matter degrees of freedom are spin-1/2's, for example, flipping the sign of entire rows of the coupling matrix W does not affect the spectrum.
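As a concrete illustration of this flux-sector energy comparison, the sketch below evaluates, for the ℤ_2 triangular example of Sec. 2 with spin-1/2 matter, the energy obtained by setting each matter spin to its optimal sign, E = −J Σ_a |Σ_i W_{ai} σ_i|; the transverse fields and the −h term are ignored here, and the row ordering of W is a choice made for the example. The two flux sectors come out degenerate, consistent with the degeneracy noted in Sec. 2.

```python
# Sketch: compare flux-sector energies of the Z_2 triangular CGS coupling term,
# minimizing over the signs of the spin-1/2 matter variables.
import numpy as np
from itertools import product

J = 1.0
W = np.array([r for r in product([1, -1], repeat=3) if np.prod(r) == 1])  # 4 x 3

energies = {}
for sigma in product([1, -1], repeat=3):
    flux = int(np.prod(sigma))                        # B_p eigenvalue (+1 or -1)
    e = -J * np.sum(np.abs(W @ np.array(sigma)))      # optimal matter-spin signs
    energies.setdefault(flux, set()).add(round(e, 9))

print(energies)   # both flux sectors reach the same minimum energy
```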
Taking the coupling matrix (17) as an example, if a more general set of coupling constants (b_1, b_2, . . ., b_q) is taken as the starting point of the construction, the resulting coupling matrix will be W_{k,q} with column j rescaled by b_j, and the ground state energy changes accordingly. In the W_{5,3}[(1, 1, 1)] example mentioned earlier, the ground state contains both the flux-0 and flux-1 sectors. If instead r = (−1, 1, 1) or r = (0.5, 1, 1) is taken as the first row, which corresponds to rescaling the first column of the coupling matrix (40) shown in the appendix by −1 or 0.5, respectively, the ground state will be a flux-0 state.
Examples of combinatorial gauge symmetric Hamiltonians
In the examples to follow, we will examine a few CGS Hamiltonians. Through these examples we shall further illustrate both the flexibility of these models and the necessary conditions for realizing a topological state.
Uniform gauge transformations and reduction of number of matter spins
Consider a ℤ_2 gauge theory on a square lattice, e.g. the toric code. Using the initial row (−1, 1, 1, 1), we obtain the 8 × 4 interaction matrix (22), whose rows are the eight ±1 quadruples whose entries multiply to −1. On the other hand, the 4 × 4 Hadamard matrix (23) with the same initial row was used to construct a ℤ_2 gauge theory in [22]. Both matrices possess combinatorial gauge symmetries corresponding to a ℤ_2 gauge theory on a square lattice, but (22) has twice as many rows as (23). We will show that the two constructions are equivalent, and that W^{4,2} can be reduced to W′ procedurally (thus we may refer to the latter as W^{4,2}_red). Acting on W^{4,2}_red with a gauge transformation R does not merely permute its rows: the compensating operation not only permutes the rows of W^{4,2}_red but also flips the signs of two of them. However, if we relax the restriction that the L matrices be permutations and also allow −1 in addition to 1 in their non-zero entries, i.e., let L be monomial matrices, we recover the combinatorial gauge symmetry. Physically, this corresponds to not only permuting but also flipping the matter spins. This operation commutes with a transverse field, so the ℤ_2 gauge symmetry is preserved in the full Hamiltonian. What makes this manipulation possible is that while W^{4,2}_red does not include every element of the orbit under the action of the gauge transformations, it does include all such rows up to overall signs, so the action of an R matrix permutes the rows of W^{4,2}_red up to signs, which can be compensated for by the signs of the entries in the L matrix. In this particular case, the corresponding monomial matrix is L^{12}_red in Eq. (25). The full and reduced versions of the coupling matrices, W^{4,2} and W^{4,2}_red, are mapped to each other as follows. To reconstruct W^{4,2}, append to each row r_a of W^{4,2}_red the row −r_a. Conversely, observe that W^{4,2} consists of four pairs of rows that differ by an overall sign, so by choosing one representative from each pair we form W^{4,2}_red. This procedure also produces a correspondence between the full and reduced versions of the L matrices. A reduced L matrix is augmented into the full version by replacing each entry by a 2 × 2 block: zeros and ones become the zero and identity matrices, while −1 becomes the 2 × 2 permutation matrix that swaps the two rows differing by an overall sign. For example, augmenting L^{12}_red in (25) in this way yields the full matrix L^{12}. Comparing this with the transformation of the full W matrix under the action of R^{12}, the effect is a permutation of rows 1 and 3, 2 and 4, 5 and 8, and 6 and 7, which is precisely represented by L^{12}. To reduce a full L matrix, we first sort the rows of the coupling matrix so that rows related by a uniform gauge transformation are adjacent to each other, so that the full L matrix takes a block form. Using a permutation representation of ℤ_2, each block of the full L matrix is then mapped to +1 or −1.
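The reduction just described can be checked numerically; the sketch below builds the 8 × 4 orbit of (−1, 1, 1, 1), keeps one representative of each ± pair, and verifies that a two-column sign flip maps each row of the reduced matrix to ± another row, so that a monomial L exists. The row ordering, and hence the specific monomial matrix found, is a choice made here and need not coincide with L^{12}_red of the paper.

```python
# Sketch: reduce the 8 x 4 coupling matrix to 4 x 4 and check that a column
# sign flip acts on the reduced matrix as a signed (monomial) row permutation.
import numpy as np
from itertools import product

full = np.array([r for r in product([1, -1], repeat=4) if np.prod(r) == -1])  # 8 x 4

# Keep one representative per pair {r, -r}.
reduced = []
for row in full:
    if not any(np.array_equal(row, -np.array(r)) for r in reduced):
        reduced.append(tuple(row))
W_red = np.array(reduced)                                                     # 4 x 4

R12 = np.diag([-1, -1, 1, 1])       # flip the gauge spins coupled to columns 1, 2
WR = W_red @ R12

# Each row of W_red @ R12 must equal +/- some row of W_red.
for row in WR:
    match = [(i, s) for i in range(4) for s in (1, -1)
             if np.array_equal(row, s * W_red[i])]
    print(tuple(int(x) for x in row), "->", match[0])
```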
Tuning the spectrum of a CGS interaction
Next we shall analyze the case that has been studied in [23], namely a ℤ_3 gauge theory on a triangular lattice (see Appendix A for the change in underlying lattice), in the context of the general method of building a CGS model, as well as the techniques for reducing the number of matter degrees of freedom and adjusting the spectrum.
A plaquette of the triangular lattice has coordination number 3 and we are working with the gauge group ℤ_3, so starting from the initial row (1, 1, 1) we construct a W matrix whose entries are powers of ω = e^{2πi/3} and whose nine rows are the triples of cube roots of unity that multiply to 1. Note that the rows can be partitioned into three sets, and within each set the rows differ by an overall phase factor. Following the reduction procedure illustrated by the previous example, we pick one representative from each set to construct a reduced coupling matrix and take the matter degrees of freedom to be ℤ_3 clock variables. However, the ground state of this CGS Hamiltonian does not lie in the ℤ_3 flux-0 sector; instead it lies in the flux-1 and flux-2 sectors, which can be verified from equation (20). To fix this issue, we can adjust the initial row of the W matrix, for example to (−1, 1, 1). An alternative solution is presented in the appendix of [23], namely a W matrix that combines two orbits under the gauge transformations. In Sec. 4 we noted that as long as the rows of W form an invariant set under the action of the gauge transformations, the matrix possesses a combinatorial gauge symmetry. The simplest invariant sets are orbits of a single row, but invariant sets can also combine multiple distinct orbits. This is not obviously advantageous, because the size of such a W matrix is not minimal. However, the above W for the ℤ_3 CGS turns out to have the correct flux state as the ground state, while the simple W matrix formed out of the orbit of (1, 1, 1) does not. Thus we have another method of adjusting the spectrum of CGS Hamiltonians.
Gauge theories with more complex structure
To illustrate the power of the CGS construction, we now construct a model with the symmetry of Haah's code. Recall that Haah's code is defined on a three-dimensional cubic lattice with two spins on each site. The Hamiltonian is a sum of two types of stabilizer generators, each supported on a cubic cell c of the lattice. Each operator A_c and B_c is a product of σ^z's and σ^x's, respectively, acting on a subset of the eight spins on the cubic cell, as illustrated in Fig. 3. Reformulated as a gauge theory, we may treat the A's as charge operators and the B's as generators of the gauge symmetry. Thus, to construct a CGS model with the same type of symmetry, we need to compute the group of local gauge transformations on an A-type operator generated by conjugating it by B-type operators. This is computed in Appendix D. Despite the complex arrangement of sites, the local group of gauge symmetries is very simple, ℤ_2^7. This means that we can emulate an A-type operator by a CGS model of the (2, 8)-type.
Figure 4: Wire arrays that implement combinatorial gauge symmetries for (a) a ℤ_2 gauge theory on a square lattice [25] and (b) a ℤ_3 gauge theory on a honeycomb lattice [23]. (a) The array is a grid of "waffles" in which the horizontal wires extending across neighboring waffles (in black) represent the gauge degrees of freedom, and the vertical wires (in gray) are the matter degrees of freedom. Within each waffle, the wires are coupled by Josephson junctions with no phase shift, corresponding to +1 in the coupling matrix (black cross on white square), and by junctions with a π phase shift, corresponding to −1 in the coupling matrix (white cross on black square). (b) In the ℤ_3 case, the grid of waffles is constructed according to the same principle, but within each waffle the wires are coupled via Josephson junctions and the phase shift required by the CGS Hamiltonian is realized by a uniform magnetic field that generates a flux of Φ = (n + 1/3)Φ_0 within each elementary plaquette of the waffle, where Φ_0 = h/2e is the flux quantum.
Implementations
Proposals have been made for realizing a ℤ₂ CGS model on a square lattice with a superconducting circuit [25], and a classical ℤ₂ spin liquid with the same combinatorial gauge symmetry was built and observed [26]. These implementations suggest generalizations. Following the approach in [25], superconducting wires can be arranged into an array of "waffles", or grids of intersecting wires, as illustrated in Fig. 4(a). Generally, such wire arrays are described by a Hamiltonian of the form H = H_J + H_C (see also [23]), with H_J the Josephson coupling and H_C a quadratic capacitance term that depends on the charges conjugate to the phase variables θ_i, φ_a. When the coupling coefficients W_ai are those of a CGS model, the Josephson Hamiltonian possesses a combinatorial gauge symmetry, where the action of the gauge group corresponds to shifts in the location of the minima of the Josephson energies. (One can ensure that H_C does not break the symmetry by requiring permutation invariance of the capacitance matrix.) In the regime where H_J is dominant, the minima of the Josephson energy correspond to the minima of the CGS model.
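As a guide to the structure referred to here, a plausible form of the waffle Hamiltonian consistent with the surrounding description is

$$H = H_J + H_C, \qquad H_J = -E_J \sum_{a,i} \cos\!\big(\theta_i - \varphi_a - A_{ai}\big), \qquad e^{\,iA_{ai}} = W_{ai},$$

so that for a coupling matrix with only ±1 entries the coupling reduces to $-E_J \sum_{a,i} W_{ai}\cos(\theta_i - \varphi_a)$, i.e., ordinary and π-shifted junctions. The energy scale E_J and the sign and phase conventions are a sketch here rather than the exact expressions of [23,25].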
To obtain the phase factors W_ai in the Josephson Hamiltonian, two strategies are available. The junctions can be built as ordinary Josephson junctions or π-Josephson junctions [27][28][29][30] when the W matrix consists only of +1 and −1 entries; alternatively, a more general approach given in [23], shown in Fig. 4(b), realizes the W matrix by threading a uniform flux of Φ₀/3 through every elementary plaquette of the waffle. The latter strategy can in fact be applied to other CGS Hamiltonians.
Generically, the Hamiltonian of a waffle of superconducting wires is the sum of the Josephson energies at the intersections of the wires, whose minima acquire phase shifts in the presence of a magnetic field. Let θ_i be the superconducting phase of the wire representing the i-th gauge variable, and let φ_a be the phase of the a-th matter wire, both measured at the edges of the waffle (the left edge for φ_a and the top edge for θ_i). The Josephson junction energy at the intersection of the θ_i and φ_a wires depends on their phase difference shifted by A_ai, the phase accumulated by integrating the vector potential A, which originates from magnetic fluxes, along the path from the top of gauge wire i to the left of matter wire a (see Fig. 5(a)); here Φ₀ is the flux quantum. A gauge transformation A → A + ∇ϕ shifts the value of A_ai, up to a factor of 2π/Φ₀, by the difference of ϕ between the two endpoints of this path. Since these two endpoint terms depend only on the positions where θ_i and φ_a are measured, they can be absorbed into θ_i and φ_a, respectively. Thus fixing a gauge amounts to choosing the reference points for the phases in the matter and gauge wires. We choose the gauge in which A_ai is tied to the total magnetic flux Φ_ai within the area enclosed by the left and top edges of the waffle and gauge wire i and matter wire a. These shifts in the Josephson energy minima can be used to implement the coupling matrix W of a CGS model. To obtain the most general W matrix, all junctions are regular Josephson junctions with the exception of those involving the top wire of the waffle. These latter junctions need to be tunable, so that the Josephson phase is that of W_1i; this can be achieved using the asymmetric DC SQUID design in [25]. Then an appropriate magnetic field should be applied, so that at the junction between the θ_i and φ_a wires a phase shift corresponding to Φ_ai is induced, reproducing the phase of W_ai. This gives a one-to-one correspondence between W matrices and flux configurations in waffles, because, excluding the first column and first row, an n × m coupling matrix contains (n−1)×(m−1) entries, and there are precisely (n−1)×(m−1) elementary plaquettes in a waffle with m gauge wires and n matter wires.
For example, the ℤ₂ triangular CGS coupling matrix (4) corresponds to the following flux configuration.
Here Φ₀ = h/2e is the flux quantum. Note that a coupling matrix with its rows reordered, or with its rows transformed according to an internal symmetry of the matter degrees of freedom, will have the same combinatorial gauge symmetry and spectrum, but in general it will correspond to a different flux configuration in the waffle realization. For example, if we swap the second and third rows of (4) and flip the sign of the second and last rows of the resulting matrix, the coupling matrix will correspond to the flux arrangement illustrated in Fig. 5(b). Similarly, while the waffle corresponding to (30) is threaded by Φ₀/3 and 2Φ₀/3 fluxes, switching the fourth and fifth rows allows us to implement it with the flux configuration shown in Fig. 5(c).
Given the total Josephson energy of a waffle that corresponds to a CGS model with W as its coupling matrix, the minimum is achieved by a phase configuration (θ_1, …, θ_q) on the gauge wires that corresponds to the ground state of the CGS Hamiltonian (compare to (20)). And as argued in [23], the phases (φ_1, …, φ_N) on the matter wires are tethered to the θ_i's in the same way that the signs of the matter spins in the ground state are fixed by the configuration of the gauge spins. This shows that the ground-state degeneracy of the waffle system corresponds exactly to that of the CGS Hamiltonian, so when the Josephson energy is dominant, the superconducting wire array will exhibit a topological phase described by the CGS Hamiltonian.
Summary and outlook
We have presented a general method for constructing a class of one- and two-body-interaction Hamiltonians with an exact gauge symmetry for any given Abelian group. The work systematically encompasses earlier sporadic examples of combinatorial gauge symmetry. We illustrated the principles and the construction for a variety of groups and lattice geometries.
The core physical concept was to build a lattice with matter and gauge degrees of freedom, which can be spins or superconducting wires. The core mathematical concept was to classify the allowed interactions between the matter and gauge degrees of freedom. This problem reduced to ensuring consistency between the interaction matrix and the action of the gauge group on the underlying quantum variables.
The Hamiltonians with the exact gauge symmetry should contain gapped topological phases for a wide range of parameters. We are hopeful that the construction here is sufficiently general that one or more of these examples can lead to realizations of topologically ordered quantum spin liquids in engineered or programmable quantum devices. At the very least, the mathematical construction here may offer a direction for further theoretical explorations.

This work was supported in part by … (work on … gauge theory) and by the NSF Grant DMR-1906325 (work on interfaced topological states in superconducting wires).
A Lattice gauge theory conventions
In this section we comment on the lattice gauge theory convention adopted in this paper and contrast it with the conventions in previous work on combinatorial gauge theory [22,23]. Briefly, in previous examples of ℤ₂ and ℤ₃ combinatorial gauge theories, the flux variables of the gauge theory are located on vertices, so the flux operators are supported on stars of a square lattice and a honeycomb lattice, respectively. Under the conventions of this paper, these constructions would correspond to a square-plaquette-based ℤ₂ flux operator and a triangular-plaquette-based ℤ₃ flux operator.
In 2D Abelian lattice gauge theories, the flux and charge variables are dual to each other. Under a transformation that maps each face of a graph to a vertex of its dual graph and vice versa, the flux variables can be interpreted as the charge variables of a theory defined on the dual graph. This means that, in terms of physical properties, it makes no difference whether the flux variables are supported on the plaquettes of a graph or on its vertices. The more popular convention is to put flux variables on plaquettes, which is also the convention adopted in this paper. See Fig. 6 for an illustration of the ℤ₂ gauge theory of Section 2. Note that the positions of the degrees of freedom and the Hamiltonian do not depend on the underlying lattice chosen.
We also note that the convention of situating flux variables on vertices of the lattice is the source of the terminology "matter" degrees of freedom. These matter variables are directly coupled to the gauge degrees of freedom, and are analogously supported on the vertices of the lattice. While strictly speaking individual "matter" variables don't carry representations of the gauge group, their Hilbert spaces may be rearranged into a direct sum of subspaces that do. Regardless of the true nature of these matter variables, in this paper we are interested in the confined phase of the gauge theory, which is most straightforwardly achieved when the matter variables are integrated out, so we need not dwell on their role in the lattice gauge theory.
B Further examples of CGS coupling matrices
Here we show two further examples of CGS coupling matrices.
For a ℤ₅ gauge theory on a triangular lattice, the following coupling matrix is constructed from the initial row (1, 1, 1). Here ζ = e^{2πi/5} is a primitive fifth root of unity. The generators of the group of gauge transformations are expressed in terms of perm{σ}, the permutation matrix corresponding to the permutation σ.
For a ℤ₂ gauge theory on a triangular lattice, using the initial row (1, 1, 1, 1, 1, 1) we can construct a 2^{6−1} × 6 = 32 × 6 CGS coupling matrix. However, since gcd(2, 6) = 2, we can reduce the number of rows of this matrix by a factor of 2, to a 16 × 6 matrix, together with a corresponding set of generators of the gauge transformations. This example corresponds to the color code [31] on a honeycomb lattice.
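To make the counting explicit, a minimal sketch is given below, assuming that the factor-of-2 reduction identifies rows that differ by an overall sign flip, as in the reduction procedure described earlier; this is an illustration, not code from the paper.

```python
from itertools import product

# All +/-1 rows of length 6 with an even number of -1 entries:
# candidate rows of the full 2^(6-1) x 6 = 32 x 6 coupling matrix.
rows = [r for r in product((1, -1), repeat=6) if r.count(-1) % 2 == 0]
assert len(rows) == 2 ** (6 - 1)  # 32 rows

# Reduction by the factor gcd(2, 6) = 2: rows related by an overall sign flip
# (which preserves the even number of -1's) are identified, keeping one
# representative per pair.
reduced, seen = [], set()
for r in rows:
    if r not in seen:
        reduced.append(r)
        seen.update({r, tuple(-x for x in r)})
print(len(reduced))  # 16 representative rows of the reduced coupling matrix
```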
C Breaking spurious degeneracies for ℤ₂ CGS theories
For a ℤ₂ CGS theory on a lattice with an odd coordination number q, the basic construction outlined in Sec. 4 gives us a 2^{q−1} × q W matrix. The rows of W consist of q-element vectors of ±1 that contain an even number of −1's. To find the ground state of this CGS Hamiltonian, we need only minimize the energy under the constraint σ = 1 (all gauge spins up), since all other flux-0 states are gauge-symmetry partners of this state, and the flux-1 states can be obtained by a global spin flip. In this case, the Hamiltonian reduces to $-\sum_a \mu_a \sum_i W_{ai}$. This is minimized when each µ_a takes the sign that makes $\mu_a \sum_i W_{ai}$ negative, which is in turn determined by the sign of $\sum_i W_{ai}$. Among the rows, there are $\binom{q}{2j}$ rows with 2j negative signs. When 2j < q/2 the sum of all entries in such a row is positive, and otherwise it is negative. Thus, in the ground state, the total µ-magnetization is $\sum_{j=0}^{\lfloor (q-1)/4 \rfloor} \binom{q}{2j} - \sum_{j=\lceil (q+1)/4 \rceil}^{(q-1)/2} \binom{q}{2j}$.
Note that the first sum is over the even-index binomial coefficients $\binom{q}{2j}$ with $2j < q/2$, and the latter sum is over the remaining even-index binomial coefficients. Since $\binom{q}{2j} = \binom{q}{q-2j}$ and q is odd, the latter sum is equivalent to a sum over the odd-index binomial coefficients with index at most (q − 1)/2. This means that we can rewrite the total µ-magnetization as $\sum_{j=0}^{(q-1)/2} (-1)^j \binom{q}{j}$.
This shows that the µ-magnetization in the CGS Hamiltonian for k = 2 and q odd is always non-zero, thus guaranteeing that a uniform field on the matter spins will serve to split the ground-state degeneracy between the flux-0 and flux-1 states.
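As a quick numerical check of this counting argument, the signed binomial sum can be evaluated directly and compared with the alternating-sum rewriting; this verification is illustrative and not part of the original derivation.

```python
from math import comb

def magnetization(q: int) -> int:
    """Total mu-magnetization: +C(q, 2j) for rows whose entries sum to a positive
    value (q - 4j > 0) and -C(q, 2j) otherwise, summed over all even 2j <= q - 1."""
    return sum(comb(q, 2 * j) * (1 if q - 4 * j > 0 else -1)
               for j in range((q - 1) // 2 + 1))

def closed_form(q: int) -> int:
    """Alternating-sum rewriting quoted above."""
    return sum((-1) ** j * comb(q, j) for j in range((q - 1) // 2 + 1))

for q in (3, 5, 7, 9, 11, 13):
    assert magnetization(q) == closed_form(q) != 0
    print(q, magnetization(q))
```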
Figure 1 :
Figure 1: Illustration of the ℤ₂ gauge symmetry on a triangular lattice. (a) A gauge transformation flips certain spins, but plaquette terms remain invariant overall, since there is always an even number of spin flips. (b) A two-body operator satisfying the combinatorial gauge symmetry corresponding to the gauge symmetry depicted in (a) requires four matter degrees of freedom. (c) Under the action of the gauge symmetry in (a), the combinatorial gauge symmetric operator in (b) stays invariant by flipping the gauge spins just as in the pure gauge theory, while also permuting the matter spins.
Figure 2 :
Figure 2: Two gauge transformations T and T′ that represent the same element of the local group of gauge transformations (magenta) for a plaquette operator (highlighted in black).
that is, their actions coincide on the support of the operator O. We define this group as the local group of gauge transformations G[O] for the operator O. (See Fig. 2 for an illustration of the relation between the group of gauge transformations and the local group of gauge transformations in the ℤ₂ gauge theory of Section 2.) For compactness, when O is the flux operator B_p, we shall write G_p instead of G[B_p].
incomplete orbit under gauge transformations might suggest that W^{4,2}_red cannot have a combinatorial gauge symmetry. Indeed, multiplying W^{4,2}_red on the right by R_{12} = diag{(−, −, +, +)} we get W
Figure 3 :
Figure 3: Illustration of the two types of operators in the Hamiltonian of Haah's code.
Figure 5 :
Figure 5: Flux-based superconducting wire array realizations. The general configuration of gauge and matter wires in a waffle and the phase shift through a Josephson junction induced by a magnetic field are depicted in (a). Correspondences between CGS coupling matrices and flux configurations are shown in (b) for the ℤ₂ CGS model on a triangular lattice and a waffle with a uniform flux Φ₂ = ½Φ₀ except for one plaquette, and (c) for the ℤ₃ CGS theory with the coupling matrix W^{3,3}_red [(1, 1, ω), (1, 1, ω)] and a waffle with a uniform flux Φ₃ = ⅓Φ₀ except for one plaquette and one junction (represented by a dotted box) that has a 2π/3 phase shift.
Figure 6 :
Figure 6: A gauge theory on a triangular lattice where the flux operators are supported on plaquettes (blue lattice) can be reinterpreted as a theory on the honeycomb lattice with flux operators supported on the vertices (yellow lattice).
Figure 7 :
Figure 7: Notations used for writing down the generators of the local group of gauge symmetries. The sites that are acted on non-trivially by type-A operators are numbered from 1 through 8 as in the diagram. The type-B operators that generate nontrivial transformations of one particular A-operator are labeled by their coordinates relative to it, in the coordinate system illustrated on the right.
D Group of local gauge symmetries of the A-type operators in Haah's code
To describe the local group of gauge symmetries G_A of a type-A operator in the Haah's code Hamiltonian, it is convenient to label the sites it acts on as in Fig. 7. The gauge symmetry generated by a type-B operator can then be written as multiplication by a matrix R. Since B is a product of Pauli x-operators, the matrices R are diagonal with ±1 along the diagonal. Thus, for A to be invariant under this transformation, there must be an even number of −1's along the diagonal. In other words, the local group of gauge symmetries must be a subgroup of the group of operations that flip an even number of the spins (Z_1, …, Z_8), which we shall denote by G. A generating set of G can be taken to be {P_{i,i+1}}, i = 1, …, 7, where each P_{i,i+1} can be represented by a diagonal matrix; this also shows that G ≅ ℤ₂⁷. Now, by showing that every element of this generating set can be expressed in terms of symmetry operations generated by type-B operators, we can prove that the local group of symmetries G_A is exactly equal to G. The generators of G and their decompositions in terms of B-operators are tabulated in Table 1. Note that there are 14 type-B operators that generate non-trivial symmetries of A, so this computation also shows that they are not all independent of each other.
Table 1 :
Generators of the group of local gauge symmetries. | 13,235.6 | 2022-12-07T00:00:00.000 | [
"Physics"
] |
Ensuring reliable operation of stranding machine for cable production through the use of intelligent technologies
The article discusses improving the reliability of cable manufacturing equipment by diagnosing its technical condition. This is achieved through intelligent systems that provide expert assessment and prediction of probable failures in operation. A built-in working environment for managing cable manufacturing equipment makes it possible to address the reliability of cable making machines with a long service life by integrating artificial intelligence elements into a system with limited resources. The solution is implemented on a stranding machine for cable production, using it as an example to consider options that provide the necessary level of reliability both of the equipment in the manufacturing process and of the finished cable product in operation.
Introduction
The use of artificial intelligence in the operation of technological systems is an urgent task focused on centralized management of production processes provided with modern high-tech equipment.
Like any modern technological process, cable production is a flexible technology aimed at the production of finished cable products of various nomenclature, which is determined by the volume of the production order, depending on the contracts concluded by the sales service for the supply of cable or wire to consumers.
The main purpose of the production service is to ensure the preparation of production (PP) for the frequent changeability of the nomenclature of cable and wire products (CWP), taking into account the high productivity of the technological equipment involved.This is achieved by purchasing highly efficient in-line cable lines built on modern information technologies capable of providing the necessary level of automation of the entire production technology as a whole.However, the cable industry contains a large fleet of obsolete technological equipment that has been put into operation for more than 10 years (Table -1), which is not able to meet the requirements of technology: to maintain complex mechanization and automation at the proper level both for modern cable production and for a single technological process of manufacturing a single production order for a cable product.The new, modern technological equipment has input x(t) and output y(t) control signals focused on working in a single (centralized) control system for the production process of manufacturing finished products, which ensures the smooth introduction of digital technologies into the cable industry.
The cable company today is a production process of mastering new types of high-quality CWP, having a high level of operational reliability, with minimizing the set deadlines for both the PP and the manufacture of the finished product, ensuring the seriality of the products with a reduced level of labor intensity of the technological process and reducing the cost of the CWP.The task set by the production workers is solved by the acquisition of modern, high-tech production complexes, which include several cable lines that perform technological operations sequentially and are linked to a single automated system with numerical control.The peculiarity of modern cable making machines is that they freely adapt to a single automated process control system (APCS) within the entire cable plant, providing accurate and maximally predictable control of the manufacturing technology of the finished cable product in the context of the fulfillment of the general production order received from the sales service.Thus, the modern production of the CWP, oriented to new technological equipment, capable of providing complex mechanization and creating an efficiently functioning production structure with unconditional provision for rapid change of the manufactured range of cable products.All this is achievable only with the availability of a high technical level of the manufacturing technology used, which is equipped with new cable manufacturing equipment that provides the finished cable product at all stages of production: high quality, operational reliability, as well as energy and resource saving in general for the considered technological cycle of production.
However, the engineering services of cable companies do not seek to dismantle cable manufacturing equipment that has a long service life for several reasons: economic (lack of financial opportunity to upgrade the entire fleet of cable making machines at a time) and industrial (the technical condition of the equipment allows to fulfill a production order, even with zero automation).In this regard, the issue of increasing the efficiency of operation of previously installed technological equipment located at a cable company with a long operational life is particularly acute (Table -1).Table-1 demonstrates that the analysis of the technological equipment fleet of JV OJSC "Uzkabel" (Uzbekistan, Tashkent) showed that 58.8% of the cable making machines involved in the production process of the CWP manufacturing has a service life of more than 10 years and requires modernization not only of the EMC but also of the automated control system, and therefore the implementation of measures for the introduction of automated control systems is ineffective on the production areas under consideration.A similar situation is developing at other cable companies in Uzbekistan with extensive production experience in the CWP manufacture.The work on the introduction of full automation of cable production is effective only if measures are taken to improve not only the EMC of the above-mentioned technological equipment, but also its control system as a whole.Against the background of rapidly developing cable technologies, such machines belong to systems with limited resources, because they work within the framework of a separate technological operation and lose greatly in terms of productivity, energy efficiency, energy and resource conservation, which causes great production difficulties in fulfilling the established production task and meeting its deadlines.An individually designed built-in control environment for each technological unit will allow achieving the solution of the task by ensuring the reliability of cable making machines with a long service life, due to the integration of artificial intelligence elements into a system with limited resources.All of the above forms one common production problem -the creation of a single production process that coordinates the operation of the entire fleet of installed cable making machines with different commissioning dates and greatly differing in technological, technical and production parameters: the volume of production, production time of cable product manufacturing, estimated machine time, etc.The solution of the task is possible not only through the introduction of intelligent control systems (ICS) focused on the expert assessment of the technological parameters of each unit of cable making machines, but also ensuring the regulation of operating parameters in combination with the ability of independent analysis, as well as the learning process based on the prediction of probable failures in the operation of technological equipment, its EMC and possible failures of installed technological modes.All of the above confirms the relevance of the chosen research direction for the production process of the CWP manufacturing.At the same time, it is necessary to develop an individual system for a cable making machine that has a long service life and ensures its unconditional integration into a single production system on a par with modern cable manufacturing equipment, through its modernization.
Research methods
To date, there are a large number of directions for the implementation of intelligent systems for various types of technological objects.The most realistic, in terms of improvement and modernization of technological equipment with a long service life, is the development and implementation of a control system using artificial neural networks of technological parameters for a single cable making machine.
A cable making machine with a long service life is dedicated to one technology (drawing, stranding, extrusion, etc.), which is carried out through the interaction of several units consisting of complex, interconnected mechanisms. Over the entire period of operation of this group of machines, repair services have carried out repeated, partial replacement of components and assemblies, which leads to a deterioration in the performance characteristics of the machine compared to the rated (passport) values (Table 3). Technical diagnostics of the equipment (the 630C 1-6-12-18-24 general stranding machine) revealed a discrepancy between the passport data and the real output parameters. It should be noted that a similar situation develops for the other units of cable making machines. Solving the issues of modernizing cable making machines with a long service life is complicated by the mismatch between such machines and the modern element base, and therefore obtaining performance characteristics close to the passport data for the machine in question is a complex engineering task. Post-repair configuration of such equipment, as well as maintenance of technological modes in strict compliance with technical regulations (regulatory and technical documentation) and production relationships, is possible only by developing an expert control system, an intelligent neural network (INN), operating for a specific cable making machine.
The considered type of INN refers to the RBF network, which has a target learning function focused on decision-making and technology management by approximating and predicting the technological parameters of both the EMC and the cable facility.
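To illustrate the kind of approximation such an RBF network performs in this setting, the sketch below fits a small radial-basis-function regressor that maps a few monitored operating quantities to a predicted technological parameter (here, billet tension). The feature names, training data, centre count and width are illustrative assumptions, not values taken from the described system.

```python
import numpy as np

def fit_rbf(X, y, centers, width):
    """Least-squares fit of RBF weights: y ~ Phi w, Phi_ij = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w

# Illustrative inputs: carriage speed [rpm], line speed [m/min], pulling force [N].
rng = np.random.default_rng(0)
X = rng.uniform([50, 5, 100], [200, 40, 600], size=(200, 3))
y = 0.4 * X[:, 2] - 2.0 * X[:, 1] + 0.05 * X[:, 0] + rng.normal(0, 5, 200)  # synthetic "tension"

centers = X[rng.choice(len(X), 20, replace=False)]  # RBF centres picked from the data
w = fit_rbf(X, y, centers, width=80.0)
print(predict_rbf(X[:5], centers, 80.0, w))          # predicted tension for new operating points
```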
The creation of a digitalization system for cable manufacturing equipment is impossible without the development of a mathematical model that shall include a large array of input x(t) and output y(t) parameters linked to the production and technological indicators of not only the upgraded machine, but the input and output parameters of other cable making machines that are interfaced in the production chain as part of the overall production order -"end-to-end vision" of production technology.Thus, the mathematical model shall be based not only on the input and output parameters of the technological process, but also on the analytical data of equipment operation and technology execution in previous periods of operation (hour, shift, quarter, year), as well as disturbing signals z(t) -changes in external conditions (ambient temperature, humidity, dustiness, vibration), which is especially important for regions with elevated ambient temperatures, especially for Uzbekistan in the summer, when the temperature in the sun reaches 55-60°C).A lot of research has been devoted to solving the problem under consideration on approaches to the formation of the INN, namely the inverse-direct (D.Psaltis) [1][2][3][4] and inverse-indirect (M.Kawato) [1][2][3][4] neural network model of the control object.The main task of integrating the INN into the control system (CS) of a technological object is to adjust for optimal control of the technological process through regulation of the main operating parameters of cable manufacturing equipment, followed by setting up the system for independent control of the entire technology of cable product manufacturing [5,6], including the transition to a new production order.In a generalized version, the INN is an automatic control system with negative feedback, developed on the basis of a controller with a "teacher" function.Fig. 1 presents a block diagram of the learning process in the process of technology implementation by cable manufacturing equipment.
The implementation of the solution to the issue has been carried out using the example of stranding machine for cable production, within which it is necessary to consider options that ensure a given level of not only the reliability of the equipment in the manufacturing technology, but also the reliability in operation of the finished cable product.
As the object of the study, a rigid stranding machine (SM) of the disc type of cable production has been selected, which strands the conductive core (CC) of a cable product from a variety of previously cast (copper or aluminum) wires stranded according to the stranding system: 1+6+12+18+24+36.
The developed mathematical model is presented as a system of differential equations in operator form (1), which describes the operation of the SM EMC while maintaining the tension of the cable billet, with the specified control parameters: M is the moment acting in the system under consideration; ω is the angular velocity of carriage rotation; v is the linear velocity of workpiece movement; P is the pulling force; Q is the tension. The block diagram of the mathematical model, compiled in accordance with (2) and shown in Fig. 2, reflects the essence of the process of force formation during CC stranding, which causes tension of the cable billet, one of the main adjustable parameters in cable production.
The integrated INN in the SM EMC shall be maximally adapted in terms of the technological and production features of the technological operation "CC stranding" [7,8].Functionally, it is a multi-level control system that is built both on optimization and regulation of critical technological parameters (step and multiplicity of stranding, stranding ratio, compaction ratio, linear speed of the workpiece, rotation speed of the carriage, pulling force, tension, etc.), and on the forecast model -a recommendation service that works in real time (24/7).So in the real operating mode, the number of controlled system parameters is 125 pcs., each of which is considered important for the operation of the equipment, because it has a tightly controlled control range, which ultimately determines the quality of the technology performed.Therefore, the main intellectual load developed by the ICS is an expert assessment of the correctness of maintaining the technology, in order to fulfill the final production task -the manufacture of high-quality cable products.
All existing expert systems (ES) are based on computer programs, realize the main goal -to exclude the human factor (the opinion of the technologist) in making decisions in technology, as a separate (narrow) area of tasks (Fig. 3).The introduction of the ICS ES into the operation of the SM EMC will increase the speed of the SM operation by increasing the operating speeds, the correct selection of equipment operating modes, the amount of technological information processed, as well as the correct choice of solutions for extreme operating modes of equipment and the exclusion of the "human factor".
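A minimal sketch of the expert-assessment idea is a rule base that flags any monitored parameter leaving its permitted control range and suggests an action; the parameter names and limits below are hypothetical and only illustrate the principle, not the actual 125-parameter rule base.

```python
# Hypothetical excerpt of the rule base: (lower limit, upper limit) per monitored parameter.
LIMITS = {
    "stranding_pitch_mm": (80.0, 120.0),
    "carriage_speed_rpm": (50.0, 200.0),
    "billet_tension_N": (150.0, 450.0),
    "line_speed_m_min": (5.0, 40.0),
}

def expert_check(readings: dict) -> list:
    """Return advisory messages for every reading outside its permitted range."""
    findings = []
    for name, value in readings.items():
        lo, hi = LIMITS[name]
        if value < lo:
            findings.append(f"{name} = {value}: below limit {lo}, raise setpoint or inspect drive")
        elif value > hi:
            findings.append(f"{name} = {value}: above limit {hi}, reduce setpoint or stop the machine")
    return findings

print(expert_check({"stranding_pitch_mm": 125.0, "billet_tension_N": 300.0,
                    "carriage_speed_rpm": 180.0, "line_speed_m_min": 4.0}))
```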
The results obtained
The production process of CWP manufacturing is considered as a system mechanism that has a very tight time frame limited by the timing of the production order, and this complicates the task of performing the operation of the CC stranding due to time constraints associated with the execution of the production order.Thus, the development of an algorithm for real-time ICS EC is a more complex task than the description of an automated control system having a static structure of equipment operation.Embedded and integrated into the general system, the ICS shall have many subsystems that are functionally connected and interfaced into a single system of interaction in terms of decision-making time and have strict limits on the amount of allocated memory, as well as execution time [8,[10][11][12].At the same time, the integrated system shall also make the necessary edits independently and coordinate the calculation time of the control signal.
In addition, the SM ICS shall be adapted to work in a common network, i.e. it is compatible with other units of equipment included in the technological process of manufacturing a cable product, both within the production area, as well as the workshop and the company [7][8][9][10][11][12].The block diagram of the integrated control system is shown in Fig. 4. The results of mathematical modeling made it possible to test the operating and technological modes of cable manufacturing equipment -stranding machine, model "630C 1-6-12-18-24".The results of modeling the transition process in the current and upgraded system are shown in Fig. 5, both for systems in various situations and controlled by conditionally defined (selected) parameters.
Conclusion
Thus, the result of the research work is the calculated operating characteristics of the technological equipment (the stranding machine), obtained by mathematical modeling, which allow us to conclude that the developed built-in INN is able to ensure the reliability of the stranding machine for cable production through the use of intelligent technologies. The proposal was implemented on a specific stranding machine for cable production; not only was the necessary level of reliability of the technological equipment obtained, but the reliability of the finished cable product was also increased.
Fig. 1 .
Fig.1.Block diagram of a generalized version of the INN:x(t) -input signal; е 0б (t)-learning error, which is the difference between the output value and the control actions u(t); y(t) -output action of the object.
Fig. 2 .
Fig.2.Block diagram of the mathematical model of stranding machine that provides tension maintenance.
Fig. 4 .
Fig. 4. Block diagram of the SM integrated control system.
Fig. 5 .
Fig. 5.The results of modeling the transition process in the current and upgraded system: y(t) -the output parameter of the system.
Table 1 .
It should be noted that the analysis of the state of the technological equipment fleet of cable companies of the Republic of Uzbekistan shows a similar situation (Table 2) in terms of technical security of the production cycle.
Table 2 .
The share of cable manufacturing equipment installed at cable companies of the Republic of Uzbekistan having different service life.
Table 3 .
The results of the technical diagnostics of the operating parameters of the 630C 1-6-12-18-24 stranding machine installed at JV OJSC "Uzkabel" (Tashkent, Republic of Uzbekistan) | 3,883.6 | 2023-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Electrospinning Nanofiber Mats with Magnetite Nanoparticles Using Various Needle-Based Techniques
Electrospinning can be used to produce nanofiber mats containing diverse nanoparticles for various purposes. Magnetic nanoparticles, such as magnetite (Fe3O4), can be introduced to produce magnetic nanofiber mats, e.g., for hyperthermia applications, but also for basic research of diluted magnetic systems. As the number of nanoparticles increases, however, the morphology and the mechanical properties of the nanofiber mats decrease, so that freestanding composite nanofiber mats with a high content of nanoparticles are hard to produce. Here we report on poly (acrylonitrile) (PAN) composite nanofiber mats, electrospun by a needle-based system, containing 50 wt% magnetite nanoparticles overall or in the shell of core–shell fibers, collected on a flat or a rotating collector. While the first nanofiber mats show an irregular morphology, the latter are quite regular and contain straight fibers without many beads or agglomerations. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) reveal agglomerations around the pure composite nanofibers and even, round core–shell fibers, the latter showing slightly increased fiber diameters. Energy dispersive X-ray spectroscopy (EDS) shows a regular distribution of the embedded magnetic nanoparticles. Dynamic mechanical analysis (DMA) reveals that mechanical properties are reduced as compared to nanofiber mats with smaller amounts of magnetic nanoparticles, but mats with 50 wt% magnetite are still freestanding.
Among the broad range of nanoparticular materials, magnetic nanoparticles can be applied to diverse purposes, either practically for electromagnetic shielding, hyperthermia therapy or as catalysts [25][26][27], or as a model system for basic research on magnetic properties of nanofibers and nanofiber composites [28][29][30].
In all these cases, the distribution of the magnetic nanoparticles and their amount are highly relevant. In many cases, homogeneously distributed nanoparticles are favored [31][32][33]. The nanoparticle distribution is especially relevant, since it defines the magnetic properties of the nanofiber mats. Single magnetic nanoparticles, or nanoparticles in sufficient distance to their neighbors, often show magnetic properties different from the bulk material or larger agglomerations of nanoparticles. Bulk magnetic materials typically have domain walls, and magnetization reversal is usually performed by domain wall nucleation and propagation [34,35]. Small nanoparticles often consist of only one domain, enabling coherent rotation of the magnetization and thus a completely different magnetization reversal process, resulting in different coercive fields and potentially different remanence [36,37]. The sizes of such single-domain nanoparticles differ, depending on the material and respective magnetic properties, but also on the nanoparticles' shapes [36,37]. Single nanoparticles, in which the magnetization can rotate freely, become superparamagnetic, i.e., the hysteresis loop is closed, and the coercive field is zero [38,39]. Both these effects, however, are strongly affected by agglomerations of nanoparticles inside a matrix, e.g., in a nanofiber [40]. Investigating the nanoparticle distribution inside a nanofiber mat is thus highly important for estimating its magnetic properties.
In contrast, for most applications, a high amount of magnetic nanoparticles inside the polymeric matrix is favorable. A previous study revealed that poly(acrylonitrile) (PAN) nanofiber mats including magnetite or nickel ferrite nanoparticles (25 wt% in the spinning solution), electrospun with a needleless machine "Nanospider Lab," resulted in the formation of large beads in which the magnetic nanoparticles agglomerated [41]. Here, we report on nanofiber mats containing 50 wt% magnetite nanoparticles in the spinning solution, electrospun with a needle-based machine, as either common fibers or core-shell fibers, on a flat or a rotating collector. Our results show that, while the nanofiber mats spun with a common needle contain large irregularities and the fibers are mostly deformed, the core-shell fibers and the corresponding nanofiber mats show a highly regular morphology without beads or large agglomerations.
Electrospinning
The electrospinning solution was produced by dissolving 13 wt% PAN (X-PAN, Dralon, Dormagen, Germany) in dimethyl sulfoxide (DMSO, min 99.9%, S3 Chemicals, Bad Oeynhausen, Germany), which was then mixed with a magnetic stirrer at room temperature for 1 h. Magnetite nanoparticles (Fe₃O₄, particle size 50-100 nm, Merck KGaA, Darmstadt, Germany) amounting to 50 wt% of the previous spinning solution were added to the solution by manual stirring, followed by ultrasonic treatment at 35 °C and a frequency of 37 kHz for 30 min. The amount of magnetite in the overall spinning solution was thus 33.3 wt%. As a reference, a PAN/magnetite spinning solution containing 20 wt% Fe₃O₄ as well as pure PAN nanofiber mats from 16% PAN in DMSO were prepared.
Electrospinning was performed by a needle-based electrospinning system (Spinbox, from Bioinicia, Paterna, Valencia, Spain), applying a voltage of max. 18 kV along a tip-collector distance of 20 cm. Besides a flat collector, a rotating collector (diameter 100 mm, 300-400 rpm) was used. The flow rate through the needle with an inner diameter of 0.6 mm was set to 10 µL/min for spinning with one solution. A coaxial needle with an inner diameter of 0.6 mm and an outer ring diameter of 0.5 mm was used to produce core-shell fibers, with flow rates of 10 µL/min for the core (13 wt% PAN) and 10 µL/min for the shell (13 wt% PAN + 50 wt% magnetite). Spinning was performed at a chamber temperature of 20-23 °C and a relative humidity of 27-34%. The samples prepared are named MF (PAN/magnetite, flat collector), MR (PAN/magnetite, rotating collector), CSF (core-shell, flat collector), CSR (core-shell, rotating collector), and R (reference, pure PAN), respectively.
Characterization
The samples' morphology was investigated by a confocal laser scanning microscope (CLSM) VK-8710 (Keyence) for large-area scans, a Zeiss Sigma 300 VP scanning electron microscope (SEM) and a FlexAFM Axiom (Nanosurf, Liestal, Switzerland) in tapping mode, using Tap190Al-G (CSR and CSF samples) and Multi 75M-G (MF and MR samples) tips. The nanoparticle distribution was measured by energy dispersive X-ray spectroscopy (EDS) with a Quantax 70 EDX unit (Bruker Nano GmbH, Berlin, Germany), attached to an SEM Hitachi TM-3000 (Hitachi High-Technologies Corporation, Tokyo, Japan). Nanofiber diameters were measured in SEM micrographs by ImageJ (version 1.53e, 2021, National Institutes of Health, Bethesda, MD, USA), using 100 fibers per sample.
An Excalibur 3100 (Varian, Inc., Palo Alto, CA, USA) with a spectral range of 4000 cm⁻¹ to 700 cm⁻¹ was used for chemical investigation by Fourier transform infrared (FTIR) spectroscopy. Data were averaged over 32 scans and corrected for atmospheric noise.
Dynamic mechanical analysis (DMA) was performed by a Q800 (TA Instruments, Eschborn, Germany), applying a preload force of 0.001 N, followed by a force ramp of 0.05 N/min until break. Measurements were performed at 23 °C, using a sample width of 5.3 mm. While mechanical investigations of nanofibrous composites are often complicated by the constraints of clamping the components [42], the nanofiber mats under investigation in this study could unambiguously be measured in this way.
Results and Discussion
For an overview of the produced nanofiber mats with a relatively large field of view, Figure 1 depicts CLSM images of the different magnetic samples as well as of a pure PAN nanofiber mat for comparison. Here, it is clearly visible that the MF and MR samples have a much more distorted and irregular morphology than the core-shell fiber samples CSF and CSR. The latter, nevertheless, have much larger fiber diameters than the pure PAN nanofiber mat. No large difference is visible between samples electrospun on the flat and on the rotating collector. Apparently, the interaction between high voltage, tip-collector distance and rotational speed of the collector is not sufficient for aligning the fibers, as it is often reported in the literature for high rotation speeds [43][44][45]. However, fiber alignment was not the aim of this study and is thus not further optimized. Proceeding from mat morphology to fiber morphology, Figure 2 depicts SEM images, taken at a magnification of 5000×. As could already be estimated from Figure 1, the single fibers in the MF and MR samples do not show the desired nanofiber morphology, but are highly agglomerated, with nanoparticles protruding from the areas between the fibers. Similar agglomerations are often seen in nanofiber mats containing magnetic nanoparticles [40,41]. One possible explanation for this effect is that the spinning solution was not homogeneous enough to enable the formation of perfect fibers. Conversely, it must be taken into account that a large amount of magnetite nanoparticles, as applied here, will significantly alter the viscosity, conductivity and surface tension of a spinning solution, so that even in case of perfectly distributed nanoparticles, the preparation of unaltered nanofibers cannot be expected.
The images of the core-shell fibers CSF and CSR, however, show the desired round nanofibers with only few agglomerations. While only the shells of these nanofibers contain a large number of magnetic nanoparticles, i.e., overall, there is a smaller amount of magnetic material per fiber length than in the MF and MR samples' fibers, the magnetic properties of such nanofibers are more dependent on the distribution of the nanoparticles than on the overall amount [40], making these fibers relevant for various applications. A look onto the nanofiber surfaces with even higher resolution is made possible by AFM, as depicted in Figure 3. Again, the MF and MR samples show significant deviations from the desired nanofiber morphology. Agglomerations of nanoparticles are visible along the fibers and between them. The core-shell fibers, in contrast, show only few nanoparticles protruding from the fibers again, underlining that these nanofiber mats have the desired morphology with only very few deviations from perfectly even, round nanofibers, as they can be produced with smaller amounts of magnetic nanoparticles [46]. The root mean square (RMS) values, giving an idea of the surface roughness, are (237 ± 110) nm (MF), (311 ± 80) nm (MR), (523 ± 82) nm (CSF), and (686 ± 62) nm (CSR), respectively. These values indicate that the nanofiber mats from core-shell fibers (CSF and CSR) are "rougher" than the simple MF and MR fibers, which is consistent with the observation in Figure 3 that there are no large pores in the MF and MR nanofiber mats, while the separated nanofibers in CSF and CSR surround large, deep pores in which the next fibers are much lower.
Taking into account the pure fibers without agglomerations, the AFM images show that the core-shell fibers have a larger diameter than the fibers of the MF and MR samples. The corresponding fiber diameter distributions are depicted in Figure 4, showing indeed a tendency towards thicker fibers for the coaxial spinning process. It should be mentioned that the fiber diameters are generally much larger than those of previously produced nanofibers from spinning solutions containing 20 wt% magnetite or nickel ferrite, with average diameters of approx. 100 nm for both magnetic nanoparticles [46].
Aside from their morphology, the nanoparticles' distribution is of utmost importance. Figure 5 thus depicts EDS maps of the magnetic samples. While the amount of carbon, indicating the polymeric part of the nanofiber mats, shows some local variations in the nanofibrous structure and the beads and agglomerations, especially visible in the MR sample (Figure 5b), the iron, indicating the magnetite nanoparticles, is well distributed, with no significant agglomerations visible. This is contrary to [41] where agglomerations in the beads and smaller amounts of magnetic nanoparticles in the fibers were found. The chemical investigation by FTIR revealed no unexpected properties of the PAN/magnetite nanofiber mats. Figure 6 depicts the typical peaks of PAN, i.e., CH 2 bending and stretching vibrations at 2938 cm −1 , 1452 cm −1 , and 1380 cm −1 , the stretching vibrations of the nitrile group at 2240 cm −1 , and the carbonyl stretching peak at 1731 cm −1 [47]. It should be mentioned that metals generally cause deviations from a flat baseline, as visible here for small wavenumbers, while the artifact around 2100 cm −1 stems from the incompletely compensated absorption of the diamond ATR crystal. As expected, no differences were visible between nanofiber mats spun on the flat and the rotating collector. Finally, DMA measurements were performed, comparing MF and MR PAN/magnetite nanofiber mats with the CSF and CSR core-shell samples as well as with a composite nanofiber mat out of 16% PAN and only 20% magnetite. Figure 7 depicts the measurement principle and exemplary results. While the smaller amount of only 20% magnetite results in a clear increase in forces-as could be expected, since fewer nanoparticles disturbed the fiber continuity-the nanofiber mats composed of core-shell fibers unexpectedly showed slightly smaller forces at break than the MF and MR PAN/50% magnetite samples. This can be attributed to the large agglomerations of nanoparticles/polymer composite material between the fibers (cf. Figure 2a,b), which reduced the fiber quality on the one hand, but resulted in a better connection between the existing fibers on the other. Most important, however, is the fact that all nanofiber mats investigated in this study were freestanding, i.e., could be separated from the substrates unambiguously. This finding shows that coaxial electrospinning, in particular, allows for producing fibers with a large number of magnetic nanoparticles, which are stable enough to be used as freestanding parts of batteries or other applications.
Conclusions
PAN/magnetite nanofiber mats were electrospun as composite fibers and as core-shell fibers with a PAN core and PAN/magnetite shell. While the first show strongly altered morphology of fibers and mats, the core-shell fibers are straight and round as pure PAN nanofibers. All nanofibers with magnetic nanoparticles have larger diameters than pure PAN nanofibers. The magnetic nanoparticles were evenly distributed in the nanofiber mats. DMA tests showed that the large number of magnetic nanoparticles reduced the mechanical properties of the nanofibers containing 50 wt% magnetite in the shell or in the whole fiber, but all magnetic nanofiber mats were still freestanding, allowing for their use as freestanding electrodes in applications such as batteries. Data Availability Statement: All data gained during this study are reported in the paper. | 3,183.6 | 2022-01-28T00:00:00.000 | [
"Materials Science"
] |
Genetic susceptibility to Candida infections
Candida spp. are medically important fungi causing severe mucosal and life-threatening invasive infections, especially in immunocompromised hosts. However, not all individuals at risk develop Candida infections, and it is believed that genetic variation plays an important role in host susceptibility. On the one hand, severe fungal infections are associated with monogenic primary immunodeficiencies such as defects in STAT1, STAT3 or CARD9, recently discovered as novel clinical entities. On the other hand, more common polymorphisms in genes of the immune system have also been associated with fungal infections such as recurrent vulvovaginal candidiasis and candidemia. The discovery of the genetic susceptibility to Candida infections can lead to a better understanding of the pathogenesis of the disease, as well as to the design of novel immunotherapeutic strategies. This review is part of the review series on host-pathogen interactions. See more reviews from this series.
When a PRR recognizes its corresponding ligand, adaptor molecules engage with the receptor. Different types of PRRs use different adaptor molecules, which transduce a signal by activating a kinase cascade, in order to induce the transcription of proinflammatory cytokines. Dectin-1 signals through Syk (Rogers et al, 2005) and caspase recruitment domain 9 (CARD9) (Gross et al, 2006). Dectin-1 can induce cytokine production independently of other receptors, as well as synergize with TLRs for an optimal stimulation of the cell. When ligands are recognized by TLRs, signals are transduced intracellularily through adaptor proteins like myeloid differentiation factor (MYD)88. Subsequently, a mitogen-activated protein kinase (MAPK) response is activated leading to the nuclear translocation of transcription factors like NF-kB and c-Jun, inducing the transcription of cytokines and chemokines (Akira et al, 2006). Interestingly, depending on the fungal burden and amount of hyphae formation a second MAPK phase, consisting of MKP1 and c-Fos activation, can be initiated, further promoting proinflammatory responses (Moyes et al, 2010).
The recognition of C. albicans by cells of the innate immune system will lead to phagocytosis (Heinsbroek et al, 2008) and killing of the invading pathogen. At the same time, the production of cytokines is induced that on the one hand activate inflammation, and on the other hand engage and direct the adaptive immune response. Activation of the caspase-1 component of the inflammasome, mediated by the intracellular activation of the NOD-like receptor NLRP3, is a central event leading to the processing of pro-IL-1b and pro-IL-18 into their respective bioactive cytokines, directing the induction of Th17 and Th1 responses, respectively (Cheng et al, 2011;Lalor et al, 2011). IFN-g production by Th1 cells, and IL-17 production by Th17 cells are important characteristics of the Candida-induced immune response (Netea et al, 2008). Inflammasome and Th17 activation is considered to be a central event for the discrimination of colonization versus invasion with C. albicans at the level of the mucosa (Gow et al, 2011).
General risk factors for Candida infections
C. albicans is an opportunistic fungal pathogen. In healthy individuals, the immune response will usually clear infections, but an immunocompromised immune system causes a significant increase in the risk for Candida infections. Das et al demonstrated that 92% of Candida bloodstream infections are preceded by a course of broad-spectrum antibiotics (Das et al, 2011), which suppress the growth of the normal bacterial flora and eliminate natural antagonism of fungal colonization of the mucosa. There are several other examples in which Candida acts as an opportunistic pathogen. For example, almost all AIDS and oncologic patients with neutropenia suffer from oropharyngeal candidiasis (Grabar et al, 2008; Viscoli et al, 1999). Furthermore, 41% of patients undergoing hematopoietic stem cell transplantation, for which the immune system is destroyed beforehand, suffer from one or more bloodstream infections within the first ten years after transplantation, 4% of which are caused by Candida spp. The crude mortality rate associated with these Candida infections is 42% (Ortega et al, 2005).
Glossary
Autosomal-dominant
Mode of inheritance in which the presence of only one copy of a gene on one of the 22 autosomal (non-sex) chromosomes will result in the phenotypic expression of that gene.
Candidemia
The presence of Candida species in the blood.
Candidiasis
Fungal infection with any of the Candida species. Includes candidemia (in case of systemic infection).
Chronic mucocutaneous candidiasis (CMC)
An immune disorder characterized by chronic infections with Candida that are limited to mucosal surfaces, skin and nails.
Genetic variation
Variations of genomes between members of species or between groups of species. Includes SNP (in case it is a common genetic variant), mutation (in case it is a rare genetic variant) and copy-number variation.
Immunocompromised
State in which the immune system is not functioning properly, increasing susceptibility to infection.
Immunodeficiency
A state in which the immune system's ability to fight infectious disease is compromised or entirely absent.
Immune paralysis
A state in which induction of tolerance is due to injection of large amounts of antigen that remains poorly metabolized.
Neutropenia
An immune disorder characterized by an abnormally low level of neutrophils.
Pathogenesis
The mechanism by which the disease is caused.
Pathogen recognition receptors (PRRs)
Proteins expressed by cells of the innate immune system, which recognize pathogen-associated molecular patterns (PAMPs) from microbial pathogens.
Polymorphism
Having multiple alleles of a gene within a population, usually linked to different phenotypes.
Single nucleotide polymorphism (SNP)
DNA sequence variation occurring when a single nucleotide in the genome differs between members of a biological species or between paired chromosomes in an individual.
Also, patients with systemic lupus erythematosus (SLE), who are treated with glucocorticoids and other immunosuppressive agents, have an increased risk for invasive fungal infections (IFI), which are predominantly caused by Candida spp. (Fan et al, 2012). Not only does a weakened immune system increase the risk for Candida infections; the extent to which individuals are colonized with pathogens also plays a significant role in the development of candidiasis. Candidiasis typically affects patients with prolonged hospitalization. Fifty-one percent of Candida bloodstream infections are associated with admission to the ICU (Das et al, 2011). The mean time of onset of systemic Candida infections is 22 days after hospitalization (Wisplinghoff et al, 2004). Furthermore, when barriers to the outside world are damaged or breached by medical devices or surgery, this creates a portal of entry for pathogens like C. albicans. For instance, major abdominal surgery poses an increased risk for systemic Candida infections, which is underlined by the observation that in a cohort of 107 patients with candidemia, 50% had undergone recent surgery (Das et al, 2011). Another factor contributing to systemic candidiasis is the fact that Candida spp. can form biofilms on many medical devices like central venous catheters (CVC), contact lenses, intrauterine devices (IUDs) (Donlan & Costerton, 2002) and pacemakers (Glöckner, 2011). Candida can even cause prosthetic joint infections, although these are considered to be rare (Springer & Chatterjee, 2012). Indeed, neonates on the intensive care unit (ICU) with a central line often suffer from infections, with Candida spp. being the third most common causative pathogen. Fortunately, this incidence is decreasing due to the use of anti-fungal prophylaxis (Chitnis et al, 2012).
Genetic risk factors for Candida infections
In spite of the important role played by these risk factors, they do not explain all Candida infections, and only a minority of individuals at risk will eventually develop a fungal infection. It is therefore believed that genetic factors must also play an important role in determining the susceptibility to Candida infections. Indeed, mutations in single genes were found to be responsible for severe Candida infections in several primary immunodeficiencies that display the clinical picture of monogenetic disorders. However, these disorders are rare, and in the majority of patients no sole causative genetic factor can be found. In most patients a combination of gene polymorphisms and/or environmental factors will determine whether a patient will develop a Candida infection. The genetic susceptibility to more common Candida infections such as recurrent vulvovaginal candidiasis (RVVC) or candidemia is likely polygenic, but understanding the genetic factors that determine it is nevertheless crucial for future immunotherapeutic approaches in these patients.
Monogenetic disorders
Several monogenetic disorders have been described in the literature to be associated with an increased susceptibility to fungal infections. Glocker et al described that a homozygous mutation in the CARD9 gene, coding for a protein downstream of Dectin-1, results in an increased susceptibility to both mucosal and invasive Candida infections (Glocker et al, 2009;Lanternier et al, 2012). Disease severity in these patients is likely explained by the fact that CARD9 is also involved in the downstream signaling of several other CLR receptors, such as Dectin-2 and Mincle (Robinson et al, 2009;Saijo et al, 2010;Strasser et al, 2012;Yamasaki et al, 2008), implying that CARD9 is a central mediator of anti-Candida host defense.
Another monogenetic disorder that results in an important primary immunodeficiency associated with Candida infections is CMC. Both autosomal recessive and autosomal dominant variants of the disease have been described. Mutations in the CC-domain of STAT1, a signaling molecule downstream of the type I and type II IFN receptors (Darnell et al, 1994), but also of the IL-23 and IL-12 receptors (as heterodimer with STAT3 or STAT4), have recently been demonstrated to be the main cause of autosomal-dominant CMC, and these findings were confirmed by several other research groups (Depner et al, 2012; Hirata et al, 2012; Liu et al, 2011; Martinez-Martinez et al, 2012; Moreira et al, 2012; Smeekens et al, 2011). In addition to STAT1 mutations, Puel et al demonstrated the presence of mutations in IL-17RA and IL-17F in some unexplained CMC cases. In contrast, patients with autosomal recessive autoimmune polyendocrinopathy candidiasis ectodermal dystrophy (APECED) not only suffer from CMC, but also experience autoimmune phenomena (Lilic, 2002). APECED has been linked to mutations in the autoimmune regulator (AIRE) gene (Björses et al, 1998) that result in a loss-of-function phenotype, causing the production of neutralizing autoantibodies against important cytokines with antifungal properties such as IL-17E, IL-17F and IL-22 (Puel et al, 2010).
Another monogenetic defect resulting in a primary immunodeficiency syndrome associated with Candida infections of the skin is hyper-IgE syndrome (HIES). HIES was first described as Job's syndrome and is characterized by high serum IgE levels, eczema, recurrent mucosal infections with C. albicans, and skin and pulmonary infections with Staphylococcus aureus (Davis et al, 1966). There are a number of mutations known to be associated with HIES. Several mutations have been found in STAT3 (Holland et al, 2007;Minegishi et al, 2007), a signaling molecule downstream of the IL-23 receptor, resulting in absent IL-17 production (de Beaucoudrey et al, 2008;Ma et al, 2008;Milner et al, 2008;Sharfe et al, 1997). Other genes which have been associated with HIES include dedicator of cytokinesis (DOCK)8 that codes for a protein involved in Th17 polarization (Engelhardt et al, 2009) and TYK2 (Minegishi et al, 2006), coding for a Janus kinase (JAK) downstream of the IL-12 receptor (Shimoda et al, 2000). All in all, defective Th17 responses underlie both CMC and HIES, two immunodeficiencies associated with severe, chronic, mucosal Candida infections. This emphasizes the importance of the Th17 response in mucosal Candida immunity.
Mutations in genes coding for cytokines and their receptors have also been described to be associated with Candida infections. For example, IL-12Rβ1 deficiency has been linked to mucocutaneous Candida infections, and these patients also have increased susceptibility for invasive candidiasis (Rodríguez-Gallego et al, 2012). Sharfe et al described a patient with a deletion in the CD25 gene, suffering from esophageal candidiasis. CD25 is the α-subunit of the IL-2 receptor, which is constitutively expressed on T regulatory cells (Sakaguchi et al, 1995). Furthermore, IL-2 is involved in the differentiation of effector T cells. Although Sharfe et al only described a single patient, this again emphasizes the importance of T cells in the anti-Candida host response. A complete overview of monogenetic disorders causing fungal infections is depicted in Table 1 and Fig 1.
Common genetic variants and susceptibility to Candida infections
Despite the existence of primary immunodeficiency syndromes with fungal infections, the vast majority of fungal infections do not occur in these individuals but are common diseases with a polygenic pattern of increased susceptibility. Several studies have been published showing a link between genetic variation and an increased risk for Candida infections, with different genetic patterns being discerned between mucosal and systemic candidiasis. An example of this dichotomy is the role of a Dectin-1 polymorphism for susceptibility to mucosal, but not systemic, candidiasis. We have recently described a family whose members suffered from RVVC and onychomycosis. Their symptoms could be explained by an early stop codon in Dectin-1 (Y238X) that resulted in defective β-glucan recognition and Th17 responses. Interestingly, this polymorphism is present in up to 8% of Europeans and up to 40% of some sub-Saharan African populations, and it is associated with mucosal Candida colonization and treatment in haematopoietic stem cell transplantation patients, but not with systemic candidiasis.
Genetic variation localized in other PRRs, such as the TLRs, has also been associated with an increased susceptibility to fungal infections. Three single nucleotide polymorphisms (SNPs) in the TLR1 gene have been shown to influence susceptibility to candidemia, presumably mediated by decreased levels of IL-8 and IFN-γ. However, these findings need to be replicated in independent studies, and it is unclear which component of Candida is recognized by TLR1. A similar observation has been made for TLR2 and TLR4, which recognize phospholipomannans and O-linked mannans, respectively. The R753Q TLR2 polymorphism increased the risk for candidemia in one small study through decreased IFN-γ and IL-8 levels (Woehrle et al, 2008), and two SNPs in the TLR4 gene were shown to be a risk factor for candidemia through increased IL-10 production (Van der Graaf et al, 2006), but these observations were not replicated in a larger study of patients. Nahum et al suggested that the L412F TLR3 polymorphism increases the risk for CMC, an effect mediated by decreased IFN-γ production (Nahum et al, 2011). Furthermore, a variable number of tandem repeats in the MBL2 gene, which codes for the soluble PRR MBL, has been linked to RVVC in two separate studies (Babula et al, 2003; Giraldo et al, 2007). Finally, length polymorphisms in the NLRP3 gene, coding for the receptor subunit of the NLRP3 inflammasome, can increase the risk for RVVC (Lev-Sagie et al, 2009).
In addition to the first step of pathogen recognition, genetic variation in several cytokines has been linked to an increased risk for Candida infections. Choi et al demonstrated that the −1089T/G, −589C/T and −33C/T polymorphisms in IL-4 are associated with chronic disseminated candidiasis (Choi et al, 2003). Interestingly, the −589T/C SNP has also been demonstrated to pose a risk for RVVC (Babula et al, 2005). The −1082A/G polymorphism in the anti-inflammatory cytokine gene IL-10 and the 274INS/DEL polymorphism in IL-12b are associated with persisting candidemia. These data strongly suggest that the balance between pro- and anti-inflammatory cytokines represents an important component of host defense against both mucosal and systemic candidiasis. The −44C/G polymorphism in DEFB1, coding for beta-defensin 1, is correlated with increased Candida carriage (Jurevic et al, 2003). The exact underlying mechanism is unclear, but in general beta-defensins are secreted by neutrophils and epithelial cells and contribute to epithelial immunity. The R620W polymorphism in PTPN22, a protein involved in T-cell and B-cell receptor signaling, was suggested to be associated with an increased risk for CMC, although the potential mechanism of this association is unclear (Nahum et al, 2008). A complete overview of common genetic variants associated with fungal infection is depicted in Table 2.
Future developments
The current body of evidence has provided many new insights into the working mechanisms of the anti-Candida immune response. These new insights can pinpoint novel potential targets for immunotherapy. For example, several studies have demonstrated a correlation between decreased IFN-γ levels and an increased risk for systemic candidiasis (Woehrle et al, 2008). This suggests that IFN-γ is a promising treatment option in sepsis-induced immune paralysis, and it would be very relevant to try and reverse the immune paralysis in this way; a double-blind, randomized, placebo-controlled study using adjuvant IFN-γ therapy in sepsis is currently being performed (Leentjens et al, 2012). We are currently investigating the efficacy of recombinant IFN-γ in patients with Candida sepsis.
Despite the significant progress of the last few years in uncovering the factors underlying susceptibility to fungal infections, there is still a significant number of Candida infections for which the environmental and/or genetic risk factors have not yet been deciphered. Even more importantly, in spite of current treatment regimens, mortality rates associated with systemic infections are still very high, and in order to improve diagnostic and treatment options, future efforts should be directed towards gaining more insight into the anti-Candida host immune response. This can be achieved in several ways. Discovering novel mutations that underlie monogenetic disorders associated with Candida infections can generate crucial information about a particular gene or protein, and the pathway in which this protein is involved. For example, the use of next generation sequencing and whole exome sequencing to discover STAT1 mutations as a cause of CMC has also advanced the understanding of its role in the generation of Th1 and Th17 responses and the anti-Candida host defense. This discovery can lead to novel approaches to the therapy of CMC, some of which are currently being tested.
Of course, the list of existing monogenetic disorders is relatively small, as the majority of Candida cases are likely polygenic and/or multifactorial. In order to investigate these types of disorders, other methods will have to be employed, such as genome-wide association studies (GWAS), deep sequencing, and systems biology. We have recently used a combination of transcriptional analysis and functional genomics to demonstrate that type I IFNs play an important role in the anti-Candida host defense (Smeekens et al, 2013). Stimulation of circulating leukocytes with C. albicans led to a transcription profile with overrepresentation of genes from the type I IFN pathway. Subsequently, we showed that polymorphisms in these genes modify Candida-induced cytokine production and influence susceptibility to systemic Candida infections. Furthermore, validation studies showed that type I IFNs skew Candida-induced cytokine responses from Th17 toward Th1, while STAT1-deficient CMC patients display defective expression of genes in the type I IFN pathway. This 'systems approach', which integrates the information on anti-Candida host defense from several types of studies, provides information with respect to potential novel anti-Candida immune responses that may represent targets for immunotherapy. It is to be expected that an integration of efforts from immunology, genetics, microbiology and systems biology will bring a new level of understanding of host defense against fungal (and other) pathogens, improving the outcome of these severe infections.
The authors declare that they have no conflict of interest.
Baryon transport and the QCD critical point
Fireballs created in relativistic heavy-ion collisions at different beam energies have been argued to follow different trajectories in the QCD phase diagram in which the QCD critical point serves as a landmark. Using a (1+1)-dimensional model setting with transverse homogeneity, we study the complexities introduced by the fact that the evolution history of each fireball cannot be characterized by a single trajectory but rather covers an entire swath of the phase diagram, with the finally emitted hadron spectra integrating over contributions from many different trajectories. Studying the phase diagram trajectories of fluid cells at different space-time rapidities, we explore how baryon diffusion shuffles them around, and how they are affected by critical dynamics near the QCD critical point. We find a striking insensitivity of baryon diffusion to critical effects. Its origins are analyzed and possible implications discussed.
I. INTRODUCTION
One of the primary goals of nuclear physics [1] is studying the phase diagram of Quantum Chromodynamics (QCD), which is generally mapped onto a plane spanned by the temperature T and baryon chemical potential µ axes [2]. First-principles calculations from Lattice QCD show that, at zero µ, the phase transition from a deconfined quark-gluon plasma (QGP) phase at high temperature to a confined hadron resonance gas (HRG) phase at low temperature is a rapid but smooth crossover [3][4][5][6]. At large µ, calculations of the phase transition using Lattice QCD are not yet available, since there the standard techniques suffer from the "sign problem" [7,8]. Nevertheless, theoretical models indicate that at large chemical potential the phase transition is first order [2], and this implies that a critical point exists at non-zero chemical potential [9,10], at the end of the first-order phase transition line. Confirming the existence and finding the location of the hypothetical QCD critical point have attracted a tremendous amount of attention over the last two decades [11,12].
Heavy-ion collisions are the main method to tackle these unsolved problems [9][10][11][12][13][14][15][16][17][18][19][20][21][22]. Such collisions have been carried out at different experimental facilities, such as the Large Hadron Collider (LHC) at CERN and the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory, at various beam energies, and large sets of data have been accumulated. One of the most promising signatures of the QCD critical point is a non-monotonic beam energy dependence of higher-order cumulants of the fluctuations in the net proton production yields [9,10,16,17,19,20]. This is based on the idea that these observables are more sensitive to the correlation length of fluctuations of the chiral order parameter which, in the thermodynamic limit, diverges at the critical point [23]. Fireballs created in heavy-ion collisions at different beam energies should freeze out with correlation lengths that depend non-monotonically on the collision energy, and this should be reflected in the net baryon cumulants. Strongly motivated by this, a Beam Energy Scan (BES) program has been carried out at RHIC during the last decade. During a first campaign that ended in 2011 (BES-I), Au-Au collisions were studied at collision energies √s_NN from 200 GeV down to 7.7 GeV [12,24,25]. A second campaign, BES-II, with significantly increased beam luminosity is expected to be completed this year, after having explored collision energies down to √s_NN = 3.0 GeV in fixed-target mode. Additional experiments at even lower beam energies, probing the phase diagram in regions with even higher baryon chemical potential, are planned at the newly constructed FAIR and NICA facilities. Unfortunately, the dynamical nature of the fireballs created in heavy-ion collisions renders attempts to confirm the above static equilibrium considerations experimentally anything but straightforward. Within their short lifetimes of several dozen yoctoseconds the fireballs' energy density decreases rapidly by collective expansion, from initially hundreds of GeV/fm³ to well below 1 GeV/fm³ at final freeze-out (see e.g. Refs. [11,26]). The rapid dynamical evolution of the thermodynamic environment keeps the latter permanently out of thermal equilibrium such that critical fluctuations never reach their thermodynamic equilibrium distributions. In addition, in those parts of the fireball which pass through the quark-hadron phase transition close to the QCD critical point, the dynamics of critical fluctuations is affected by "critical slowing down" [27]. This is both a curse and a blessing: If critical fluctuations relaxed quickly to thermal equilibrium, all memory of critical dynamics might have been erased from the hadronic freeze-out distributions by the time the hadron yields and momenta decouple. If, on the other hand, the dynamical evolution of fluctuations is slowed in the vicinity of the critical point, some signals of critical dynamics may survive until freeze-out, but they will then most definitely not feature their equilibrium characteristics near the critical point [27].
Thus, to confirm or exclude the critical point via systematic model-data comparison, reliable dynamical simulations of off-equilibrium critical fluctuations and the associated final particle cumulants, on top of a well-constrained comprehensive dynamical description of the bulk medium at various beam energies, are indispensable [28][29][30][31][32][33][34][35][36][37][38][39]. Recently, the Hydro+/++ framework [34,40] was developed for incorporating off-equilibrium fluctuations and critical slowing-down into hydrodynamic simulations, and some practical progress within simplified settings has since been made using this framework [21,22]. On the other hand, while a fully developed and calibrated (2+1)-dimensional multi-stage description of heavy-ion collisions (including initial conditions + pre-hydrodynamic dynamics + viscous hydrodynamics + hadronic afterburner) exists (see, e.g., the most recent versions described in [41][42][43][44]) and has met great phenomenological success at top RHIC and LHC energies [11,12,26], such a comprehensive and fully validated framework is still missing for collisions at the lower BES energies.
Compared to high-energy collisions at LHC and top RHIC energies, collisions at BES energies introduce a number of additional complications [45,46]. These include (i) a much more complex, intrinsically (3+1)-dimensional and temporally extended nuclear interpenetration stage and its associated dynamical deposition of energy and baryon number [47][48][49][50][51], (ii) the need to account for and properly propagate conserved charge currents for baryon number and strangeness [51][52][53][54][55][56][57], (iii) a consistent treatment of singularities in the thermodynamic properties associated with the critical point [23,58], (iv) the aforementioned off-equilibrium nature of critical dynamics [21,22,34,38,40,59], and (v) the dynamical effects of nucleation and spinodal decomposition in the metastable and unstable regions associated with the first-order phase transition [60][61][62][63]. The situation is made even more complicated by the back-reaction of the non-equilibrium critical fluctuation dynamics on the bulk evolution of the medium. This back-reaction causes a potential dilemma: On the one hand, locating the critical point requires reliable calculations at various beam energies of critical fluctuations on top of a well-constrained bulk evolution of the fireball medium; on the other hand, the back-reaction of the off-equilibrium critical fluctuations from a critical point whose location is yet to be determined onto the medium evolution might interfere with the calibration of the latter and turn it into an impossibly complex iterative procedure whose convergence cannot be guaranteed.
Guidance on how (and perhaps even whether) to incorporate critical effects when constraining the bulk dynamics is direly needed, not least since dynamical simulations for low beam energies are computationally very expensive. Some effects on the bulk medium evolution arising from singularities in the thermodynamic properties of the QCD matter [23,58] have been explored by adding a critical point to the QCD Equation of State (EoS) [53,64-67] and/or by explicitly including critical scaling of its transport coefficients, with special attention to the bulk viscous pressure, since critical fluctuations of the chiral order parameter, which couples to the baryon mass, can be directly related to a peak of the bulk viscosity near the critical point [68-71]. The authors of [72] showed that critical effects on the bulk viscous pressure have non-negligible phenomenological consequences for the rapidity distributions of hadronic particle yields, implying that critical effects might indeed play an important role in the calibration of the bulk medium.
In a similar spirit we study here critical effects on the bulk evolution in the baryon sector, by including the critical scaling of the relaxation time for the baryon diffusion current and of the baryon diffusion coefficient, as well as (in a simplified treatment) the critical contribution to the EoS [64,65,67]. We study the phenomenological consequences of baryon diffusion in a system both away from and close to the QCD critical point; the former is essential for modeling heavy-ion collisions at the high end of the BES collision energy range [51][52][53][54][55][56][57]. Including only the baryon diffusion while neglecting other dissipative effects helps us to study its hydrodynamical consequences in isolation. A more comprehensive study including all dissipative effects simultaneously is left for future work.
This paper is organized as follows. We discuss the hydrodynamic formalism with non-zero baryon diffusion current in Sec. II. In Sec. III, we illustrate our setup near the QCD critical point, particularly by discussing the critical behavior of thermodynamic and transport coefficients for the hydrodynamic evolution. After completing the setup of the framework in Sec. IV we discuss results for a fireball created at low beam energies in Sec. V, with a focus on baryon diffusion current effects, first away from (Sec. V A) and then close to (Sec. V C) the critical point, followed by a discussion of general features of the time evolution of the diffusion current in Sec. V D. We summarize results and draw conclusions in Sec. VI. In the Appendices, we discuss causality near the QCD critical point in App. A, estimate the size of the critical region in App. B and, finally, validate the numerical methods used in this work in App. C.
Throughout this article we use natural units in which factors of ħ, c, and k_B are not explicitly exhibited but implied by dimensional analysis. We also use Milne coordinates, x^µ = (τ, x, y, η_s), where τ and η_s are the (longitudinal) proper time and space-time rapidity, respectively, and are related to the Cartesian coordinates via t = τ cosh η_s, z = τ sinh η_s. We employ the mostly-minus convention for the metric tensor, g^µν = diag(+1, −1, −1, −1/τ²).
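To make the coordinate conventions concrete, the following minimal Python sketch (not part of the original paper; function names are illustrative only) converts between Cartesian and Milne coordinates and checks the defining identity t² − z² = τ².

```python
import numpy as np

def milne_to_cartesian(tau, eta_s):
    """Convert (tau, eta_s) to (t, z): t = tau*cosh(eta_s), z = tau*sinh(eta_s)."""
    return tau * np.cosh(eta_s), tau * np.sinh(eta_s)

def cartesian_to_milne(t, z):
    """Convert (t, z) with t > |z| to (tau, eta_s)."""
    tau = np.sqrt(t**2 - z**2)
    eta_s = 0.5 * np.log((t + z) / (t - z))  # = artanh(z/t)
    return tau, eta_s

t, z = milne_to_cartesian(5.5, 1.2)          # tau in fm/c, eta_s dimensionless
tau, eta_s = cartesian_to_milne(t, z)
assert np.isclose(tau, 5.5) and np.isclose(eta_s, 1.2)
```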
II. VISCOUS HYDRODYNAMICS WITH BARYON DIFFUSION
In this section we discuss the viscous hydrodynamic framework including baryon diffusion. Hydrodynamics is an effective theory for describing long-wavelength degrees of freedom, which, from a macroscopic viewpoint, can be formulated by conservation laws of hydrodynamic variables that are ensemble-averaged at certain coarse-grained scales [26,73]. In heavy-ion collisions, the conserved quantities are energy, momentum, and various charges, including net baryon charge, electric charge and strangeness. In this work, we only study the net baryon charge; the extension to incorporate other conserved charges is conceptually straightforward [56] and will be studied elsewhere. The conservation equations for energy-momentum and net baryon charge are formulated covariantly as
$$ d_\mu T^{\mu\nu} = 0, \qquad d_\mu N^\mu = 0, \qquad (1) $$
where d_µ is the covariant derivative in an arbitrary coordinate system (Milne coordinates here), and T^µν and N^µ are the energy-momentum tensor and (net) baryon current, respectively. In a given arbitrary reference frame, they can be decomposed into ideal and dissipative parts as
$$ T^{\mu\nu} = T^{\mu\nu}_{\rm id} + \Pi^{\mu\nu}, \qquad N^\mu = N^\mu_{\rm id} + n^\mu, $$
where T^µν_id and N^µ_id are the ideal parts which are well defined in local equilibrium, while Π^µν and n^µ are the dissipative components describing the deviations from local equilibrium. The former can be expressed as
$$ T^{\mu\nu}_{\rm id} = e\,u^\mu u^\nu - p\,\Delta^{\mu\nu}, \qquad N^\mu_{\rm id} = n\,u^\mu, $$
where e and n are the energy density and baryon density in the local rest frame, u^µ is the four-velocity of the fluid element (normalized as u² = 1), p is the pressure given by the EoS, p = p(e, n), and ∆^µν ≡ g^µν − u^µ u^ν. The local rest frame in this work is chosen as the Landau frame, specified by the Landau matching conditions [74], which imply u_µ n^µ = 0 and u_µ Π^µν = 0. The dissipative term Π^µν in the energy-momentum tensor can be written as Π^µν = −Π ∆^µν + π^µν, where Π is the bulk viscous pressure and π^µν the shear stress tensor. The dissipative term n^µ in the net baryon current describes its non-zero spatial components in the local rest frame of the fluid. The evolution of these dissipative terms is governed by both microscopic and macroscopic physics, and thus their equations of motion cannot be obtained directly from conservation laws. We use the evolution equations from the Denicol-Niemi-Molnar-Rischke (DNMR) theory [75,76], which uses the method of moments of the Boltzmann equation. In this work, to clearly isolate the effects of the baryon diffusion current n^µ, we shall ignore the dissipative effects from π^µν and Π, focusing only on n^µ.
The equation of motion for n^µ from the DNMR theory is an Israel-Stewart type equation,
$$ \tau_n\,\dot n^{\langle\mu\rangle} + n^\mu = \kappa_n \nabla^\mu \alpha + J^\mu, \qquad (5) $$
where ṅ^{⟨µ⟩} ≡ ∆^µ_ν ṅ^ν (the overdot denotes the covariant time derivative D ≡ u^µ d_µ), κ_n is the baryon diffusion coefficient (also referred to as the baryon conductivity), α ≡ µ/T is the chemical potential µ in units of the temperature T, and τ_n is the relaxation time, on whose scale the baryon current relaxes towards its Navier-Stokes limit
$$ n^\mu_{\rm NS} = \kappa_n \nabla^\mu \alpha; $$
here ∇^µ ≡ ∆^µν d_ν is the spatial gradient in the local rest frame. The term J^µ contains higher-order gradient contributions [75,76]. Rewriting Eq. (5) as a relaxation equation,
$$ \dot n^{\langle\mu\rangle} = -\frac{1}{\tau_n}\left(n^\mu - n^\mu_{\rm NS}\right) + \frac{J^\mu}{\tau_n}, \qquad (7) $$
shows that ∇^µα is the driving force for baryon diffusion while κ_n controls the strength of the baryon diffusion flux arising in response to this force; τ_n characterizes the response time scale. We note that both τ_n and κ_n depend on the microscopic properties of the medium, which have been calculated in various theoretical frameworks, including kinetic theory [52] and holographic models [77,78]. In principle, they can also be constrained phenomenologically by data-driven model inference, but as of today such studies are still very limited for baryon evolution. Rewriting Eq. (5) more specifically for baryon diffusion, we arrive at
$$ u^\lambda \partial_\lambda n^\mu = \frac{1}{\tau_n}\left(\kappa_n \nabla^\mu\alpha - n^\mu\right) - \frac{\delta_{nn}}{\tau_n}\, n^\mu \theta - u^\mu n_\nu \dot u^\nu - u^\lambda \Gamma^\mu_{\lambda\nu} n^\nu, \qquad (8) $$
where θ ≡ d·u, the n^µθ-term arises from J^µ (the only term we keep from J^µ as given in [54]), δ_nn is the associated transport coefficient, and the last two terms come from rewriting ṅ^{⟨µ⟩} explicitly, with Γ^µ_αβ being the Christoffel symbols. Eq. (8) is the equation we use to evolve the baryon diffusion current in this work. We remark that the Navier-Stokes limit of the baryon diffusion current can also be rewritten in terms of density and temperature gradients, with two coefficients D_B and D_T; here χ ≡ (∂n/∂µ)_T is the isothermal susceptibility, and w = e + p is the enthalpy density. We note that the gradient expansion is not unique, and writing it in different ways can be used to explore individual contributions separately (see App. C 2). We will discuss the benefits of decomposing the Navier-Stokes limit as in Eq. (9), where critical singularities manifest themselves, in Sec. III. For later convenience in discussing critical behavior, we also introduce the heat diffusion coefficient
$$ D_p \equiv \frac{\lambda_T}{c_p}, $$
where c_p ≡ nT(∂m/∂T)_p is the specific heat, with m ≡ s/n the entropy per baryon [34,79]; λ_T is the thermal conductivity, which can be related to the baryon diffusion coefficient κ_n by
$$ \lambda_T = \left(\frac{w}{nT}\right)^{2} \kappa_n. $$
Using this relation one can also relate the heat diffusion coefficient D_p to D_B. The above system of hydrodynamic equations is closed by the EoS, either in the form p(e, n) or, equivalently, through the pair of relations µ(e, n), T(e, n). Again, the EoS is controlled by microscopic physics. We discuss the EoS used in this study in Sec. IV below.
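The relaxation structure of Eq. (7) can be illustrated with a deliberately simplified toy: a single component n relaxing toward a fixed Navier-Stokes target n_NS = κ_n ∇α on the time scale τ_n. This is only a sketch of the relaxation idea; it ignores the θ-term, the Christoffel terms, and the coupling to the evolving background, and all numbers are placeholders.

```python
import numpy as np

def relax_toward_navier_stokes(n0, kappa_n, grad_alpha, tau_n, dt, n_steps):
    """Euler integration of dn/dt = -(n - kappa_n*grad_alpha)/tau_n."""
    n = n0
    history = [n]
    for _ in range(n_steps):
        n_NS = kappa_n * grad_alpha          # Navier-Stokes target
        n += dt * (-(n - n_NS) / tau_n)      # Israel-Stewart-type relaxation
        history.append(n)
    return np.array(history)

# Illustrative numbers: start from n = 0 and relax toward n_NS on the scale tau_n.
traj = relax_toward_navier_stokes(n0=0.0, kappa_n=0.02, grad_alpha=0.5,
                                  tau_n=0.6, dt=0.01, n_steps=500)
print(traj[-1], 0.02 * 0.5)   # final value approaches kappa_n * grad_alpha
```

For constant coefficients the numerical solution approaches the target exponentially with rate 1/τ_n, which is the qualitative behavior encoded in the full Eq. (8).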
III. CRITICAL BEHAVIOR
In this section we focus on the effects of the QCD critical point on baryon transport in a relativistic QCD fluid. Critical phenomena, albeit ubiquitous, can be classified into certain universality classes determined by the effective degrees of freedom and symmetries of the system. It has been well argued that the QCD critical point belongs to the static universality class of the 3-dimensional Ising model [13,14], and the dynamical universality class of Model H in the Hohenberg-Halperin classification [23,79]. It is also believed that the critical point, if it exists, is beyond the reach of first-principles approaches such as Lattice QCD, and consequently little else is known beyond its universality classification.
Much work has been done to lift the fog from the physics lurking beyond the above universality argument. First, a robust construction of a family of EoS exhibiting the appropriate universal critical properties and matching to existing Lattice QCD calculations has been proposed [65,67]. The EoS encodes the microscopic properties of the QCD matter and is indispensable for solving the macroscopic hydrodynamic equations. Second, a hydrodynamic framework incorporating fluctuations and critical slowing-down has been established, in order to overcome the breakdown of hydrodynamics near the critical point. A deterministic framework, known as hydrokinetic theory, extends conventional hydrodynamics by consistently including fluctuations as additional dynamic degrees of freedom (modes) [80][81][82][83]. The feedback of fluctuating modes renormalizes the bare hydrodynamic variables and gives rise to a delayed response in the form of so-called "long-time tails". The hydro-kinetic approach can be implemented in the critical regime where the fluctuating modes relax to equilibrium on parametrically long time scales [34,40] (see also the reviews [46,84]).
Of course, the inclusion of fluctuations does not by itself cure pathological issues such as acausality or instability of the underlying hydrodynamic framework that arise when straightforwardly extending the nonrelativistic Navier-Stokes equations into the relativistic domain [85,86]. The most-widely used resolution of these issues follows the approach pioneered in Ref. [87], by elevating the dissipative components of the energymomentum tensor to dynamical degrees of freedom subject to their own relaxation-type equations (for which Eq. (7) for the baryon diffusion current is an example). In this approach the causality and stability conditions can be shown to continue to hold in the proximity of the critical point (see App. A).
In the following subsections we discuss how we implement the static and dynamic universal critical behavior, using a simplified setup. Possible future improvements using a more realistic implementation will be discussed in the conclusions (Sec. VI).
A. Implementation of static critical behavior
One significant feature of critical phenomena is that, when the system approaches a critical point adiabatically, the equilibrium correlation length, which is typically microscopically small, becomes macroscopically large and eventually diverges. With the purpose of identifying qualitative signatures of a critical point, we characterize all equilibrium quantities exhibiting critical behavior in terms of their parametric dependence on the correlation length. We parametrize the correlation length by an interpolating function ξ(µ, T) (Eq. (13)). Here ξ_0(µ, T) is the non-critical correlation length (measured far away from the critical point) while ξ_max is an infrared cutoff regulating the divergence at the critical point by implementing a maximum value for the correlation length. The crossover between the critical and non-critical regimes is characterized by a hyperbolic tangent. In this expression (T_c, µ_c) is the location of the critical point, and ∆µ and ∆T characterize the extent of the critical region along the µ and T axes of the phase diagram; α_1 is the angle between the crossover line (the h = 0 axis of the Ising model) and the negative µ axis (see Fig. 1); ν = 2/3, β = 1/3 and δ = 5 approximate the critical exponents of the 3-dimensional Ising universality class [13,14]. Eq. (13) is designed to ensure, among other properties, that ξ = ξ_0 when |µ − µ_c| ≫ ∆µ and/or |T − T_c| ≫ ∆T.
To limit the number of free parameters, we ignore the T and µ dependence of the non-critical correlation length ξ_0 and parametrize the crossover line as [88]
$$ T(\mu) = T_0\left[1 - \kappa_2\left(\frac{\mu}{T_0}\right)^{2}\right], \qquad (14) $$
where T_0 = 155 MeV and κ_2 = 0.0149 are the transition temperature at µ = 0 and the curvature of the transition line T(µ), respectively. The location of the critical point (T_c, µ_c) is assumed to lie on the crossover line [65]; as a consequence T_c = T(µ_c), and α_1 is fixed by the slope of the crossover line at µ_c, so that T_c and α_1 are determined once µ_c is provided. Based on the above discussion we fix the remaining parameter values and visualize the correlation length as a function of (µ, T) in Fig. 2.
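As a small numerical illustration of how T_c and α_1 follow from a chosen µ_c, the sketch below evaluates the crossover line of Eq. (14) and its local slope. The value µ_c = 250 MeV is purely a placeholder and is not the value used in the paper.

```python
import numpy as np

T0 = 0.155       # GeV, transition temperature at mu = 0
kappa2 = 0.0149  # curvature of the crossover line

def T_crossover(mu):
    """Crossover line T(mu) = T0 * (1 - kappa2 * (mu/T0)**2), Eq. (14)."""
    return T0 * (1.0 - kappa2 * (mu / T0) ** 2)

def critical_point(mu_c):
    """Place the critical point on the crossover line; return (T_c, alpha_1)."""
    T_c = T_crossover(mu_c)
    dT_dmu = -2.0 * kappa2 * mu_c / T0      # slope of the crossover line at mu_c
    alpha_1 = np.arctan(abs(dT_dmu))        # angle w.r.t. the (negative) mu axis
    return T_c, alpha_1

T_c, alpha_1 = critical_point(mu_c=0.250)   # placeholder mu_c = 250 MeV
print(f"T_c = {T_c*1e3:.1f} MeV, alpha_1 = {np.degrees(alpha_1):.2f} deg")
```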
Several comments are in order: First, our parametrization of the correlation length applies to the crossover region in the left part of the QCD T -µ phase diagram, at µ < µ c , and not to the presumed first-order phase transition at µ > µ c where the theoretical description is complicated by possible phase coexistence and metastability [63]. This suggests choosing the collision beam energy sufficiently high to avoid the latter situation, but not too high to be far from the critical point. Motivated by experimental hints [89] and earlier theoretical studies [52,72] we here set √ s NN = 19.6 GeV. Second, although the correlation length diverges in the thermodynamic limit, heavy-ion collisions create small, rapidly expanding QGP droplets in which finite-size and finitetime effects as well as the critical slowing down [10,27] prevent the correlation length from growing to infinity. A robust estimate for the largest correlation length the system might achieve in this dynamical environment is about 3 fm [27]. The system will thus never get close to our static infrared cutoff ξ max = 10 fm, and our final predictions turn out not to be sensitive to the precise value of this cutoff. Once all thermodynamic quantities and transport coefficients (introduced in the following subsection) are parametrized in terms of ξ as given in Eq. (13), they are defined in both the non-critical and critical regions and thus ready for use in dynamical simulations describing the trajectory of the QGP fireball through the phase diagram. For economy we include in the following discussion not only the dynamic (transport) coefficients but also the thermal susceptibility χ and the specific heat c p which are static (thermodynamic) coefficients.
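Because the explicit form of Eq. (13) is not reproduced above, the following is only an illustrative stand-in: a tanh-regulated interpolation between ξ_0 and ξ_max that is largest at (µ_c, T_c) and relaxes to ξ_0 outside the window set by ∆µ and ∆T. It is emphatically not the parametrization used in the paper, and all parameter values are placeholders.

```python
import numpy as np

def xi_illustrative(mu, T, mu_c=0.250, T_c=0.149, dmu=0.10, dT=0.02,
                    xi0=0.5, xi_max=10.0):
    """Toy correlation length (fm): enhanced near (mu_c, T_c), -> xi0 far away.

    NOT Eq. (13) of the paper; only a qualitative illustration of a
    tanh-regulated enhancement confined to |mu-mu_c| < dmu, |T-T_c| < dT.
    """
    # distance from the critical point in units of the critical-region size
    r2 = ((mu - mu_c) / dmu) ** 2 + ((T - T_c) / dT) ** 2
    # smooth switch: ~1 near the critical point, ~0 outside the critical region
    switch = 0.5 * (1.0 - np.tanh(r2 - 1.0))
    return xi0 + (xi_max - xi0) * switch

print(xi_illustrative(0.250, 0.149))   # near xi_max close to the critical point
print(xi_illustrative(0.050, 0.155))   # ~xi0 far from the critical point
```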
Near the critical point, fluctuations at the length scale ξ significantly modify the physical transport coefficients, giving rise to their correlation length dependence. In Model H, the shear stress tensor and bulk viscous pressure play important roles in critical dynamics through fluid advection [21,22]. We shall only focus on critical dynamics arising from fluctuations in the hydrodynamic regime (i.e., carrying small wavenumbers/frequencies, q < ∼ 1/ξ), as large-wavenumber fluctuations equilibrate fast compared to the hydrodynamic evolution rate ω ∼ c s k, with c s being the speed of sound. Feedback from off-equilibrium fluctuations that are non-analytic in ω or k, commonly referred to as long-time tails, is suppressed by phase space [21,22] and will be neglected. In other words, the scaling parametrizations below derive from equilibrium fluctuations for thermodynamic quantities and from analytic non-equilibrium fluctuations for transport coefficients.
In the presence of a bulk viscous pressure, the critical contribution to bulk viscosity diverges as ξ z where z = 3 for the QCD critical point [23]. Besides, the associated relaxation time for the bulk viscous pressure also diverges as ξ 3 [72]. In this case, the relaxation rate of the fluctuation modes contributing to bulk viscous pressure is much smaller than the typical hydrodynamic frequency, and they can no longer be treated hydrodynamically, requiring instead an extended framework such as Hydro+ [34,40]. In this work we focus on effects from baryon diffusion and neglect bulk and shear stress effects on the bulk evolution dynamics of the fireball; however, we use the same critical scaling laws for the transport coefficients in Model H as if they were restored (i.e., with non-vanishing shear viscosity η and bulk viscosity ζ).
The following second-order thermodynamic coefficients (isothermal susceptibility χ and specific heat c_p) as well as the first-order transport coefficients (baryon diffusion coefficient κ_n and thermal conductivity λ_T) scale with the correlation length as [23]
$$ \chi \sim \xi^{2}, \qquad c_p \sim \xi^{2}, \qquad \kappa_n \sim \xi, \qquad \lambda_T \sim \xi, $$
where the exponents are rounded to their nearest integers for simplicity. Therefore, according to Eqs. (9b) and (10), the diffusion coefficients D_B and D_p both scale as ξ^{-1}. In this work we apply the following parametrizations:
$$ \chi = \chi_0\left(\frac{\xi}{\xi_0}\right)^{2}, \qquad \kappa_n = \kappa_{n,0}\,\frac{\xi}{\xi_0}, \qquad (20) $$
where ξ_0 is the non-critical correlation length, κ_{n,0} is the non-critical value of the baryon diffusion coefficient (see Eq. (37) below), and χ_0 is the isothermal susceptibility evaluated in the non-critical region, χ_0 ≡ (∂n_0/∂µ)_T, where n_0 is the non-critical baryon density. (While the notational distinction between χ and χ_0 is needed here for clarity, we generally drop the subscript "0" for thermodynamic quantities away from the critical region elsewhere to avoid clutter.) With Eq. (20) the parametrizations with critical scaling for D_B, D_T and λ_T are readily obtained from Eqs. (9b) and (11). We now turn to the critical behavior of the relaxation time τ_n. It is worth remembering that the Israel-Stewart type equations (cf. Eq. (5)) provide an ultraviolet completion of the naive (Landau-Lifshitz) hydrodynamic theory. The microscopic relaxation times associated with the new dissipative dynamical degrees of freedom (such as, in our case here, the baryon diffusion current n^µ) play the role of ultraviolet regulators which modify the short-distance (high-frequency) behavior of the theory. For the baryon diffusion current n^µ, τ_n characterizes the relaxation time to its Navier-Stokes limit n^µ_NS (which is zero in a homogeneous background). Since n^µ can only equilibrate as long as all fluctuating degrees of freedom contributing to n^µ also equilibrate, τ_n can be considered as the typical equilibration time scale of the slowest fluctuation mode near the critical point. Indeed, in Hydro+/++, the non-hydrodynamic slow-mode evolution equations for critical fluctuations with typical momenta q ∼ ξ^{-1} have a similar structure as the Israel-Stewart relaxation equations for the dissipative flows arising from thermal fluctuations with wavenumbers ξ^{-1} ∼ T (see Sec. III C for a detailed discussion). Here we assume the scale hierarchy adopted in [40], i.e., the fluctuation wavenumber q is much bigger than the gradient wavenumber k, but still much smaller than the inverse thermal length T, i.e., k ≪ q ≪ T. As already mentioned, Israel-Stewart type equations neglect the non-analytic contributions from long-time tails, which we argued above to be negligible (see also Sec. III C).
According to Ref. [40], the slowest mode contributing to n^µ is the diffusive-shear correlator between the entropy-per-baryon fluctuations δm ≡ δ(s/n) and the flow fluctuations δu^µ, i.e., G_mµ ∼ δm δu^µ. The relaxation rate for this mode with wavenumber q is given by Γ_G(q) = (γ_η + D_p) q², where γ_η = η/w. The two contributions to this rate stem from the relaxation of the shear stress and of the baryon diffusion, respectively. Near the critical point Γ_G is dominated by contributions with typical wavenumbers q ∼ 1/ξ. Given D_p ∼ ξ^{-1} and an approximately ξ-independent γ_η, one finds Γ_G(q ∼ 1/ξ) ∼ ξ^{-2}; thus it is natural to expect τ_n ∼ τ_G ≡ Γ_G^{-1} ∼ ξ². (Another mode contributing to the baryon diffusion current is the pressure-shear mode G_pµ ∼ δp δu^µ [40]. Its relaxation rate at wavenumber q is (γ_ζ + (4/3)γ_η + γ_p) q², where γ_ζ = ζ/w, γ_η = η/w, and γ_p = κ_n c_s² T w (∂α/∂p)²_m. In the presence of bulk viscosity (as we assume in order to ensure the correct scaling in Model H), the relaxation rate for G_pµ is dominated by γ_ζ q²|_{q∼ξ^{-1}} ∼ ξ, considering γ_ζ ∼ ζ ∼ ξ³, which is much faster than the rate for the diffusive-shear mode, which scales like Γ_G ∼ ξ^{-2}. Even in the absence of the viscosities (i.e. for η = ζ = 0), the pressure-shear mode still relaxes parametrically faster than the diffusive-shear mode. Therefore, the contribution from the pressure-shear mode, which is not the slowest, can be neglected.) We therefore parametrize τ_n as
$$ \tau_n = \tau_{n,0}\left(\frac{\xi}{\xi_0}\right)^{2}, \qquad (21) $$
where τ_{n,0} is the non-critical relaxation time (given explicitly below in Eq. (38)). As we shall discuss in App. A, the parametrization (21) ensures causality.
As an aside, let us comment on the consequences had we tried to ensure the absence of shear stress by demanding η = 0. In this case Γ_G(q) = D_p q²|_{q∼ξ^{-1}} ∼ ξ^{-3}, and therefore τ_n = Γ_n^{-1} ∼ τ_G ∼ ξ³, which is larger than in the case with shear stress. This arises from the fact that, near the critical point, the shear mode (δu^µ) relaxes to equilibrium parametrically faster than the diffusive mode (δm). As a result, its dissipation changes the scaling exponent of the relaxation time of the diffusive-shear-mode correlator.
Substituting Eq. (13) into Eqs. (20) and (21) we arrive at a complete set of relevant thermodynamic quantities and transport coefficients (i.e., χ, κ n and τ n ) as explicit functions of T and µ that hold in the entire crossover domain of the QCD phase diagram, both far away from and within the critical region.
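The following compact sketch shows how the rescalings reconstructed above in Eqs. (20) and (21) can be applied on top of the non-critical values, given a local correlation length ξ(µ, T). The power counting (χ ∼ ξ², κ_n ∼ ξ, τ_n ∼ ξ²) follows the text; the inputs are illustrative placeholders.

```python
def critical_rescaling(xi, xi0, chi0, kappa_n0, tau_n0):
    """Apply the critical scalings chi ~ xi^2, kappa_n ~ xi, tau_n ~ xi^2.

    xi      : local correlation length (fm)
    xi0     : non-critical correlation length (fm)
    chi0, kappa_n0, tau_n0 : non-critical susceptibility, baryon diffusion
                             coefficient, and relaxation time
    """
    ratio = xi / xi0
    chi = chi0 * ratio ** 2        # isothermal susceptibility, Eq. (20)
    kappa_n = kappa_n0 * ratio     # baryon diffusion coefficient, Eq. (20)
    tau_n = tau_n0 * ratio ** 2    # relaxation time, Eq. (21)
    return chi, kappa_n, tau_n

# far from the critical point (xi = xi0) nothing changes:
print(critical_rescaling(0.5, 0.5, 1.0, 0.02, 0.6))
# near the critical point (xi = 3 fm): chi and tau_n grow by 36x, kappa_n by 6x
print(critical_rescaling(3.0, 0.5, 1.0, 0.02, 0.6))
```

Note that any ratio of the form κ_n/χ then automatically inherits the ξ^{-1} suppression quoted in the text for the diffusion coefficients.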
C. Connection to Hydro+
We close this section by discussing the connection between the Israel-Stewart-like second-order hydrodynamic equations (cf. Eq. (7)) and the evolution equations proposed in the Hydro+ formalism [34].
When comparing the conventional Israel-Stewart formalism without critical effects with the Hydro+ framework, we note that both approaches add nonhydrodynamic degrees of freedom to the conventional hydrodynamic framework, together with their associated relaxation time scales that are not negligible in rapidly evolving systems. The Hydro+ formalism distinguishes itself by the fact that the dynamics of some of these additional non-hydrodynamic modes, the critical slow modes, are controlled by a separate small parameter (different from the conventional Knudsen number for the hydrodynamic gradient expansion) that characterizes the relative "slowness" of the relaxation of the critical slow modes when compared with standard dissipative effects arising from thermal fluctuations.
In our study the critical scaling for the transport coefficients is included in the Israel-Stewart formalism, and consequently it becomes directly comparable to the Hydro+ framework with a single slow mode of wavenumber q ∼ ξ^{-1}. To derive the former from the latter, we introduce a non-hydrodynamic slow mode described by a vector field, in contrast to the scalar field considered as a primary example in the Hydro+ formalism [34]. We denote this field by φ^µ and demand that it is a transverse vector, i.e., u·φ = 0, for the sake of later convenience. This vector field will be treated as the slowest mode contributing to n^µ, corresponding to G_mµ ∼ δm δu^µ discussed above.
Including this field, the first law of thermodynamics is generalized as in Eq. (22) by adding a term conjugate to φ^µ. Here and below the subscript (+) labels the generalized quantities that take into account the additional contributions arising from the field φ^µ (more precisely, from its deviation from its equilibrium value φ̄^µ). The variable π_µ is thermodynamically conjugate to φ^µ, playing the role of a generalized thermodynamic potential with the constraint u·π = 0. β_(+) and α_(+) are the associated inverse temperature and chemical potential in units of the temperature, respectively. We require that in thermal equilibrium, where the slow mode φ^µ reaches its equilibrium value φ̄^µ, the entropy density is maximized and equal to its standard equilibrium value (Eq. (23)). In other words, deviations of the slow mode φ^µ from its equilibrium value φ̄^µ reduce the entropy of a given hydrodynamic cell.
The form of F µ φ and A φ shall be specified below. In general, φ µ can also change in response to additional external forces (indicated by · · · in Eq. (24)), such as collective expansion of the background (the θ-term in Eq. (8) arising from J µ ), long-range electromagnetic fields, etc. Here, we focus on its change in response to the gradient of chemical potential in units of the temperature, α (+) , as relevant for our study of baryon diffusion.
The generalized (partial-equilibrium) entropy current is given by Eq. (25), where ∆s^µ describes a spatial non-equilibrium entropy current in the local rest frame. Using the hydrodynamic equations together with Eqs. (22) and (24) we arrive at an expression for the entropy production, Eq. (26). The second law of thermodynamics requires this expression to be positive semidefinite, resulting in the constraints (27). Here γ_π and κ_n(+) are (positive semidefinite) transport coefficients. We note that in Eq. (27c) A_φ π^µ corresponds to the contribution to n^µ arising from φ^µ. Thus κ_n(+) amounts to the baryon transport coefficient in the absence of φ^µ (i.e., κ_n(+) = κ_n,0), and n^µ approaches the conventional Navier-Stokes limit when φ^µ is ignored. The constraint on Π^µν (not displayed) leads to its conventional Navier-Stokes form. The last term in (26) can in general have either sign; to always satisfy the second law of thermodynamics we must require it to vanish, giving rise to Eq. (27d). The extended entropy can be decomposed as in Eq. (28), where we postulate the longitudinal entropy correction due to the mode φ^µ, Eq. (29), to be quadratic in φ^µ (motivated by, e.g., Ref. [87]), with π_φ the susceptibility. Being quadratic in the space-like vector φ^µ, Eq. (29) always decreases the entropy. Using the Landau-Khalatnikov formula, the susceptibility π_φ can be rewritten as in Eq. (30), where Γ_φ = γ_π π_φ is the relaxation rate of φ^µ, and κ_φ is introduced as a new transport coefficient whose physical meaning shall become clear shortly. Substituting the above expressions into Eq. (24) and ignoring other external force terms, we find the relaxation equation (31) for φ^µ. Now one can choose φ^µ to represent the slowest mode contributing to n^µ, i.e., G_mµ ∼ δm δu^µ, and, near the critical point, think of φ^µ as the "critical sector" of the baryon diffusion current. As discussed in Sec. III B, for φ^µ ∼ G_mµ with a typical wave number q ∼ ξ^{-1}, Γ_φ ∼ ξ^{-2} and κ_φ ∼ ξ; near the critical point, it controls the relaxation of the slowest mode contributing to the baryon diffusion. Considering the Navier-Stokes limit of the baryon diffusion given by Eq. (27c), we can write down the relaxation equation (32) for n^µ. It receives a contribution of typical wave number q ∼ ξ^{-1} from φ^µ, where we have used the fact that Γ_n ≃ Γ_φ ∼ ξ^{-2} near the critical point. Here κ_n ≡ κ_n,0 + κ_φ, with κ_n,0 denoting the non-critical value of the baryon transport coefficient. Near the critical point κ_φ ∼ ξ dominates κ_n and consequently κ_n ≃ κ_φ ∼ ξ. The parametrization in Eq. (20) is designed to reproduce this behavior approximately. Finally, α_(+) is the normalized chemical potential with modifications from φ^µ which generate critical behavior in the Equation of State near the critical point. One sees that the single-mode Hydro+ equation (32) (which includes only a single wave number q ∼ ξ^{-1}) matches the Israel-Stewart type equation (7) for n^µ when the critical scaling (20, 21) in the critical regime is accounted for. Eq. (32) can be solved in frequency (ω) space, Eqs. (33)-(34), in terms of a frequency-dependent baryon diffusion coefficient κ_n(ω), Eq. (35), with κ_n ≡ κ_n(ω=0) being its frequency-independent part. While the imaginary part of Eq. (35) relates to the baryonic analog of the electric permittivity, its real part gives rise to the frequency dependence of the baryon transport coefficient, Eq. (36). One can infer from Eq. (36) that at small ω, κ_n(ω) − κ_n(0) ∼ κ_n(0) ω², while at large ω, κ_n(ω) ∼ κ_n(0) ω^{-2}, different from the results given in Ref. [40].
Eqs. (33)-(34) generalize the Navier-Stokes solution for the baryon diffusion current into the critical region where, due to critical slowing down, the slow critical fluctuation modes are not in thermal equilibrium. That is to say, these equations indicate that at large frequencies ω/Γ_n ≳ 1 baryon diffusion is suppressed; hence a naive extrapolation of hydrodynamics with the frequency-independent coefficient κ_n(0) ∼ ξ (see Eq. (18)) to this regime would overestimate the amount of baryon diffusion. Knowing from Eq. (21) that Γ_n ∼ ξ^{-2}, this further implies that the suppression affects modes with frequencies ξ^{-2} ≲ ω ≪ ξ^{-1}, i.e. inside the Hydro++ regime discussed in Ref. [40]. When analyzed within the Hydro+ framework with only a single slow mode, the switching-off of critical contributions to baryon diffusion occurs at ω ξ² ∼ 1; this is consistent with the general analysis in Ref. [40] where the full wave number spectrum is taken into account. In other words, the switching-off of the critical contribution to baryon diffusion at finite frequency is taken care of by Eq. (32) (and similarly by Eq. (7)).
Summarizing briefly, we emphasize that in the critical regime the Hydro+ equation (32) leads to similar dynamics as the Israel-Stewart equation (7) for the diffusion current n^µ, since both equations correctly account for critical slowing down through Γ_n = 1/τ_n. This correspondence is expected since in the critical regime the off-equilibrium effects are dominated by fluctuations that are effectively frozen. However, Eq. (32) still differs from the Hydro++ equations presented in Ref. [40], since only a single representative mode (with the typical wave number q ∼ ξ^{-1}) is analyzed. The exact asymptotic suppression behavior as well as the non-analytic frequency dependence of κ_n arising in Hydro++ from the phase-space integration over all critical modes are both missed by Eq. (32). Indeed, the critical effects on κ_n are overestimated by Eq. (32) at small ω (i.e., κ_n is less suppressed compared to Ref. [40]). However, we shall see in the following that the resulting overestimation of critical effects is negligible when compared to much stronger suppression effects arising from different origins. For these reasons the single-mode Hydro+ formalism (or, equivalently, the generalized Israel-Stewart formalism amended by critical scaling) serves as a good prototype of the state-of-the-art Hydro+/++ theory: it is sophisticated enough to capture the phenomenologically important feature of critical slowing down while preserving computational economy.
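The asymptotic behaviors quoted above, κ_n(ω) − κ_n(0) ∼ κ_n(0) ω² at small ω and κ_n(ω) ∼ κ_n(0) ω^{-2} at large ω, are reproduced by the simplest single-relaxation-rate (Lorentzian) form Re κ_n(ω) = κ_n(0)/[1 + (ω/Γ_n)²]. The sketch below assumes this form for illustration; it is consistent with, but not copied from, the (not reproduced) Eqs. (33)-(36), and the numbers are placeholders.

```python
import numpy as np

def kappa_n_of_omega(omega, kappa_n0, Gamma_n):
    """Single-slow-mode (Lorentzian) ansatz for Re kappa_n(omega)."""
    return kappa_n0 / (1.0 + (omega / Gamma_n) ** 2)

kappa_n0, Gamma_n = 0.02, 1.0 / 0.6     # Gamma_n = 1/tau_n, illustrative values
for omega in [0.01, 0.1, 1.0, 10.0]:    # frequencies in units of 1/fm
    print(omega, kappa_n_of_omega(omega, kappa_n0, Gamma_n))
# small omega: deviation from kappa_n(0) grows like omega^2;
# large omega: kappa_n(omega) falls off like omega^-2, i.e. diffusion switches off.
```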
IV. SETUP OF THE FRAMEWORK
In this section we set up the framework for simulating the evolution of a fireball close to the QCD critical point. The core of our framework is the hydrodynamic equations discussed in Sec. II. This requires specification of the EoS and the transport coefficients, as well as initial and final conditions. We also discuss the particlization process of converting the fluid dynamic output into particles whose momentum distributions (after integrating over the conversion hypersurface) can be compared with experimental measurements.
Initial conditions. We start with the initial conditions which, from a physics perspective, describe the initial state of the systems while mathematically providing the initial data for solving the initial value problem associated with our coupled set of partial differential equations. At collision energies of order √ s NN ∼ O(10) GeV, the longitudinal interpenetration dynamics of the two colliding nuclei becomes complicated, in principle requiring a time-dependent, (3+1)-dimensional description of the initial energy-momentum deposition and baryon number doping processes that produce the QGP fluid.
Recently, several so-called "dynamical initialization" algorithms have been proposed to address this problem (see Refs. [47,48,51,90] and the reviews [45,46]). In this exploratory study, however, we try to establish a basic understanding of baryon diffusion dynamics for Au-Au collisions at √s_NN = 19.6 GeV in which we focus entirely on the longitudinal dynamics, modeling a (1+1)-dimensional system without transverse gradients initiated instantaneously at a constant proper time τ_i (see also Refs. [72,91]). More specifically, we evolve the system hydrodynamically using the longitudinal initial profiles e(τ_i, η_s), n(τ_i, η_s) provided in Ref. [52], starting at τ_i = 1.5 fm/c. (Ref. [52] provides longitudinal initial distributions for the entropy and baryon densities; we here adopt the functional form of their initial entropy profile as our energy profile, after appropriate normalization.) The initial hydrodynamic profiles are shown as gray curves in Fig. 4 below. The initial energy density has a plateau covering the space-time rapidity η_s ∈ [−3.0, 3.0] whereas the initial net baryon density features a double-peak structure and covers a narrower region η_s ∈ [−2.0, 2.0], reflecting baryon stopping. (We note that baryon stopping affects the initial momentum rapidity y of the baryon number carrying degrees of freedom and is typically modelled by a rapidity shift ∆y ∼ 1 − 1.5, depending on system size and collision energy. To translate this rapidity shift into a shift in space-time rapidity η_s (as done in Fig. 4) requires a dynamical initialization model. Different such models yield different initial density and flow profiles [45-48, 51, 90].) For the initial longitudinal momentum flow we take the "static" flow profile u^µ = (1, 0, 0, 0) in Milne coordinates (corresponding to Bjorken expansion [92] in Cartesian coordinates), and the initial baryon diffusion current is assumed to vanish, n^µ = (0, 0, 0, 0).
[FIG. 3: Initial longitudinal distributions of κ_{n,0}, corresponding to the initial profiles in Fig. 4.]
EoS and transport coefficients. Given these initial conditions, the hydrodynamic equations (1) and (8) are solved by BEShydro [54]. For the EoS at nonzero net baryon density we use neos [53], which was constructed by smoothly joining Lattice QCD data [93][94][95][96] with the hadron resonance gas model. As already mentioned, we here focus on baryon diffusion dynamics by ignoring shear and bulk viscous stresses. For the transport coefficients related to baryon diffusion we rely on the theoretical work in Refs. [52,77,78,97] since phenomenological constraints are still lacking. Specifically, we here use the coefficients obtained from the Boltzmann equation for an almost massless classical gas in the relaxation time approximation (RTA) [52], which gives for the baryon diffusion coefficient
$$ \kappa_{n,0} = \frac{C_n}{T}\, n \left[\frac{1}{3}\coth\!\left(\frac{\mu}{T}\right) - \frac{n T}{e + p}\right] \qquad (37) $$
and for the relaxation time
$$ \tau_{n,0} = \frac{C_n}{T}, \qquad (38) $$
where C_n is a free unitless parameter. Throughout this paper, we set C_n = 0.4; in Ref. [52] this value was shown to yield good agreement with selected experimental data. Following the kinetic theory approach [52] we also set δ_nn,0 = τ_n,0 in Eq. (8) as its non-critical value. In the limit of zero net baryon density, κ_n,0 remains non-zero at non-zero temperature, lim_{µ→0} κ_n,0/τ_n,0 = nT/(3µ) [52], a feature also seen in holographic models; for example, using the AdS/CFT correspondence, the (baryon) charge conductivity of R-charged black holes translates into a closed-form expression for κ_n,0 [77,91]. Since κ_n,0 is such an important parameter in our study, we offer some intuition about its key characteristics in Fig. 3, where its initial space-time rapidity profile is plotted for three of the theoretical approaches referenced above. The figure shows that in the region |η_s| ≲ 2, where the net baryon density is non-zero (cf. Fig. 4d below), differences exist in κ_n,0(η_s) between the weakly and strongly coupled approaches: In the holographic approaches κ_n,0 is suppressed by baryon density while in the kinetic approach it is enhanced. On the other hand we see that at large rapidity |η_s| ≳ 2.5, where the baryon density approaches zero (cf. Fig. 4d), the different models for κ_n,0 yield similar distributions, all of them rapidly decreasing towards zero as the temperature decreases (cf. Fig. 4b).
This also implies that generically, as the fireball expands and cools down, all three approaches yield rapidly decreasing amounts of baryon diffusion, as will be discussed below in Sec. V D.
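To give a feeling for the magnitude of these non-critical coefficients, the short Python sketch below evaluates the RTA expressions quoted above, κ n,0 = τ n,0 n [coth(µ/T)/3 − nT/(e+p)] with τ n,0 = C n /T and C n = 0.4. The thermodynamic inputs (e, p, n) at the chosen (T, µ) point are placeholder numbers and would in practice be interpolated from a tabulated EoS such as neos; this is an illustration, not code taken from BEShydro.

    import numpy as np

    C_n = 0.4  # free dimensionless parameter, as used throughout this work

    def tau_n0(T):
        # non-critical relaxation time tau_n,0 = C_n / T (natural units, 1/GeV)
        return C_n / T

    def kappa_n0(T, mu, n, e, p):
        # RTA baryon diffusion coefficient
        # kappa_n,0 = tau_n,0 * n * [ coth(mu/T)/3 - n*T/(e+p) ]
        return tau_n0(T) * n * (1.0 / (3.0 * np.tanh(mu / T)) - n * T / (e + p))

    # purely illustrative thermodynamic inputs (GeV-based natural units):
    T, mu = 0.25, 0.40            # GeV
    n, e, p = 0.02, 1.0, 0.33     # GeV^3, GeV^4, GeV^4
    print("tau_n0 =", tau_n0(T), "1/GeV,  kappa_n0 =", kappa_n0(T, mu, n, e, p), "GeV^2")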
Particlization. After completion of the hydrodynamic evolution (results of which will be discussed in Sec. V) we compute the particle distributions corresponding to the hydrodynamic fields on the freeze-out surface Σ, using the Cooper-Frye formula [99]

E dN i /d 3 p = g i /(2π) 3 ∫ Σ d 3 Σ µ (x) p µ [ f eq,i (x, p) + δf diff,i (x, p) ],   (40)

where E, p are the energy and momentum of a particle of species i in the observer frame, g i is the spin-isospin degeneracy factor, d 3 Σ µ (x) is the outward-pointing surface normal vector at point x on the three-surface Σ, f eq,i is the equilibrium distribution for particles of species i, and δf diff,i is the off-equilibrium correction resulting from net baryon diffusion (other viscous corrections are neglected in this paper). At first order in the Chapman-Enskog expansion of the RTA Boltzmann equation, the dissipative correction from net baryon diffusion δf diff,i is given by [52,100,101], where θ i = 1 (−1) for fermions (bosons), b i is the baryon number of particle species i, p̄ µ ≡ ∆ µν p ν , and κ̂ = κ n /τ n . We will discuss how the critical correction is included, as well as its effects on the final particle distributions, in Sec. V C 2. We evaluate the continuous momentum distribution (40) numerically using the iS3D particlization module [100], ignoring rescattering among the particles and resonance decays after particlization.
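To make the particlization step concrete, the sketch below evaluates the contribution of a single freeze-out cell to the invariant spectrum E dN i /d 3 p of protons. It uses the equilibrium distribution plus a diffusive correction that is here assumed to take the Chapman-Enskog-type form f eq (1 − θ f eq )[n/(e+p) − b/(u·p)](p̄·n)/κ̂, built from the quantities defined above; the precise normalization of Eq. (41) should be taken from Refs. [52,100,101]. All numerical values and the single-cell surface element are illustrative placeholders, not inputs of iS3D or of this work.

    import numpy as np

    g = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric (+,-,-,-)

    def f_eq(u_dot_p, T, mu, b, theta):
        # local equilibrium distribution; theta = +1 fermions, -1 bosons
        return 1.0 / (np.exp((u_dot_p - b * mu) / T) + theta)

    def delta_f_diff(u_dot_p, pbar_dot_n, T, mu, b, theta, n, e, p, kappa_hat):
        # assumed Chapman-Enskog-type diffusive correction (see lead-in text)
        f0 = f_eq(u_dot_p, T, mu, b, theta)
        return f0 * (1.0 - theta * f0) * (n / (e + p) - b / u_dot_p) * pbar_dot_n / kappa_hat

    # one freeze-out cell and one proton momentum, all numbers purely illustrative
    g_i, b_i, theta_i, m = 2.0, 1.0, 1.0, 0.938        # proton degeneracy, B, statistics, mass (GeV)
    T, mu = 0.150, 0.250                               # GeV
    n, e, p = 1.0e-3, 5.0e-3, 1.2e-3                   # GeV^3, GeV^4, GeV^4 (placeholders)
    kappa_hat = 2.0e-4                                 # placeholder for kappa_n/tau_n (GeV^3)
    u = np.array([1.0, 0.0, 0.0, 0.0])                 # fluid four-velocity u^mu
    n_mu = np.array([0.0, 0.0, 0.0, 1.0e-5])           # diffusion current n^mu (orthogonal to u)
    dSigma = np.array([1.0, 0.0, 0.0, 0.0])            # covariant surface element d^3Sigma_mu (placeholder)

    p3 = np.array([0.3, 0.0, 0.5])                     # particle three-momentum (GeV)
    E = np.sqrt(m**2 + p3 @ p3)
    p4 = np.array([E, *p3])                            # contravariant p^mu

    u_dot_p = u @ g @ p4
    pbar_dot_n = p4 @ g @ n_mu - u_dot_p * (u @ g @ n_mu)   # pbar^mu n_mu = p.n - (u.p)(u.n)

    f_tot = f_eq(u_dot_p, T, mu, b_i, theta_i) \
          + delta_f_diff(u_dot_p, pbar_dot_n, T, mu, b_i, theta_i, n, e, p, kappa_hat)
    EdN_d3p = g_i / (2.0 * np.pi)**3 * (dSigma @ p4) * f_tot  # this cell's Cooper-Frye contribution
    print(EdN_d3p)

In the full calculation this contribution is summed over all cells of the freeze-out surface and over the momentum grid, which is what iS3D does.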
V. RESULTS AND DISCUSSION
In this section we discuss the dynamics of the fireball at a fixed collision energy of √ s NN = 19.6 GeV. First we study in Sec. V A baryon diffusion effects on its T -µ trajectories through the phase diagram for cells located at different space-time rapidities η s , and in Sec. V B on the freeze-out surface and final particle distributions, in the absence of critical dynamics. Then, in Sec. V C, we discuss how the critical behavior described in Sec. III modifies this dynamics for cells whose trajectories pass close to the critical point. In Sec. V D we point out some generic features of the time evolution of baryon diffusion.
A. Longitudinal dynamics of baryon evolution
In Figure 4, we show snapshots of the longitudinal distributions of the hydrodynamic quantities at two times, the initial time 1.5 fm/c (gray solid lines) and later at τ = 5.5 fm/c, with and without baryon diffusion (red solid and blue dashed lines, respectively). The gray curves in panels (a) and (d) show the initial energy and baryon density distributions from Ref. [52]. The gray lines in panels (b) and (e) show the corresponding temperature and chemical potential profiles, extracted with the neos equation of state [53,[93][94][95][96]. The gray horizontal lines in panels (c) and (f) show the zero initial conditions for the longitudinal flow and baryon diffusion current. The temperature profile (panel (b)) shares the plateau with the energy density (panel (a)), up to small structures caused by the double-peak structure of the baryon density (panel (d)) and baryon chemical potential profiles (panel (e)). The chemical potential in panel (e) inherits the double-peak structure from the baryon density in panel (d). These structures are also reflected in the pressure (not shown). 11

We next discuss the blue dashed lines in Fig. 4 showing the results of ideal hydrodynamic evolution. Work done by the longitudinal pressure converts thermal energy into collective flow kinetic energy such that the thermal energy density e decreases faster than 1/τ (panel (a)). Small pressure variations along the plateau of the distribution (caused by the rapidity dependence of µ/T) lead to slight distortions of the rapidity plateau of the energy density as its magnitude decreases. Longitudinal pressure gradients at the forward and backward edges of the initial rapidity plateau accelerate the fluid longitudinally, generating a non-zero η s -component of the hydrodynamic flow at large rapidities (panel (c)). As seen in panels (a) and (c), the resulting longitudinal rarefaction wave travels inward slowly, leaving the initial Bjorken flow profile u η = 0 untouched for |η s | < 2.5 up to τ = 5.5 fm/c. For Bjorken flow without transverse dynamics baryon number conservation implies that nτ remains constant. Panel (d) shows this to be the case up to τ = 5.5 fm/c because, up to that time, the initial Bjorken flow has not yet been affected by longitudinal acceleration over the entire η s -interval in which the net baryon density n is non-zero. Panel (e) shows, however, that in spite of nτ remaining constant within that η s range, the scaled baryon chemical potential µ/T decreases with time, as required by the neos equation of state.

11 In the very dilute forward and backward rapidity regions one observes a steep rise of the initial µ/T. This feature is sensitive to the rates at which e and n approach zero as |η s | → ∞, and it is easily affected by numerical inaccuracies. Since both T and µ are close to zero there, the baryon diffusion coefficient κ n,0 also vanishes, and (as seen in Fig. 4f) the apparently large but numerically unstable gradient of µ/T at large η s does not generate a measurable baryon diffusion current.
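The statements that nτ stays constant and that ideal evolution conserves the entropy per baryon can be verified with a few lines of code. The sketch below integrates the ideal (0+1)-dimensional Bjorken equations de/dτ = −(e+p)/τ and dn/dτ = −n/τ, using the simple conformal EoS e = 3p = 3nT, n = (g s /π 2 )T 3 exp(µ/T) of Appendix C as a stand-in for neos; the initial values are illustrative placeholders, not the profiles of Ref. [52].

    import numpy as np
    from scipy.integrate import solve_ivp

    g_s = 16.0  # degeneracy factor of the toy EoS

    def rhs(tau, y):
        # ideal (0+1)D Bjorken equations with conformal EoS p = e/3
        e, n = y
        p = e / 3.0
        return [-(e + p) / tau, -n / tau]

    def T_and_mu(e, n):
        # closed-form inversion of the toy EoS e = 3nT, n = (g_s/pi^2) T^3 exp(mu/T)
        T = e / (3.0 * n)
        mu = T * np.log(n * np.pi**2 / (g_s * T**3))
        return T, mu

    tau0, e0, n0 = 1.5, 0.11, 0.10     # illustrative initial values (GeV^4, GeV^3)
    sol = solve_ivp(rhs, (tau0, 10.0), [e0, n0], dense_output=True, rtol=1e-10)

    for tau in (1.5, 3.0, 5.5, 10.0):
        e, n = sol.sol(tau)
        T, mu = T_and_mu(e, n)
        s = (e + e / 3.0 - mu * n) / T        # entropy density s = (e + p - mu n)/T
        print(f"tau={tau:4.1f}   n*tau={n*tau:.6f}   s/n={s/n:.6f}")

Both n·τ and s/n come out constant to the integration accuracy. With neos the same conservation statements hold, but µ/T is no longer constant along the evolution, which is the behavior seen in panel (e).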
The nontrivial evolution effects of turning on the baryon diffusion current via Eq. (8) are shown by the red solid lines in Fig. 4. The baryon diffusion current itself is plotted in panel (f) and will be discussed shortly. Panels (a)-(c) show that baryon diffusion has almost no effect at all on the energy density (and, by implication, on the pressure), the temperature, and the hydrodynamic flow generated by the pressure gradients. Given the weak dependence of pressure and temperature on baryon density through the EoS this is to be expected. Baryon diffusion does, however, significantly modify the rapidity profiles of the net baryon density (panel (d)) and of the chemical potential µ/T (panel (e)). Generated by the negative gradient of µ/T, the baryon diffusion current moves baryon number from high- to low-density regions, causing an overall broadening of the baryon density rapidity profile in (d) while simultaneously filling in the dip at midrapidity [48,51,52,54,56,91]. Panel (e) shows how the chemical potential µ/T tracks these changes in the baryon density profile (panel (d)), and panel (f) shows the baryon diffusion current responsible for this transport of baryon density, with its alternating sign and magnitude tracing the sign and magnitude changes of −∇(µ/T). As we shall see in Sec. V D, the smoothing of the gradients of baryon density and chemical potential contributes to a fast decay of baryon diffusion effects.

Fig. 4 indicates non-trivial thermal, chemical, and mechanical evolution at different rapidities. Fluid cells at different η s pass through different regions of the QCD phase diagram and may therefore be affected differently by the QCD critical point [72,102,103]. This has led to the suggestion [104] of using rapidity-binned cumulants of the final net proton multiplicity distributions as possibly sensitive observables of the critical point. 12 To illustrate the point we show in Fig. 5 the phase diagram trajectories of fluid cells at several selected |η s | values, 13 both with and without baryon diffusion. As we move from mid-rapidity to |η s | = 2.0, the starting point of these trajectories first moves from µ ≈ 0.28 GeV at η s = 0 to the larger value µ ≈ 0.45 GeV at η s = 1.5, but then turns back to µ ≈ 0.2 GeV at η s = 1.75, and finally to µ ≈ 0 at η s = 2.0, without much variation of the initial temperature T i ≈ 0.25 GeV (see Figs. 4b,e). The difference between the dashed (ideal) and solid (diffusive) trajectories exhibits a remarkable dependence on η s : Both the sign and the magnitude of the diffusion-induced shift in baryon chemical potential depend strongly on space-time rapidity. In most cases the solid (diffusive) trajectories initially move rapidly away from the corresponding ideal ones, but then quickly settle onto a roughly parallel ideal trajectory. A glaring exception is the trajectory of the cell at η s = 1.5, which starts at the maximal initial baryon chemical potential and keeps moving away from its initial ideal T-µ trajectory for a long period, settling on a new ideal trajectory only shortly before it reaches the hadronization phase transition. The reason for this behavior can be found in Fig. 4e, which shows that at η s = 1.5 the gradient of µ/T remains large throughout the fireball evolution. But almost everywhere else baryon diffusion effects die out quickly.

12 We caution that at BES energies the mapping between space-time rapidity η s of the fluid cells and rapidity y of the emitted hadrons is highly nontrivial and requires dynamical modelling.
13 Cells at opposite but equal space-time rapidities are equivalent because of the η s → −η s reflection symmetry in this work.

Since ideal fluid dynamics conserves both baryon number and entropy, the dashed trajectories are lines of constant entropy per baryon. This is shown by the dashed lines in Fig. 6. Baryon diffusion leads to a net baryon current in the local momentum rest frame and thereby changes the baryon number per unit entropy. This is illustrated by the solid lines in Fig. 6. Depending on the direction of the µ/T gradients, baryon diffusion can increase or decrease the entropy per baryon.
We close this discussion by commenting on the turning of the dashed m ≡ s/n = const. trajectories in Fig. 5 from initially pointing towards the lower left to later pointing towards the lower right. This is a well known feature of isentropic expansion trajectories in the QCD phase diagram [53,105,106] that reflects the change in the underlying degrees of freedom, from quarks and gluons to a hadron resonance gas, at the point of hadronization as embedded in the construction of the EoS. Figure 5 is reminiscent of the QCD phase diagram often shown to motivate the study of heavy-ion collisions at different collision energies in order to explore QCD matter at different baryon doping (see, for example, the 2015 DOE-NSF NSAC Long Range Plan for Nuclear Physics [1]). What had been shown there are (isentropic) expansion trajectories for matter created at midrapidity in heavy-ion collisions with different beam energies, whereas Fig. 5 shows similar expansion trajectories for different parts of the fireball in a collision with a fixed beam energy. Fig. 5 thus makes the point that in general the matter created in heavy-ion collisions can never be characterized by a single fixed value of µ/T. At high collision energies space-time and momentum rapidities are tightly correlated, η s ≈ y, and different η s regions with different baryon doping µ/T can thus be more or less separated in experiment by binning the data in momentum rapidity y. This motivates the strategy of scanning the changing baryonic composition in the T-µ diagram by performing a rapidity scan at fixed collision energy rather than a beam energy scan at fixed rapidity [72,102,104]. This strategy fails, however, at lower collision energies where particles of fixed momentum rapidity can be emitted from essentially every part of the fireball and thus receive contributions from regions with wildly different chemical compositions, with non-monotonic rapidity dependences that are non-trivially and non-monotonically affected by baryon diffusion.
B. Freeze-out surface and final particle distributions

The expansion trajectories shown in the previous subsection all end at the same constant proper time (see Fig. 6). In phenomenological applications it is usually assumed that the hydrodynamic stage ends and the fluid falls apart into particles when all fluid cells reach a certain "freeze-out energy density", here taken as e f = 0.3 GeV/fm 3 . 14 With such a freeze-out criterion, fluid cells at different η s freeze out at different times τ f (η s ). In this subsection we discuss this freeze-out surface and the distributions of particles emitted from it.

Fig. 7 shows the freeze-out surface τ f (η s ) in panel (a) as well as the longitudinal flow, baryon chemical potential, and longitudinal component of the baryon diffusion current in panels (b)-(d). 15 Ideal and diffusive hydrodynamics are distinguished by blue dashed and red solid lines. Panel (a) shows that initially the longitudinal pressure gradient causes the fluid to grow in the η s direction before it starts to shrink after τ ≳ 4 fm/c due to cooling and surface evaporation. As seen in Fig. 4a, the core of the fireball remains approximately boost invariant while cooling by performing longitudinal work, until the longitudinal rarefaction wave reaches it. Once the energy density in this boost-invariant core drops below e f , it freezes out simultaneously, as seen in the flat top of the freeze-out surface shown in panel (a). Slight deviations from boost invariance are caused by the effects of the boost-non-invariant net baryon density profile and its (small) effect on the pressure whose gradient drives the hydrodynamic expansion. Baryon diffusion has practically no effect on the freeze-out surface, nor on the longitudinal flow along this surface shown in panel (b), owing to the weak dependence of the EoS on baryon doping. The distributions of the baryon chemical potential and baryon diffusion current across this surface, on the other hand, are significantly affected by baryon diffusion, as seen in panels (c) and (d). It bears pointing out, however, that

Given these quantities on the freeze-out surface, we use the iS3D module [100] to evaluate the Cooper-Frye integral (40,41) for the rapidity distributions of hadrons emitted from the freeze-out surface. Results are shown in Fig. 8. Panel (b) indicates that baryon diffusion has negligible effects on meson distributions. It affects only baryon distributions. Panel (a) shows that baryon diffusion significantly increases the proton and net-proton yields at mid-rapidity and also broadens their rapidity distributions at large rapidity. Both of these effects on the baryon distributions were also observed in earlier work that, different from our simulations, additionally included resonance decays and full hadronic rescattering [48,51,52,91]; furthermore, they were found to increase with the magnitude of the baryon diffusion coefficient κ n . The approximate boost-invariance of the longitudinal flow over a wide range of η s on the freeze-out surface (see Fig. 7b) maps the baryon diffusion effects seen in Figs. 4d,e and 7c as functions of space-time rapidity η s onto momentum rapidity y p in Fig. 8a [48,51,52,91]. We take advantage of the iS3D option to include both the Chapman-Enskog and 14-moment approximations for the dissipative corrections (41), comparing the two in Fig. 8. The difference is seen to be negligibly small, and even ignoring δf diff,i in Eq. (40) entirely does not make much of a difference (not shown).
This reflects the tiny magnitude of the baryon diffusion current on the freeze-out surface seen in Fig. 7d. 16 We emphasize that the mapping of baryon diffusion effects seen as a function of space-time rapidity η s in Figs. 4d,e and 7c onto momentum rapidity y p is expected to be model dependent, and may not work for initial conditions in which the initial velocity profile is not boost-invariant or the initial η s -distribution of the net baryon density looks different. This initial-state modeling uncertainty has so far prohibited a meaningful extraction of the baryon diffusion coefficient from experimental data (see, however, Ref. [52] for a valiant effort). Additional uncertainties from possible critical effects associated with the QCD critical point on the bulk dynamics, especially through baryon diffusion, may further complicate the picture, in particular as long as the location of the critical point is still unknown. In the following subsection we address some of these effects arising from critical dynamics.
C. Critical effects on baryon diffusion
In this section, we explore whether the QCD critical point can have significant effects on the bulk dynamics, through the baryon diffusion current. For this purpose, we include critical effects as described in Sec. III, and explore effects from critical slowing down on the hydrodynamic transport (Sec. V C 1), as well as critical corrections to final particle distributions through the Cooper-Frye formula (Sec. V C 2).
Critical slowing down of baryon transport
As discussed in Sec. III, in the critical region baryon transport is affected by critical slowing down [23]. Outside the critical region all thermodynamic and transport properties approach their non-critical baseline described in Sec. V A, but as the system approaches the critical point its dynamics is affected by critical modifications of the transport coefficients involving various powers of ξ/ξ 0 > 1. We study this by incorporating the critical scaling of χ, κ n and τ n in Eqs. (20) and (21), with the correlation length ξ(µ, T ) parametrized by Eq. (13).
Before doing any simulations we briefly discuss qualitative expectations. Eq. (19) indicates that, as the correlation length grows, ξ/ξ 0 > 1, the coefficient D B is suppressed while D T is enhanced. According to Eqs. (9) a suppression of D B reduces the contribution from baryon density inhomogeneities while an enhancement of D T increases the contribution from temperature inhomogeneities to the Navier-Stokes limit n ν NS = κ n ∇ ν (µ/T ). 17 In addition to thus moving its Navier-Stokes target value, proximity of the critical point also increases the time τ n (see Eq. (21)) over which the baryon diffusion current relaxes to its Navier-Stokes limit -its response to the driving force is critically slowed down.
Repeating the simulations with the same setup as in Sec. V A, except for the inclusion of critical scaling, yields the results shown in Fig. 9. For the parametrization of the correlation length ξ(µ, T) we assumed a critical point located at (T c = 149 MeV, µ c = 250 MeV). This is very close to the right-most trajectory shown in Fig. 9, which should therefore be most strongly affected by it. 18 Surprisingly, none of the trajectories, not even the one passing the critical point in close proximity, are visibly affected by critical scaling of transport coefficients.

17 We note that in the literature sometimes only the baryon density gradient term D B ∇n is included in the diffusion current (see, e.g., Refs. [23,79]), which then leads to its generic suppression close to the critical point.
To better understand this we plot in Fig. 10 the history of the correlation length and baryon diffusion current at different η s . In panel (a) we see that ξ does show the expected critical enhancement, by up to a factor ∼ 4.5 at η s = 1. This maximal enhancement corresponds to τ n ≈ 20 τ n,0 and D B ≈ 0.22 D B,0 , naively suggesting significant effects on the dynamical evolution. However, the critical enhancement of the correlation length does not begin in earnest before the fireball has cooled down to a low temperature T ≲ T c + ∆T. Fig. 10b shows that at this late time the baryon diffusion current has already decayed to a tiny value. 19 In other words, the largest baryon diffusion currents are created at early times when the temporal gradients are highest but the system is far from the critical point; by the time the system gets close to the critical point, thermal and chemical gradients have decayed to such an extent that even a critical enhancement of the correlation length by a factor 5 can no longer revive the baryon diffusion current to a noticeable level.

18 Since we do not have the tools here to handle passage through a first-order phase transition, we do not consider any expansion trajectories cutting the first-order transition line to the right of the QCD critical point.
19 This statement remains true if one multiplies n η with the metric factor √ −g = τ to obtain the baryon diffusion current in physical units of fm −3 .

This two-stage feature, with a first stage characterized by large baryon diffusion effects without critical modifications and a second stage characterized by large critical fluctuations [21,22] with negligible baryon diffusion effects on the bulk evolution, is an important observation. For a deeper understanding we devote Sec. V D to a more systematic investigation of the time evolution of the diffusion current, but not before a brief exploration in the following subsection of critical effects on the final single-particle distributions.
Critical corrections to final single-particle distributions
How to consistently include critical fluctuation effects on the finally emitted single-particle distributions, at the ensemble-averaged level, is still a subject of active research. A solid framework may require a microscopic picture involving interactions between the underlying degrees of freedom and the fluctuating critical modes during hadronization [109]. In this work we employ a simple ansatz where critical corrections to the final particle distributions are included only via the diffusive correction δf diff from Eq. (41) appearing in the Cooper-Frye formula (40). In this subsection this dissipative correction is computed from the simulations described in the preceding subsection which include critical correlation effects through critically modified transport coefficients, specifically a normalized baryon diffusion coefficient κ̂ ≡ κ n /τ n with its critical scaling obtained from Eqs. (20) and (21) using κ̂ 0 = κ n,0 /τ n,0 . 20 Since we saw in the preceding subsection that the hydrodynamic quantities on the particlization surface are hardly affected by the inclusion of critical scaling effects during the preceding dynamical evolution, the main critical scaling effects on the emitted particle spectra arise from any critical modification that κ̂ might experience on the particlization surface. The space-time rapidity distribution of the correlation length ξ along the freeze-out surface, as well as the net proton rapidity distributions with and without critical scaling effects, are shown in Fig. 11. Panel (a) shows that ξ peaks near |η s | ≈ 1.0 on the freeze-out surface, consistent with Fig. 10a. Note that, although fluid cells at different η s generally freeze out at different times, the freeze-out surface in Fig. 7a shows that within η s ∈ [−1.5, 1.5] all fluid cells freeze out at basically the same time τ f ∼ 17 fm/c. Therefore Fig. 11a indeed corresponds to the ξ values at different η s at the end of the evolution in Fig. 10a.
Even though Fig. 11a shows a critical enhancement of ξ/ξ 0 ≈ 2.7 near η s ≈ 1.0, corresponding to κ̂/κ̂ 0 ≈ 0.37, we see in Fig. 11b that the net proton distribution is modified by at most a few percent. The lower panel in Fig. 11b indicates that the largest critical corrections indeed correspond to regions of large ξ/ξ 0 , sign-modulated by the direction of the baryon diffusion current (cf. Fig. 7d). We also notice a thermal smearing when mapping the distribution of ξ in η s to the modification of the net proton distribution in y p . The critical modification of the net proton spectra arising from the diffusive correction to the distribution function is very small also because δf diff in (41) is roughly proportional to the magnitude of n µ , which is tiny. Such small modifications are certainly unresolvable with current or expected future measurements. Indeed, the magnitudes of these modifications depend on the correlation length on the freeze-out surface which in turn depends on the choice of the freeze-out energy density e f . Independence of the final results from this choice could be achieved by properly sampling the critical correlations on this surface and then propagating them to the completion of kinetic freeze-out with a hadronic transport code that appropriately accounts for critical dynamics; unfortunately, these options are presently not yet available.
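The quoted enhancement and suppression factors are mutually consistent if one assumes the leading critical scalings discussed in Sec. III, τ n ∝ (ξ/ξ 0 ) 2 and D B ∝ (ξ/ξ 0 ) −1 (and hence κ̂ = κ n /τ n ∝ (ξ/ξ 0 ) −1 ). The following lines simply check this arithmetic against the numbers quoted above and in the preceding subsection; they are not part of the simulation code.

    # consistency check of the quoted critical factors, assuming
    # tau_n ~ (xi/xi0)^2, D_B ~ (xi/xi0)^-1, kappa_hat ~ (xi/xi0)^-1
    xi_peak = 4.5                     # peak xi/xi0 at eta_s = 1 (Fig. 10a)
    print(xi_peak**2, xi_peak**-1)    # ~20.3 and ~0.22 (cf. tau_n ~ 20 tau_n,0, D_B ~ 0.22 D_B,0)

    xi_fo = 2.7                       # xi/xi0 on the freeze-out surface (Fig. 11a)
    print(xi_fo**-1)                  # ~0.37           (cf. kappa_hat ~ 0.37 kappa_hat_0)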
In conclusion, critical scaling effects on both the hydrodynamic evolution of the bulk medium and the finally emitted single-particle momentum distributions are small, mostly because by the time the system passes the critical point and freezes out the baryon diffusion current has decayed to negligible levels.
D. Time evolution of baryon diffusion
In this subsection we further analyze the baryon diffusion dynamics and the origins of its rapid decay. We define Knudsen and inverse Reynolds numbers for baryon diffusion and display their space-time dynamics. The resulting insights are relevant for model building and for the future quantitative calibration of the bulk fireball dynamics at non-zero chemical potential.
FIG. 12. Same as Fig. 10b, but zoomed in onto the early evolution stage and including for comparison as dashed lines the corresponding Navier-Stokes limit n η NS of the longitudinal diffusion current.
Fast decay of baryon diffusion
As discussed in Sec. II, the diffusion current relaxes to its Navier-Stokes limit n ν NS = κ n ∇ ν (µ/T) on a time scale given by τ n . General features of baryon diffusion evolution can thus be understood by following the time evolution of n µ NS , κ n and τ n . Here we focus on their evolution without inclusion of critical scaling since we established that the latter has negligible effect on the bulk evolution and therefore the non-critical values of n µ NS , κ n and τ n evolve almost identically with and without inclusion of critical effects. Fig. 12 shows a comparison of the longitudinal baryon diffusion current (solid lines) with its Navier-Stokes limit (dashed lines) at different space-time rapidities. One sees that the relaxation equation for n η tries to align the diffusion current with its Navier-Stokes value (which is controlled by the longitudinal gradient ∇ η (µ/T)) but the finite relaxation time delays the response, causing n η to perform damped oscillations around n η NS . This is most clearly illustrated in Fig. 12 by following the cell located at η s = 1.5 (uppermost): Initialized at zero, n η initially rises steeply, trying to adjust to its positive and rapidly increasing Navier-Stokes value, but at τ ≈ 1.7 fm/c the longitudinal gradient of µ/T switches sign and n η NS starts to decrease again. The hydrodynamically evolving n η follows suit, turning downward with a delay of about 0.3 fm/c (which, according to Fig. 13b below, is the approximate value of the relaxation time τ n,0 at τ = 2 fm/c), but soon finds itself overshooting its Navier-Stokes value. For the cell located at η s = 1.25, n η crosses its Navier-Stokes value even twice.
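The delayed, overshooting response described here is a generic property of Israel-Stewart-type relaxation dynamics and can be illustrated with a stripped-down toy model, dn η /dτ = −(n η − n η NS (τ))/τ n , in which the Navier-Stokes target is replaced by an assumed sign-changing, decaying function that merely mimics the qualitative behavior of κ n ∇ η (µ/T) in Fig. 12; the actual relaxation equation (8) contains additional terms that are dropped here.

    import numpy as np
    from scipy.integrate import solve_ivp

    def n_NS(tau):
        # assumed stand-in for the Navier-Stokes target kappa_n * grad^eta(mu/T):
        # sign-changing and rapidly decaying, qualitatively like Fig. 12
        return 0.1 * np.cos(2.0 * (tau - 1.5)) * np.exp(-(tau - 1.5))

    def rhs(tau, n, tau_n=0.3):
        # simplified Israel-Stewart relaxation toward the Navier-Stokes limit
        return -(n - n_NS(tau)) / tau_n

    sol = solve_ivp(rhs, (1.5, 6.0), [0.0], dense_output=True, rtol=1e-8)
    for tau in np.arange(1.5, 6.01, 0.5):
        print(f"tau={tau:3.1f}   n_eta={sol.sol(tau)[0]:+.4f}   n_NS={n_NS(tau):+.4f}")

The printed n η lags the target by roughly τ n and overshoots it after each sign change, the same pattern seen for the cells at η s = 1.25 and 1.5 in Fig. 12.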
As long as the relaxation time τ n is short and not dramatically increased by critical slowing down, the rapid decrease of the dynamically evolving diffusion current is seen to be a generic consequence of a corresponding rapid decrease of its Navier-Stokes value: Fig. 12 shows that after τ ∼ 3.5 fm/c, n η basically agrees with its Navier-Stokes limit n η NS . Fig. 13b shows that in the absence of critical effects the relaxation time τ n,0 = C n /T increases by less than a factor of 2 over the entire fireball lifetime. The rapid decrease of n η NS is a consequence of two factors: (i) the gradients of µ/T decrease with time, owing to both the overall expansion of the system and the diffusive transport of baryon charge from dense to dilute regions of net baryon density, and (ii) the baryon diffusion coefficient κ n,0 decreases dramatically (by almost an order of magnitude over the lifetime of the fireball as seen in Fig. 13a), as a result of the fireball's decreasing temperature.
In summary, three factors contribute to the negligible influence of the QCD critical point on baryon diffusion: First, baryon diffusion is largest at very early times when its relaxation time is shortest and it quickly relaxes to its Navier-Stokes value; the latter decays quickly, due to decreasing chemical gradients and a rapidly decreasing baryon diffusion coefficient. Second, the relaxation time for baryon diffusion increases at late times, generically as a result of cooling but possibly further enhanced by critical slowing down if the system passes close to the critical point. This makes it difficult for the baryon diffusion current to grow again. Third, critical effects that would modify 21 the Navier-Stokes limit for the baryon diffusion current become effective only at very late times when n η NS has already decayed to non-detectable levels. The baryon diffusion current thus remains small even if its Navier-Stokes value were significantly enhanced by critical scaling effects.
Knudsen and inverse Reynolds numbers
We close this section by investigating the (critical) Knudsen and inverse Reynolds numbers associated with baryon diffusion. These are typically taken as quantitative measures to assess the applicability of second order viscous hydrodynamics such as the BEShydro framework employed in this work. Copying their standard definitions for shear and bulk viscous effects [52,54,111], we here define for baryon diffusion Kn ≡ τ n θ and Re −1 ≡ √(−n µ n µ )/n, where θ is the scalar expansion rate. Kn is the ratio between the time scales for microscopic diffusive relaxation (τ n ) and macroscopic expansion (τ exp = 1/θ); the relaxation time τ n includes the effects of critical slowing down in the neighborhood of the QCD critical point. Re −1 is the ratio between the magnitude of the off-equilibrium baryon diffusion current and the equilibrium net baryon density in ideal fluid dynamics. Their space-time evolutions are shown in Fig. 14, together with the freeze-out (particlization) surface at e f = 0.3 GeV/fm 3 . Fig. 14a tells us that Kn ≳ 1 happens only outside the freeze-out surface, in the fireball's corona where the fluid has already broken up into particles even at the earliest stage of the expansion. The short-lived peak in Kn near τ = τ i = 1.5 fm/c and η s ∼ 4 is caused by the rapid increase of τ n in the dilute and very cold corona of the fireball (note that τ n,0 = C n /T). Critical slowing down near the QCD critical point causes the Knudsen number to increase somewhat around η s = 1 close to the freeze-out surface; this critical enhancement is barely visible as a light cloud on a blue background, indicating critical Knudsen numbers in the range Kn ∼ 0.5 − 0.7. Fig. 14b, on the other hand, indicates that Re −1 ≲ 0.3 during the entire evolution, even close to the places where the Navier-Stokes value of the baryon diffusion current peaks at early times (see Fig. 12). After τ ∼ 5 fm/c (including the entire critical region around the QCD critical point) its maximum value drops below 0.1, reflecting the rapid decay of the baryon diffusion current. The maximal Re −1 occurs around η s ≈ 2 shortly after the hydrodynamic evolution starts at τ i = 1.5 fm/c. From it emerges a region of sizeable inverse Reynolds number which ends at two moving boundaries where Re −1 = 0 (dark blue). The left boundary, moving towards smaller η s values, reflects a sign change of the baryon diffusion current (see Fig. 4f where at τ = 5.5 fm/c n η flips sign at η s ≈ 1). The right boundary, on the other hand, corresponds to where n µ decays to zero (which, according to Fig. 4f, happens at η s ≈ 2.5 when τ = 5.5 fm/c). The initial outward movement of the right boundary is a result of baryon transport to larger space-time rapidity. It stops moving after the diffusion current has decayed and no longer transports any baryon charge longitudinally.
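As an order-of-magnitude illustration of these two measures (not taken from the BEShydro output), the snippet below evaluates Kn = τ n θ for Bjorken-like expansion, θ = 1/τ, with τ n = C n /T, and the inverse Reynolds number for a placeholder diffusion current in a local Minkowski frame; the conversion factor ħc is needed because τ n = C n /T is naturally expressed in 1/GeV.

    import numpy as np

    C_n, hbarc = 0.4, 0.19733          # hbarc in GeV*fm

    def knudsen(T, tau):
        # Kn = tau_n * theta with tau_n = C_n/T and Bjorken expansion rate theta = 1/tau
        tau_n_fm = C_n / T * hbarc     # convert 1/GeV to fm
        return tau_n_fm / tau

    def inv_reynolds(n_mu, n):
        # Re^-1 = sqrt(-n_mu n^mu)/n for a spacelike diffusion current, metric (+,-,-,-)
        g = np.array([1.0, -1.0, -1.0, -1.0])
        return np.sqrt(-(g * n_mu) @ n_mu) / n

    print(knudsen(T=0.25, tau=1.5))    # early, hot fluid cell  -> ~0.2
    print(knudsen(T=0.15, tau=17.0))   # cell near freeze-out   -> ~0.03
    print(inv_reynolds(np.array([0.0, 0.0, 0.0, 0.02]), n=0.1))  # placeholder values -> 0.2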
The small values of Kn and Re −1 during the entire fluid dynamical evolution validate the applicability of second order viscous hydrodynamics, BEShydro, for describing flow and diffusive transport of baryon charge in the collision system studied here.
VI. CONCLUSIONS AND DISCUSSION
Baryon diffusion is an important dissipative effect in the hydrodynamic evolution of systems carrying a conserved net baryon charge. It smoothes out chemical inhomogeneities by transporting baryon charge relative to the momentum flow from regions of large to smaller net baryon density. It is driven by gradients of µ/T, but the corresponding transport coefficient characterizing the magnitude of the diffusive response, the baryon diffusion coefficient κ n (µ, T), is still poorly constrained theoretically. In this work we studied a (1+1)-dimensional system without transverse gradients and flow to explore diffusive baryon transport along the longitudinal (beam) direction in heavy-ion collisions and thereby gain intuition about possible strategies to extract the baryon diffusion coefficient from experimental data. A fundamental difficulty for such an extraction is that baryon diffusion manifests itself as a transport of net baryon density in coordinate space while the experimental observations provide a snapshot of the hydrodynamic medium at the end of its lifetime in momentum space. In a rapidly expanding system, collective flow gradients map different space-time regions into different regions of momentum space, but the thermal momentum spread in the local rest frame blurs this map (less so for the heavier baryons than for the more abundant lighter mesons), and makes it difficult to reconstruct the movement generated by baryon diffusion in coordinate space from the final net baryon distribution in momentum space. In the longitudinal direction, local thermal motion results in a rapidity spread of order T / m T [112,113], where T is the kinetic freeze-out temperature and m T is the average transverse mass of the particle species in question. The flow-induced separation of different fireball regions in longitudinal momentum rapidity is easier at higher collision energies where the fluid medium created in the collision covers a wider rapidity range, i.e. a larger multiple of the thermal smearing width. At lower beam energies, such as those probed in the RHIC BES campaign, unfolding the measured final rapidity distribution into different regions of space-time rapidity, with possibly different net baryon densities, becomes much harder, and therefore the reconstruction of baryon-diffusion-induced matter transport across space-time rapidity also becomes much harder.
In this work we studied central Au-Au collisions at √ s NN ∼ 20 GeV in which the fireball covers about 7 units of space-time rapidity along the beam direction (Fig. 4a). Assuming that baryon stopping leads to a space-time rapidity shift of about 1.5 units for the incoming projectile and target baryons, the initial net baryon distribution had a width of about 4 units. It was modeled by a double-humped function with two well-separated peaks located at η s = ± 1.5 (Fig. 4d). After accounting for ideal hydrodynamic evolution and thermal smearing this resulted in a double-humped net proton rapidity distribution whose peaks were still relatively cleanly separated by about 2 units of rapidity, but after including baryon diffusion they almost (though not quite) merged into a single broad hump around midrapidity (Fig. 8a). In our calculation, the QCD critical point was positioned at a baryon chemical potential µ = 250 MeV. Recent Lattice QCD results put the likely location of this critical point at µ > 400 MeV [96,114,115], which requires lower collision energies for its experimental exploration. At lower collision energies the width of the initial space-time rapidity interval of non-zero net baryon density will be narrower, and the final net-proton distribution will eventually become single-peaked in central collisions [116].
In the work presented here we focused on the questions of how diffusive baryon transport manifests itself along the beam direction in hydrodynamic simulations, what traces it leaves in the finally measured rapidity distributions, and how it is affected by critical scaling of transport coefficients in the proximity of the QCD critical point. To address these questions we systematically discussed the static and dynamic critical behavior of thermodynamic properties (especially those associated with baryon transport) and introduced an analytical parametrization of the correlation length that correctly reproduces the critical exponents of the 3D Ising universality class. Based on a careful comparison with the Hydro+/++ framework [34,40,72] we identified the critical scaling ("critical slowing down") of the relaxation time for the baryon diffusion current (τ n ∼ ξ 2 ), and demonstrated that in the critical regime the Israel-Stewart type equation for the baryon diffusion current plays the role of a single-mode Hydro+ equation for a vector slow mode. We did not discuss the out-of-equilibrium evolution of the slow mode itself. For a single scalar slow mode, this was studied elsewhere using the BEShydro+ framework [22] where it was found that its feedback to the hydrodynamic bulk evolution was negligible [21,22]. A systematic Hydro++ study incorporating the full coupled evolution of all relevant non-hydrodynamic critical slow modes is still outstanding.
We are not the first to point out that, because of its extended nature, different regions within a collision fireball probe different regions of the QCD phase diagram as they cool and expand. We found that, at early times, strong longitudinal gradients of µ/T lead to significant longitudinal baryon diffusion currents that shift the expansion trajectories for different parts of the fireball in different directions within the QCD phase diagram. The final net proton rapidity distributions reflect these shifts, albeit blurred by thermal smearing. Our model assumes a boost-invariant initial longitudinal momentum flow y flow = η s which maximizes the correlation between the space-time rapidity of a fluid cell on the freeze-out surface and the momentum rapidity of the final hadrons it emits. The expected breaking of boost-invariance of the longitudinal flow pattern by dynamical initialization effects at lower collision energies even near midrapidity [47,48,117] will inevitably weaken this correlation, adding to the decorrelating effects of thermal smearing. This will likely result in significant sensitivities of baryon diffusion coefficients inferred from experimental net baryon rapidity distributions to poorly controlled model ambiguities in the initial space-time rapidity profile of the net baryon density assumed in the dynamical model.
The baryon diffusion flows observed in the calculations presented in this paper are characterized by an important feature: They show almost no sensitivity to critical effects even for cells passing close to the critical point. Taken at face value, this implies that the hydrodynamic evolution of baryon diffusion leading to the finally emitted ensemble-averaged single-particle momentum spectra does not carry useful information for locating the QCD critical point.
The absence of critical effects on baryon diffusion in this work contrasts starkly with the strong critical effects on the evolution of the bulk viscous pressure found in Ref. [72] which led to significant distortions of the rapidity distributions for all emitted hadron species. The main reason for that behavior is that the bulk viscosity and bulk viscous pressure are generically large around the quark-hadron phase transition, even without explicitly including critical scaling effects. Adding the latter thus causes significant modifications of the dynamics of the bulk viscous pressure. 22 Baryon diffusion effects, on the other hand, here appear to be insensitive to critical dynamics, for two main reasons: (i) The baryon diffusion flows are strong at early times but decay very quickly, before the system enters the critical region, because diffusion reduces the initially strong chemical gradients ∇(µ/T) that drive the flows, and the baryon diffusion coefficient describing the response to these gradients decreases quickly as the fireball cools by expansion. (ii) By the time the system reaches the phase transition, possibly passing close to the critical point, the Navier-Stokes value of the baryon diffusion current is already very small; critical enhancement of the baryon diffusion coefficient κ n by a power of the correlation length ξ/ξ 0 therefore does not help to revive it, and in any case the relaxation rate controlling the approach of the baryon diffusion current to its critically affected Navier-Stokes value is reduced by critical slowing down. The smallness of the baryon diffusion current on the freeze-out surface also implies very small dissipative corrections to the Cooper-Frye formula at particlization.
The observed insignificance of critical effects on baryon diffusion might be taken as permission to calibrate the fireball medium's bulk evolution at BES energies without worrying about the QCD critical point and its location. This may be premature, however, for multiple reasons: (1) The inclusion of shear and bulk viscous effects will modify the expansion trajectories through the phase diagram. Although the critical enhancement of the shear viscosity is negligible (η ∼ ξ^(ε/19) [23], where ε = 4−d, with d being the number of spatial dimensions), critical slowing down of the shear stress tensor may still be significant, possibly causing larger residual shear stresses in the vicinity of the critical point than predicted in the absence of critical effects. This has not been studied. The bulk viscous pressure is already known to be strongly affected by critical phenomena [72]. We see a potential for these effects to spill over into the baryon diffusion channel through second-order transport coefficients that couple these channels. Should this happen, shear and bulk viscous effects could invalidate some of the findings of the work presented here (in particular the rapid decay of the baryon diffusion current observed in Sec. V D 1). To clarify this, a future full simulation should include all dissipative effects simultaneously. (2) Large uncertainties still exist about the size of the critical region which is determined by the parametrization of the correlation length [65,67]. (3) At lower beam energies, the system may enter the critical region earlier, before the baryon diffusion significantly decays. When the evolution is fully (3+1)-dimensional the edge of the fireball may enter the critical region earlier as well, while the diffusion current is still appreciable.
A fully quantitative evaluation of the significance of critical effects on bulk medium evolution at BES energies can only be made on top of an at least tentatively constrained bulk evolution that includes all necessary theoretical ingredients and complications [57]. Only if this confirms negligible critical effects on bulk evolution can the final model calibration be safely made without worrying about critical modifications.

Following the convention of [65,119,120], the linear mapping from 3-dimensional Ising variables (i.e., reduced Ising temperature r and magnetic field h) to the coordinate variables of the QCD phase diagram (i.e., temperature T and baryon chemical potential µ) is given by Eq. (B1), where (w, ρ) are scale factors for the Ising variables r and h. For notational simplicity we let s i = sin α i , c i = cos α i , i = 1, 2, where α 1 and α 2 are the angles relative to the negative µ axis for the mapped r and h axes, respectively, defined in Fig. 1. From Eq. (B1) one can also find the inverse mapping relations.

Applying the flow with zero transverse components, these equations can be written as coupled partial differential equations, which can be solved using other software (e.g. Mathematica) for testing BEShydro's performance. While these equations clearly exhibit the physics in the LRF (which varies from point to point), BEShydro solves the conservation laws (Eqs. (1a,1b)) in a fixed global computational frame; thus this provides a non-trivial test of our hydrodynamic code.
To solve the equations, we start the system at the initial time τ i = 1 fm, using the following initial conditions: u µ = (1, 0, 0, 0) and the initial diffusion current is zero; the longitudinal distribution of the baryon density is specified in terms of n max = 10 fm −3 , σ η = 0.5 and η ± = ±1, and the initial temperature is T(τ i , η s ) = T 0 = 0.5 GeV. Here g s = 16 is the degeneracy factor in the EoS below. The initial chemical potential is set to be zero everywhere. One also needs the two transport coefficients in Eq. (C4), and we use [76] κ n = 3/(16 σ tot ) and τ n = 9/(4 n σ tot ), with σ tot = 10 mb = 1 fm 2 . To close the equations, we use the EoS e = 3p = 3nT, n = (g s /π 2 ) T 3 exp(µ/T). (C7) We note that this EoS can be analytically inverted to get T(e, n) and µ(e, n), directly usable for hydrodynamic codes, which is convenient for testing such codes. The setup above, which is taken from Ref. [56], can be solved semi-analytically using e.g. Mathematica and used for validating BEShydro; the comparison for the baryonic sector is shown in Fig. 15. One can see from the figure that BEShydro works perfectly well in the longitudinal evolution.
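Because the EoS (C7) describes a classical massless Boltzmann gas, its inversion can be written down explicitly as T = e/(3n) and µ = T ln(nπ 2 /(g s T 3 )). The sketch below is a simple round-trip check of this inversion (an illustration, not code from BEShydro).

    import numpy as np

    g_s = 16.0

    def eos_forward(T, mu):
        # toy EoS: n = (g_s/pi^2) T^3 exp(mu/T), e = 3 n T  (natural units)
        n = g_s / np.pi**2 * T**3 * np.exp(mu / T)
        return 3.0 * n * T, n

    def eos_invert(e, n):
        # closed-form inversion T(e,n), mu(e,n) of the toy EoS
        T = e / (3.0 * n)
        mu = T * np.log(n * np.pi**2 / (g_s * T**3))
        return T, mu

    # round-trip check at the initial condition of this test (T0 = 0.5 GeV, mu = 0):
    e0, n0 = eos_forward(0.5, 0.0)
    print(eos_invert(e0, n0))   # -> (0.5, 0.0) up to rounding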
With the solutions, we can also get the space-time distribution of the freeze-out surface and, correspondingly, the other hydrodynamic quantities on the surface; thus we can take the opportunity to test the freeze-out surface finder implemented in BEShydro, which is based on Cornelius [107]. We have tested its efficiency within BEShydro at non-zero baryon density in the transverse plane [108]. Here, we validate it for this work, where longitudinal dynamics with a baryon diffusion current is evolved. In Fig. 16, we define the freeze-out surface at two different freeze-out energies, and we see that the space-time profile and the distribution of the freeze-out temperature agree perfectly with the semi-analytical results.
Calculation of various gradients
As mentioned in Sec. II, to include critical contributions to the EoS without using such an EoS, we rewrite the Navier-Stokes limit term of baryon diffusion, κ n ∇ µ (µ/T) (Eq. (C8)), into the two terms D B ∇ µ n + D T ∇ µ T (Eq. (C9)), through which the critical behavior (18) is introduced into the thermal properties of the system. In the original BEShydro Eq. (C8) is calculated and tested, but Eq. (C9) is much more complicated to calculate numerically.

Note that consistency between the results from κ n ∇α (red solid line) and D T ∇T + D B ∇n (green dashed line) is a highly nontrivial test of the numerical methods, since the latter involves calculating (∂p/∂T) n and interpolating tabulated χ, etc. Though the overall agreement is very good, small wiggles can be seen near the edge for the latter case, a reflection of the complexities in the numerics.
Here we validate the code by verifying that exactly the same results can be achieved using the two expressions for the Navier-Stokes term in Eqs. (C8,C9). A quick and easy test can be done using the setup from Sec. C 1, where the required thermodynamic derivatives are analytically available and easily used in BEShydro.
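For the analytic EoS of Sec. C 1 the equivalence of the two expressions can also be checked outside of BEShydro with a few lines of Python. The sketch below uses the chain rule, writing ∇(µ/T) = (∂α/∂n) T ∇n + (∂α/∂T) n ∇T with α ≡ µ/T = ln(nπ 2 /(g s T 3 )), so that D B = κ n /n and D T = −3κ n /T for this particular EoS; this is only one convenient way to organize the decomposition for the toy EoS and is not claimed to reproduce the exact form of Eq. (9) used with neos. The profiles, κ n value, and grid are placeholders.

    import numpy as np

    g_s, kappa_n = 16.0, 0.05          # kappa_n: constant placeholder for this check

    def alpha(T, n):
        # alpha = mu/T = ln(n pi^2 / (g_s T^3)) for the toy EoS
        return np.log(n * np.pi**2 / (g_s * T**3))

    # analytic partial derivatives of alpha for this EoS:
    dalpha_dn = lambda T, n: 1.0 / n    # (d alpha / d n)_T
    dalpha_dT = lambda T, n: -3.0 / T   # (d alpha / d T)_n

    # smooth 1D stand-in profiles in eta_s:
    eta = np.linspace(-3.0, 3.0, 601)
    T = 0.5 * np.exp(-eta**2 / 8.0)
    n = np.exp(-(eta - 1.0)**2 / 0.5) + np.exp(-(eta + 1.0)**2 / 0.5)

    grad = lambda f: np.gradient(f, eta)
    lhs = kappa_n * grad(alpha(T, n))                                        # kappa_n * grad(mu/T)
    rhs = kappa_n * (dalpha_dn(T, n) * grad(n) + dalpha_dT(T, n) * grad(T))  # D_B grad n + D_T grad T
    print(np.max(np.abs(lhs - rhs)))   # ~0 up to finite-difference error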
We have checked that both methods in Eqs. (C8,C9) give precisely the same results in Fig. 15. However, in the simulations of this work the neos is used, and the two terms in Eq. (C10) are not analytically calculable. Thus we repeat the comparison to MUSIC as in Ref. [54] for the two gradients, and show the validation in Fig. 17, which shows excellent agreement. From the figure, we also see that the baryon transport driven by D B ∇ µ n is very close to that driven by D B ∇ µ n + D T ∇ µ T, indicating that density gradients dominate over temperature gradients in the Navier-Stokes limit of the baryon diffusion current.
"Physics"
] |
On the importance of excited state species in low pressure capacitively coupled plasma argon discharges
In the past three decades, first principles-based fully kinetic particle-in-cell Monte Carlo collision (PIC/MCC) simulations have been proven to be an important tool for the understanding of the physics of low pressure capacitive discharges. However, there is a long-standing issue that the plasma density determined by PIC/MCC simulations shows quantitative deviations from experimental measurements, even in argon discharges, indicating that certain physics may be missing in previous modeling of the low pressure radio frequency (rf) driven capacitive discharges. In this work, we report that the energetic electron-induced secondary electron emission (SEE) and excited state atoms play an important role in low pressure rf capacitive argon plasma discharges. The ion-induced secondary electrons are accelerated by the high sheath field to strike the opposite electrode and produce a considerable number of secondary electrons that lead to additional ionizing impacts and further increase of the plasma density. Importantly, the presence of excited state species even further enhances the plasma density via excited state neutral and resonant state photon-induced SEE on the electrode surface. The PIC/MCC simulation results show good agreement with the recent experimental measurements in the low pressure range (1–10 Pa) that is commonly used for etching in the semiconductor industry. At the highest pressure (20 Pa) and driving voltage amplitudes 250 and 350 V explored here, the plasma densities from PIC/MCC simulations considering excited state neutrals and resonant photon-induced SEE are quantitatively higher than observed in the experiments, requiring further investigation on high pressure discharges.
Introduction
Radio frequency (rf) capacitively coupled plasma (CCP) discharges have a wide range of applications, but in particular they are applied for material processing in the semiconductor industry. The CCPs are currently indispensable for etching and thin film deposition in integrated circuit fabrication. The discharge is created when a rf voltage is applied across a gas-filled region between two electrodes. The gas pressure can vary over a wide range and the discharge properties vary with the operating pressure. For example, at low operating pressure (a few Pa), the mean free path for both electrons and ions is larger than the electrode spacing, and the ion velocity is highly anisotropic when ions arrive at the electrode surface after acceleration across a high sheath field [1,2]. For the fabrication of solar panels and flat panel displays, which involves thin film deposition on large area surfaces, the pressure is often higher, or in the intermediate pressure regime, on the order of 130 Pa [3][4][5][6][7].
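A quick estimate illustrates the low pressure regime referred to here: with an assumed representative electron-neutral cross section of order 10^-19 m^2 (the true cross section is strongly energy dependent) and a room-temperature gas, the electron mean free path at a few Pa is already comparable to or larger than the 4 cm electrode spacing used later in this work.

    # rough mean-free-path estimate for electrons in argon at low pressure;
    # sigma_eff is an assumed order-of-magnitude electron-neutral cross section
    k_B = 1.380649e-23      # J/K
    T_gas = 300.0           # K
    sigma_eff = 1e-19       # m^2 (assumed)
    gap = 0.04              # m, electrode spacing used in this work

    for p in (1.0, 10.0, 20.0):             # Pa
        n_g = p / (k_B * T_gas)             # neutral gas density, m^-3
        lam = 1.0 / (n_g * sigma_eff)       # mean free path, m
        print(f"p = {p:4.1f} Pa: n_g = {n_g:.2e} m^-3, lambda = {100*lam:.2f} cm (gap = {100*gap:.0f} cm)")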
Numerical simulations are of significant importance in understanding the fundamental mechanisms of capacitive discharges. In particular, particle-in-cell simulations coupled with Monte Carlo collisions (PIC/MCC), which self-consistently calculate electron/ion energy and velocity distribution functions, have been proven to be a very powerful tool, applicable over a wide pressure range [8][9][10][11]. Discharges operated in noble gases, especially argon, are usually studied to gather understanding of the fundamental mechanisms in plasma discharges. In earlier studies, Roberto et al [12] and Lauro-Taroni et al [13] showed by means of PIC/MCC simulations, albeit in the absence of secondary electron emission (SEE) from the surface, that the direct ionization from the ground state atom is the main ionization source in low pressure argon discharges, and that the ionization from excited state species, e.g. via metastable pooling and step-ionization processes, is negligible. Therefore, in most of the particle-based modeling studies at the current time [14][15][16][17][18], the excited state atoms, which have a relatively low density compared to the feedstock gas, are excluded [13,19]. Meanwhile, it is also known that with increasing pressure the plasma gas composition may differ from the feedstock gas, and the reaction-generated excited state species, especially the long-lived metastable atoms Ar m , although having a lower density than the feedstock gas atoms, can alter the discharge properties through metastable pooling and step-ionization. While fluid modeling inevitably requires certain assumptions on the ion/electron energy distribution functions and transport coefficients, a particle-in-cell simulation tracing excited state neutrals as individual particles via a direct simulation of MCCs is computationally expensive.
In our recent works on capacitive argon discharges [20][21][22], the excited state neutrals are modeled as space- and time-evolving fluids incorporated with PIC/MCC simulations that treat charged particles as individual computer particles. This way, not only are the nonlocal dynamics of charged particles properly included as in conventional particle-based modeling, but the effect of the excited state neutrals is also captured. It is found that the presence of metastable atoms enhances the plasma density by a factor of three at an intermediate pressure of 213 Pa, and the main ionization source changes from electron impact ionization of the ground state atom at low pressure (7 Pa) to metastable pooling and step-ionization at higher pressure (666-2000 Pa) [20]. The photon emission process of the Ar(4p) manifold, Ar(4p) → Ar m + hν, contributes a considerably large proportion of the Ar m production at low pressure, while electron impact excitation gradually dominates the Ar m production at high pressure (666-2000 Pa). The plasma density axial profile is also found to transition from a parabolic shape at low pressure to a 'passive' flat bulk shape at high pressure, due to the ionization occurring at the sheath-plasma interface with/without excited state neutrals [7,22]. In the absence of excited state neutrals, the ionization near the sheath-plasma interface is due to electron impact ionization; while, in the presence of excited state species, the ionization near the sheath edge is mainly due to metastable pooling ionization because the metastable atoms Ar m are mainly located near the sheath edge [22].
In addition to the plasma processes in the bulk region, the surface processes of SEE have also attracted increasing attention in recent years [21,[23][24][25][26][27]. The simplest assumption for plasma surface processes is to set the ion-induced SEE coefficient to a constant, often assumed to be 0.1, and to assume that electrons are simply reflected with a probability of 0.2. Under such assumptions, for rf driven CCPs operated at low pressure, the secondary electrons are accelerated by the high sheath field, penetrate the bulk plasma region collisionlessly, and are lost at the opposite electrode, even though they absorb power from the rf source. However, the effect of SEE on the plasma density is generally assumed to be less important at low pressure [1]. The γ-mode, in which secondary electron-induced ionization within the sheath is dominant, is known to appear at high pressure and high voltage [28].
In 1998, Gopinath et al [29] introduced a more realistic treatment of the electron-electrode interaction in multipactor discharges, which are sustained within two parallel metal plates. In their approach the empirical Vaughan formula describing the electron-induced secondary electron yield (SEY), which depends on the impact energy and incident angle [30,31], is adopted. Different from the simple assumption of a constant electron reflection coefficient, the empirical Vaughan model considers the incident electron energy and angle, allowing the electron-induced SEE yield to be above unity if the incident electron energy is above a certain threshold, i.e. tens of eV for most metal and dielectric materials. It implies that an ion-induced secondary electron with high energy, after acceleration in the high sheath field, is capable of producing more than one secondary electron at the opposite electrode if it can overcome the opposite sheath potential barrier and arrive at the electrode with a high energy. The newly ejected secondary electron moves toward the bulk plasma region under the action of the sheath field, which usually points toward the electrodes, and produces ionization. This regime was found by Horváth et al [24] to significantly enhance the plasma density in low pressure (0.5 Pa), high voltage (∼1000 V) rf driven argon CCPs.
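For orientation, a sketch of one commonly used (revised) form of Vaughan's empirical SEY curve is given below: the yield vanishes below a threshold E 0 , rises to a maximum δ max at impact energy E max , and falls off slowly at higher energies, with smooth-surface corrections for oblique incidence. The functional form follows the widely used revised Vaughan parametrization, but the specific parameter values (E 0 , E max0 , δ max0 , k s ) are illustrative placeholders and are not the surface parameters used in Refs. [29-31] or in this work.

    import numpy as np

    def vaughan_sey(E, theta, E0=12.5, Emax0=300.0, dmax0=1.8, ks=1.0):
        # one commonly used (revised) form of Vaughan's empirical SEY curve;
        # E: impact energy (eV), theta: incidence angle from the surface normal (rad);
        # all parameter values here are illustrative placeholders
        if E <= E0:
            return 0.0
        Emax = Emax0 * (1.0 + ks * theta**2 / (2.0 * np.pi))   # angular corrections
        dmax = dmax0 * (1.0 + ks * theta**2 / (2.0 * np.pi))   # for a smooth surface
        w = (E - E0) / (Emax - E0)
        if w <= 1.0:
            return dmax * (w * np.exp(1.0 - w))**0.56
        elif w <= 3.6:
            return dmax * (w * np.exp(1.0 - w))**0.25
        else:
            return dmax * 1.125 / w**0.35

    for E in (20.0, 100.0, 300.0, 1000.0):
        print(E, "eV ->", round(vaughan_sey(E, theta=0.0), 3))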
The ion-induced SEE coefficient in reality depends on the ion impact energy and on the surface conditions. For a clean metal electrode surface, the SEY of argon ions is almost constant at 0.07 when the ion energy is below around 500 eV, and slightly rises with increasing ion impact energy, while, for a dirty electrode surface, the argon ion-induced SEY is as low as 0.01 for ion energies below 10 eV and increases up to 0.4 when the ion energy is 1000 eV [32].
The discharge characteristics under realistic ion and fast atom-induced SEYs have been studied for single-frequency and multi-frequency low-pressure capacitive rf discharges driven by tailored voltage waveforms in argon [23,[25][26][27] and in electronegative gases [33,34]. In our recent work, the excited state neutral-induced SEE was considered in intermediate pressure rf driven argon CCPs, and a γ-mode discharge is observed in the presence of excited state neutral-induced SEE due to a large excited neutral-to-ion flux ratio of SEE [21]. In addition, the electrode material effects for SEE from SiO 2 , Cu, and Si surfaces have also been examined by PIC/MCC simulations by several groups [16,[35][36][37].
Although substantial knowledge on both bulk plasma and surface processes in low-pressure argon CCPs has been accumulated over the past years, the quantitative validation of PIC/MCC simulations against experimental measurements remains a long-standing issue. In early studies, Vahedi et al [9] compared PIC/MCC simulation results to laboratory measurements by Godyak et al [38] and showed that the electron energy probability functions from the simulations are very similar to the experimentally determined ones over a gas pressure range from a few Pa to tens of Pa, but that the plasma density from the simulations is lower than the experimental values by a factor of two. The deviations in the electron density were re-evaluated and confirmed when the topic of electron power absorption in capacitive argon discharges was revisited by Lafleur et al [39]. In fact, a lower plasma density in particle-based simulations is frequently observed in quantitative comparisons with experimental measurements, with discrepancies of up to a factor of a few [27,40,41], indicating that the physics considered in the PIC/MCC simulation models may be incomplete.
More recently, Schulenberg et al [42] conducted a multi-diagnostic experimental validation of 1d3v PIC/MCC simulations for a low pressure rf driven capacitive argon discharge under typical discharge conditions, i.e. an operating gas pressure in the range from 1 to 20 Pa and an rf driving voltage amplitude in the range from 150 to 350 V. It was found that good agreement between simulation and experiment can be achieved only if the electron reflection coefficient is set to an 'effective' constant as high as 0.7. This 'effective' electron reflection coefficient was also used by Derzsi et al [43] in a study of a capacitive O 2 /Ne discharge.
In this work, we focus on the comparison of PIC/MCC simulation results against the recent experiments conducted by Schulenberg et al [42], and we explore the physics that may have been missing from past PIC/MCC modeling of CCPs. We use the PIC/MCC code oopd1 [20][21][22], which was strictly benchmarked and recently upgraded to include the reaction processes involving excited state neutrals, such as metastable pooling ionization, step ionization, photon emission, and the production and loss of excited state neutrals, as well as SEE induced by electrons, ions, and neutrals (including excited atomic states). We also explore the effect of resonant photon-induced SEE from the electrode surface. The electron-induced SEE, modeled by the empirical Vaughan model, is identified as important; the processes related to excited state species, including neutral-induced SEE at the electrode surface and resonant photon-induced SEE, are also found to play a critical role in low pressure CCPs. The fluxes of the different species, such as neutrals, ions, and photons, flowing toward the electrodes are also analyzed in detail.
In section 2, we briefly introduce the PIC/MCC model. The modeling results and their comparison with recent experiments are shown in section 3, and finally, discussions and conclusions are drawn in section 4.
The PIC/MCC model
All the simulations conducted here are for capacitive argon discharges sustained between two planar stainless steel electrodes, with the left-hand electrode connected to an rf voltage source through a dc blocking capacitor and the right-hand electrode grounded. The electrode spacing is kept at 4 cm, the driving frequency is 13.56 MHz, the feedstock gas pressure varies from 1 to 20 Pa, and the rf voltage amplitudes are 150, 250, and 350 V. These conditions are the same as in the experiments of Schulenberg et al [42].
In the simulations, the charged particles (electrons and/or ions) are treated as individual particles. A computer particle represents a cluster of 10^6-10^12 real particles and moves in the direction normal to the electrodes, with its velocity updated in three directions using an explicit leap-frog integration scheme. When the system reaches a steady state, the number of particles per discrete cell width and per Debye length is much larger than unity to minimize the discrete particle noise; thus a large number of computer particles (∼10^5) are traced. The electric field is obtained by solving Poisson's equation, where the net charge density is collected over all charged particles by linearly weighting the charge within a cell onto the two neighboring discrete grid points. One thousand discrete cells are used in the simulation, with each cell width resolving the Debye length.
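To make the push, weighting, and field-solve steps described above concrete, the sketch below shows one explicit leap-frog step of a minimal 1D electrostatic PIC cycle with linear charge weighting and a finite-difference Poisson solve between two electrodes. It is an illustrative toy, not the oopd1 implementation; the grid size, time step, particle weight, and initial distribution are assumed values chosen only for demonstration.

```python
import numpy as np

# Illustrative 1D electrostatic PIC step (not the oopd1 code): leap-frog push,
# linear charge weighting to the grid, and a finite-difference Poisson solve.
EPS0 = 8.854e-12                        # vacuum permittivity (F/m)
QE, ME = 1.602e-19, 9.109e-31           # elementary charge (C), electron mass (kg)
L, NG = 0.04, 200                       # 4 cm gap, number of grid cells (assumed)
dx = L / NG
dt = 1e-11                              # time step (assumed; must resolve the plasma frequency)

rng = np.random.default_rng(0)
NP = 10000                              # computer particles, each standing for many real ones
weight = 1e12                           # real particles per computer particle (assumed)
x = rng.uniform(0, L, NP)               # positions (m)
v = rng.normal(0, 4e5, NP)              # axial velocities (m/s, assumed thermal spread)

def deposit(x):
    """Linear weighting of particle charge to the two neighbouring grid points."""
    rho = np.zeros(NG + 1)
    j = np.floor(x / dx).astype(int)
    f = x / dx - j
    np.add.at(rho, j, 1.0 - f)
    np.add.at(rho, j + 1, f)
    return -QE * weight * rho / dx      # electron charge density per unit electrode area

def solve_poisson(rho, v_left=0.0, v_right=0.0):
    """Solve d2(phi)/dx2 = -rho/eps0 with Dirichlet potentials on the electrodes."""
    phi = np.zeros(NG + 1)
    phi[0], phi[-1] = v_left, v_right
    A = (np.diag(-2.0 * np.ones(NG - 1))
         + np.diag(np.ones(NG - 2), 1) + np.diag(np.ones(NG - 2), -1))
    b = -rho[1:-1] / EPS0 * dx**2
    b[0] -= phi[0]
    b[-1] -= phi[-1]
    phi[1:-1] = np.linalg.solve(A, b)
    E = np.zeros(NG + 1)
    E[1:-1] = (phi[:-2] - phi[2:]) / (2 * dx)          # E = -d(phi)/dx
    E[0], E[-1] = (phi[0] - phi[1]) / dx, (phi[-2] - phi[-1]) / dx
    return E

def gather(E, x):
    """Interpolate the grid field back to the particle positions (linear weighting)."""
    j = np.floor(x / dx).astype(int)
    f = x / dx - j
    return (1 - f) * E[j] + f * E[j + 1]

# One leap-frog step for the electrons; particles crossing an electrode are removed.
E_grid = solve_poisson(deposit(x), v_left=0.0)          # rf voltage omitted in this toy step
v += (-QE / ME) * gather(E_grid, x) * dt
x += v * dt
inside = (x > 0) & (x < L)
x, v = x[inside], v[inside]
print(f"{inside.sum()} particles remain after one step")
```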
The collision dynamics of the charged particles is simulated by a null collision method [10]. For ions, elastic and charge exchange collisions with neutrals are implemented using the isotropic elastic scattering and backward scattering cross sections of Phelps [61]. The electron-neutral collisions considered in this work are elastic scattering, electron-impact excitation, and electron impact ionization. The argon atoms are excited to multiple levels, including the metastable-level atoms Ar m and the resonance radiative atoms Ar r of the 4s manifold, and the 4p manifold (Ar(4p)). The corresponding energy thresholds and cross sections are given in detail in our previous works [20,21].
Since electron elastic scattering influences the electron power absorption, and in turn the plasma density, we introduce here more details on the calculation of the elastic scattering angle. The elastic scattering between electrons and neutrals is implemented in the center-of-mass system, and the scattering angle χ follows the expression derived by Okhrimovskyy et al [62]:

cos χ = 1 − 2R(1 − ξ) / (1 + ξ(1 − 2R)),    (1)

where ξ is the screening parameter and R is a random number (R ∈ [0,1]). For a conventional screened Coulomb interaction, ξ = 4ϵ/(1 + 4ϵ) with ϵ = E inc /E 0 , where E inc is the incident electron energy in the center-of-mass system before the collision and E 0 = 27.21 eV. To make the elastic momentum transfer more consistent with experiments, the screening parameter ξ was also derived by Okhrimovskyy et al [62] from the normalized differential scattering cross section and expressed as a function of the ratio of the momentum transfer cross section to the total elastic scattering cross section recommended by Hayashi [63]. The comparison of different calculations of the scattering angle (the non-isotropic scattering angle of Vahedi et al [10], equation (1), and isotropic scattering based on the momentum transfer cross section), of the screening parameters (Coulomb screening and Hayashi's screening parameter), and of their effects on the plasma density, electron temperature, and electron power deposition at low (6.7 Pa) and intermediate (213 Pa) pressure is discussed in more detail in an earlier work [22]. In this work, the screening parameter based on the experimental momentum transfer and total elastic scattering cross sections recommended by Hayashi is adopted to calculate the scattering angle in equation (1). The algorithm for electron-neutral excitation and electron impact ionization, in terms of energy partition and determination of the scattering angle of the ejected electrons, can be found elsewhere [20,53].
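A short sketch of how the scattering angle in equation (1) can be sampled is given below. For simplicity it uses the screened-Coulomb form ξ = 4ϵ/(1 + 4ϵ) rather than the Hayashi-based screening parameter actually used in this work, and the incident energies are illustrative.

```python
import numpy as np

def sample_cos_chi(energy_ev, rng, e0=27.21):
    """Sample cos(chi) from the Okhrimovskyy angular distribution, eq. (1).

    The screening parameter here uses the simple screened-Coulomb form
    xi = 4*eps/(1 + 4*eps); the paper instead derives xi from Hayashi's
    momentum-transfer and total elastic cross sections.
    """
    eps = np.asarray(energy_ev, dtype=float) / e0
    xi = 4.0 * eps / (1.0 + 4.0 * eps)
    r = rng.random(np.shape(eps))
    return 1.0 - 2.0 * r * (1.0 - xi) / (1.0 + xi * (1.0 - 2.0 * r))

rng = np.random.default_rng(1)
for e in (1.0, 10.0, 100.0):            # incident electron energies in eV (illustrative)
    cos_chi = sample_cos_chi(np.full(100000, e), rng)
    # Low energy -> nearly isotropic (<cos chi> ~ 0); high energy -> forward peaked.
    print(f"E = {e:6.1f} eV  <cos chi> = {cos_chi.mean():.3f}")
```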
When an energetic electron arrives at the electrode surface, electron-induced SEE is considered and the emission coefficient is calculated by a modified empirical Vaughan formula [24,[29][30][31]64]. The total SEE coefficient (γ e ) is fitted to the experimental data provided by Baglin et al [65] and by Furman and Kirby [66,67] for normal incidence on a stainless steel surface; these data cover primary electron energies in the ranges 10-1000 eV [65] and 50-2000 eV [66,67], respectively. γ e based on Vaughan's model (Vau) used in this work and the experimental data (Exp a and b) [65][66][67] used for the fitting are shown in figure 1. For low energy primary electrons incident on the surface, an elastic reflection process with the emission velocity specularly symmetric with respect to the surface normal is considered, as done elsewhere [24,64,68]. For higher energy primary electrons, the true SEE is described by the conventional Vaughan model [30,31], with a 3% elastic reflection component and a 7% inelastically backscattered component [29]. The corresponding three components, elastic reflection, inelastic backscattering, and true secondary emission, are plotted in figure 1 and labeled η e , η i , and η SEE , respectively. The parameters characterizing the conventional Vaughan formula, elastic reflection, and inelastic backscattering that fit the experimental measurements for a stainless steel surface are summarized in table 1 [24].

Table 1. Parameters characterizing the electron-induced SEE model for a stainless steel surface [24]:
• Threshold for elastic reflection: 0 eV
• η e,max (maximum elastic reflection coefficient)
• Energy corresponding to η e,max : 5.5 eV
• ∆ e (control parameter for the decay of η e ): 10 eV
• r e (portion of elastically reflected high-energy electrons): 3% [29]
• Portion of inelastically back-scattered high-energy electrons: 7% [29]
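As an illustration of the shape of such an energy-dependent yield, the sketch below evaluates a Vaughan-type true-SEE curve together with constant elastic and inelastic reflection fractions. The peak yield, peak energy, threshold, and smoothness exponents used here are generic placeholder values of the kind commonly quoted for Vaughan's parametrization, not the fitted stainless-steel parameters of table 1.

```python
import numpy as np

def vaughan_sey(e_imp, delta_max=2.0, e_max=300.0, e_th=12.5,
                k_below=0.62, k_above=0.25):
    """Vaughan-type true secondary electron yield versus impact energy (eV).

    delta_max, e_max, e_th and the exponents are placeholder values for
    illustration, not the fitted parameters used in the paper (table 1).
    """
    e_imp = np.asarray(e_imp, dtype=float)
    w = np.clip((e_imp - e_th) / (e_max - e_th), 0.0, None)
    k = np.where(w < 1.0, k_below, k_above)
    return delta_max * (w * np.exp(1.0 - w)) ** k

R_ELASTIC, R_INELASTIC = 0.03, 0.07   # constant high-energy reflection fractions [29]

for e in (10, 50, 150, 300, 1000):
    total = vaughan_sey(e) + R_ELASTIC + R_INELASTIC
    print(f"E = {e:5d} eV  gamma_e ~ {float(total):.2f}")
```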
The excited state atoms, the metastable-level Ar m (two levels treated as one), the radiative-level Ar r (two levels treated as one), and all the levels of the 4p manifold Ar(4p) (taken as one species), are treated as time- and space-varying diffusing fluids. The feedstock argon gas atoms are assumed to be a fixed fluid that is spatially uniform between the two metal electrodes. The gas temperatures used in our simulations are the values experimentally measured by Schulenberg et al in [42] and increase from 300 to 330 K as the pressure varies in the range 1-20 Pa. The effect of the gas temperature is comparable to the effect of the SEE coefficients of the different plasma species. For example, the plasma density at 150 V and 20 Pa is around 10% higher at room temperature than at 330 K; however, the excited state neutral- and photon-induced SEE increase the plasma density by around 22% and 54%, respectively (see the more detailed discussion of figure 3(b) below).
The excited state atoms are assumed to diffuse only against the background gas and not against each other, as the excited state atom densities are much lower than the feedstock gas density. The governing diffusion equation for the fluid species also includes the source and loss terms originating from impact reactions between fluids and charged particles, as well as fluid-fluid reactions. When an electron or ion is generated via a fluid-fluid reaction, it is added as an individual particle. On the electrode surface, the recombination coefficient is set to 0.5, a value estimated by Stewart [69], implying that the excited state atoms are 50% quenched and 50% reflected at the electrode surface. The excited state atoms also lead to SEE at the surface, with a coefficient of 0.21 for Ar m and Ar r , and 0.27 for Ar(4p) [70]. The resonance radiation of Ar r is partially imprisoned at low pressure, and the fraction of the radiation escaping depends on the specific gas pressure and electrode spacing. The Walsh model [71] is used to calculate the escape factor g. Typically, higher pressure leads to a lower value of g, and a smaller fraction of the resonant photons escape from the plasma to the surface. The escaped photons are traced and their impact on the electrode surface, producing SEE, is calculated. For a photon energy of 11.62 eV, the resonant photon-induced secondary electron emission coefficient is set to 0.075, taken from the measurements of Feuerbacher and Fitton [72] for stainless steel.
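The surface treatment of excited state atoms just described can be illustrated with a small Monte Carlo sketch: each wall impact is 50% quenched and 50% reflected (recombination coefficient 0.5), and secondary electrons are emitted with the quoted species-dependent coefficients. Treating quenching and SEE as independent Bernoulli trials is an assumption made only for this illustration.

```python
import numpy as np

# Per-impact SEE probabilities for excited species on the electrode [70],
# and the surface recombination (quenching) coefficient estimated by Stewart [69].
GAMMA_EXC = {"Ar_m": 0.21, "Ar_r": 0.21, "Ar_4p": 0.27}
GAMMA_REC = 0.5

def surface_hits(species, n_impacts, rng):
    """Return (quenched, reflected, secondaries) for n_impacts wall collisions.

    Sampling quenching/reflection and SEE as independent Bernoulli trials is a
    simplification for illustration only.
    """
    quenched = rng.random(n_impacts) < GAMMA_REC
    secondaries = rng.random(n_impacts) < GAMMA_EXC[species]
    return quenched.sum(), n_impacts - quenched.sum(), secondaries.sum()

rng = np.random.default_rng(2)
for sp in GAMMA_EXC:
    q, r, s = surface_hits(sp, 100000, rng)
    print(f"{sp:5s}: quenched {q}, reflected {r}, secondary electrons {s}")
```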
Results
In this section, we present the PIC/MCC simulation results and their comparison to the experimental measurements, with emphasis on the plasma density and the effects of the different surface processes. For this comparison, the discharge conditions set in the PIC/MCC simulations are the same as in the experiments of Schulenberg et al [42]: the electrode spacing is fixed at 4 cm, the driving frequency is 13.56 MHz, the feedstock gas pressure is in the range of 1-20 Pa, and the rf voltage amplitudes are 150, 250, and 350 V. The surface processes that we explore are electron reflection (constant coefficient), real electron-induced SEE (with an incident energy- and angle-dependent SEY from the modified empirical Vaughan formula), excited state species-induced SEE, and resonant photon-induced electron emission. These processes are considered incrementally in the PIC/MCC simulations (see the more detailed definitions below). The case of electron reflection with a constant coefficient of 0.2 is the same as in the simulations of Schulenberg et al [42], except that more reactions involving excited state species, such as metastable pooling and step-wise ionization, are considered. In this work, however, we explore the more realistic surface processes of SEE from energetic electrons, excited state species, and photons rather than using an 'effective' electron reflection coefficient.
In our simulations, the ion-induced SEE yield is a function of the ion impact energy, and we assume dirty electrodes [21,32,73] for all cases. Generally, a clean electrode means that the metal surface has been flashed at a very high temperature for a long time, and its properties may change after some duration of discharge operation. Therefore, the surface condition of a 'dirty' electrode is used to calculate the ion-induced SEE yield [32,73] in this work. The other parameters involving surface processes, even though the corresponding SEE coefficients may be imperfectly known, are chosen as reasonably as possible based on the published literature.
The PIC/MCC simulations were conducted for four cases corresponding to four different surface models:

• Case I: constant electron reflection coefficient, γ e = 0.2, without excited state neutral- or photon-induced SEE (γ exc = 0, γ ph = 0);
• Case II: electron-induced SEE described by the modified Vaughan model (γ e = Vau), with γ exc = 0 and γ ph = 0;
• Case III: as case II, with excited state neutral-induced SEE added (γ exc = 0.21 for Ar m and Ar r , 0.27 for Ar(4p));
• Case IV: as case III, with resonant photon-induced SEE added (γ ph = 0.075).

An overview of the cases explored is given in table 2. Note that energetic ground state neutral atoms are tracked as individual particles only when their energy is above 32 eV, the threshold for fast neutral-induced SEE on a dirty metal surface. However, since the energetic ground state atom-induced SEY is energy dependent and much lower than the ion-induced SEY in the energy range of interest (0-160 eV, where 160 eV is almost the highest ion energy at 350 V [42]), the role of the fast ground state atom-induced SEY is indeed negligible, which is supported by simulations including and excluding energetic ground state neutrals.
For conciseness, we will refer to the different electron emission processes and cases via the SEE coefficients; for example, when case I is discussed, we will refer directly to 'γ e = 0.2; γ exc = 0; γ ph = 0'. It is worth noting that the metastable state atoms Ar m , the resonant state atoms Ar r , and Ar(4p) flowing toward the electrodes can all produce SEE. Here we use γ exc to represent the excited state neutral-induced secondary electron coefficient. Note that the flux of metastable atoms is higher than the flux of the other excited state species by two or three orders of magnitude in the pressure range investigated here.

Figure 2 shows the electron density versus gas pressure for the three driving voltage amplitudes, comparing the experimentally determined values with the PIC/MCC simulation results for cases I-IV. The black dotted lines with error bars in figures 2(a)-(c) show the experimental measurements of the plasma density at the discharge center by Schulenberg et al [42], versus pressure (1-20 Pa), at driving voltage amplitudes of 150, 250, and 350 V, respectively. We can see that the electron density n e increases monotonically with pressure for all driving voltage amplitudes, except for a slight decrease at 20 Pa for 250 and 350 V. Similar to the observations made by Schulenberg et al [42], the plasma density from the PIC/MCC simulations for the case 'γ e = 0.2, γ exc = 0, γ ph = 0' is significantly lower than the experimental measurements at the low pressure of 1 Pa. However, when the more realistic electron-induced SEE process described by the modified Vaughan model is considered, as shown by case II, 'γ e = Vau, γ exc = 0, γ ph = 0', in figure 2(a), the plasma density is significantly enhanced, increasing from 0.24 × 10^15 to 0.62 × 10^15 m^−3 for 150 V. Incrementally, the presence of γ exc , as shown by case III, 'γ e = Vau, γ exc = 0.21, γ ph = 0', further increases the plasma density to 0.8 × 10^15 m^−3 , and γ ph = 0.075 further increases the plasma density by around 7.5% (case IV). This implies that the addition of excited state species, whose excited state neutral and resonant photon impacts on the surface create secondary electrons, enhances the plasma density by 35% in total.
At higher pressures, in the range 5-10 Pa, the plasma densities with and without the Vaughan model are almost the same. However, the excited state neutral- and resonant photon-induced secondary electrons still enhance the plasma density. To quantify the plasma density enhancement from the different surface processes, the plasma density profiles at 1 and 20 Pa for 150 V are shown in figures 3(a) and (b), respectively. When the driving voltage amplitude is 250 or 350 V and the pressure is kept at 1 Pa, the plasma density is also enhanced by a factor of two. Similar to the 150 V case, as shown in figures 2(b) and (c), the change of γ e from 0.2 to the Vaughan model has a minimal effect on the plasma density at higher pressures (5-20 Pa) for 250 and 350 V. The contribution of SEE from the different surface processes depends on the electron dynamics present in the different parameter ranges, which is discussed further below.
The surface conditions controlling the discharge properties explored here in the PIC/MCC simulations are the SEE processes, determined mainly by the incident electron flux, energy, and angle, the incident ion flux and energy, the neutral (including excited state) flux, and the resonant photon flux. Firstly, we analyze the origin of the enhanced plasma density for the Vaughan model compared to a constant electron reflection coefficient. Comparing these two surface models, the only difference is the dependence of the SEY on the incident primary electron properties, i.e. in the Vaughan model the SEY depends on the impacting electron energy and incident angle. Although an oblique incidence produces a slightly higher SEY than a normal incidence at the same impact energy, the SEY depends most strongly on the impact energy; therefore, we plot the electron energy distribution, which mainly responds to the spatiotemporal electric field in a collisionless low pressure discharge (1 Pa). In figure 4(a), the electric field E x is limited to the range −0.3 × 10^4 to 0.3 × 10^4 V m^−1 , and the temporally evolving boundary of the gray area represents the sheath edge. We can see in figure 4(b) that the primary electrons with energy above the sheath potential arrive at the right electrode (x = 4 cm) and form a pattern during the sheath collapse. The color represents the fraction of electrons within the specific energy range indicated by the colorbar, and all primary electrons incident on the electrode with energy below 25 eV contribute only a small number of emitted secondary electrons.
The ions respond to the time-averaged sheath field, owing to their much larger mass than the electrons, and flow toward the electrodes at an almost constant flux; thus, ion-induced secondary electrons are present over the whole rf period regardless of sheath collapse or expansion. As a result, the high sheath field (see arrow A in figure 4(a)) in the gray area accelerates these secondary electrons to high energy, and they consequently impact the opposite (right) electrode (x = 4 cm) in the form of an electron beam, as shown in figure 4(c). The energy of the secondary electron beam corresponds to the rf sheath potential. As the potential drop across the bulk plasma is small, the sheath potential drop when the sheath approaches its maximum width is almost equal to the driving voltage amplitude, 150 V, and the maximum energy of the electron beam is around 150 eV. The plasma potential with respect to the two electrodes is equal and makes no net contribution to the secondary electron energy gain. Comparing with the starting point of the secondary electron beam (t/T ≈ 0.5) in figure 4(a), a phase delay can be found. The reason is that the accelerated secondary electrons emitted from the left electrode (x = 0 cm) during the initial phase of the sheath expansion are reflected by the opposite sheath potential, and arrive at the opposite electrode (x = 4 cm) only later. Note that the travel time of the secondary electrons across the bulk region is much smaller than the rf period and consequently contributes very little to the time delay. The energetic secondary electrons striking the surface produce a large number of new secondary electrons, as the SEY is above unity in the Vaughan model (these are also called δ electrons in some literature [14,16,37]). The new secondary electrons are accelerated by the ambipolar electric field (see arrow B in figure 4(a)) and injected into the bulk plasma. These new secondary electrons thus have properties similar to the primary electrons and are accelerated and decelerated back and forth by the sheath expansion and collapse, leading to additional ionization impacts in the bulk region. A considerable number of secondary electrons with the same properties as the primary electrons accumulate in the simulation, forming a pattern in figure 4(c) similar to that of the primary electrons in figure 4(b). Ultimately, the plasma density is significantly enhanced by the energetic secondary electrons that produce a large number of further secondary electrons through the surface process described by the Vaughan model.
At low pressure (1 Pa), the electron mean free path is much larger than the electrode spacing, and the secondary electrons can cross the bulk region without collisions. When the pressure is increased to 20 Pa, the electron mean free path is significantly shortened, and the electrons lose their energy via inelastic excitation and ionization impacts, with the latter directly producing more electrons and enhancing the plasma density. As can be seen in figure 4(d), the electron beam behavior disappears for secondary electrons bombarding the right electrode. Therefore, the plasma density is almost the same for the constant electron reflection coefficient and the Vaughan model at higher pressures (5-20 Pa). In figures 4(e) and (f), we show more information on the ionization and the secondary electron energy distribution at the mid-plane (x = 2 cm) for the same discharge conditions as in figures 4(b) and (c), respectively. As expected, the primary electron energy distribution oscillates continuously in response to the oscillating sheath.
The secondary electron energy distribution shows two beams, due to the separate acceleration of the secondary electrons by the alternating high sheath fields adjacent to the left and right electrodes. The two secondary electron beams at the mid-plane are almost symmetric between the first and second half of the rf period owing to the symmetry of the discharge. The highest energy of the secondary electrons in figure 4(f) is above 150 eV because of the presence of the plasma potential. Such electron beams were also observed in previous PIC/MCC simulations in which only ion-induced secondary electrons were considered and the ionization produced solely by those ballistic electrons was analyzed [74].
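The pressure dependence of the beam behavior discussed above hinges on how the electron mean free path compares with the 4 cm gap. A back-of-the-envelope estimate is sketched below; the inelastic cross section is an assumed order-of-magnitude value used only for illustration, not one of the cross sections actually employed in the simulations.

```python
import numpy as np

KB = 1.381e-23            # Boltzmann constant (J/K)
SIGMA_INEL = 3e-20        # assumed order-of-magnitude e-Ar inelastic cross section (m^2)
GAP = 0.04                # electrode spacing (m)

for p_pa, t_k in ((1.0, 300.0), (20.0, 330.0)):    # gas temperatures as measured in [42]
    n_gas = p_pa / (KB * t_k)                       # neutral gas density (m^-3)
    mfp = 1.0 / (n_gas * SIGMA_INEL)                # inelastic mean free path (m)
    tag = ">>" if mfp > 3 * GAP else "<"
    print(f"p = {p_pa:4.0f} Pa: n = {n_gas:.2e} m^-3, "
          f"lambda ~ {100 * mfp:.1f} cm ({tag} 4 cm gap)")
```

With these assumed numbers the mean free path is roughly ten centimetres at 1 Pa but well below the gap at 20 Pa, consistent with the collisionless beams at low pressure and their disappearance at high pressure.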
In addition to the electron- and ion-induced SEE discussed above, the excited state neutrals and resonant photons also lead to SEE, with the former controlled by the flux of metastable atoms Ar m and the latter by the resonant photon flux flowing toward the electrodes. The resonant photons originate from the escaped radiation of the resonance-level atoms Ar r . As the diffusion of Ar m and Ar r is not affected by the oscillating electric field, their density profiles vary only weakly over the rf period. This implies that these secondary electrons are also emitted continuously from both the left and right electrodes over time, similar to the dynamics of the ion-induced secondary electrons.
Quantitatively comparing the time-averaged fluxes of ions, excited state neutrals, and photons helps in understanding the importance of the different species, as shown in figures 5(a)-(c) for 150, 250, and 350 V, respectively. The flux of excited state neutrals is defined as Γ n,exc = (1 − 0.5γ rec )^−1 n exc v̄ n /4, with the mean thermal velocity v̄ n = √(8k B T/πm), the recombination coefficient γ rec = 0.5, and n exc the density of excited state neutrals at the electrode. We can see that the excited state neutral flux, Γ n,exc , first increases and then decreases with increasing feedstock gas pressure in the range of interest, with the maximum of Γ n,exc appearing at 2 Pa. There are two competing factors affecting Γ n,exc : (i) the excited state neutral densities; and (ii) the collisions with the feedstock gas. The higher excited state neutral densities at higher pressure tend to result in a higher flux Γ n,exc , whereas the more frequent collisions between excited state neutrals and the feedstock gas reduce the flux toward the electrodes. Factor (i) is dominant over factor (ii) when the pressure is between 1 and 2 Pa, and factor (ii) dominates when the pressure is higher than 2 Pa. As γ exc is almost three times larger than γ ph , the secondary electron flux from excited state neutrals, γ exc Γ n,exc , is larger than the secondary electron flux due to photons bombarding the electrodes, γ ph Γ ph , leading to a higher density gain at 1 Pa from excited state neutrals than from photons (see figure 3(a)).
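The wall-flux expression above can be evaluated directly, as in the short sketch below; the metastable density at the electrode and the gas temperature are placeholder values chosen for illustration only.

```python
import numpy as np

KB = 1.381e-23
M_AR = 39.948 * 1.661e-27        # argon atom mass (kg)
GAMMA_REC = 0.5                  # surface recombination coefficient [69]
GAMMA_EXC = 0.21                 # SEE coefficient for Ar_m / Ar_r on the electrode [70]

def excited_wall_flux(n_exc, t_gas):
    """Gamma_n,exc = (1 - 0.5*gamma_rec)^-1 * n_exc * v_bar / 4  (m^-2 s^-1)."""
    v_bar = np.sqrt(8.0 * KB * t_gas / (np.pi * M_AR))   # mean thermal speed
    return n_exc * v_bar / 4.0 / (1.0 - 0.5 * GAMMA_REC)

n_exc = 1e15                     # metastable density at the electrode (m^-3, assumed)
flux = excited_wall_flux(n_exc, 300.0)
print(f"Gamma_n,exc ~ {flux:.2e} m^-2 s^-1, "
      f"secondary electron flux ~ {GAMMA_EXC * flux:.2e} m^-2 s^-1")
```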
The ion and resonant photon fluxes increase with increasing gas pressure, and the density gain from photon-induced secondary electrons becomes greater than that from excited state neutrals (see figure 3(b)). The resonant photon flux incident on the electrode is proportional to the product of the Ar r density and the escape factor g, i.e. the fraction of photons escaping after radiation trapping. The Ar r densities in figure 6(a) increase significantly at higher pressures due to their generation by the reaction processes in the plasma bulk. The escape factor g shown in figure 6(b) decreases from 3 × 10^−3 to 0.8 × 10^−3 with pressure for the given electrode spacing. It is worth noting that the Ar r loss in the plasma is itself affected by the radiation rate (linearly proportional to the escape factor g). The combined, nonlinear effect results in a photon flux toward the surface that increases with pressure. This resonant photon-to-ion flux ratio is at the upper limit of what has been measured experimentally for inductively coupled plasmas [75], so our results represent an upper limit on the influence that γ ph might have.
Discussions and concluding remarks
Kinetic PIC/MCC simulations have been an important tool for understanding the physics of low pressure capacitive discharges over the past three decades. However, the plasma density from PIC/MCC simulations commonly shows quantitative deviations from experimental measurements, even in argon discharges. Generally, the plasma densities from particle-based simulations are lower than those observed experimentally [27,40,41], sometimes by a factor of a few [27], indicating that certain physics may be missing in previous modeling of low pressure rf capacitive discharges. More recently, PIC/MCC simulations were validated against experimental diagnostics [42], and it was found that an 'effective' electron reflection coefficient of 0.7 is required in the simulations to match the experimental measurements. However, the underlying physics supporting this coefficient of 0.7 is not fully understood.
In this work, we explored the surface processes that may account for the deviations between PIC/MCC simulations and the recent experiments [42]. Using the same external conditions as in the experiments, argon discharges are sustained between two stainless steel electrodes with a spacing of 4 cm. The feedstock gas pressure is varied in the range of 1-20 Pa, and the driving voltage amplitudes are 150, 250, and 350 V at a driving frequency of 13.56 MHz. It is found that both the energetic electron-induced SEE and the excited state atoms play important roles in low pressure (1 Pa) rf capacitive argon plasmas. The ion-induced secondary electrons are accelerated by the high sheath field and bombard the opposite electrode, producing a considerable number of secondary electrons. These energetic electron-induced secondary electrons are accelerated by the ambipolar electric field within the collapsed sheath toward the bulk plasma and act similarly to the primary electrons, i.e. they are accelerated and decelerated back and forth by the oscillating sheaths over time, leading to additional ionization impacts that finally increase the plasma density.
Importantly, the presence of excited state species further enhances the plasma density via SEE from the excited state neutrals and resonant state photons (originating from the escaped radiation of resonant level atoms Ar r ) bombarding the electrode surface. With the consideration of SEE from energetic electrons, excited state neutrals and resonant photons, the PIC/MCC simulation results agree with the recent experimental measurements at low pressure (1-10 Pa).
At higher pressures, above 5 Pa, the plasma density with the electron surface process described by the Vaughan model is almost the same as that with a constant electron reflection coefficient of 0.2, because of the absence of energetic electron beams at higher pressure, where the electron mean free path is shorter. The excited state neutral- and resonant photon-induced secondary electrons can still enhance the plasma density through direct ionization impacts within the plasma bulk region. To further understand the relative importance of excited state neutrals and resonant photons, we examined their fluxes toward the electrode surface. The excited state neutral-induced secondary electron flux dominates over the photon contribution at 1 Pa. With increasing pressure in the range of interest (5-20 Pa), the resonant photons play a more important role in SEE than the excited state neutrals.
We also note that the plasma density from PIC/MCC simulations with both excited state neutral- and photon-induced SEE included in the model is quantitatively higher than determined in the experiments at the highest pressure of 20 Pa for 250 and 350 V. There are a few possible reasons for the deviation: (i) the plasma discharge becomes more radially localized at high pressure, and a density peak usually appears above the electrode edge [40,76]; such radial geometry effects are excluded in the current one-dimensional simulations; (ii) outgassing processes from the electrode and glass wall may occur under ion impact at higher pressures due to contamination with air or water vapor in the experiments, and these contaminants may affect the discharge characteristics; (iii) the photon-induced SEE yield may differ for different treatments of the metal electrode; for example, a reduced SEY is reported experimentally for heat-treated metal surfaces (Au, Cu, and Pt) compared to untreated surfaces, as discussed by Phelps et al [32], and the coefficient 0.075 is an absolute upper bound expected only for non-oxidized, hydrocarbon-free, defect-free surfaces. In day-to-day usage, γ ph is likely smaller than 0.075. In addition, it was stated by Schulenberg et al [42] that an additional discharge around the Langmuir probe started to appear at pressures above 20 Pa, which may influence the experimental results at that pressure. Therefore, in the future, it is worth further investigating the possible missing physics in PIC/MCC modeling, and further developing improved measurement tools, for quantitative comparisons between simulations and experiments in higher pressure discharges.
Combination of the W boson polarization measurements in top quark decays using ATLAS and CMS data at $\sqrt{s} =$ 8 TeV.
The combination of measurements of the W boson polarization in top quark decays performed by the ATLAS and CMS Collaborations is presented. The measurements are based on proton-proton collision data produced at the LHC at a centre-of-mass energy of 8 TeV, and corresponding to an integrated luminosity of about 20 fb$^{-1}$ for each experiment. The measurements used events containing one lepton and having different jet multiplicities in the final state. The results are quoted as fractions of W bosons with longitudinal ($F_0$), left-handed ($F_\mathrm{L}$), or right-handed ($F_\mathrm{R}$) polarizations. The resulting combined measurements of the polarization fractions are $F_0 =$ 0.693 $\pm$ 0.014 and $F_\mathrm{L} =$ 0.315 $\pm$ 0.011. The fraction $F_\mathrm{R}$ is calculated from the unitarity constraint to be $F_\mathrm{R} = -$0.008 $\pm$ 0.007. These results are in agreement with the standard model predictions at next-to-next-to-leading order in perturbative quantum chromodynamics and represent an improvement in precision of 25 (29)% for $F_0$ ($F_\mathrm{L}$) with respect to the most precise single measurement. A limit on anomalous right-handed vector ($V_{\text{R}}$), and left- and right-handed tensor ($g_{\text{L}},g_{\text{R}}$) tWb couplings is set while fixing all others to their standard model values. The allowed regions are [$-$0.11, 0.16] for $V_{\text{R}}$, [$-$0.08, 0.05] for $g_{\text{L}}$, and [$-$0.04, 0.02] for $g_{\text{R}}$, at 95% confidence level. Limits on the corresponding Wilson coefficients are also derived.
Introduction
The large number of top quarks produced at the CERN LHC provides an excellent laboratory for the study of their production and decay properties. Precise predictions of some of these properties are available in the standard model (SM) of particle physics, and are tested through detailed comparisons to data. Potential deviations between data and predictions could reveal important information on the existence of new physics beyond the SM. The properties of the top quark decay vertex tWb are governed by the structure of the weak interaction. In the SM, this interaction has a V − A structure, where V and A refer to the vector and axial-vector components of the weak current. This structure, along with the masses of the particles involved, determines the fractions of W bosons with longitudinal (F 0 ), left-handed (F L ), and right-handed (F R ) polarizations, referred to as polarization fractions. Theoretical calculations at next-to-next-to-leading order (NNLO) in perturbative quantum chromodynamics (QCD) predict the fractions to be F 0 = 0.687 ± 0.005, F L = 0.311 ± 0.005, and F R = 0.0017 ± 0.0001 [1], assuming a top quark mass of 172.8 ± 1.3 GeV. Thus, the SM predictions can be tested in high-precision measurements of the polarization fractions, and potential new physics processes that modify the structure of the tWb vertex can be probed.
Experimentally, the polarization fractions can be measured in events containing top quarks, using the kinematic properties of their decay products.
For semileptonically decaying top quarks, i.e. t → W(→ ℓν)b (with lepton ℓ = electron, muon, or τ), the polarization angle θ* is defined as the angle between the direction of the charged lepton and the reversed direction of the b quark, both in the rest frame of the W boson. The distribution of the variable cos θ* is particularly sensitive to the polarization fractions. The differential decay rate is given by

$$\frac{1}{\Gamma}\frac{\mathrm{d}\Gamma}{\mathrm{d}\cos\theta^*} = \frac{3}{4}\left(1-\cos^2\theta^*\right)F_0 + \frac{3}{8}\left(1-\cos\theta^*\right)^2 F_\mathrm{L} + \frac{3}{8}\left(1+\cos\theta^*\right)^2 F_\mathrm{R}. \quad (1.1)$$

In a similar way, θ* can be defined for hadronically decaying top quarks, i.e. t → W(→ qq̄′)b, by replacing the charged lepton with the down-type quark (q′). In the measurements used in this paper, only angles from top quarks decaying semileptonically to electrons or muons are considered. Imposing the unitarity constraint between the three polarization fractions, F 0 + F L + F R = 1, results in two independent observables.

The W boson polarization fractions have been measured in proton-antiproton collisions by the CDF and D0 experiments [2] at a centre-of-mass energy of 1.96 TeV with experimental uncertainties of 10-15% in F 0 and F L . The ATLAS and CMS collaborations have performed measurements at the LHC in proton-proton (pp) collisions at √s = 7 [3,4] and 8 [5][6][7] TeV, reaching a precision in F 0 and F L of 3-5%. All measurements are in agreement with the SM NNLO predictions within their experimental uncertainties. However, these experimental uncertainties are larger than those of the current theoretical predictions, which are below 2%. Improving the experimental precision motivates the combination of the ATLAS and CMS measurements: combining measurements based on independent data sets reduces the statistical uncertainty, while the overall uncertainty can be further decreased by exploiting the differences in experimental systematic effects stemming from the use of the two detectors and the different analysis methods. This paper describes the combination of the W boson polarization fractions measured by the ATLAS and CMS collaborations based on data collected at √s = 8 TeV, in final states enhanced in top quark pair (tt) [5,6] and single top quark [7] production processes. The paper is structured as follows: the measurements included in the combination are briefly described in section 2. Section 3 lists the sources of systematic uncertainty considered in the input measurements. The correlations between the measured values included in this combination are categorized in section 4, and presented for each source of systematic uncertainty. In section 5, the results of the combination and their interpretation in terms of new physics using the effective field theory approach are described. A summary and conclusions are presented in section 6.
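As a quick numerical illustration of eq. (1.1), the sketch below evaluates the normalized cos θ* distribution for the NNLO fractions quoted above and verifies that it integrates to F 0 + F L + F R ≈ 1; the grid spacing is an arbitrary choice.

```python
import numpy as np

def dgamma_dcos(cos_theta, f0, fl, fr):
    """Normalized differential decay rate of eq. (1.1)."""
    return (0.75 * (1.0 - cos_theta**2) * f0
            + 0.375 * (1.0 - cos_theta)**2 * fl
            + 0.375 * (1.0 + cos_theta)**2 * fr)

# NNLO SM predictions for the polarization fractions [1]
F0, FL, FR = 0.687, 0.311, 0.0017

cos_theta = np.linspace(-1.0, 1.0, 2001)
pdf = dgamma_dcos(cos_theta, F0, FL, FR)
print(f"integral over cos(theta*) = {np.trapz(pdf, cos_theta):.4f}")      # ~ F0 + FL + FR
print(f"value at cos(theta*) = -1 : {dgamma_dcos(-1.0, F0, FL, FR):.3f}")  # driven by FL
```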
The ATLAS and CMS measurements
The four input measurements in this combination are three measurements of the W boson polarization in top quark decays from top quark pair production events in the ℓ+jets channel and one from events with a single top quark signature. The measurements based on tt production events were performed by the ATLAS [5] and CMS [6] experiments, where the latter was separated into electron and muon channels. The measurement from events with a single top quark signature was performed by the CMS experiment [7].
The measurements were based on pp collision data at √ s = 8 TeV, corresponding to integrated luminosities of 20.2 and 19.7 fb −1 for the ATLAS and CMS experiments, respectively. The 7 TeV measurements [3,4] are not included in this combination since they are based on smaller data sets, and, having relatively large systematic uncertainties, their contribution to the combination is expected to be marginal. All measurements were based on fits where the polarization fractions were adjusted to describe the observed cos θ * distributions of the semileptonically decaying top quark, taking into account the SM predictions for the backgrounds. These measurements are summarized in the rest of the section. Detailed descriptions of the ATLAS and CMS detectors can be found elsewhere [8,9].
The ATLAS measurement
The contributing input from the ATLAS experiment to this combination is described in ref. [5] and denoted "ATLAS" in the following. In this measurement, the event selection was defined to efficiently select events from top quark pair decays in the ℓ+jets channel, i.e. exactly one reconstructed electron or muon and at least four jets, of which at least two were tagged as b jets, while minimizing background contributions, e.g. from W/Z+jets and multijet production. The latter corresponds to events including jets misidentified as leptons, or non-prompt leptons from hadron decays, passing the ℓ+jets selection. The tt system was fully reconstructed via a kinematic likelihood fit technique [10], which maps the four decay quarks (two b quarks and two light quarks from the W boson decay) to four reconstructed jets, utilising Breit-Wigner distributions for the W boson and top quark masses, as well as transfer functions to map the reconstructed jet and lepton energies to the parton or true lepton level, respectively.
The W boson polarization was measured in the single-lepton channels from tt events using a template fit method. Dedicated tt templates of the cos θ * distribution for each polarization configuration were produced by reweighting the simulated SM tt events. Additional templates for background processes were also produced.
The templates were fit to the cos θ* distribution in data, using different templates for the electron and muon channels, via a binned likelihood fit of the form

$$L = \prod_k \frac{N_\mathrm{exp}(k)^{N_\mathrm{data}(k)}}{N_\mathrm{data}(k)!}\,e^{-N_\mathrm{exp}(k)} \times \prod_j \exp\!\left(-\frac{\big(N_{\mathrm{bkg},j}-\hat{N}_{\mathrm{bkg},j}\big)^2}{2\,\sigma_{\mathrm{bkg},j}^2}\right), \quad (2.1)$$

where N data (k) and N exp (k) represent the number of observed events and the total number of expected events (sum of signal and background events) in each bin k of the cos θ* distribution, respectively. The number of events for each background source j is represented by N bkg,j . The expected number of events for each background source j, N̂ bkg,j , and the uncertainties in the normalization of the background events, σ bkg,j , were used to constrain the fit. Therefore, the uncertainties in the polarization fractions obtained from the fit included both the statistical uncertainties and the systematic uncertainties in the background normalizations. The final result was obtained by a simultaneous fit of the electron and muon channel templates to the data. A common parameter was used to scale each of the backgrounds in the electron and muon channels in a fully correlated manner, except in the case of the nonprompt-lepton background, for which two separate, uncorrelated parameters were used. The contribution from W+jets events was split into different quark flavour samples and scaled by calibration factors derived from sidebands in data. These procedures were found to cover the corresponding shape uncertainties in the nonprompt-lepton and W+jets contributions. The uncertainty in the shape of the contributions from single top quark and diboson events was found to be negligible.
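A fit of this type can be sketched numerically as below. The toy template shapes, the single constrained background, and the use of scipy's Nelder-Mead minimizer are illustrative assumptions, not the ATLAS implementation; the point is only to show Poisson bin terms combined with a Gaussian constraint on a background normalization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy cos(theta*) templates (8 bins): a signal shape and one background shape.
sig_template = np.array([8., 10., 14., 18., 20., 16., 9., 5.])
bkg_template = np.array([3., 3., 4., 4., 4., 4., 3., 3.])
bkg_expected, bkg_sigma = 1.0, 0.3            # prior background scale and its uncertainty
data = rng.poisson(sig_template + bkg_template)

def neg_log_likelihood(params):
    """-ln L: Poisson terms per bin plus a Gaussian constraint on the background scale."""
    sig_scale, bkg_scale = params
    n_exp = sig_scale * sig_template + bkg_scale * bkg_template
    if np.any(n_exp <= 0):
        return np.inf
    poisson = np.sum(n_exp - data * np.log(n_exp))
    constraint = 0.5 * ((bkg_scale - bkg_expected) / bkg_sigma) ** 2
    return poisson + constraint

result = minimize(neg_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
print("fitted signal and background scales:", np.round(result.x, 3))
```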
The CMS measurements
Three CMS measurements contribute to this combination. The results presented in ref.
[6] used similar final states to those in ATLAS: one lepton and four or more jets, of which at least two were tagged as b jets. The tt system was fully reconstructed using a constrained kinematic fit. The unmeasured longitudinal momentum of the neutrino was inferred by the kinematic constraints.
The measurement was performed by maximizing the binned Poisson likelihood function

$$L(F) = \prod_k \frac{N_\mathrm{exp}(k)^{N_\mathrm{data}(k)}}{N_\mathrm{data}(k)!}\,e^{-N_\mathrm{exp}(k)}, \quad (2.2)$$

where N data (k) is the number of observed events in each bin k of the reconstructed cos θ* distribution, and N exp (k) is the number of expected events from Monte Carlo (MC) simulation for a given polarization configuration F ≡ (F 0 , F L , F R ), including signal and background events. During each step of the maximization, N exp (k) was modified for different values of the polarization fractions F using a reweighting procedure based on eq. (1.1). Weights are applied to the events at the generated level, so that the cos θ* distribution generated according to eq. (1.1) corresponds to alternative values of F. Backgrounds that did not involve a top quark did not change N exp (k) for different values of F. The ATLAS and CMS measurements considered the variations in N exp (k) coming from all top quark events passing the selection, either ℓ+jets or non-ℓ+jets, including τ+jets and dilepton tt processes. In addition, the CMS analyses took into account the variations arising from single top quark processes, which were treated as a background in the ATLAS measurement. The normalization of the tt process was left free in the fit. In order to allow a more detailed account of the correlations with the other measurements, the two lepton channels, e+jets and µ+jets, enter the combination as two separate measurements, referred to as "CMS (e+jets)" and "CMS (µ+jets)" throughout this paper, respectively. In the ATLAS measurement, the fractions were obtained simultaneously using the events from the two channels, therefore this separation is not available.
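The reweighting procedure can be sketched as a per-event weight equal to the ratio of eq. (1.1) evaluated at the target fractions and at the fractions used in the generation. The generated sample below is a toy produced by accept-reject, and the fraction values are illustrative assumptions.

```python
import numpy as np

def decay_rate(cos_theta, f0, fl, fr):
    """Normalized differential decay rate of eq. (1.1)."""
    return (0.75 * (1 - cos_theta**2) * f0
            + 0.375 * (1 - cos_theta)**2 * fl
            + 0.375 * (1 + cos_theta)**2 * fr)

def reweight(cos_theta_gen, f_gen, f_target):
    """Per-event weights turning a sample generated with f_gen into one following f_target."""
    return decay_rate(cos_theta_gen, *f_target) / decay_rate(cos_theta_gen, *f_gen)

rng = np.random.default_rng(4)
f_gen = (0.687, 0.311, 0.002)                 # fractions assumed at generation (toy)

# Toy generation of cos(theta*) at the generated level via accept-reject
# (the envelope 1.0 exceeds the maximum of the distribution, ~0.64).
cands = rng.uniform(-1, 1, 200000)
keep = rng.uniform(0, 1, cands.size) < decay_rate(cands, *f_gen)
cos_gen = cands[keep]

f_target = (0.70, 0.30, 0.00)                 # alternative fractions probed in the fit (toy)
w = reweight(cos_gen, f_gen, f_target)
print(f"mean weight = {w.mean():.3f} (should stay close to 1), "
      f"reweighted <cos theta*> = {np.average(cos_gen, weights=w):.3f}")
```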
The third CMS input [7] included in the combination used a final state targeting t-channel single top quark topologies instead of tt events. The event selection required exactly one electron or muon, and exactly two jets, one of which was tagged as a b jet. This selection is orthogonal to that of the CMS (e+jets) and CMS (µ+jets) analyses, making the three of them statistically independent. Nevertheless, while the expected number of selected t-channel single top quark events corresponded to only about 13% of the sample, the expected contribution from the tt process amounted to about 35% and needed to be taken into account as part of the signal. The largest background came from the W+jets process. This contribution was fully estimated from data, and corresponded to about 36% of the selected sample. Other processes, such as multijet and Z+jets production, accounted for the remaining 16% of the sample.
The fitting procedure applied in ref.
[6] was slightly modified for the single top quark topology measurement. In this case, because of the different background composition with respect to the tt analysis, the normalizations of the single top quark and tt processes were fixed according to their predicted cross section values. On the other hand, the normalization of the W+jets sample was left free in the fit to be adjusted simultaneously with the F 0 and F L fractions, and treated independently in the e+jets and µ+jets channels. Moreover, the fractions were extracted by maximizing a combined likelihood function, constructed from the two likelihood functions of the electron and muon channels, taking into account the correlations between them. Therefore, although based on two single-lepton channels, this measurement contributes to the combination as one single input, denoted as "CMS (single top)" in the following.
The W boson polarization values from the input measurements
The polarization fractions from the input measurements before applying the modifications concerning the combination (as discussed in section 3), and their uncertainties are summarized in table 1. The first quoted uncertainty in the ATLAS measurement includes the statistical uncertainties and uncertainties in the background determination, and the second uncertainty refers to the remaining systematic uncertainty. For CMS measurements, the first uncertainty is statistical, while the second is the total systematic uncertainty, including that on background determination.
In order to harmonize the evaluation of the systematic uncertainties across the input measurements, some of them are modified before performing the combination. The following modifications are applied (as detailed in section 3):
• The uncertainty values in the ATLAS measurement are symmetrized.
• The tt modelling uncertainties in the CMS (e+jets) and CMS (µ+jets) measurements are recalculated without the contributions from the limited number of events in the samples used to estimate them.
• The uncertainty due to the top quark mass used in the ATLAS measurement is increased from a variation of ±0.7 GeV to ±1.0 GeV.
Sources of systematic uncertainty
The effects of the various systematic uncertainties on the input results were studied individually for each measurement. In the ATLAS measurement, the impact of the systematic uncertainties was evaluated with alternative pseudo-data distributions built from the altered signal and background contributions. The alternative pseudo-data distributions were produced by varying each source of systematic uncertainty by one standard deviation (±1σ). The CMS measurements also used pseudo-data to estimate the uncertainties due to parton distribution functions (PDFs), the size of the simulated samples, and the uncertainties specific to the single top quark analysis. The other uncertainties were estimated by replacing the nominal sample with alternative samples containing simulated events modified according to each of the systematic variations, and repeating the fit. As the algorithm used to perform the combination accepts only symmetric uncertainties (more details in section 5), the uncertainties in the ATLAS measurement are symmetrized by assigning, for each uncertainty source, the average of the up and down variations. A test is performed by replacing the average uncertainty value with the largest shift among the up and down variations; no variation in the combination results is observed, i.e. the central values of the polarization fractions, the combination uncertainty, and the total correlation remain unchanged. In addition, common uncertainty categories are established by merging and regrouping various uncertainties in each individual input measurement.
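A minimal sketch of the symmetrization just described, together with the cross-check of replacing the average by the larger of the two shifts, is given below; the example values are invented.

```python
def symmetrize(up, down, mode="average"):
    """Collapse an asymmetric (up, down) uncertainty pair into one symmetric value."""
    if mode == "average":
        return 0.5 * (abs(up) + abs(down))
    if mode == "largest":
        return max(abs(up), abs(down))
    raise ValueError(mode)

# Invented example: an asymmetric uncertainty of +0.012 / -0.008 on a fraction.
up, down = +0.012, -0.008
print("average :", symmetrize(up, down))             # 0.010
print("largest :", symmetrize(up, down, "largest"))  # 0.012
```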
In the following, the categorization of the systematic uncertainties considered for the combination is presented. The categories, assumed to be independent from each other, comprise sources of uncertainties that have similar origins, easing the treatment of correlations discussed in section 4.
3.1 Limited size of the data and simulated samples, backgrounds, and integrated luminosity

Statistical uncertainty, background determination, and integrated luminosity (stat+bkg). The uncertainties in the ATLAS measurement from the fit included both the statistical uncertainty in the data and the systematic uncertainty in the background normalizations via priors for the background yields. The shape of the multijet processes was determined from data, while for the other background events it was fully determined from simulation. The impact of the 1.9% integrated luminosity uncertainty [11] was found to be negligible because of the background normalization treatment in the fit.
In the CMS measurements, the uncertainties in the expected backgrounds included shape and normalization effects, and were estimated by varying them separately within their uncertainties and repeating the measurement. The multijet background in all CMS measurements as well as the normalization of the W+jets contribution in the CMS (single top) case were derived exclusively from data. All other background processes, as well as tt, and single top quark processes in the CMS (single top) measurement were estimated using simulation, normalized to the integrated luminosity of the data samples. These were affected by the uncertainties in their predicted cross sections, and the integrated luminosity determination. The CMS integrated luminosity uncertainty of 2.6% [12] had a sizeable effect only on the CMS (single top) measurement.
Size of simulated samples. This category accounts for the limited number of simulated events in the nominal samples of all input measurements. Both ATLAS and CMS evaluated this uncertainty by performing pseudo-experiments. In the CMS (e+jets) and CMS (µ+jets) measurements, the limited number of simulated events was also considered for the tt samples used for the estimation of the modelling uncertainties. In order to perform a consistent combination, the tt modelling uncertainties in the CMS (e+jets) and CMS (µ+jets) measurements are recalculated without the contributions from the limited number of events in the samples used to estimate them. The impact of this modification on the relative uncertainty in the measurements is found to be of the order of 10^−4.
Detector modelling
Jets. In all input measurements in this combination, the same jet clustering algorithm, the anti-k T algorithm [13,14], was used, with a radius parameter R of 0.4 for the ATLAS experiment and 0.5 for the CMS experiment. However, in the ATLAS measurement the jets were built from energy deposits in the calorimeter [15], while in the CMS analyses they were reconstructed from particle-flow [16] objects. Thus, the two experiments used different calibration procedures and uncertainties for jets. The following categories comprise various sources of uncertainty related to the reconstruction and energy calibration of jets.
• Jet energy scale (JES): the JES uncertainty in the ATLAS and CMS analyses was composed of different uncertainty sources, such as the jet flavour dependence, the additional interactions in the same or nearby bunch crossings (pileup), calibrations from Z+jets or γ+jets processes, and other components. In general, these components have different levels of correlation between the two experiments and have been used to evaluate the total JES correlation (as detailed in section 5.1). The final JES uncertainty used in this combination is quoted in tables 2-4 and results from grouping all JES uncertainty components into a single number.
• Jet energy resolution (JER): this category includes contributions due to the uncertainties in the modelling of the jet energy resolution. The momenta of the jets in simulation were smeared so that the jet energy resolution in simulation agrees with that in data. Both experiments used a similar method to estimate this uncertainty.

Table 2. Uncertainties in F 0 , F L and their corresponding correlations from the ATLAS measurement. The uncertainty that is not applicable to this measurement, or which is included in other categories, is indicated by "n.a.". The line "Systematic uncertainty" represents the quadratic sum of all the systematic uncertainty sources except for the uncertainty in the background determination, which is included in the "Stat+bkg" category. The quoted correlation values are obtained via the procedures described in section 4.1.
• Jet vertex fraction (JVF): to suppress jets from pileup, in the ATLAS measurement jets were required to fulfil the JVF criterion. The corresponding uncertainty was evaluated in the measurement by changing the nominal JVF cutoff value and repeating the measurement [17]. In the CMS measurements, pileup events were removed at the event reconstruction level with the particle-flow algorithm. In this case, uncertainties in jet reconstruction due to pileup were covered by the JES and pileup categories, and are not added as a separate source.

Table 3. Uncertainties in F 0 , F L and their corresponding correlations from the CMS e+jets and µ+jets measurements. The uncertainty that is not applicable to this measurement, or which is included in other categories, is indicated by "n.a.". The line "Systematic uncertainty" represents the quadratic sum of all the systematic uncertainty sources except for the uncertainties in the background determination and the integrated luminosity, which are included in the "Stat+bkg" category. The quoted correlation values are obtained via the procedures described in section 4.1.
• Jet reconstruction efficiency: a systematic uncertainty was included in the ATLAS measurement to account for the jet reconstruction efficiency mismatch between simulation and data. In the CMS measurements, this uncertainty is included in the JES uncertainty.
Table 4. Uncertainties in F 0 , F L and their corresponding correlations from the CMS (single top) measurement. The uncertainty that is not applicable to this measurement, or which is included in other categories, is indicated by "n.a.". The line "Systematic uncertainty" represents the quadratic sum of all the systematic uncertainty sources except for the uncertainties in the background determination and the integrated luminosity, which are included in the "Stat+bkg" category. The quoted correlation values are obtained via the procedures described in section 4.1.
Lepton efficiency. For all measurements, this category accounted for the uncertainties in the scale factors used to correct the simulated samples so that the efficiencies for lepton selection, reconstruction, and identification observed in data were well reproduced by the simulation. Since the samples were collected using single-lepton triggers, uncertainties in the trigger efficiencies were also included. All corrections were applied as functions of p T and η of the leptons. This uncertainty was found to be negligible for the CMS (single top) measurement, compared to other uncertainties.
b tagging. In this category, uncertainties on the scale factors used to correct the simulation for different efficiencies for tagging jets originating from b quarks (tag) or for those originating from c or light partons wrongly identified as b jets (mistag) were taken into account. This difference was accounted for by assigning scale factors to the jets, dependent on the p T and η as well as on the flavour of the jet. In the ATLAS measurement, additionally, an uncertainty was assigned to account for the extrapolation of the b tagging efficiency measurement to the high-p T region.
Pileup. In both the ATLAS and the CMS analyses, pileup effects were taken into account in the simulation of signal and background events. The distribution of pileup was adjusted to reflect the measured instantaneous luminosities per bunch in data. In the CMS measurements, this uncertainty was estimated by varying the pp cross section used to estimate the number of pileup interactions in data within its uncertainty, and recalculating the weights applied to the simulation. In the ATLAS measurement, the uncertainty in the description of extra energy deposited due to pileup interactions was treated as a separate missing transverse momentum (p T miss) scale uncertainty. The impact on the measured W boson polarization fractions from this uncertainty was found to be negligible, and therefore it was not considered.
Signal modelling
Top quark mass. In all four analyses, the effect of the uncertainty in the top quark mass was estimated by repeating the measurements using simulated samples with different input top quark masses for the signal process. In the ATLAS measurement, this effect was evaluated using an uncertainty of ±0.70 GeV in the top quark mass as given by the ATLAS measurement [18]. In the CMS measurements, on the other hand, an uncertainty of ±1.0 GeV in the top quark mass was assumed. In order to keep consistency across the various input measurements, this effect in the ATLAS measurement is reestimated using the original estimation method described in ref. [5], accounting for a variation of ±1.0 GeV in the top quark mass. The impact of this modification in the ATLAS input result is negligible.
Simulation model choice. The impact of using different MC event generators and their interfaced showering and hadronization models was estimated in all input measurements. In the ATLAS measurement, the impact of the choice of different MC event generators was assessed by comparing events produced by Powheg-Box [19-23] and mc@nlo [24-26], both interfaced to Herwig [27] for showering and hadronization. To evaluate the impact of the different parton shower and hadronization models, the Powheg+Herwig sample was compared to Powheg+Pythia [28]. For the CMS (e+jets) and CMS (µ+jets) measurements, the uncertainties were estimated by replacing the events produced by MadGraph [29] interfaced with Pythia with mc@nlo interfaced with the Herwig generator and, additionally, varying the kinematic scale used to match jets to partons (matching threshold) by twice and half its central value. In the CMS (single top) measurement, the uncertainty in the choice of different MC generators was estimated as the difference between the Powheg+Pythia and the CompHEP [30] generators.
Radiation and scales. In all four analyses, this category represents the uncertainty associated with initial- and final-state radiation (ISR/FSR), estimated using simulated samples of tt events where the renormalization and factorization scales (µ R and µ F ) were simultaneously set to twice and half the default value in the matrix element (ME) calculations. In the CMS measurements, the µ R and µ F in the parton shower were also varied simultaneously with those used in the ME calculations. However, in the ATLAS measurement, a different set of tuned parameters of the Pythia parton shower with a modified strong coupling α S was used to account for low and high radiation, matching the variation of scales in the ME calculations. The detailed list of modified parameters is given in ref. [31]. Furthermore, in the ATLAS measurement the value of the damping parameter (h damp ) in Powheg-Box was set to twice the top quark mass for the high-radiation sample. In addition to changing the scales in the tt background, the CMS (single top) measurement also varied the scales used in the single top quark simulated samples.
Top quark p T . In previous CMS analyses of tt events, described e.g. in ref. [32], the shape of the p T spectrum for top quarks was found to be softer than the predictions from the MadGraph simulation. The effect of this mismodelling on the CMS (e+jets) and CMS (µ+jets) measurements was estimated by reweighting the simulated tt sample to describe the data. The difference in the polarization fractions between the default sample and the reweighted sample was taken as a systematic uncertainty. On the other hand, the top quark p T distribution did not exhibit, within uncertainties, a significant difference from the predictions in the single top quark enriched phase space; therefore no systematic uncertainty was assigned in the CMS (single top) measurement. In the ATLAS measurement, this mismodelling was checked to be covered by the simulation model choice uncertainties, and therefore no additional uncertainty for the top quark p T spectrum was considered.
PDF. The uncertainty due to the choice of PDFs in all input measurements was evaluated by varying the eigenvalues of different PDF sets following the PDF4LHC recommendations [33,34]. In the ATLAS measurement, the differences between three PDF sets: CT10 [35], MSTW2008 [36], and NNPDF 2.3 [37] were taken into account. Uncertainties related to the choice of PDF set in the CMS (e+jets) and CMS (µ+jets) measurements were estimated by replacing CTEQ6L1 [38] used to generate the nominal samples, with NNPDF 2.1 [39] and MSTW2008. A similar procedure was adopted in the CMS (single top) measurement, where the default CTEQ6.6M [40] set was replaced with CT10 instead.
Single top quark analysis method. In addition to the systematic uncertainties considered for the tt measurement, a few specific uncertainties were included for the CMS (single top) measurement. For the specific case of single top quark processes, unlike for tt production, the polarization fractions can also be altered at the production level. To study this effect, pseudo-data were generated from samples simulated using CompHEP and Single Top [41] event generators with varied values of the couplings g L , V R , and V L (as described in section 5.2) both at the single top quark production and decay, and the polarization fractions values were extracted using the analysis fitting framework. The differences between the generated and fitted values were taken as the systematic uncertainty. Secondly, a small difference in the generated W boson polarization fraction values was observed for the tt events, simulated with MadGraph, and single top quark events, simulated with powheg. This difference of about 0.01 was taken into account as an uncertainty in the measurement. Finally, the effect of fixing the signal normalization in the fit was considered. All these uncertainties are merged into a single uncertainty, referred to as Single top method in tables 2-4 and 6-7. In all input measurements, the uncertainty in the modelling of colour reconnection was found to be negligible and therefore was not considered.
Correlations
Four pairs of longitudinal and left-handed polarization fractions from the four input measurements, as described in section 2, are used in the combination. The correlations between the input values are defined taking into account the unitarity relation between the polarization fractions in each measurement and the correlations among the measurements. The groups of correlations are listed in table 5 and defined as follows:
• Correlations within the same measurement: because of the unitarity constraint, and given that F R ≈ 0, the observed values of F 0 and F L within one single measurement are usually highly anticorrelated. In the ATLAS measurement, this correlation is estimated for each systematic uncertainty source from its corresponding covariance matrix. For categories with multiple sources of systematic uncertainty, the sum of the individual covariance matrices is used to calculate the correlation. In the CMS analyses, this group of correlations is estimated from the covariance propagation of the unitarity relation F R = 1 − F 0 − F L ,
$$\rho(F_0, F_L) = \frac{\sigma^2(F_R) - \sigma^2(F_0) - \sigma^2(F_L)}{2\,\sigma(F_0)\,\sigma(F_L)},$$
where σ(F i ) is the uncertainty in the polarization fraction F i , which is directly obtained from the individual measurements. This is done for all sources of systematic uncertainty. For systematic uncertainty categories with multiple sources, e.g. 'stat+bkg' including statistical uncertainty, background determination, and others, σ 2 (F i ) is defined as the quadratic sum of the individual uncertainty sources.
• Correlations between measurements within the CMS experiment: for each source of systematic uncertainty, the correlations between the polarization fractions in the CMS (e+jets) and CMS (µ+jets) measurements are denoted ρ e,µ+jets CMS (F i , F j ), where i and j stand for 0 or L. The correlations between CMS (single top) and CMS (e+jets) are assumed to be the same as those between the CMS (single top) and CMS (µ+jets) measurements for each source of the uncertainty, and are denoted generically ρ st,ℓ+jets CMS (F i , F i ).

Table 6. Input correlations across different measurements, as explained in section 4.1. The values stand for the correlations ρ(F i , F i ), with i being either 0 or L. The correlations of the type ρ(F 0 , F L ) are assumed to be ρ(F 0 , F L ) = −ρ(F 0 , F 0 ) = −ρ(F L , F L ). In case an uncertainty is not applicable, the correlation value is set to zero and marked with an asterisk. The correlations marked with a dagger sign are those that are not precisely determined and checks are performed to test the stability of the results against these assumptions.
Relations of the type ρ(F 0 , F L ) = −ρ(F 0 , F 0 ) = −ρ(F L , F L ) are assumed in all CMS measurements. In this hypothesis, the strong anti-correlation observed for F 0 and F L within the same measurement (as described above) is assumed to hold also across different measurements.
The uncertainties associated with the limited size of the data and simulated samples, and background estimation are assumed to be uncorrelated (as also discussed in sections 4.2 and 5.1). The lepton efficiency uncertainty is assumed to be uncorrelated between the CMS (e+jets) and CMS (µ+jets) measurements, and partially correlated with the CMS (single top) measurement. All other sources of uncertainty are assumed to be fully correlated.
• Correlations between the ATLAS and CMS experiments: for each source of systematic uncertainty, the correlation between the polarization fractions F i measured by the ATLAS and CMS experiments is denoted ρ LHC (F i , F i ). The uncertainties associated with the detector modelling (except for the JES) as well as the method-specific uncertainty are assumed to be uncorrelated, i.e. ρ LHC (F 0 , F 0 ) = 0.
The uncertainties associated with the radiation and scales, and with the JES, are assumed to be partially correlated, with ρ LHC (F 0 , F 0 ) estimated to be 0.5 and 0.2, respectively (see sections 4.2 and 5.1 for details). All other sources of uncertainty are assumed to be fully correlated, i.e. ρ LHC (F 0 , F 0 ) = +1.
Correlation choices for the partially correlated uncertainties
Although the correlations between the measurements are well known for most of the systematic uncertainty sources, some of them, in particular those that are partially correlated, are not very accurately determined. This section describes how these values are estimated for the combination. Stability tests are performed to verify the robustness of the combination against these correlation assumptions, as discussed in section 5.1.
In the CMS measurements, the uncertainties in the background determination (shape and normalization), integrated luminosity, and the statistical uncertainty were estimated independently and grouped into a single uncertainty category (stat+bkg) for coherence with the ATLAS treatment. The major components of the stat+bkg category in the CMS (e+jets) and CMS (µ+jets) measurements are the uncertainty in the determination of the background events from multijet and W+jets production. The former is estimated from data, and therefore uncorrelated between all CMS measurements, while W+jets production, as well as the other minor backgrounds, are estimated from simulation, and therefore at least partially correlated between the measurements. For the CMS (single top) case, the major component of this category is the statistical uncertainty, which is uncorrelated with the other measurements. The normalization of W+jets production, a major background in the CMS (single top) analysis, is estimated from data, and therefore it is uncorrelated with the other CMS measurements. On the other hand, the W+jets production shape, as well as the modelling of other background event sources and signal events, rely on simulation, which may lead to a nonzero ρ st,ℓ+jets CMS (F i , F i ) correlation. Neglecting the small correlations that could arise from the W+jets production shape and the background modelling from simulation, the values ρ e,µ+jets CMS (F i , F j ) = 0 and ρ st,ℓ+jets CMS (F i , F i ) = 0 are assumed for the combination, and the impact of this assumption is studied via the stability tests.
In all ATLAS and CMS measurements, the JES systematic uncertainty is estimated from different components, which are characterized by different levels of correlations among the two experiments. These components are categorized as fully correlated, such as gluon-initiated jet fragmentation; partially correlated, such as modelling uncertainties in the in situ techniques (Z-jet, γ-jet, and multijet balance); and uncorrelated, such as statistical and detector-related uncertainties. These correlations have been evaluated and are described in ref. [42]. In the ATLAS measurement, the contribution from the uncorrelated (partially correlated) components to the total JES uncertainty is found to be about 70 (20)%, and the total JES uncertainty is dominated by the uncorrelated jet flavour composition component. In the CMS measurements, because the JES uncertainties are small, the breakdown into components was not done. Therefore, assuming a similar JES uncertainty composition between the two experiments, the value of ρ LHC (F i , F i ) is found to be 0.2.
In the ATLAS and CMS analyses, different approaches were used to estimate the radiation and scales uncertainties, as described in section 3.3. In the CMS (single top) measurement, this uncertainty is estimated by varying the scales µ R and µ F for the simulations of both the tt and the single top quark processes. While the tt component, which is dominant, is fully correlated to the analogous uncertainties in the ATLAS, CMS (e+jets), and CMS (µ+jets) measurements, the smaller component from the single top quark µ R and µ F scales is uncorrelated with the other measurements. Since the effects being studied are the same, but the methods are different, the values of ρ LHC (F i , F i ) and ρ st, +jets CMS (F i , F i ) are not well known, and are assumed to be 0.5 and 1.0, respectively.
Summary of the uncertainties and correlations of the input measurements
For each systematic uncertainty category, the correlations between the measured polarization fractions for the input measurements are given in table 6. A breakdown of the uncertainties in the input measurements of F 0 and F L as well as their correlations, are presented in tables 2-4. The uncertainties are grouped according to the categories listed in section 3. Figure 1 presents the total correlation values between the input measurements. Typically, F 0 and F L are highly anticorrelated within the same measurement. The three tt measurements (ATLAS, CMS (e+jets), and CMS (µ+jets)) are also correlated or anticorrelated, with the absolute values of the correlations ranging around 30 to 40%. The correlations of the CMS (single top) measurement with the CMS (e+jets) and CMS (µ+jets) measurements are around 20% in the absolute value, and are generally smaller with the ATLAS measurement.
Results
The combination is performed by finding the best linear unbiased estimator (BLUE) [43,44] with the method implemented in ref. [45]. The BLUE method finds the coefficients of the linear combination of the input measurements by minimizing the total uncertainty of the combined result, taking into account both the statistical and systematic uncertainties, as well as the correlations between the inputs. In this analysis, the measurements of F 0 and F L are combined while F R is obtained as F R = 1 − F 0 − F L . As no further constraints on the observables were placed, values outside the range [0, 1] are allowed for the three polarization fractions. The total correlation between F 0 and F L obtained from the combination is taken into account in the estimation of the uncertainty in the F R value.
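The BLUE prescription described above amounts to a generalised least-squares average of correlated inputs. As a rough illustration only (not the actual combination code of this paper), the following sketch shows how BLUE coefficients and a combined pair of parameters could be obtained; the input values, uncertainties, and correlations are invented placeholders.

```python
import numpy as np

# Hypothetical inputs: four measurements of the pair (F0, FL), stacked as an
# 8-dimensional vector, with an 8x8 total covariance matrix V. All numbers
# are invented for illustration only.
y = np.array([0.70, 0.31,   # measurement 1: (F0, FL)
              0.68, 0.32,   # measurement 2
              0.69, 0.30,   # measurement 3
              0.70, 0.32])  # measurement 4

V = np.diag([0.020, 0.015, 0.030, 0.020, 0.025, 0.018, 0.030, 0.020]) ** 2
# Illustrative anticorrelation between F0 and FL within each measurement.
for k in range(4):
    i, j = 2 * k, 2 * k + 1
    cov = -0.9 * np.sqrt(V[i, i] * V[j, j])
    V[i, j] = V[j, i] = cov

# Design matrix U mapping the two combined parameters (F0, FL) onto the inputs.
U = np.tile(np.eye(2), (4, 1))

# BLUE / generalised least squares: one row of weights per combined parameter.
Vinv = np.linalg.inv(V)
cov_comb = np.linalg.inv(U.T @ Vinv @ U)   # covariance of the combined result
lam = cov_comb @ U.T @ Vinv                # BLUE coefficients
f_comb = lam @ y                           # combined (F0, FL)

resid = y - U @ f_comb
chi2 = resid @ Vinv @ resid
sigma = np.sqrt(np.diag(cov_comb))
print("combined (F0, FL):", np.round(f_comb, 3))
print("uncertainties:", np.round(sigma, 3))
print("correlation:", round(cov_comb[0, 1] / (sigma[0] * sigma[1]), 2))
print("chi2:", round(chi2, 2))
```

Minimising the total variance of the linear combination under the unbiasedness constraint yields exactly these generalised least-squares weights, which is why the combined uncertainty shrinks most when the inputs are weakly correlated.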
The results of the combination of the polarization fraction measurements are reported with two uncertainties, where the first quoted uncertainty includes the statistical part and uncertainties in the background determination, and the second uncertainty refers to the remaining systematic contribution. From these results, an upper limit of F R < 0.007 at 95% confidence level (CL) is set. The limit is set using the Feldman-Cousins method [46], considering that F R follows a normal distribution, and that it is physically bound to F R ≥ 0. The relative uncertainty on F 0 and F L is 2.0 and 3.5%, respectively, including systematic and statistical components. Figure 2 shows an overview of the four measurements included in the combination and the result of the combination together with the polarization fractions predicted by NNLO QCD calculations. The uncertainties in the NNLO predictions, presented with vertical bands, include an uncertainty of 1.3 GeV in the top quark mass, uncertainties in the b quark and W boson masses, and in α S . The combined F R value is negative, as this is not explicitly forbidden in the combination, but it is compatible with the predictions within the uncertainties. The measurements are consistent with each other and with the NNLO QCD prediction.
The χ 2 and upper tail probability of the combination are 4.3 and 64%, respectively. The combination includes four sets of measurements, each composed of two highly anticorrelated observables, and two fit parameters of the combination, i.e. the combined F 0 and F L . A detailed breakdown of the uncertainties is presented in table 7. The dominant uncertainties are those arising from the statistical uncertainty on data and background estimation (stat+bkg), followed by the uncertainties in the radiation and scales modelling, the limited size of the simulated samples, and simulation model choice. The total detector modelling uncertainty is minor, smaller than the uncertainties in the stat+bkg category. The measurement with the highest impact in the determination of F 0 is ATLAS, while CMS (µ+jets) dominates the combined F L determination. The impact of the CMS (e+jets) and CMS (µ+jets) measurements is not directly comparable to that of the other input measurements, which already include the electron and muon channels together. As a test, the combination is repeated, using a pre-combined CMS (e+jets) + CMS (µ+jets) input, and the results are unchanged. The ATLAS+CMS combined fractions and uncertainties are identical in both cases, with a small variation on the resulting (F 0 , F L ) correlation, being 1.5% smaller for the cross-check combination.
In another test, the CMS (single top) measurement was removed from the combination. The impact on the combined fractions and uncertainties is less than 1.5%.
The combination yields an important improvement in precision, as compared to the most precise individual published measurements [5,6]. Improvements of 25 and 29% relative to the most precise single measurement are found for the precision of the combined measurements of F 0 and F L , respectively. The improvement is estimated with respect to the published values of the W boson polarization fraction determination that is given in table 1. The total correlation between the combined fractions is similar to those in the input measurements, and their uncertainties are smaller. These two factors lead to a combined right-handed polarization fraction F R that is almost a factor two more precise than in previous publications.
Stability tests
The hypotheses assumed for the correlations between the measurements, as defined in sections 4.1 and 4.2, are based on the best knowledge of the similarities and differences in the detectors, analysis methods, and simulations used in each measurement. Nevertheless, some of these correlations cannot be precisely determined. The checks described in this section are performed to test the stability of the results against this potential lack of knowledge.
ρ LHC (F i , F i ) hypothesis (with i = 0, L) for the JES uncertainty. The correlation value ρ LHC (F i , F i ) = 0.2 was estimated according to the prescription given in ref. [42] and the description in section 4.2. The impact of this assumption is evaluated by repeating the combination by varying ρ LHC (F i , F i ) in the interval between 0.0 and 0.4, in steps of 0.1. The fraction values and uncertainties remained unchanged in the entire probed range. The χ 2 of the fit, the probability, and the total (F 0 , F L ) correlation are found to be stable with a relative shift of less than 0.5%.

ρ LHC (F i , F i ) and ρ st,ℓ+jets CMS (F i , F i ) hypotheses for the radiation and scales uncertainties. Although addressing similar effects, the radiation and scales uncertainties are estimated in three different ways for ATLAS, CMS (single top), and the other CMS measurements, with different levels of correlations among them. Therefore, the two hypotheses, ρ LHC (F i , F i ) = 0.5 and ρ st,ℓ+jets CMS (F i , F i ) = 1, are tested simultaneously, by variation in steps of 0.1 in the interval between 0 and 0.5 for ρ LHC (F i , F i ) and between 0.6 and 1.0 for ρ st,ℓ+jets CMS (F i , F i ). The resulting polarization fraction mean values and uncertainties remained unchanged in the whole ranges. Small variations, below the percent level, are observed for the total correlation and fit probability.
JES versus radiation and scales correlations. Since the JES and the radiation and scales uncertainties are among the dominant sources of uncertainty with significant correlation between measurements, an additional test was performed varying the two correlation hypotheses simultaneously, rather than separately. The results of this test also show a stable combination, with maximum relative shifts of about 2% for the χ 2 and probability and about 0.6% for the total correlation. The combined fractions and uncertainties are found to be stable, with negligible variations for all probed hypotheses.

In order to investigate the effect of the ρ e,µ+jets CMS (F i , F j ) = 0 and ρ st,ℓ+jets CMS (F i , F i ) = 0 hypotheses for the stat+bkg category, the combination was repeated by varying ρ e,µ+jets CMS (F i , F i ) and ρ st,ℓ+jets CMS (F i , F i ), using for both the same correlation values in the range [0.0, 0.7] in steps of 0.1. In the interval between 0.0 and 0.6, the fraction values vary by a maximum of 1.3%, with F 0 going from 0.693 to 0.687, and F L from 0.314 to 0.319. At 0.7, the combination yields F 0 = 0.684 ± 0.014 and F L = 0.321 ± 0.010, which is the maximum variation observed in all tests performed in this study. However, in this case the fit probability decreases to 28%, suggesting that the correlation assumption of 0.7 is less favoured. The fit combination does not converge for unreasonably large values, i.e. correlation values above 0.7.
In conclusion, the tests reported in this section indicate that the combined results are robust against variations of some poorly known or unknown input correlations. The correlations are varied over a large range, and in all cases the observed deviations from the nominal results are well covered by the uncertainties in the combined result.
Limits on anomalous couplings
The result of the combination of the polarization fraction measurements can be used to set limits on beyond-the-SM physics contributing to the tWb vertex. In the two approaches presented in this section, only new physics contributions to the top quark decay vertex are considered; effects at the production vertex in single top quark processes are disregarded.
In a first approach, the structure of the tWb vertex is parameterized in a general form in effective field theory, expanding the SM Lagrangian to include dimension-six terms,
$$\mathcal{L}_{tWb} = -\frac{g}{\sqrt{2}}\,\bar{b}\,\gamma^{\mu}\left(V_L P_L + V_R P_R\right) t\, W^{-}_{\mu} \;-\; \frac{g}{\sqrt{2}}\,\bar{b}\,\frac{i\sigma^{\mu\nu} q_{\nu}}{m_W}\left(g_L P_L + g_R P_R\right) t\, W^{-}_{\mu} \;+\; \mathrm{h.c.}, \qquad (5.1)$$
where V L,R and g L,R are left- and right-handed vector and tensor couplings, respectively. Here, P L,R refers to the left- and right-handed chirality projection operators, m W to the W boson mass, and g to the weak coupling constant, as detailed in refs. [47,48]. In the SM, V L is given by the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V tb , with a measured value of ≈ 1, while V R = g L = g R = 0 at the tree level. Using this formalism, the polarization fractions can be translated into the couplings V L , V R , g L , and g R (as discussed e.g. in ref. [49]). The two independent W boson polarization measurements, F 0 and F L , cannot fully constrain the four tWb couplings. Therefore additional assumptions have to be made. Figure 3 shows the limits on the left- and right-handed tensor couplings, with the other couplings fixed to their SM values, as well as limits on the right-handed vector and tensor couplings, again with the other couplings fixed to their SM values. Limits on these anomalous couplings are set using the EFTfitter tool [50]. The anomalous couplings are assumed to introduce no additional CP violation, and are taken to be real. The allowed regions at 68 and 95% CL and the most probable coupling values are shown, as derived from the measured polarization fractions reported in refs. [5,6], and from the combined results presented in this paper. A second region allowed by the W boson polarization measurements around Re(g R ) = 0.8 is excluded by the single top quark cross section measurements [51,52], and therefore is not shown in this figure. Table 8 shows the 95% CL intervals for each anomalous coupling, while fixing all others to their SM values. These limits correspond to the set of smallest intervals containing 95% of the marginalized posterior distribution for the corresponding parameter.

Table 8. Allowed ranges for the anomalous couplings V R , g L , and g R at 95% CL. The limit on each coupling is obtained while fixing all other couplings to their SM value. The limits from CMS are obtained using the pre-combined result of all CMS input measurements. The anomalous couplings are assumed to be real.
In a similar way, limits are set in terms of Wilson coefficients. In this second approach, effects of beyond-the-SM physics at a high scale Λ are described by an effective Lagrangian [47,53-56] as
$$\mathcal{L}_{\mathrm{eff}} = \mathcal{L}_{\mathrm{SM}} + \sum_x \frac{C_x}{\Lambda^2}\, O_x + \mathcal{O}\!\left(\Lambda^{-4}\right),$$
where O x are dimension-six gauge-invariant operators and C x are the complex constants known as Wilson coefficients that give the strength of the corresponding operator. Only dimension-six operators are considered in this analysis. The relevant operators affecting the general effective tWb vertex can be found, e.g. in ref. [56]. Three of these operators are of particular interest, since the measurement of the W boson polarization is able to constrain their corresponding Wilson coefficients:
$$O_{\phi\phi} = i\,\bigl(\tilde{\phi}^{\dagger} D_{\mu}\phi\bigr)\bigl(\bar{t}_R \gamma^{\mu} b_R\bigr), \qquad O_{bW} = \bigl(\bar{q}_L \sigma^{\mu\nu}\tau^{I} b_R\bigr)\,\phi\, W^{I}_{\mu\nu}, \qquad O_{tW} = \bigl(\bar{q}_L \sigma^{\mu\nu}\tau^{I} t_R\bigr)\,\tilde{\phi}\, W^{I}_{\mu\nu},$$
where φ represents a weak doublet of the Higgs field (with φ̃ = iτ 2 φ* its charge conjugate), t R and b R are the weak singlets of the right-handed top and bottom quark fields, q T L = (t, b) L denotes the SU(2) L weak doublet of the third-generation left-handed quark fields, and τ I is the usual Pauli matrix. Assuming the Wilson coefficients to be real, they can be trivially parameterized as functions of the anomalous couplings of eq. (5.1) (as shown e.g. in refs. [48,56]), and thus as functions of the W polarization fractions. The limits on each Wilson coefficient are derived from the measured fractions, as done for the anomalous couplings, fixing all others to their SM value, i.e. to zero. They are shown at 95% CL in table 9.

Table 9. Allowed ranges for the Wilson coefficients C * φφ , C * bW , and C tW at 95% CL. The limit on each coefficient is obtained while fixing all other coefficients to their SM values. The limits from CMS are obtained using the pre-combined result of all CMS input measurements. The numerical values are obtained by setting the Λ scale to 1 TeV, and the coefficients are assumed to be real.
Summary
The combination of measurements of the W boson polarization in top quark decays performed by the ATLAS and CMS collaborations is presented. The measurements are based on proton-proton collision data produced at the LHC at a centre-of-mass energy of 8 TeV, and corresponding to an integrated luminosity of about 20 fb −1 for each experiment. The fractions of W bosons with longitudinal (F 0 ) and left-handed (F L ) polarizations were measured in events containing a single lepton and multiple jets, enriched in tt or single top quark production processes. The results of the combination are F 0 = 0.693 ± 0.009 (stat+bkg) ± 0.011 (syst), where "stat+bkg" stands for the sum of the statistical and background determination uncertainties, and "syst" for the remaining systematic uncertainties. The fraction of W bosons with right-handed polarization, F R , is estimated assuming that the sum of all polarization fractions equals unity, and by taking into account the correlation coefficient of the combination, −0.85. This leads to F R = −0.008 ± 0.005 (stat+bkg) ± 0.006 (syst), which corresponds to F R < 0.007 at 95% confidence level.
The results are consistent with the standard model predictions at next-to-next-toleading-order precision in perturbative quantum chromodynamics. A limit on each anomalous tWb coupling is set while fixing all others to their standard model values, with the allowed regions being [−0.11, 0.16] for V R , [−0.08, 0.05] for g L , and [−0.04, 0.02] for g R , at 95% confidence level. All couplings are assumed to be real. Limits on Wilson coefficients are also derived in a similar manner. 14.W03.31.0026 (Russia); the Tomsk Polytechnic University Competitiveness Enhancement Program and "Nauka" Project FSWW-2020-0008 (Russia); the Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia María de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (U. S.A.).
In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
[51] CMS collaboration, Measurement of the single top quark and antiquark production cross sections in the t channel and their ratio in proton-proton collisions at √s = 13 TeV, Phys. Lett. B 800 (2020).
[52] ATLAS collaboration, Fiducial, total and differential cross-section measurements of t-channel single top-quark production in pp collisions at 8 TeV using data collected by the ATLAS detector, Eur. Phys. J. C 77 (2017).
[57] ATLAS collaboration, ATLAS Computing Acknowledgements, ATL-GEN-PUB-2016-002 (2016). | 12,626.6 | 2020-05-07T00:00:00.000 | [
"Physics"
] |
Anti-fuzzy BRK-Ideal of BRK-Algebra
DOI: 10.9734/BJMCS/2015/19309 Editor(s): (1) Andrej V. Plotnikov, Department of Applied and Calculus Mathematics and CAD, Odessa State Academy of Civil Engineering and Architecture, Ukraine. (2) Paul Bracken, Department of Mathematics, The University of Texas-Pan American Edinburg, TX 78539, USA. Reviewers: (1) Francisco Bulnes, Department of Research in Mathematics and Engineering, Tecnológico de Estudios Superiores de Chalco, Mexico. (2) Xingting Wang, Department of Mathematics, University of California, San Diego, USA. (3) Piyush Shroff, Department of Mathematics, Texas State University, USA. Complete Peer review History: http://sciencedomain.org/review-history/10426
Introduction
Y. Imai and K. Iséki introduced two classes of abstract algebras: BCK-algebras and BCI-algebras [5]. It is known that the class of BCK-algebras is a proper subclass of the class of BCI-algebras. In [4], Q. P. Hu and X. Li introduced a wide class of abstract algebras: BCH-algebras. They have shown that the class of BCI-algebras is a proper subclass of the class of BCH-algebras. In [10], J. Neggers, S. S. Ahn and H. S. Kim introduced Q-algebras, which are a generalization of BCK/BCI-algebras, and obtained several results. In 2002, Neggers and Kim [9] introduced a new notion, called a B-algebra, and obtained several results. In 2007, Walendziak [11] introduced a new notion, called a BF-algebra, which is a generalization of B-algebras. In [7], C. B. Kim and H. S. Kim introduced BG-algebras as a generalization of B-algebras. In 2012, R. K. Bandaru [1] introduced a new notion, called a BRK-algebra, which is a generalization of BCK/BCI/BCH/Q/QS-algebras and BH/BM-algebras [6,8]. In [3] I considered the fuzzification of BRK-ideals of BRK-algebras. I introduce the
notion of a fuzzy BRK-ideal of a BRK-algebra. In this paper I introduce a new notion, the anti fuzzy BRK-ideal of a BRK-algebra. I study the epimorphic image and the into homomorphic inverse image of an anti fuzzy BRK-ideal and investigate some related properties. I also introduce the Cartesian product of anti fuzzy BRK-ideals and establish some related properties.
Preliminaries
Definition 2.1 [1]:
A BRK-algebra is a non-empty set X with a constant 0 and a binary operation ∗ satisfying the following conditions: In a BRK-algebra X, a partially ordered relation ≤ can be defined by x ≤ y if and only if x ∗ y = 0. If X is a BRK-algebra, the following conditions hold:
Definition 2.3 [1]:
A non-empty subset S of a BRK-algebra X is said to be a BRK-subalgebra of X if
Definition 2.4 (BRK-ideal of BRK-algebra):
A non-empty subset I of a BRK-algebra X is said to be a BRK-ideal of X if it satisfies:
Definition 2.5:
Let X be a set. A fuzzy set in X is a function from X to the unit interval [0, 1].
Definition 2.6:
Let (X, ∗, 0) be a BRK-algebra. A fuzzy set in X is called a fuzzy BRK-ideal of X if it satisfies:
Anti-Fuzzy BRK-Ideal of BRK-Algebra
Definition 3.1: Let (X, ∗, 0) be a BRK-algebra. A fuzzy set in X is called an anti fuzzy BRK-ideal of X if it satisfies:
Example 3.2:
Define ∗ on X as in the following table; routine calculation gives that μ is an anti fuzzy BRK-ideal of the BRK-algebra X.
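Since neither the Cayley table of Example 3.2 nor the axioms of Definition 2.1 survived extraction, the sketch below only illustrates the kind of routine finite check involved. It uses an invented three-element table and assumes the two BRK-algebra conditions commonly stated in the literature on BRK-algebras, x ∗ 0 = x and (x ∗ y) ∗ x = 0 ∗ y; both the table and the axioms are assumptions, not content taken from this paper.

```python
from itertools import product

# Hypothetical finite structure: X = {0, 1, 2} with an invented Cayley table
# for the binary operation *; table[x][y] gives x * y.
X = [0, 1, 2]
table = [
    [0, 0, 0],
    [1, 0, 0],
    [2, 2, 0],
]

def star(x, y):
    return table[x][y]

# Assumed BRK-algebra axioms (not reproduced in the extracted text):
#   (B1) x * 0 = x
#   (B2) (x * y) * x = 0 * y
b1 = all(star(x, 0) == x for x in X)
b2 = all(star(star(x, y), x) == star(0, y) for x, y in product(X, X))

print("(B1) x * 0 = x holds:", b1)
print("(B2) (x*y)*x = 0*y holds:", b2)
```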
Proposition 3.3:
Let μ be an anti fuzzy BRK-ideal of a BRK-algebra X. Proof. Let μ be an anti fuzzy BRK-ideal of the BRK-algebra X. For any
Theorem 3.4:
A fuzzy subset μ of a BRK-algebra X is a fuzzy BRK-ideal of X if and only if its complement μ c is an anti fuzzy BRK-ideal of X.
Proof. Let μ be a fuzzy BRK-ideal of a BRK-algebra X, and let μ c be its complement. So, μ is a fuzzy BRK-ideal of the BRK-algebra X.
Theorem 3.5:
Let μ be an anti fuzzy BRK-ideal of a BRK-algebra X. Then the set
Definition 3.6 [5]:
Let μ be a fuzzy subset of a set X,
Definition 3.7:
Let μ be a fuzzy BRK-ideal of a BRK-algebra X. The BRK-ideal μ t , t ∈ [0, 1], is called a lower t-level BRK-ideal of μ.
Theorem 3.8:
Let μ be a fuzzy subset of a BRK-algebra X. If μ is an anti fuzzy BRK-ideal of X, then for each Proof. Let μ be an anti fuzzy BRK-ideal of X and let
Definition 3.10:
Let f be a mapping from the set X to the set Y. If μ is a fuzzy subset of X, then the fuzzy subset B of Y defined by this is said to be the image of μ under f. Similarly, if B is a fuzzy subset of Y, then the fuzzy subset of X defined by this is said to be the inverse image of B under f.
Theorem 3.11:
The epimorphic image of an anti fuzzy BRK-ideal is also an anti fuzzy BRK-ideal.
Hence μ is an anti fuzzy BRK-ideal of Y.
Theorem 3.12:
The into homomorphic inverse image of an anti fuzzy BRK-ideal is also an anti fuzzy BRK-ideal.
Proof. Let
So (FI 1 ) holds. The proof is complete.
Cartesian Product of Anti Fuzzy BRK-Ideal
All the definitions in this section are from [2].
Definition 5.1:
A fuzzy relation on any set S is a fuzzy subset of S × S, i.e. a function from S × S to [0, 1].
Definition 5.2:
Let and be the fuzzy subsets of a set S . The anti Cartesian product of , for all , x y S .
Definition 5.3:
Let and be fuzzy subsets of a set S . The Cartesian product of and is defined by
Definition 5.6:
If μ is a fuzzy subset of a set S, the strongest anti fuzzy relation on S that is an anti fuzzy relation on μ is given, for all x, y ∈ S, by
Proposition 5.7:
For a given anti fuzzy subset μ of a BRK-algebra X, consider the strongest anti fuzzy relation on X determined by μ.
If it is an anti fuzzy BRK-ideal of X × X, then μ is an anti fuzzy BRK-ideal of X. Proof. Since it is an anti fuzzy BRK-ideal of X × X, it follows from (FT 1 ) that
Theorem 5.8:
Let μ be an anti fuzzy subset of a BRK-algebra X, and consider the strongest anti fuzzy relation on X determined by μ. If μ is an anti fuzzy BRK-ideal of X, then this relation is an anti fuzzy BRK-ideal of X × X. Proof. Suppose that μ is an anti fuzzy subset of a BRK-algebra X, and consider the strongest anti fuzzy relation on X determined by μ. Then this relation is an anti fuzzy BRK-ideal of X × X.
Conclusion
In this paper, we have introduced the concept of an anti fuzzy BRK-ideal of a BRK-algebra and studied its properties.
In our future work, we will introduce the concepts of cubic fuzzy BRK-ideals of BRK-algebras, interval-valued fuzzy BRK-ideals of BRK-algebras, intuitionistic fuzzy structures of BRK-ideals of BRK-algebras, intuitionistic fuzzy BRK-ideals of BRK-algebras, and intuitionistic L-fuzzy BRK-ideals of BRK-algebras. I hope this work will serve as a foundation for further studies on the structure of BRK-algebras.
"Mathematics"
] |
Network-based time series modeling for COVID-19 incidence in the Republic of Ireland
Network-based time series models have experienced a surge in popularity over the past years due to their ability to model temporal and spatial dependencies, arising from the spread of infectious disease. The generalised network autoregressive (GNAR) model conceptualises time series on the vertices of a network; it has an autoregressive component for temporal dependence and a spatial autoregressive component for dependence between neighbouring vertices in the network. Consequently, the choice of underlying network is essential. This paper assesses the performance of GNAR models on different networks in predicting COVID-19 cases for the 26 counties in the Republic of Ireland, over two distinct pandemic phases (restricted and unrestricted), characterised by inter-county movement restrictions. Ten static networks are constructed, in which vertices represent counties, and edges are built upon neighbourhood relations, such as railway lines. We find that a GNAR model based on the fairly sparse Economic hub network explains the data best for the restricted pandemic phase while the fairly dense 21 -nearest neighbour network performs best for the unrestricted phase. Across phases, GNAR models have higher predictive accuracy than standard ARIMA models which ignore the network structure. For county-specific predictions, in pandemic phases with more lenient or no COVID-19 regulation, the network effect is not quite as pronounced. The results indicate some robustness to the precise network architecture as long as the densities of the networks are similar. An analysis of the residuals justifies the model assumptions for the restricted phase but raises questions regarding their validity for the unrestricted phase. While generally performing better than ARIMA models which ignore network effects, there is scope for further development of the GNAR model to better model complex infectious diseases, including COVID-19.
Introduction
In recent years, statistical models which incorporate networks and thereby acknowledge spatial dependencies when predicting temporal data have experienced a surge in popularity (e.g., Knight et al. 2016, 2019; Urrutia et al. 2022). Knight et al. (2016) developed a generalised network autoregressive (GNAR) time series model which incorporates a secondary dependence in addition to standard temporal dependence. The secondary dependence is captured in a network. In Knight et al. (2016), the proposed network-based time series model is leveraged to predict mumps incidence across English counties during the British mumps outbreak in 2005. As the graph associated with the mumps network time series, Knight et al. (2016) chose a "county town" for each county and connected all towns which were less than a fixed number of kilometres apart.
Similar to mumps, COVID-19 is a highly infectious disease spread by direct contact between people (Nouvellet 2021). Human movement networks have been extensively relied upon to explain COVID-19 patterns (e.g. Jia 2020; Kraemer 2020; Li et al. 2021; Mo 2021; Nouvellet 2021; Sun et al. 2021; Wu et al. 2020). Therefore, it is a natural conjecture that such movement networks may help predict the spread of COVID-19. To investigate, this paper:
• fits GNAR models to predict the weekly COVID-19 incidence for all 26 counties in the Republic of Ireland, exploring different network constructions;
• assesses the prevalence of a network effect in COVID-19 incidence in Ireland and the suitability of GNAR models to predict epidemic outbreaks as complex as COVID-19;
• investigates the influence of changes in inter-county mobility, due to COVID-19 restrictions, on the performance of GNAR models as well as on the model parameters and hyperparameters.
The GNAR model is chosen because multivariate time series are often modelled by vector autoregressive (VAR) models. General VAR models are very flexible but require a large number of parameters to be estimated. The GNAR model which we employ here is a special case of a VAR model. It reduces the number of parameters to be estimated by restricting attention to edges in a network; in the case of a complete graph, the VAR model and the GNAR model coincide. Our overview of network-based time series models, given in Supplementary Material B.2, concludes that many network-based time series models can be conceptualized as a special case of the GNAR model, or are more restrictive with respect to the temporal-spatial dependencies they can model. Moreover, as a VAR-type model, the GNAR model inherits the well-understood VAR model framework, including parameter estimation via least squares and model selection based on the BIC; for a survey see for example Lütkepohl (2005). These methods yield confidence sets for parameter estimation, which can inform analysis as well as policy development in a quantitative fashion. In contrast, deep learning approaches such as the one developed in Park et al. (2024) for predicting rental and return patterns at bicycle stations do not come with such theoretical guarantees.
In addition to a distance-based network as chosen in Knight et al. (2016), in this paper we explore a collection of network models which are motivated by potential movements of individuals. The movement networks are constructed according to general approaches based on statistical definitions of neighbourhoods as well as approaches specific to the infectious spread of the COVID-19 virus. In abuse of notation, we call these networks COVID-19 networks, although they are only meant to reflect possible transmission routes of the disease. For each network, we select the best-performing hyperparameter values, to predict COVID-19 incidence by a GNAR model, using the Bayesian Information Criterion (BIC). By splitting the available Irish data into two phases of the pandemic, restricted and unrestricted, we are able to investigate the potential change in the temporal and spatial dependencies in COVID-19 incidence between the two phases.
Overall, our findings are that while there is a clear network effect, the performance of the optimal GNAR model varies little across different network architectures of similar network density. GNAR models indicate higher predictive accuracy than ARIMA models on a country level, since they account for inter-county dependencies. On an individual county level, the variability of predictive performance is high, resulting in similar performance of ARIMA and GNAR models for some counties, while for others the GNAR model consistently outperforms the ARIMA model. The GNAR model seems better suited to predicting COVID-19 incidence in restricted pandemic phases than in unrestricted pandemic phases; the latter may be related to some of the model assumptions possibly requiring an adaptation, as well as to an increase in noise during unrestricted pandemic phases and high fluctuation of COVID-19 case numbers. Moreover, the classical VAR model, which is the GNAR model on the complete graph, does not perform as well as the GNAR model with an underlying network that has fewer edges, illustrating the value of using a GNAR model. This paper is organised as follows. Section 2 introduces the data set. The methodology for network construction and for network-based time series modeling is described in Sect. 3. Section 4 provides an exploratory data analysis, while the model fit is shown in Sect. 5.1. The conclusions for the different pandemic phases are found in Sect. 5.2. The results are discussed in Sect. 6. The Supplementary Material includes a literature review on alternative models for incorporating temporal and spatial dependencies, visualisations of the COVID-19 networks, as well as details on the performance of GNAR models for predicting COVID-19 incidence.
The Irish COVID-19 data set
By March 2023, the Republic of Ireland (abbreviated in this paper as Ireland) had recorded a total of 1.7 million confirmed COVID-19 cases and 8,719 deaths since the beginning of the pandemic (Health Protection Surveillance Centre 2022a). The Health Protection Surveillance Centre identified four main variants of concern for the COVID-19 virus in Ireland (Health Protection Surveillance Centre 2022b), in addition to the original variant: Alpha from 27.12.2020, Delta from 06.06.2021, Omicron I from 13.12.2021, and Omicron II from 13.03.2022 (Figure 1a in Health Protection Surveillance Centre (2022b)). The open data platform (Government of Ireland 2022; Ordnance Survey Ireland 2024) by the Irish Government provides weekly updated multivariate time series data on confirmed daily cumulative COVID-19 cases for all 26 Irish counties, starting from the beginning of the pandemic in February 2020. A COVID-19 case is attributed to the county in which the patient has their primary residence. To our knowledge, spatially more detailed COVID-19 data is not available for Ireland. The limited granularity makes it difficult to implement fine-scale spatial models. The GNAR model was originally demonstrated using county-level data, indicating its potential for modeling infectious diseases at lower resolutions. The cumulative case count is given per 100,000 inhabitants. Age profiles vary across counties and COVID-19 infection rates are age dependent. Hence, the cumulative case count is adjusted for age distribution according to the 2016 census of Ireland, to ensure inter-county comparability (Central Statistics Office 2016). In our dataset, the first COVID-19 case was registered in Dublin on 02.03.2020 and the last reported date is 23.01.2023, spanning a total of 152 weeks (Ordnance Survey Ireland 2024). From 20.03.2020 onward, COVID-19 cases were recorded in every Irish county. The daily COVID-19 data is aggregated to a weekly level to avoid modelling artificial weekly effects (Kubiczek and Hadasik 2021; Sartor 2020). Due to delayed reporting during winter 2021/22, the weekly COVID-19 incidences from 12.12.2021 to 27.02.2022 are averaged over a window of 4 weeks (Health Protection Surveillance Centre 2021, 2022c; Wei 2006).
The main COVID-19 regulations restricting physical movement and social interaction between Irish counties (Brennan 2022;Loughlin 2022;McQuinn et al. 2021) are used to naturally split the data into five sequential subsets, where the COVID-19 incidence is small at the beginning of the pandemic and shows a clear increasing trend over time.We denote by σ the average standard deviation in COVID-19 incidence across the 26 Irish counties within the considered data subset.
Methodology
Let G = {V, E} denote a fixed, simple, undirected, unweighted network with vertex set V containing N vertices and edge set E; an edge between vertices i and j is denoted by i ∼ j. The neighbourhood of a subset of vertices A ⊂ V is defined as the set of neighbours outside of A to the vertices in A, $\mathcal{N}(A) = \bigcup_{i\in A}\{j \in V\setminus A : i \sim j\}$. The set of r th -stage neighbours, or the r th -stage neighbourhood, for a vertex i ∈ V is defined recursively as $\mathcal{N}^{(0)}(i) = \{i\}$ and $\mathcal{N}^{(r)}(i) = \mathcal{N}\bigl(\mathcal{N}^{(r-1)}(i)\bigr) \setminus \bigcup_{s=0}^{r-1}\mathcal{N}^{(s)}(i)$ for r ≥ 1.
COVID-19 networks: constructions and properties
The key to the GNAR model is the network. The true network underlying the data generating process (in this case, who infected whom in the spread of the disease) is usually unknown. Ideally, expert knowledge can be leveraged to build a network that captures the relationship between vertices, representing the subjects of interest. Networks to model the spread of an infectious disease, such as COVID-19, are frequently based on human mobility patterns, which are considered to have a shaping influence on disease spread (e.g. Colizza et al. 2006; Jia 2020; Li et al. 2021; Mo 2021; Sun et al. 2021). To our knowledge, detailed information on weekly population flow between Irish counties is unavailable. Hence, we construct implicit COVID-19 transmission networks (COVID-19 networks hereafter) based on geographical approaches, in line with Knight et al. (2016).
In the Railway-based network, an edge is established between two counties if there exists a direct train link between the respective county towns (without change of trains) and the county towns are closest to each other on this train connection. The Queen's contiguity network connects each county to the counties it shares a border with (Sawada 2022). The Economic hub network adds to the Queen's contiguity network an additional edge between each county and its nearest economic hub: Dublin, Cork, Limerick, Galway or Waterford (Gardham 2022). To measure the distance to the nearest economic hub we use the Great Circle distance d C (i, j), the shortest distance between two points on the surface of a sphere (Weisstein 2002). For two points i, j with latitude δ i , δ j and longitude ℓ i , ℓ j on a sphere of radius r > 0,
$$d_C(i,j) = r \arccos\bigl(\sin\delta_i \sin\delta_j + \cos\delta_i \cos\delta_j \cos(\ell_i - \ell_j)\bigr).$$
The K-nearest neighbours network (KNN) connects a vertex with its K nearest neighbours with respect to d C (Bivand 2022; Eppstein et al. 1997). The distance-based neighbour network (DNN) constructs an edge between counties if their Great Circle distance d C lies within a certain range [l, r]; this construction is similar to the one used in Knight et al. (2016). For the COVID-19 network, we set l = 0 and consider r a hyperparameter, chosen large enough to ensure that no vertex is isolated. The maximum value for r is determined by the largest distance between any two vertices, for which the construction returns a fully connected network (Bivand et al. 2013).
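To make the construction concrete, the sketch below computes Great Circle distances from a handful of approximate county-town coordinates and derives a symmetrised K-nearest-neighbour adjacency matrix. It is a simplified stand-in for the R packages cited above; the coordinates and the choice of K are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Rough latitude/longitude (degrees) for a few county towns, for illustration only.
coords = {
    "Dublin":    (53.35, -6.26),
    "Cork":      (51.90, -8.47),
    "Galway":    (53.27, -9.05),
    "Limerick":  (52.66, -8.63),
    "Waterford": (52.26, -7.11),
}
names = list(coords)
R_EARTH = 6371.0  # sphere radius in km

def great_circle(p, q):
    """Great Circle distance between two (lat, lon) points given in degrees."""
    (la1, lo1), (la2, lo2) = np.radians(p), np.radians(q)
    cos_angle = np.sin(la1) * np.sin(la2) + np.cos(la1) * np.cos(la2) * np.cos(lo1 - lo2)
    return R_EARTH * np.arccos(np.clip(cos_angle, -1.0, 1.0))

D = np.array([[great_circle(coords[a], coords[b]) for b in names] for a in names])

def knn_adjacency(D, k):
    """Symmetrised K-nearest-neighbour adjacency from a distance matrix."""
    A = np.zeros_like(D, dtype=int)
    for i in range(D.shape[0]):
        nearest = np.argsort(D[i])[1:k + 1]  # skip the vertex itself
        A[i, nearest] = 1
    return np.maximum(A, A.T)  # make the network undirected

print(np.round(D, 1))
print(knn_adjacency(D, k=2))
```

A DNN adjacency follows the same pattern by thresholding D at the chosen range instead of ranking the distances.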
In addition to these geographical networks, the Delaunay triangulation constructs geometric triangles between vertices such that no vertex lies within the circumsphere of any constructed triangle (Chen and Xu 2004), thus ensuring that there are no isolated vertices. The Gabriel, Sphere of Influence and Relative neighbourhood networks are obtained from the Delaunay triangulation network by omitting certain edges. In a Gabriel network, vertices x and y in Euclidean space are connected if they are Gabriel neighbours; that is, if
$$d(x, y)^2 \leq d(x, z)^2 + d(y, z)^2 \quad \text{for all } z \in V\setminus\{x, y\},$$
where $d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$ denotes the Euclidean distance. In a Sphere of Influence network (SOI), long-distance edges in the Delaunay triangulation network are eliminated and only edges between SOI neighbours are retained, as follows. For x ∈ V and d x the Euclidean distance between x and its nearest neighbour in V, let C x denote the circle centred around x with radius d x . For y ∈ V, the quantities d y and C y are defined analogously. Vertices x and y are SOI neighbours if and only if C x and C y intersect at least twice, preserving the symmetry property of the Delaunay triangulation (Bivand et al. 2013). The Relative neighbourhood network only retains edges between relative neighbours, i.e. between vertices x and y with $d(x, y) \leq \max\{d(x, z), d(y, z)\}$ for all $z \in V\setminus\{x, y\}$. The Relative neighbourhood network is contained in the Delaunay triangulation, SOI and Gabriel networks, and is the sparsest of the four networks (Bivand et al. 2013). Finally, the Complete network represents the homogeneous mixing assumption, where all counties are connected (Bansal et al. 2007).
Figure 1 shows the Economic hub network and the KNN network ( k = 11 and k = 21 ) for Ireland.Figures of the other networks are found in the Supplementary Material A; network summaries are provided in Table 1.
The networks are created based on the literature on spatial modelling; Bivand et al. (2008, pp. 239-251) suggest the Delaunay network and its variants, the Queen's contiguity network, distance-based networks, and K-nearest neighbour networks. In De Souza et al. (2021) it was found that infrastructure has an effect on the spread of COVID-19 in Brazil; here we use the railway network as the infrastructure network. The Economic hub network is motivated by the idea of including a proxy of commuter flows in the network construction, as commuting to work has been shown in Mitze and Kosfeld (2022) to be related to the spread of COVID-19 in Germany.
Generalised network autoregressive models
Network-based time series models incorporate non-temporal dependencies in the form of networks in addition to the temporal dependencies commonly modeled in time series models (Knight et al. 2016, 2019; Zhu et al. 2017). In contrast to standard time series methodology and spatial models (Box et al. 2015; Hamilton 2020; Wei 2006), network-based time series models are not limited to geographic relationships but can incorporate any generic network. As COVID-19 is an infectious disease with spatial spreading behaviour, warranting the construction of networks based on spatial information, we use terms relating to spatial dependence in our exposition. Other types of dependence could easily be incorporated in the model through networks which reflect the hypothesised dependence.
The global-α generalised network autoregressive model GNAR(p-s 1 , ..., s p ) models the observation X i,t for a vertex i at time t as the weighted linear combination of an autoregressive component of order p and a network neighbourhood autoregressive component of a certain order, also called neighbourhood stage; for j = 1, . . ., p, the entry s j gives the largest neighbourhood stage considered at lag j when regressing on up to p past values. In our analysis, X i,t denotes the 1-lag difference in weekly COVID-19 incidence over time t for county i. The effect of neighbouring vertices depends on some weight ω i,q . A GNAR(p-s 1 , . . ., s p ) model has the following form,
$$X_{i,t} = \sum_{j=1}^{p}\left(\alpha_j X_{i,t-j} + \sum_{r=1}^{s_j}\beta_{j,r}\sum_{q\in\mathcal{N}^{(r)}(i)}\omega_{i,q}\, X_{q,t-j}\right) + \varepsilon_{i,t}, \qquad (1)$$
where ε i,t ∼ N(0, σ 2 i ) are uncorrelated. 2 As weights ω i,q we choose the normalised inverse shortest path length (SPL) weight, where d i,q denotes the shortest path length (Knight et al. 2016); in connected networks, 1 ≤ d i,q < ∞ for i ≠ q. For i ∈ V and q ∈ N (r) (i), we thus take
$$\omega_{i,q} = \frac{d_{i,q}^{-1}}{\sum_{q'\in\mathcal{N}^{(r)}(i)} d_{i,q'}^{-1}}.$$
If the network is dynamic over time instead of static, the weights can be constructed to depend on time (Knight et al. 2016).
The general GNAR model relies on vertex-specific coefficients α i,j , instead of vertex-unspecific autoregressive coefficients α j , allowing vertex-specific temporal dependence. This modification is comparable to including individual random effects in regression models.
To fit a GNAR model, we must choose two hyperparameters, the lag p, or α-order, and the vector of neighbourhood stages, s = (s 1 , ..., s p ), also called β-order. They can be determined either through expert knowledge, e.g. on the spread of infections, or through a criterion-based search (Knight et al. 2016). The model coefficients are computed via Estimated Generalised Least Squares (EGLS) estimation (Knight et al. 2016; Leeming 2019; Lütkepohl 1991). 3
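As a rough illustration of how global-α GNAR coefficients can be estimated, the sketch below stacks lagged own values and lagged stage-1 neighbourhood averages into a regression design matrix and fits it by ordinary least squares on simulated data. It is a simplified stand-in for the EGLS fit in the R GNAR package: it restricts attention to stage-1 neighbourhoods with equal weights (the special case of the inverse-SPL weights when only direct neighbours are used), and all names, the toy network, and the data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network on N vertices: symmetric adjacency matrix (here a ring graph).
N, T, p = 6, 200, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1

# Stage-1 neighbourhood weights: equal weight over direct neighbours.
W = A / A.sum(axis=1, keepdims=True)

# Simulate data from a global-alpha GNAR(2-[1,1]) process with known coefficients.
alpha_true, beta_true = [0.3, 0.2], [0.25, 0.1]
X = np.zeros((T, N))
for t in range(p, T):
    for j in range(p):
        X[t] += alpha_true[j] * X[t - 1 - j] + beta_true[j] * (W @ X[t - 1 - j])
    X[t] += rng.normal(scale=0.5, size=N)

# Stacked regression: one row per (vertex, time) pair.
rows, targets = [], []
for t in range(p, T):
    for i in range(N):
        feats = []
        for j in range(p):
            feats.append(X[t - 1 - j, i])      # own value at lag j+1
            feats.append(W[i] @ X[t - 1 - j])  # stage-1 neighbourhood average at lag j+1
        rows.append(feats)
        targets.append(X[t, i])
Z, y = np.array(rows), np.array(targets)

coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("estimated [alpha1, beta11, alpha2, beta21]:", np.round(coef, 3))
```

With a complete graph and vertex-specific coefficients the same design-matrix construction recovers a full VAR fit, which is the sense in which the GNAR model is a constrained VAR.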
GNAR model selection and predictive accuracy
For our analysis of the Irish COVID-19 data, model selection, i.e. the choice of α- and β-order, is performed by minimizing the Bayesian Information Criterion (BIC) (Knight et al. 2016). The BIC avoids overfitting by penalizing the observed likelihood L by the dimensionality of the required parameters (Schwarz 1978). For a sample X of size n and a model with k estimated parameters and maximised likelihood L,
$$\mathrm{BIC} = k \ln n - 2 \ln L.$$
The GNAR package assumes Gaussian errors (R Package Documentation 2022); under this assumption, the BIC is consistent. This assumption could be weakened; it can be shown that the BIC is consistent for the GNAR model (1) if the error term is i.i.d. with bounded fourth moments (Leeming 2019; Lütkepohl 2005; Lv and Liu 2014).
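A minimal sketch of the Gaussian BIC used to compare candidate models from their fitted residuals is given below; the residual arrays and parameter counts are placeholders rather than output of the paper's actual model fits.

```python
import numpy as np

def gaussian_bic(residuals, n_params):
    """BIC = k * ln(n) - 2 * ln(L) for i.i.d. Gaussian residuals.

    residuals: 1-D array of one-step-ahead fitting errors.
    n_params:  number of estimated coefficients k.
    """
    n = residuals.size
    sigma2 = np.mean(residuals ** 2)                       # ML variance estimate
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # maximised Gaussian log-likelihood
    return n_params * np.log(n) - 2 * log_lik

# Placeholder residuals from two hypothetical GNAR fits with 4 and 6 parameters.
rng = np.random.default_rng(1)
res_small = rng.normal(0, 1.00, 500)
res_large = rng.normal(0, 0.98, 500)
print("BIC, smaller model:", round(gaussian_bic(res_small, 4), 1))
print("BIC, larger model: ", round(gaussian_bic(res_large, 6), 1))
```

Scanning such a criterion over a grid of α- and β-orders and keeping the minimiser is the criterion-based search referred to above.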
The predictive accuracy of a GNAR model is measured by the mean absolute scaled error (MASE), due to its insensitivity towards outliers, its scale invariance and its robustness (Hyndman and Koehler 2006). MASE is defined for each county i as the mean absolute forecasting error, with ε̂_{i,t} = |X_{i,t} - X̂_{i,t}|, over the forecast horizon, divided by the mean absolute error of a naive 1-lag random walk forecast over the entire observed time period [1, T] (Hyndman and Koehler 2006; Urrutia et al. 2022):

MASE_i = ( (1/h) Σ_{t in forecast period} |X_{i,t} - X̂_{i,t}| ) / ( (1/(T-1)) Σ_{t=2}^{T} |X_{i,t} - X_{i,t-1}| ),

where h denotes the number of predicted weeks.
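A minimal sketch of the county-wise MASE, assuming y_insample covers the whole observed period [1, T] while y_true and y_pred are the held-out weeks and their forecasts:

```python
import numpy as np

def mase(y_true, y_pred, y_insample):
    """Mean absolute forecast error scaled by the in-sample naive 1-lag random-walk error."""
    scale = np.mean(np.abs(np.diff(y_insample)))       # naive 1-lag forecast error over [1, T]
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / scale
```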
The weekly incidence differences
The GNAR model requires stationary data. Stationary data has no trend over time and is homogeneous, i.e. has time-independent variance (Knight et al. 2016). The weekly COVID-19 count is clearly not stationary, as it shows an increasing trend in Fig. 2. To remove any linear trend, we perform 1-lag differencing on the weekly COVID-19 incidence for the 26 Irish counties, resulting in the incidence difference, the (1-lag) COVID-19 ID, between two subsequent weeks (Montgomery et al. 2015). We assess the stationarity by applying a Box-Cox transformation to each data subset. Figure 12 in Supplementary Material C indicates that no further transformation is required.
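The preprocessing can be illustrated as below; the incidence matrix is a placeholder, and the Box-Cox call (which requires strictly positive input, hence the shift) only indicates one way to check whether a further transformation is needed.

```python
import numpy as np
from scipy.stats import boxcox

incidence = np.random.poisson(50, size=(120, 26)).astype(float)   # placeholder weekly counts
incidence_diff = np.diff(incidence, axis=0)                        # (1-lag) COVID-19 ID per county

# Box-Cox check for one county: shift the differenced series to be strictly positive first
shifted = incidence_diff[:, 0] - incidence_diff[:, 0].min() + 1.0
transformed, lam = boxcox(shifted)                                  # lambda close to 1 suggests no further transform
```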
Constructed networks
For the COVID-19 KNN network, neighbourhood sizes ranging from k = 1 to the fully connected network, k = 25, with step size 2, are considered. The minimal distance for the COVID-19 DNN network measures 90.3 km, between Kerry and Cork, and the maximal value 338.5 km, between County Cork and Donegal. The KNN and DNN network parameters are chosen to minimise the BIC of the associated GNAR model; for the restricted pandemic phase, the KNN network has k = 11 and the DNN network d = 325, while for the unrestricted pandemic phase it is k = 21 and d = 325. There is considerable variability in network characteristics, Table 1, in particular regarding the network density. The KNN and DNN networks for the abovementioned hyperparameters have a much larger average degree than the other networks; the sparsest network is the Relative neighbourhood network. Consequently, the SPL is shortest in the denser DNN and KNN networks. The Railway-based network has the longest average SPL due to its vertex chains and the low number of shortcuts between counties. For the Economic hub network, the introduction of shortcuts to the economic hubs leads to a decrease in average SPL, i.e. the disease spreads quicker, compared to the Queen's contiguity network. The Gabriel network is sparser than the SOI network, with slightly longer average SPL. Deleting long edges in the Delaunay triangulation network to obtain the SOI network decreases the average degree and the average local clustering coefficient, but increases the average SPL. The Queen's network, the Economic hub network, the Delaunay network, the Gabriel network, and the SOI network show small-world behaviour, i.e. high clustering with short SPL. To assess small-world behaviour, the average SPL and average local clustering for a network are compared to a Bernoulli random graph G(n, m) with identical size n and number of edges m as the network under investigation. The Railway network has a much larger average SPL than G(n, m), while the dense KNN and DNN networks have almost the same average SPL and local clustering coefficient as the G(n, m) network.
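The network summaries and the small-world comparison against a Bernoulli random graph G(n, m) can be reproduced along the following lines with networkx; this is an illustrative sketch rather than the authors' code, and it assumes the county networks are connected.

```python
import networkx as nx

def summarise(G):
    """Density, average degree, average SPL and average local clustering (as in Table 1)."""
    return {
        "density": nx.density(G),
        "avg_degree": sum(d for _, d in G.degree()) / G.number_of_nodes(),
        "avg_spl": nx.average_shortest_path_length(G),   # assumes a connected network
        "avg_clustering": nx.average_clustering(G),
    }

def small_world_reference(G, seed=0):
    """Summaries of a G(n, m) random graph with the same size and edge count as G."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    R = nx.gnm_random_graph(n, m, seed=seed)
    # G(n, m) may be disconnected; re-draw with another seed or restrict to the giant component if needed
    return summarise(R)
```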
Although there are differences in the detailed summary statistics, the networks can be clustered according to density and average local clustering coefficient; we use the k-means algorithm, running 10 randomisations to ensure robustness, over a range of k = 1, ..., 10. The corresponding elbow plot implies two clusters of COVID-19 networks. As evident in Fig. 3, one cluster has high density and high average local clustering coefficient, while the second cluster has low density and low to medium average local clustering coefficient.
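A minimal sketch of this clustering step, with illustrative (not actual) feature values for the networks; the inertia values over k give the elbow plot.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: networks; columns: [density, average local clustering coefficient] (illustrative values only)
features = np.array([[0.70, 0.75], [0.55, 0.70], [0.15, 0.40],
                     [0.20, 0.45], [0.25, 0.50], [0.30, 0.55]])

inertia = []
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    inertia.append(km.inertia_)
# Plot k against inertia and look for the "elbow"; the paper reports two clusters.
```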
Spatial effects
Intuitively, if spatial correlation is present in a network, the closer in SPL two vertices are, the more highly correlated their COVID-19 incidences. Moran's I quantifies spatial correlation by estimating the average weighted correlation across space (Cliff and Ord 1981; Moran 1950; Zhou and Lin 2008). Let t ∈ T and x_i^{(t)} denote the COVID-19 ID for county i at time t; with spatial weights w_{ij} and county mean x̄^{(t)}, Moran's I at time t is

I_t = (N / W_0) · Σ_{i,j=1}^{N} w_{ij} (x_i^{(t)} - x̄^{(t)})(x_j^{(t)} - x̄^{(t)}) / Σ_{i=1}^{N} (x_i^{(t)} - x̄^{(t)})²,

where W_0 = Σ_{i,j=1}^{N} w_{ij} for normalisation. For non-neighbours, the weights are zero, i.e. w_{ij} = 0 whenever j ∉ N^{(r)}(i) for all r. Here we use weights w_{ij} = e^{-d_{ij}}, where d_{ij} denotes the SPL between vertices i and j (Coscia 2021). The spatial dependency between counties varies strongly over time for every network, see Fig. 4 and Figure 11 in Supplementary Material A.1. Peaks in Moran's I coincide with peaks in the 1-lag COVID-19 ID at the beginning of the pandemic as well as during the winters 2020/21 and 2021/22. The introduction of restrictive regulations, e.g. lockdowns, shows a decreasing trend in Moran's I, while the easing of restrictions from summer 2021 onward has led to an increasing trend in Moran's I. This indicates a network effect in the data, which is associated with the inter-county mobility, and becomes particularly evident after the official end of pandemic restrictions in March 2022. To further assess the presence of non-linear spatial correlation, we also apply Moran's I to the ranks of the COVID-19 ID at each time point t over the duration of data observation. The rank-based Moran's I follows the same trajectory, with less extreme peaks, as evident in Fig. 5.
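A minimal sketch of Moran's I with the exponential-decay SPL weights used here; x is the vector of COVID-19 IDs at one time point and D the matrix of pairwise SPLs (np.inf for unreachable pairs). For the rank-based variant, scipy.stats.rankdata(x) can be passed instead of x.

```python
import numpy as np

def morans_i(x, D):
    """Moran's I with weights w_ij = exp(-d_ij); diagonal weights are set to zero."""
    x = np.asarray(x, dtype=float)
    W = np.exp(-np.asarray(D, dtype=float))
    np.fill_diagonal(W, 0.0)                  # exp(-inf) = 0 already handles unreachable pairs
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / np.sum(z ** 2)
```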
GNAR model fitting
To assess the benefit of accounting for network effects, the GNAR model is compared to a standard county-specific ARIMA model which is allowed to include seasonal effects. The GNAR model allows for more flexible spatial-temporal dependencies than other network-based time series models as detailed in Supplementary Material B.2, including the ARIMA models, with a network-specific selection of α- and β-order. The models are selected by choosing the model with the lowest BIC.
On average, the ARIMA model achieves BIC = 1846.88 on the entire data set, BIC = 534.58 for the restricted data set and BIC = 670.27 for the unrestricted data set. Optimal GNAR models for each COVID-19 network achieve much lower BIC. For the restricted phase, the best phase-specific GNAR model yields BIC = 58.91 on the Economic hub network, and for the unrestricted phase BIC = 190.07 on the KNN (k = 21) network, see Table 2. When fitted on the entire data set, the GNAR-5-11110 model with the KNN (k = 21) network achieves the lowest BIC = 193.95. All BICs are considerably smaller than those obtained from the ARIMA fit, with the minimal BIC for the entire data set much larger than the BIC for the restricted phase, and also larger than the BIC for the unrestricted phase, thus justifying the use of GNAR models, as well as the split of the data.
The nature of the virus suggests that the transmission of COVID-19 between Irish counties may depend strongly on the population flow between counties (Lotfi et al. 2020). Protective COVID-19 restrictions taken by the Irish Government restricted and at times forbade inter-county travel in Level 3-5 lockdowns (Department of the Taoiseach 2020; McQuinn et al. 2021). As supported by the positive and negative trends in Moran's I, the spatial dependence of COVID-19 incidence across counties is likely to have decreased during lockdowns and increased during periods in which inter-county travel was allowed (Wang 2022). This provides additional subject-specific motivation to train a pandemic-phase-specific GNAR model.
Table 2 Overview of the best performing model and network for the restricted and unrestricted pandemic phases; average residual ε̄ and average (av.) MASE indicated for the predicted 5 weeks at the end of the observed time period across all counties, 11.04.2021-09.05.2021 for the restricted dataset and 25.12.2022-23.01.2023 for the unrestricted dataset.
Notes: The ARIMA models are fitted for each county individually and might differ in orders and parameters. "Optimal" describes the best performing combination of α- and β-order as well as the global-α setting which obtains the minimal BIC value. The a-priori range of the α-order spans {1, ..., 7}; the maximum lag to consider follows from Schwert's rule (Ng and Perron 1995), applied to the minimum number of weeks across the individual five datasets (w = 18). The possible choices for the β-order are listed in Supplementary Material B.4. The maximum neighbourhood stage that can be included in the GNAR model is determined by the smallest maximum SPL across most networks, which is 5; for the complete network, only the 1st stage neighbourhood can be modelled, while for the Economic hub network the maximum neighbourhood stage is 4. The KNN (k = 21) network is from here on referred to simply as the KNN network.
Pandemic phases
Table 2 summarises the optimal GNAR models and COVID-19 networks for the restricted and unrestricted data sets. For both phases, the best performing GNAR models select an autoregressive component of order 7. The average MASE values are smaller for the restricted than the unrestricted pandemic phase, implying that GNAR models are more suited to predicting periods with strict regulations than periods with fewer or no restrictions. The variance of the residuals and of the MASE is smaller for the GNAR model than for the ARIMA model. The optimal network for the unrestricted pandemic phase is much denser than the optimal network for the restricted phase. As evident from Tables 3 and 4 in Supplementary Material D.1, the BIC values for the optimal GNAR model lie within the range [58.91, 68.36] for the restricted data set and within the range [190.07, 192.56] for the unrestricted data set. Figure 13 in Supplementary Material D.1 illustrates that denser networks perform better for the unrestricted data set, while sparse networks achieve lower BIC for the restricted data set, on average. A decrease in inter-county dependence due to COVID-19 restrictions should result in decreasing values for the β-coefficients in the GNAR model. This hypothesis can only be partially verified, see Fig. 6. The absolute value of the β-coefficients increases from the restricted to the unrestricted phase, implying increased spatial dependence after COVID-19 restrictions have been eased or lifted. We note that the GNAR model picks up a decrease in temporal dependence in the COVID-19 ID. As a disease spreads more freely due to lenient or no restrictions, it has been observed in other data studies that case numbers can grow more erratic and become less dependent on historic data (Firth 2020; Kraemer 2020). This effect, in addition to peaks and high volatility in the COVID-19 ID observed during pandemic phases with less strict regulations, might have contributed to the negative α-coefficient values for the unrestricted data set. In general, an increase in noise during unrestricted pandemic phases might contribute to the decrease in predictive performance of the GNAR model.
Identical observations can be made when considering how the coefficients develop between the restricted and unrestricted phase for the GNAR model that is optimal for the entire data set, namely, GNAR(5-1,1,1,1,0) with the KNN ( k = 21 ) network; the β-coefficients increase in absolute value for the unrestricted phase compared to the restricted phase, see Fig. 14 in Supplementary Material D.2.
The predictive accuracy for both datasets is comparable and varies from county to county, see Figs. 7 and 8 for 9 example counties; MASE values for the remaining counties follow similar patterns.
For the restricted phase, GNAR models achieve lower MASE than the ARIMA models except for counties Cavan, Galway, Leitrim, Louth, Sligo, Tipperary and Roscommon, for which the ARIMA model performs equally well. For the unrestricted phase, the ARIMA model predicts particularly poorly for counties Carlow, Kilkenny, Louth and Waterford, and for the restricted phase for counties Cavan, Clare, Limerick and Wexford. We note that Cavan, Leitrim and Sligo have a border with Northern Ireland, which could introduce some confounding factors.
Fig. 7 MASE values for the restricted pandemic phase, for 9 selected counties
For the restricted phase, the predicted COVID-19 ID values differ more strongly between the GNAR model and the ARIMA model, see Figure 16 in Supplementary Material D.3. For the unrestricted phase, the GNAR and ARIMA models estimate roughly the same trajectory, while the former achieves smaller residuals for most counties.
Assessing the model assumptions
The above models assume that the observations follow a Gaussian i.i.d. error structure. To assess this assumption, we test whether the residuals ε̂_{i,t} follow a normal distribution with a county-specific Kolmogorov-Smirnov test, aggregated over time. We obtain primarily insignificant p-values across counties for the restricted phase (#p ≤ 0.025 = 8, #p > 0.025 = 18), so that the Gaussian assumption is not rejected. In contrast, we obtain a majority of significant p-values across counties for the unrestricted phase (#p ≤ 0.025 = 21, #p > 0.025 = 5), raising doubts about the Gaussianity assumption in the unrestricted phase.
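A minimal sketch of the county-specific normality check, assuming the residuals for one county are collected in a vector; the normal distribution is fitted with the sample mean and standard deviation.

```python
import numpy as np
from scipy import stats

def county_ks_pvalue(resid):
    """Kolmogorov-Smirnov test of one county's residuals against a fitted normal distribution."""
    resid = np.asarray(resid, dtype=float)
    mu, sigma = resid.mean(), resid.std(ddof=1)
    return stats.kstest(resid, "norm", args=(mu, sigma)).pvalue
# The resulting p-values are then compared against the 0.025 threshold used in the text.
```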
Table 5 in Supplementary Material D.4 details the average MASE, average residual and p-value for each county, resulting from the two optimal GNAR models for the restricted and unrestricted data sets. The Gaussian nature of the residuals indicates the suitability of the GNAR model for modelling restricted pandemic phases and ensures consistency of the coefficient estimates. For the unrestricted phase, the Gaussianity in the model assumptions could not be statistically verified. These conclusions are supported by the county-specific QQ-plots in Supplementary Material D.4. The hyperparameters, the α- and β-order, were set data-adaptively to minimize the BIC criterion, which assumes Gaussian errors. Under the assumption of Gaussian errors, the BIC would be minimal for the correct higher-order network dependence. As we examined a large range of β-order choices, this deviation from Gaussianity leads us to propose investigating alternative error structures as future work (Fig. 9).
The GNAR model further assumes that the errors are uncorrelated. To assess this assumption, the residuals are investigated according to their temporal as well as spatial autocorrelation using the Ljung-Box test and a Moran's I based permutation test (Ljung and Box 1978; Moran 1950). The former indicates significant temporal correlation at short-term lags in the GNAR residuals for each county. Thus, there is evidence that the GNAR model insufficiently accounts for temporal dependence in COVID-19 incidence in subsequent weeks. The residuals also show remaining spatial autocorrelation. The Moran's I based permutation test counts N_m = 8 Moran's I values outside the corresponding 95% credibility interval (expected 0.05 · 45 ≈ 2) for the restricted phase and N_m = 16 for the unrestricted phase (expected 0.05 · 107 ≈ 5). The reduction in spatial correlation for the restricted phase and the Economic hub network is greater (from N_m = 10 for the COVID-19 cases to N_m = 8 for the residuals) than for the unrestricted phase and the KNN network (N_m = 16 for both COVID-19 cases and residuals). We conclude that there is evidence that the GNAR model may not sufficiently incorporate the spatial relationship in COVID-19 case numbers across counties. These possible violations of the model assumptions have to be taken into account when interpreting the model fit.
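The two residual diagnostics can be sketched as below; the Ljung-Box call assumes a recent statsmodels version that returns a data frame, the Moran's I helper repeats the earlier sketch so the block is self-contained, and the permutation scheme (999 permutations, 95% interval) is an illustrative choice.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

def morans_i(x, D):
    W = np.exp(-np.asarray(D, dtype=float))
    np.fill_diagonal(W, 0.0)
    z = np.asarray(x, dtype=float) - np.mean(x)
    return (len(x) / W.sum()) * (z @ W @ z) / np.sum(z ** 2)

def ljung_box_pvalues(resid, lag=4):
    """One Ljung-Box p-value per county; resid is a T x N matrix of GNAR residuals."""
    return [acorr_ljungbox(resid[:, i], lags=[lag])["lb_pvalue"].iloc[0]
            for i in range(resid.shape[1])]

def morans_i_permutation(x, D, n_perm=999, seed=0):
    """Return the observed Moran's I and whether it lies outside the permutation 95% interval."""
    rng = np.random.default_rng(seed)
    observed = morans_i(x, D)
    null = np.array([morans_i(rng.permutation(x), D) for _ in range(n_perm)])
    lower, upper = np.quantile(null, [0.025, 0.975])
    return observed, bool(observed < lower or observed > upper)
```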
Discussion
In general, a network model can be a powerful tool for modelling the spread of infectious diseases, see for example (Britton 2019; Overton 2020). In this paper, we modelled the COVID-19 incidence across the 26 counties in the Republic of Ireland by fitting GNAR models, leveraging different networks to represent spatial dependence between the counties. While we do not assume that the disease only spreads along the network, we consider the edges to represent the main trajectory of the infection. The analysis shows that there is a clear network effect, but networks of similar density perform similarly in predictive accuracy. GNAR models perform better on data collected during pandemic phases with inter-county movement restrictions than on data gathered during less restricted phases. Sparse networks perform better for the restricted data set, while denser networks achieve lower BIC for the unrestricted data set.
There are some caveats relating to the model.First, the time series for the restricted phase and for the unrestricted phase are actually concatenated time series; Fig. 2 shows the original time series.The concatenation was carried out because of data availability, but it is possible that it obfuscates some potentially interesting phase-specific signals.
As seen in Fig. 2, even after differencing the time series do not display clear stationarity. Further, COVID-19 is subject to "seasonal" effects, e.g. systematic reporting delays due to weekends and winter waves (Kubiczek and Hadasik 2021; Nichols 2021; Sartor 2020). The GNAR model does not have a seasonal analogue which can incorporate seasonality in the data, like SARIMA for ARIMA models (Shumway et al. 2000). Future work could introduce a seasonal component to the GNAR model, improving its applicability to infectious disease modelling. There may be other spatio-temporal patterns, such as non-linear effects, which the GNAR model currently does not include. Moreover, the COVID-19 pandemic had a strong influence on mobility patterns (COVID-19 Community Mobility Report 2022; Manzira et al. 2022), in particular due to restrictions of movement and an increased apprehension towards larger crowds. Considering only static networks may introduce a bias to the model (Bansal et al. 2007; Mo 2021; Perra 2012). Future work could therefore explore how GNAR models can include dynamic networks to incorporate a temporal component of spatial dependency. Alternative weighting schemes for GNAR models could be investigated to account for differences in edge relevance across time and network.
Regarding the theory of GNAR models, alternative error distributions, such as a Poisson distributed error term as in Armillotta and Fokianos (2021), could be explored given the indication of non-Gaussian residuals for the unrestricted pandemic phase. The stability of parameter estimation in GNAR models also warrants further investigation. The network constructions themselves could also be refined. Simulations have shown that GNAR models are sensitive to network misspecification: omitting edges may result in bias in the GNAR coefficients. While this paper has carried out some robustness analysis regarding the network choice, future analysis could focus on more content-based approaches to constructing networks, e.g. building a network based on the intensity of inter-county trade, computed according to the gravity equation theory (Chaney 2018). Many researchers have successfully modelled the initial spread of COVID-19 from Wuhan across China based on detailed mobility patterns, e.g. Jia (2020); Kraemer (2020). Finally, our statistical analysis did not include information about the dominant strain.
With COVID-19 being an evolving disease, different strains may display different transmission patterns. If more detailed data become available, then this question would also be of interest for further investigation. Nevertheless, this study has illustrated that it may be of benefit to use a GNAR model for the spread of an infectious disease, in particular during movement restrictions, when the spread is mainly local. It has also detailed a range of possible network choices, and it has provided a set of tests to assess the performance of the model and its fit.
Fig. 1
Fig. 1 Map of Ireland and COVID-19 networks; economic hub towns marked in blue
Fig. 2
Fig. 2 Weekly (1-lag) COVID-19 incidence difference (ID) and weekly COVID-19 incidence (dark grey dashed lines) per 100,000 inhabitants, from the start of the pandemic in March 2020 to mid January 2023; red for restricted phases, green for unrestricted; predominant COVID-19 virus variants shown by the colour scale at the bottom, in order: Original, Alpha, Delta, Omicron I and Omicron II
Fig. 4
Fig. 4 Moran's I across time, with weights based on SPL; main COVID-19 regulations by the Irish Government indicated by vertical lines; in order: initial lockdown, county-specific restrictions, Level-5 lockdown, allowance of inter-county travel, official end of all restrictions; 95% credibility interval in grey dashed
Fig. 6
Fig. 6 Development of GNAR model coefficients for the restricted and unrestricted pandemic phase; restricted phase with Economic hub network, unrestricted phase with KNN ( k = 21 ) network
Fig. 8
Fig. 8 MASE values for the unrestricted pandemic phase, for 9 selected counties
Fig. 9
Fig. 9 QQ-plot for the residuals from the best performing GNAR model and network (Economic hub network for restricted phase, KNN (k = 21) network for unrestricted phase) for restricted and unrestricted pandemic phase for county Dublin (left) and Donegal (right) | 8,689.8 | 2023-07-12T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Computer Science"
] |
Anisotropic Turbulent Advection of a Passive Vector Field: Effects of the Finite Correlation Time
The turbulent passive advection under the environment (velocity) field with finite correlation time is studied. Inertial-range asymptotic behavior of a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow, is investigated by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and prescribed pair correlation function. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to nontrivial fixed points of the RG equations and depend on the relation between the exponents in the energy spectrum E ∝ k_⊥^{1-ξ} and the dispersion law ω ∝ k_⊥^{2-η}. The corresponding anomalous exponents are associated with the critical dimensions of tensor composite operators built solely of the passive vector field itself. In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic behavior: instead of power-like corrections to ordinary scaling, determined by naive (canonical) dimensions, the anomalies manifest themselves as polynomials of logarithms of L. Due to the presence of the anisotropy in the model, all multiloop diagrams are equal to zero, thus this result is exact.
Introduction
Understanding turbulent advection is a rich and challenging problem. On the one hand, the violation of the classical Kolmogorov-Obukhov theory [1] is even more strongly pronounced for an advected field than for the velocity field itself; see, e.g., review [2]; on the other hand, the problem of passive advection appears to be more tractable theoretically. The most remarkable progress on this way was achieved for the so-called Kraichnan rapid-change model [3], in which, instead of the stochastic Navier-Stokes (NS) equation, the velocity field is modeled by a Gaussian ensemble, not correlated in time, with zero mean and prescribed pair correlation function. This approximation corresponds to the passive field approximation: if we neglect the influence of the advected field θ on the dynamics of the environment (velocity) field v, the latter can be modeled by statistical ensembles with prescribed properties. For the first time, the anomalous exponents have been calculated on the basis of a microscopic model and within regular expansions in formal small parameters in [2]. In this paper, we consider a more realistic model with finite (and not small) correlation time. The passively advected field is chosen as a vector and corresponds to magnetohydrodynamic (MHD) turbulence. Furthermore, we investigate the "strongly anisotropic" model, which is obtained by introducing the velocity field v oriented along a fixed direction n. This problem is closely related to processes taking place in the solar corona, e.g., with the solar wind [4], and n is an "orientation of a large-scale flare" in this context. As well, this model can be viewed as a d-dimensional generalization of the strongly anisotropic velocity ensemble introduced in [5] in connection with the turbulent diffusion problem and further studied and generalized in a number of papers [6-8]. Following [9], we included into the stochastic advection-diffusion equation [see equation (2) below] an additional arbitrary dimensionless parameter A_0, which unifies different physical situations: the kinematic MHD model, the linearized NS equation and a passive admixture with complex internal structure of the particles. One of the most convenient ways to study anomalous scaling in various statistical models of turbulent advection is to apply the field theoretic renormalization group (RG) and operator product expansion (OPE); see, e.g., the monographs [10,11]. In this case the anomalous scaling is a consequence of the existence in the model of composite fields ("composite operators" in the quantum-field terminology) with negative scaling dimensions; see [12,13] and references therein.
This model should be considered as a generalization and evolution of the models considered in [7], where the advected field is chosen scalar, and in [14], where the environment (velocity) field is chosen decorrelated in time. The main result of the present paper is that the inertial-range behavior of vector fields advected by a velocity ensemble with finite correlation time combines the features of both of the above models: as in the scalar case, there is a set of fixed points governing the infrared (IR) behavior; as in the zero-time correlation model, the inertial-range behavior of vector fields has logarithmic corrections to ordinary scaling.
Description of the model
In the presence of the anisotropy, the turbulent advection of a passive vector field θ(x) ≡ θ(t, x) is described by the stochastic equation (2) [14], where θ_i(x) is the vector field, x ≡ {t, x}, ∂_t ≡ ∂/∂t, ∂_i ≡ ∂/∂x_i, n is a unit vector that determines the distinguished direction, x_⊥ and ∂_⊥ are the components of the vectors x and ∂ perpendicular to n, and the forcing term is an artificial Gaussian scalar noise with zero mean and correlation function (3). Here r = x - x', r = |r|, the parameter L ≡ M^{-1} is the integral (external) turbulence scale related to the stirring, and C_{ik} is a dimensionless function finite for r/L → 0 and rapidly decaying for r/L → ∞.
The dimensionless constant f_0, which breaks the O(d) symmetry of the Laplace operator, had to be introduced for renormalizability reasons.
As we consider the passive field approximation, the field u(x) can be simulated by a statistical ensemble with prescribed statistics: it is assumed to be Gaussian, strongly anisotropic, homogeneous, with zero mean and the correlation function (4) of [7]. Here d is the dimensionality of the x space, k_⊥ ≡ |k_⊥|, 1/m is another integral turbulence scale related to the stirring, D_0 > 0 is the amplitude factor, and the symbol k_∥ denotes the scalar product k · n. Unlike [15] and some other generalizations of the isotropic Kraichnan model, this velocity ensemble does not contain the isotropic case as a special case. The function (4) involves two independent exponents ξ and η, which in the RG approach play the role of two formal expansion parameters; depending on their values, the model reveals various types of inertial-range scaling regimes (see Sec. 3). A new parameter α_0 is needed for dimensionality reasons.
Field theoretic formulation of the model. Renormalization
The stochastic problem (2)-(4) is equivalent to the field theoretic model of the extended set of three fields Φ ≡ {θ, θ', u} with the action functional (5). Here all the terms, with the exception of the first one, represent the De Dominicis-Janssen action for the stochastic problem (2), (3) at fixed u, while the first term represents the Gaussian averaging over u. Furthermore, D_θ and D_v are the correlators (3) and (4), respectively; the needed integrations over x = (t, x) and summations over the vector indices are implied. This formulation means that the statistical averages of random quantities in the stochastic problem (2), (4) coincide with the functional averages of weight exp S(Φ) with the action (5).
The model (5) is solved by a standard Feynman diagrammatic technique with the triple vertex, which is represented in the diagrams as the point in which three lines connect with each other, and the three bare propagators ⟨vv⟩_0, ⟨θθ'⟩_0 and ⟨θθ⟩_0, which are determined by the quadratic (free) part of the action functional and are represented in the diagrams as wavy (corresponding to the field v), slashed straight (the slashed end corresponds to the field θ') and straight (the end without a slash corresponds to the field θ) lines, respectively.
From the analysis of the ultraviolet (UV) divergences it follows that we deal with two divergent functions: the response function ⟨θ'_α θ_β⟩_{1-ir} and the triple correlation function ⟨θ'_α θ_β v_γ⟩_{1-ir}; but from direct calculations the latter appears to be equal to zero. Moreover, all multiloop diagrams entering into the expansion of the 1-irreducible response function ⟨θ'_α θ_β⟩_{1-ir} are also equal to zero, therefore to renormalize our model we have to calculate the only diagram represented in Fig. 1. From the substitution of the explicit expression for the divergent part of the self-energy operator Σ_{αβ} into the expression for the 1-irreducible linear response function ⟨θ'_α θ_β⟩_{1-ir} it follows that the poles in ξ and η cannot be removed by the renormalization of the model parameters, and in order to ensure multiplicative renormalizability one has to add a new term of the form u_0 f_0 ν_0 (n_k θ'_k) ∂²(n_l θ_l) to the action functional (5), with a new positive amplitude factor u_0.
After this trick the model becomes multiplicatively renormalizable with two independent renormalization constants Z_f and Z_u. Here μ is the "reference mass" (an additional free parameter of the renormalized theory) in the minimal subtraction (MS) renormalization scheme, and the coupling constant g is defined as g = D_0/ν_0^3 f_0 with D_0 from (4).
One of the basic RG statements is that the asymptotic behavior of the model is governed by the fixed points g*_i, defined by the relations β_i(g*) = 0; the type of the fixed point (IR/UV attractive or a saddle point) is determined by the matrix Ω_{ik} = ∂β_i/∂g_k: for an IR attractive fixed point the matrix Ω has to be positive. The analysis of the β-functions reveals four different IR attractive fixed points depending on the exponents ξ and η: if ξ < 0, η < 0, or if η > 0, η - ξ > 0, we have two Gaussian fixed points with g* = 0 (regimes 1a and 2a, respectively; see Fig. 2); if ξ > 0, η < 0, or if η > 0, ξ - η > 0, we have two nontrivial regimes, called 1b and 2b, respectively; for details of the calculation see [16]. Thus we can conclude that the domains of IR stability in the vector model (5) coincide with the corresponding domains of IR stability in the scalar model considered in [7]. Since the β-functions have no higher-order corrections, this pattern is exact. This fact implies that the correlation functions of the model (5) in the IR region (large μr and Λr, Mr ∼ 1) exhibit scaling behavior (as we will see below, up to logarithmic factors). For the corresponding critical dimensions, substituting the fixed point values of the regimes (1a)-(2b) we obtain: for any correlation function G_R = ⟨Φ ... Φ⟩ of the fields Φ we have Δ_G = Σ_Φ N_Φ Δ_Φ, so the critical dimensions of the fields Φ = u, θ, θ' are the same as their canonical dimensions. It is the specific feature of the present model which makes it similar to the zero-correlation-time model [14] and distinguishes it from both the isotropic Kraichnan vector model [13] (in which γ_ν ≠ 0) and the anisotropic Kraichnan scalar model [7] (in which the Laplacian splitting parameter f_0 is not dimensionless).
Renormalization and critical dimensions of composite operators
The measurable quantities are equal-time pair correlation functions (10) of two composite fields ("operators") F_{Np}, built solely of the basic fields θ. Here N is the total number of fields θ entering into the operator F, and the two operators carry the indices i = {N_1 p_1} and k = {N_2 p_2}.
To calculate the renormalization constants we have to calculate only the one-loop diagram represented in Fig. 3; the divergent parts of all the multiloop diagrams are equal to zero and therefore give no contribution to the renormalization constants and the anomalous dimensions. From the calculation of this diagram it follows that the operators F_{Np} indeed mix in the renormalization. This means that the renormalization constants Z and the anomalous dimensions γ are matrices. Therefore, to solve the RG equation we have to diagonalize these matrices, but they appear to be non-diagonalizable and can only be brought to the Jordan form.
For the equal-time pair correlation function (10) this leads to the appearance of a logarithmic dependence in the IR asymptotic behavior (in the following we denote it G_{ik} for brevity). Here P_L(...) is a polynomial of degree L with the argument ln μr; f̄ is the invariant charge, with f̄ → f r^ξ as 1/μr → 0 for scaling regime (1b) and f̄ → f r^{ξ-η} as 1/μr → 0 for scaling regime (2b). The representations (12) with unknown scaling functions Φ describe the behavior of the correlation functions at μr ≫ 1 and any fixed value of Mr. The inertial range r ≪ L corresponds to the additional condition Mr ≪ 1; the form of the functions Φ under this condition is studied using the operator product expansion.
Combining the RG representation (12) with the corresponding OPE, restoring the canonical dimension d_G = -N_1 - N_2 and retaining only the leading term, we obtain the asymptotic expression (13) for the pair correlation function (10) in the inertial range, where Φ_f is a certain scaling function restricted to the inertial range r ≪ L. Owing to the nilpotency of the matrix of the critical dimensions, the only dependence on the exponents ξ and η that distinguishes the two nontrivial cases (1b) and (2b) from each other is contained in the invariant charge f̄. For the trivial regimes (1a) and (2a) there are no corrections to the ordinary scaling.
Conclusion
We applied the field theoretic renormalization group and the operator product expansion to the analysis of the inertial-range asymptotic behavior of a divergence-free vector field, passively advected by a strongly anisotropic turbulent flow. Depending on the two exponents ξ and η that describe the energy spectrum E ∝ k_⊥^{1-ξ} and the dispersion law ω ∝ k_⊥^{2-η} of the velocity field, the possible nontrivial types of the IR behavior reduce to only two limiting cases: the rapid-change type behavior, realized for ξ > η > 0 (regime 2b), and the "frozen" (time-independent or "quenched") behavior, realized for ξ > 0, η < 0 (regime 1b). In this respect, the situation is the same as in the model of the anisotropic advection of the scalar field studied in [7].
The inertial-range asymptotic expressions for various correlation functions are summarized by the expressions (13). In contrast to the Kraichnan rapid-change model, where the correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the dependence on the integral turbulence scale L has a logarithmic character. The key point is that the matrices of scaling dimensions of the relevant families of composite operators appear to be nilpotent and cannot be diagonalized. This result is perturbatively exact in the sense that the contributions of all multiloop diagrams are equal to zero. The physical meaning of this feature is not yet clarified, but it is clear that it is closely connected with the presence of the anisotropy vector n.
Figure 2
Figure 2. Domains of IR stability of the fixed points in the model (5). The numbers in boxes correspond to the fixed points (1a)-(2b) in the text.
Figure
Figure The one-loop contribution to the generating functional | 3,578.4 | 2016-02-01T00:00:00.000 | [
"Physics"
] |
Nonlinear Mixed Convective Flow over a Moving Yawed Cylinder Driven by Buoyancy
The fluid flow over a yawed cylinder is useful in understanding practical significance for undersea applications, for example, managing transference and/or separation of the boundary layer above submerged blocks and in suppressing recirculating bubbles. The present analysis examines nonlinear mixed convection flow past a moving yawed cylinder with diffusion of liquid hydrogen. The coupled nonlinear control relations and the border restrictions pertinent to the present flow problem are nondimensionalized by using nonsimilar reduction. Further, implicit finite difference schemes and Quasilinearization methods are employed to solve the nondimensional governing equations. Impact of several nondimensional parameters of the analysis on the dimensionless velocity, temperature and species concentration patterns and also on Nusselt number, Sherwood number and friction parameter defined at the cylinder shell is analyzed through numerical results presented in various graphs. Velocity profiles can be enhanced, and the coefficients of friction at the surface can be reduced, for increasing values of velocity ratio parameters along chordwise as well as spanwise directions. Species concentration profile is reduced, while the Sherwood number is enhanced, for growth of the Schmidt number and yaw angles. Furthermore, for an increasing value of yaw angle, skin-friction coefficient in chordwise direction diminishes in opposing buoyancy flow case, whereas the results exhibit the opposite trend in assisting buoyancy flow case. Moreover, very importantly, for increasing magnitude of nonlinear convection characteristic, the liquid velocity and surface friction enhance in spanwise direction. Further, for increasing magnitude of combined convection characteristics, velocity profiles and coefficient of friction at the surface enhance in both spanwise and chordwise directions. Moreover, we have observed that there is no deviation for zero yaw angle in Nusselt number and Sherwood number.
Introduction
Many researchers over the last few decades have given considerable attention to the investigation of heat and mass transfer characteristics of combined convection in various flow geometries.Combined convection flows appear when the temperature and species concentration variations between the wall and the external liquid are larger and, hence, become important when buoyancy forces significantly disturb the circulation, heat and species concentration patterns.When the fluid is subjected to two various density drops having various rates of diffusion, it is meant to be the double diffusion convection.The density differences may be caused by gradients in the liquid concentration, or by the changes in the temperature.The double diffusive mixed convection plays a vital role in boundary layer flow problems because of its significance in several technical and geophysical challenges including solar collectors, solar ponds, lakes, reservoirs and crystal growth [1].Due to its vast applications, many researchers have worked on the double diffusion combined convection flow past various geometries such as sphere, exponential stretching sheet, vertical cone, slender cylinder, moving plate, etc. [2][3][4][5][6][7].
However, for viscous liquid flows with heat transfer, the impact of the linear dependence of density on dimensionless temperature, that is, natural convection, is highly significant in applications pertaining to industrial manufacturing processes and, therefore, cannot be ignored. The "temperature and density relation" is nonlinear for a large difference between the border and liquid temperatures, and the nonlinear density-temperature differences in the buoyancy force term then have a substantial effect on the circulation and energy transference features. Vajravelu and Sastri [8] have analyzed the heat transfer characteristics between vertical borders with and without the nonlinear density-temperature differences. Bhargav and Agarwal [9] have examined the fully developed free convection with temperature-dependent density within a duct. Mosta et al. [10] have examined the influence of nonlinear temperature combined with density differences in nanosuspension convection over a vertical border. In the present analysis, the "nonlinear temperature and density relation" is regarded owing to the important variation between the liquid and wall temperatures. At the same time, the liquid hydrogen diffusion [3,5] is regarded because of its cooling ability.
The boundary layer concept is the most useful and important aspect in understanding the transport processes occurring in external flows.The phenomenal results of the mixed convection flow along various geometries have been contributed by various researchers around the globe.Recently, Muthukumaran and Bathinathanan [11] have worked on mixed convection boundary layer flow over a stretching sheet, and their results have revealed that the heat transfer rate increases with a raise of the Prandtl number for both assisting and opposing flows.Halim and Noor [12] have analyzed mixed convection over a vertical stretching sheet, and their outcomes have revealed that the assisting flow has higher rates of heat and mass transfer compared to the opposing flow.Alsabery et al. [13] have investigated the mixed convection through a rotating cylinder.Khashiie et al. [14] have worked on mixed convection flow over a Riga plate.Complex flow patterns over yawed cylindrical cables, suppression of fluctuations in lift forces and the control of the drag forces are the main problems encountered in the engineering design.At the same time, the investigation of combined convection circulation about a tilted cylinder has not been analyzed so far.The examination of motion along tilted cylinder is very useful for heat exchangers design [15].The fluid flow past yawed and unyawed cylindrical geometries extensively occurs in various engineering-related applications, like tow cables, chimney stacks, different towers, sub-sea pipelines, risers, heat exchangers and overhead cables [16,17].King [16] has studied the vortex exited oscillations of yawed circular cylinders.This study reveals that the sustained oscillations can be observed at the yaw angles ranging between ±65 • .Najafi et al. [18] have studied the undisturbed flows over the yawed cylinder, wherein it is observed that eddies near the wake of the cylinder detach due to the increase in the angle of yaw.Further, Snarski [19] has investigated the variation of wall pressure on circular cylinder due to yaw angle and has revealed that the spectra exhibit powerful narrow band energy stages related with the Strouhal vortex shedding region, for the degrees of yaw angle ranging from π/3 to π/2.In the work of Sears [20], the boundary layer motion along a yawed cylinder is analyzed, and it is noticed that the location of the boundary layer separation point is independent from the yaw angle.Moreover, Thapa et al. [21] have investigated numerically the circulation over a yawed circular cylinder near the plane border.In the work of Gupta and Sarma [22], the timedependent circulation along a yawed infinite cylinder under the influence of cross flow has been analyzed, and this analysis reveals that the coefficient of friction along chordwise direction diminishes with larger values of nonsimilar variable (ξ).Further, Bucker and Lueptow [23] have examined the boundary layer flow on weakly yawed cylinders, and they have found that the thickness of boundary layer rises nonlinearly for small degree yaw angle.In this direction, many of the researchers have worked on the boundary layer flow past a yawed cylinder with the influence of non-uniform suction/injection [17,[24][25][26][27]. 
Marshall [28] has studied the disturbed flow along a moving yawed cylinder and found that, as the cross-stream Reynolds number increases, the surface vorticity enhances. Vakil and Green [29] have performed numerical calculations pertaining to the flow over a yawed cylinder for moderate values of the Reynolds number, i.e., 1 ≤ Re ≤ 40, have validated the independence principle and have also proposed an empirical relation for the lift and drag forces on the cylinder. Recently, Patil et al. [30] have scrutinized the combined convection circulation along a yawed cylinder, and the obtained data reveal that the liquid velocity and surface drag coefficient at the cylinder's boundary in all directions rise with the heat transfer rate because of combined convection.
From the above literature review, it is found that the investigation of double diffusive combined convection past a moving yawed cylinder has not been attempted so far. Many authors, such as Roy [25], Chiu and Lienhard [31], Roy and Saikrishnan [26] and Revathi et al. [17], have analyzed boundary layer flow over a yawed cylinder, and the works of these researchers have stimulated us to work on the present article; therefore, we have scrutinized the considered problem. The present problem is formulated as an endeavor to investigate the steady combined convection boundary layer flow around a moving yawed cylinder. The novelty of the present analysis is as follows:
- Convective flow over a moving yawed cylinder driven by buoyancy.
- Influence of liquid hydrogen diffusion.
- Effects of yaw angle.
- Flow characteristics in chordwise and spanwise directions.
The considered governing equations and boundary conditions have been reduced using the nonsimilar technique. Further, these equations are solved by employing the quasilinearization method and implicit finite difference schemes [32-34].
Mathematical Simulation
Herein, we analyze the viscous, laminar, incompressible, combined convective flow over a moving yawed cylinder. The flow system is demonstrated in Figure 1, where the liquid (water) is supposed to move over a vertically yawed cylinder with radius R, with the yaw angle θ taken between 0 and π/6. Here, θ = 0 represents the vertical cylinder, while θ = π/2 represents the horizontal cylinder. To have the influence of buoyancy, the cylinder must be in a vertical or tilted position because the investigation is of combined convection; therefore, the yaw angle is regarded in the range 0 ≤ θ ≤ π/2. Magnitudes of the yaw angle above π/6 are not regarded because the flow would be nearer to stagnation point circulation, which is not the aim of the current examination of combined convection flow. For the analysis, x and z are the coordinate axes in the chordwise and spanwise directions, respectively, with u and w denoting the corresponding velocity components. Moreover, y is the coordinate axis drawn normal to the x and z axes, with v denoting the corresponding velocity component. The temperature and species concentration of the liquid at the border are denoted by T_s and C_s, while those away from the surface are represented by T_∞ and C_∞. The density changes are modeled using the Boussinesq approach [5,35].
The governing equations representing the fluid flow variations, namely the continuity equation, the momentum equations along the chordwise and spanwise directions, the energy equation and the concentration equation, together with the prescribed boundary conditions, follow [17,24-27,30,36]. The nonsimilar transformations and the resulting variables (7) are introduced; utilizing the transformations (7), Equations (2)-(5) are rewritten in nondimensional form (8)-(11) with the corresponding boundary conditions and the nondimensional characteristics arising in this investigation. From the velocity distribution, ξ, β(ξ), s(ξ) and p(ξ) can be defined in terms of (15). In view of Equations (16) and (17), Equations (8)-(11) take the final forms (18)-(21) with the relevant boundary conditions (22). The skin friction coefficients Re^{0.5} C_f along the chordwise and spanwise directions, the Nusselt number Re^{-0.5} Nu reflecting the heat transfer rate, and the Sherwood number Re^{-0.5} Sh reflecting the mass transfer rate are then defined; taking into account Equation (7), using w_e = w_∞ cos(θ), and using Equation (15) for ξ, they are expressed through the wall gradients, including G_η(x, 0) for the Nusselt number and H_η(x, 0) for the Sherwood number.
Solution Technique
Equations (18)-(21) are linearized using the quasilinearization technique, where the coefficients at the (i + 1)th iteration are defined employing the known ith iterative values. The dimensionless boundary conditions are imposed at the wall and at the edge of the boundary layer, denoted by η_∞.
The coefficients in the linearized Equations (27)-(30) are determined from the known ith iterative values. The nonlinear coupled partial differential Equations (18)-(21) under the boundary conditions (22) have been solved numerically using an implicit finite difference scheme in combination with the quasilinearization technique. The quasilinearization technique can be viewed as a generalization of the Newton-Raphson approximation method in functional space. An iterative sequence of linear equations is carefully constructed to approximate the nonlinear Equations (18)-(21) under the boundary conditions (22), achieving quadratic convergence and monotonicity. Applying the quasilinearization technique, the nonlinear coupled partial differential Equations (18)-(21) with boundary conditions (22) are replaced by the sequence of linear partial differential equations. Because the method is presented for ordinary differential equations by Inouye and Tate [37] and for partial differential equations in a recent study by Singh and Roy [38], its detailed description is not provided here. At each iteration step, the sequence of linear partial differential Equations (27)-(30) is expressed in difference form using the central difference scheme in the x-direction and the backward difference scheme in the η-direction. Thus, at each step, the resulting equations are reduced to a system of linear algebraic equations with a block tri-diagonal matrix, which is solved by Varga's algorithm [39]. To ensure the convergence of the numerical solution to the exact solution, the step sizes ∆x and ∆η are both taken as 0.01. A convergence criterion based on the relative difference between the current and previous iteration values is employed: when the maximum difference over the dependent variables falls below 0.0001, the solution is assumed to have converged and the iteration process is terminated.
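A schematic sketch of a single quasilinearization sweep is given below; it solves one linearised two-point boundary value problem of the form F'' + a(η)F' + b(η)F = r(η) on a uniform η-grid with a scalar tridiagonal solver, whereas the actual scheme couples Equations (27)-(30), marches in x and uses Varga's algorithm for the block tri-diagonal system. The function name and the simplified scalar form are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import solve_banded

def linear_bvp_sweep(a, b, r, F0, Finf, h):
    """Central-difference solve of F'' + a*F' + b*F = r on interior nodes, with F(0)=F0, F(eta_inf)=Finf."""
    n = len(a)
    lower = (1.0 - 0.5 * h * a) / h**2          # coefficient of F_{j-1}
    diag = b - 2.0 / h**2                       # coefficient of F_j
    upper = (1.0 + 0.5 * h * a) / h**2          # coefficient of F_{j+1}
    rhs = r.astype(float).copy()
    rhs[0] -= lower[0] * F0                     # fold the known boundary values into the RHS
    rhs[-1] -= upper[-1] * Finf
    ab = np.zeros((3, n))                       # banded storage for solve_banded((1, 1), ...)
    ab[0, 1:] = upper[:-1]
    ab[1, :] = diag
    ab[2, :-1] = lower[1:]
    return solve_banded((1, 1), ab, rhs)

# Outer loop (not shown): update a, b, r from the current iterate and repeat the sweep until the
# maximum change between successive iterates falls below 1e-4, as described above.
```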
Results and Discussion
In order to examine the qualitative behavior of the fluid flow characteristics over a moving yawed cylinder, comprehensive numerical computation has been carried out for numerous values of the nondimensional parameters that characterize the flow, heat and mass transfer. The simulated outcomes are presented graphically. Considering water as the working liquid, the value of the Prandtl number Pr = 7 is selected. The realistic Schmidt numbers are Sc = 160, 240 and 340, which correspond to liquid hydrogen, liquid nitrogen and liquid oxygen, respectively. The values of the yaw angle (θ), combined convection parameter (Ri) and the ratio of buoyancy force parameter (Nc) are varied in the ranges 0 ≤ θ ≤ π/3, -3 ≤ Ri ≤ 10 and 0 ≤ Nc ≤ 1, respectively. It should be noted that the considered values of the governing parameters allow one to show the features of liquid behavior and heat and mass transfer.
Further, θ = 0 corresponds to the perfect geometry of a vertical cylinder. Moreover, x = 0 and x ≠ 0 represent the similarity and nonsimilarity cases, respectively. Additionally, ε_1 = u_s/u_e and ε_2 = w_s/w_e are two velocity ratio parameters along the chordwise and spanwise directions, respectively. Thus, the buoyancy enhances the liquid motion and simultaneously enhances the liquid's velocity and the respective frictions at the surface. Further, for increasing values of the velocity ratio characteristic (ε_1), the velocity profile enhances, while the surface drag coefficient in the chordwise direction diminishes. Moreover, the same behavior is observed for larger values of the velocity ratio parameter (ε_2) in the spanwise direction. The physical reason is that, for ε_1 < 1 and ε_2 < 1, the free stream velocity dominates over the surface velocity and causes such variations. Moreover, the combined impacts of the assisting buoyancy force, due to temperature and concentration gradients, along with a rise of the velocity ratio parameter act like a favorable pressure gradient which enhances the liquid flow. For x = 0.5, Pr = 7.0, β_T = 0.1, θ = π/6, Nc = 0.1, ε_1 = ε_2 = 1.5, β_C = 0.1 and Sc = 160, the coefficients of friction along the chordwise and spanwise directions at the wall are increased by about 22% and 62%, respectively, as Ri varies from the opposing buoyancy flow case (Ri = -1) to the assisting buoyancy flow case (Ri = 3).
Impacts of Nonlinear Convection Parameter and Yaw Angle
Figures 5-8 display the influence of the velocity ratio characteristic (ε_1) and the angle of yaw (θ) on the velocity profile F(x, η) and the skin friction coefficient Re^{0.5} C_f along the chordwise direction. Moreover, the influence of the nonlinear convection parameter (β_T) and the angle of yaw (θ) on the velocity profile S(x, η) and the skin friction coefficient Re^{0.5} C_f along the spanwise direction has been observed. For increasing values of the velocity ratio parameter (ε_1), the velocity pattern increases, while the surface drag coefficient in the chordwise direction diminishes. The physical reason is that, for ε_1 < 1, the free stream velocity dominates over the surface velocity and causes such variations. Further, for increasing magnitudes of β_T, the velocity patterns and the coefficient of friction at the surface increase along the spanwise direction. Larger magnitudes of β_T characterize a greater variation between the wall and the environmental temperature. Therefore, for larger magnitudes of β_T, the larger temperature variation causes stronger convection and as a result enhances the liquid velocity and the friction between the cylinder and the liquid.
Further, the velocity profiles in the chordwise and spanwise directions and the skin friction coefficients in the chordwise and spanwise directions both increase with increasing values of the yaw angle. Higher values of the yaw angle (i.e., the cylinder inclines more) cause a pressure increase in the fluid flow and increase the fluid velocity, and the inclination of the cylinder causes the enhancement of the surface friction along the chordwise and spanwise directions. Moreover, in Figure 7, at θ = 0, the lines obtained for different values of β_T merge together. For x = 0.5, Pr = 7.0, Ri = 10.0, β_T = 0.5, Nc = 0.1, ε_1 = 0.8, ε_2 = 0.5, β_C = 0.1 and Sc = 160, the coefficient of friction along the spanwise direction at the wall increases by about 50%, as θ varies from π/6 to π/3.
Impacts of Yaw Angle and Schmidt Number
Figures 9 and 10 display the impact of the Schmidt number (Sc) as well as the yaw angle (θ) on the concentration profile H(x, η) and the mass transfer rate Re^{−0.5} Sh, respectively. It is observed that the concentration profile reduces while the Sherwood number is enhanced for larger Schmidt numbers and yaw angles. Mass diffusivity reduces for higher values of Sc. As Sc increases, the thickness of the concentration boundary layer reduces, and as a result the species concentration profile diminishes; accordingly, the Sherwood number increases. Further, we observe that there is no deviation for θ = 0, which represents the vertical cylinder, as compared to other yaw angles. For x = 0.5, Ri = 10.0, β T = 0.1, Nc = 0.1, ε 1 = 0.8, ε 2 = 0.5, β C = 0.1 and θ = π/6, the Sherwood number is enhanced by about 27% when the Sc value is reduced from 340 to 160.
Impact of Yaw Angle and Combined Convection Characteristics
Figures 11 and 12 depict the influence of the combined convection parameter (Ri) and the yaw angle (θ) on the dimensionless temperature pattern G(x, η) and the heat transfer rate Re^{−0.5} Nu. The temperature pattern and the heat transfer rate are reduced for increasing magnitudes of the combined convection parameter. Magnitudes Ri > 0 indicate the dominant influence of the buoyancy force compared to the inertia force. Thus, a rise of the buoyancy effects produces a stronger fluid flow, which is characterized by a reduction of the liquid temperature and the appearance of cooler liquid close to the cylinder's surface; this in turn enhances the energy transport from the cylinder's surface to the liquid. Further, the temperature profile and the heat transfer rate are enhanced with yaw angle; for increasing magnitudes of the combined convection parameter, the curves obtained by varying the yaw angle overlap into a single curve irrespective of the value of the combined convection parameter. We also observe that there is no deviation for θ = 0, which represents the vertical cylinder, as compared to other yaw angles. Moreover, at x = 0.5 and Ri = 10.0, as the yaw angle increases from θ = π/6 to θ = π/3, the Nusselt number increases by approximately 7%.
The velocity profile F(x, η) is compared with the existing results of Eswara and Nath [40] for a particular case, obtained by assuming A = 0, Ec = 0 and θ = 0. The results are found to be in excellent agreement, and the comparisons are shown in Figure 13. Table 1 presents the variations of the friction parameter along the chordwise direction at the yawed cylinder, together with the Nusselt and Sherwood numbers, for various magnitudes of the yaw angle. From Table 1, it is found that the surface drag coefficient and the Nusselt and Sherwood numbers all rise with increasing yaw angle. The friction parameter and the rates of heat and mass transfer are enhanced by about 31%, 2% and 1%, respectively, at x = 0.5, as the yaw angle varies from π/12 to π/6.
Conclusions
This research considers the double-diffusive combined convection around a moving yawed cylinder. The influence of varying the yaw angle, the mixed convection and the nonlinear convection characteristics on the velocity patterns and skin friction coefficients in the chordwise and spanwise directions, the nondimensional temperature, the concentration profile, and the mass and heat transfer rates is analyzed using different graphs. Based on this detailed analysis, the main outcomes are as follows:
− Velocity profiles are enhanced, while the coefficients of friction at the surface diminish, for increasing values of the velocity ratio parameters in the spanwise and chordwise directions.
− For increasing magnitudes of the nonlinear convection coefficient, the velocity profile and the skin friction parameter in the spanwise direction are increased.
− The concentration profile diminishes, while the Sherwood number is enhanced, for increasing values of the Schmidt number and the yaw angle.
− Velocity profiles in the spanwise and chordwise directions and the skin friction coefficients at the surface in the chordwise and spanwise directions are enhanced with growing values of the yaw angle.
Figure 1 .
Figure 1.Sketch of the motion with coordinates.
4.1.
Impacts of Combined Convection Parameter and Velocity Ratio Parameter
Figures 2 -
Figures 2-4 display the influence of the combined convection parameter (Ri) and the velocity ratio parameters (ε 1 and ε 2 ) on the velocity patterns F(x, η) and S(x, η) and on the surface skin-friction coefficients Re^{1/2} C_f along the chordwise and spanwise directions. Increasing the magnitude of the combined convection parameter enhances the velocity pattern and the surface drag coefficient in both the chordwise and spanwise directions. Positive magnitudes of Ri designate the dominance of the buoyancy force over the inertia force.
Figure 4 .
Figure 4. Influence of Ri and ε 2 on the skin-friction parameter Re^{0.5} C_f in the spanwise direction when
Figure 10 .
Figure 10. Influence of θ and Sc on Sherwood number Re^{−0.5} Sh when Ri = 10.0, β T = 0.1, Nc = 0.1, ε 2 = 0.5, β C = 0.1 and ε 1 = 0.8.
Figure 13 .
Figure 13.Comparison of velocity profile F(x, η) with the particular case of flow over cylinder for θ = 0, Ec = 0 and A = 0.
(m s −1 )
u ∞ free stream velocity (m s −1 )
V y-velocity (m s −1 )
w z-velocity (m s −1 )
x, y and z curvilinear coordinates (m)
Greek symbols
β 1 , β 2 linear and nonlinear thermal expansion parameters (K −1 )
β 3 , β 4 linear and nonlinear thermal expansion parameters of liquid hydrogen
β C nonlinear concentration convection coefficient for liquid hydrogen
β T nonlinear temperature convection coefficient
∆x, ∆η step size for x and η coordinates
ε 1 velocity ratio parameter along chordwise direction
ε 2 velocity ratio parameter along spanwise direction
x, η transformed variables
θ yaw angle
ν kinematic viscosity (m 2 s −1 )
ψ dimensionless stream function
Subscripts
x, η denote the partial derivatives with respect to these variables
e indicates the condition at the boundary layer edge
w indicates the condition at the wall
∞ indicates the condition at the mainstream. | 7,271.6 | 2021-06-01T00:00:00.000 | [
"Physics"
] |
Investigating the Preparation Conditions on Superconducting Properties of Bi 2−x Li x Pb 0.3 Sr 2 Ca 2 Cu 3 O 10+δ
One-step and multi-step solid state reaction methods were used to prepare a high temperature superconductor with the nominal composition Bi2−xLixPb0.3Sr2Ca2Cu3O10+δ for (0 ≤ x ≤ 0.5). The effect of the preparation conditions and of substituting Li on Bi sites has been investigated by the use of X-ray diffraction, resistance measurements and oxygen content determination, in order to obtain the optimum conditions for the formation and stabilization of the 2223-phase. It has been found that intermediate grinding promotes the conversion to, and accelerates the formation rate of, the 2223-phase. The morphological analyses were carried out by SEM. The results showed that the multi-step technique was appropriate for preparing the composition Bi2−xLixPb0.3Sr2Ca2Cu3O10+δ. X-ray diffraction analysis showed two phases, the high-TC phase 2223 and the low-TC phase 2212, with an orthorhombic structure for all samples. The optimum concentration was found to be x = 0.3, which improved the microstructure and gave the highest TC value of 130 K together with the highest value of oxygen content.
Introduction
High-T C superconductors comprise a collection of tiny, randomly oriented anisotropic grains which are connected to each other by a system of so-called "weak links" or "matrix" and by other impurity phases and defects [1]. Therefore, the grains in the polycrystalline microstructure must be oriented along a certain direction, which often produces textured grain alignment, causes microstructure modifications and enhances flux pinning, thereby increasing the current carrying capacity. It is well established that improving the preparation process of high-T C superconductors and their conducting properties is important for practical applications.
In the Bi-2223 compound it is difficult to align the superconducting grains well, because of the complex formation mechanism of the Bi-2223 superconductor phase. The grain connectivity and the degree of texturing in the samples depend on various parameters; it has been widely shown that the starting composition, the number of intermediate grindings, and the sintering time and temperature have a strong influence on the (2223) phase [2].
On the other hand, the formation and stability of the 2223 phase can be modified by the addition or substitution of elements of varying ionic radii and bonding characteristics. This variation is thought to be related to the density of charge carriers in the CuO planes [3]. Correlating the preparation conditions of high-T C superconductors with their conducting properties is necessary to develop practical applications. The adoption of a material for application in the Bi(Pb)SrCaCuO system necessitates the ability to control the effect of different doping elements and processing parameters on its properties.
Agostinelli and coworkers [4] pointed out that longer annealing times resulted in more homogeneous samples having a single superconducting transition to zero resistance at 108 K.
Dey [5] studied the influences of sintering duration on the electrical resistivity and thermal conductivity of (Bi 0.8 Pb 0.2 ) 2 Sr 2 Ca 2 Cu 3 O 9.8+δ pellets with 0.11 < δ < 0.54 in the range of (10 -150) K.He found a gradual transformation of the 2212 phase to the 2223 phase and this transformation started within 5 hr of sintering in air at 840˚C.
Fernando et al. [6] showed that the conventional solid state reaction method required very long heat treatment with several intermediate grinding stages in order to produce single Bi 2223 superconducting phase.
Bilgili et al. [7] showed that the Bi 1.7 Pb 0.3 Sr 2 Ca 2 Cu 3−x Li x O y compound with x = 0.2 exhibited the maximum critical temperature (T C = 98 K); in addition, these samples had the highest volume fraction of the Bi-(2223) high-T C phase, 81%.
Muna et al. [8] found that the physical properties of the HTSc Bi 2−x Cu x Pb 0.3 Sr 2 Ca 2 Cu 3 O 10+δ for Cu (0.1 ≤ x ≤ 0.5) not only depended greatly on the elemental composition, but were also very sensitive to the details of the preparation method.
Clearly, the structural and electrical development of BiPbSrCaCuO systems depends on the preparation conditions and on substitution. The main purpose of this work is therefore to establish how these are affected by the processing variables together with the partial substitution of Bi by Li, and to try to find the subtle and delicate balance between composition, structure and superconducting properties for Bi 2−x Li x Pb 0.3 Sr 2 Ca 2 Cu 3 O 10+δ compounds.
Experimental Procedures
The samples of the system Bi 2−x Li x Pb 0.3 Sr 2 Ca 2 Cu 3 O 10+δ with concentration (0 ≤ x ≤ 0.5) were prepared by the conventional solid-state reaction route. Appropriate weights of the starting materials, Bi 2 (CO 3 ) 3 , Pb 3 O 4 , Sr(NO 3 ) 2 , CaO, CuO and Li 2 CO 3 , were taken with precise values. The powders were then well mixed and ground using an agate mortar with a sufficient quantity of 2-propanol to homogenize the mixture and obtain a fine powder. The mixture was then calcined in air at 800˚C for 30 hrs with intermediate re-grinding. The obtained powder was reground again and pressed into disc-shaped pellets at 0.7 GPa, with thickness between 2 mm and 3 mm and 13 mm diameter, using a hydraulic press (Specac).
Series of pellets were prepared under different conditions as follows. The first group was sintered at 830˚C for 140 hrs (one step). The second group was sintered at 840˚C for 140 hrs (one step). The third group was sintered in three steps with intermediate re-grinding and re-pressing after the first and second steps: the samples were sintered twice at 850˚C for 50 hrs, while in the third step the samples were sintered at 830˚C for 40 hrs.
The structure of the prepared samples was obtained by using a Philips X-ray diffractometer with Cu-K α radiation. A computer program based on Cohen's least-squares method was used to calculate the lattice parameters. The resistivity measurements were performed by the standard four-probe method.
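As an illustration of this lattice-parameter determination step only (this is not the authors' program, and Cohen's method additionally includes a systematic-error term that is omitted here), a minimal least-squares sketch for an orthorhombic cell from indexed peak positions is given below; the peak list contains synthetic placeholder values.

import numpy as np

# Least-squares determination of orthorhombic lattice parameters from indexed
# XRD peaks, using 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2 (simplified; Cohen's
# method also fits a systematic-error term). Peak values below are synthetic,
# generated for a ~ 5.41 A, b ~ 5.44 A, c ~ 37.1 A.
LAMBDA = 1.5406  # Cu K-alpha wavelength (angstrom)

peaks = [  # (h, k, l, two_theta in degrees)
    (0, 0, 10, 23.97),
    (1, 1, 5, 26.13),
    (2, 0, 0, 33.09),
    (0, 2, 0, 32.90),
]

A, q = [], []
for h, k, l, tth in peaks:
    d = LAMBDA / (2.0 * np.sin(np.radians(tth / 2.0)))  # Bragg's law
    A.append([h * h, k * k, l * l])
    q.append(1.0 / d ** 2)

x, *_ = np.linalg.lstsq(np.array(A, dtype=float), np.array(q), rcond=None)
a, b, c = 1.0 / np.sqrt(x)
print(f"a = {a:.3f} A, b = {b:.3f} A, c = {c:.3f} A")

Refining against more reflections, or adding Cohen's error term, follows the same linear-algebra pattern.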
Iodometric titration was used to assess the oxygen content δ in the samples. Scanning electron microscopy (SEM, TESCAN) was used to analyze the surface morphology and investigate the nature of the grains of Bi 2−x Li x Pb 0.3 Sr 2 Ca 2 Cu 3 O 10+δ with different x.
Results and Discussion
XRD analyses showed an orthorhombic structure for all the samples and revealed two main phases, the high-T C phase (2223) and the low-T C phase (2212), with some impurity phases such as Sr 2 Ca 2 Cu 7 O δ detected at 2θ equal to 36.8°, as shown in Figures 1-3.
It can be seen that increasing the sintering temperature increases the intensity of the peaks. The most intense peaks in the patterns belong to the high-T C phase, which also indicates an increase in the volume fraction of the high-T C phase [9]. Furthermore, samples of the third series exhibited an enhancement of the high-T C (2223) peaks with increasing Li concentration. This result indicates that the substitution by Li improves the degree of crystalline arrangement [10] and the volume fraction of the 2223 phase. The relative volume fraction of the 2223 phase was estimated from the integrated peak intensities as V(2223) = I(2223)/[I(2223) + I(2212)] × 100%, where I(2223) and I(2212) are the intensities of the Bi-2223 and Bi-2212 phases, respectively. The calculated relative portions of all samples are listed in Table 1. In general, increasing the sintering temperature promotes the formation of the high-T C 2223 phase due to improved nucleation. The multi-step technique develops the superconducting properties and is necessary for the growth of the high-T C phase.
Samples of the third series show a higher volume fraction of the 2223 phase at a Li concentration of 0.2, in agreement with the previous observation by Bilgili et al. [7]. According to the model suggested by Grivel and Flükiger [11], the (Bi,Pb)-2223 phase forms not by a layer-by-layer intercalation in the pre-existing (Bi,Pb)-2212 grains but through a distinct nucleation and growth process. Thus, the slightly larger volume fraction of the (Bi,Pb)-2223 phase present in the x = 0.2 composition might be a reason for the faster conversion rate of this composition. Moreover, the differences in the ionic radii of Li +1 , Bi +3 and Pb +2 , together with increasing sintering temperature [12], elongate the c-axis and thereby further enhance the high-T C phase.
The parameters a, b, c and V were also calculated from the XRD analysis, as shown in Table 1. This table indicates that the content of Bi-2223 increases and that a change in the structural parameters is obtained with increasing Li concentration; this change in the lattice parameters affects the volume of the unit cell and then causes an increase in the density.
Upon Li substitution a parabolic trend of the lattice parameter c is observed, as illustrated in Figure 4: c increases up to x = 0.3 and then shortens with further increase of the Li concentration. This implies that the mechanism of substitution of Bi by Li is not simple. Since the ionic radius of Bi is greater than that of Li, the substitution increases the distance between the CuO 2 planes, which leads to an increase of the c parameter. However, increasing the Li concentration further causes a reduction of the c parameter; this could be ascribed to the charge ordering phenomenon (probably induced by Li, which is known as a pair breaker), possibly accompanied by a change in oxygen content or by oxygen ordering effects. Furthermore, the interaction with additional bands crossing the Fermi level extracts holes from the CuO band [13]; this attractive interaction causes the decrease of the distance between the CuO 2 planes. The decrease of the c parameter may cause detrimental effects on the critical temperature, as will be discussed.
On the other hand, the addition of Pb to the compounds may relax the modulation by influencing the charge balance, oxygen content and structure of the relevant layers [14]. From the previous considerations it is possible to state that, even for the same compositions, the high-T C phase formation kinetics depend considerably on the preparation conditions.
The relation between volume percentages of 2223 phases with different concentrations is plotted in Figure 4.It can be inferred that the volume of 2223 phase varies systematically with c-parameter.
Measurements of the electrical resistivity of the first and second group samples are displayed in Figure 5 and Figure 6, respectively. The resistivity of most of these samples decreases nearly linearly, albeit only down to a certain limit, and complete zero resistance is not observed. In contrast, samples of the third group reveal metallic behavior in the normal state and a superconducting transition to zero resistance, with T C = 113, 116, 117, 130, 118 and 113 K for x = 0, 0.1, 0.2, 0.3, 0.4 and 0.5, respectively, as shown in Figure 7.
It is well established that there are two types of superconducting grains, one formed by the 2223 phase and the second by the 2212 phase, coupled together via weak links, with the current bypassing the islands of the 2212 phase [15]. Once the volume fraction of the 2223 phase within the sample is sufficient to make this possible, a one-step resistivity transition is observed even in samples which contain a rather large amount of the 2212 phase.
A sharp drop of the resistivity was observed for the Li = 0 concentration, revealing that the sample consists predominantly of the (2223) phase; this trend is also good evidence of the homogeneity of the (2223) phase [16]. The composition with x = 0.1 shows a long tail with T C = 116 K; the reason may be the existence of a small amount of secondary phase and/or fluctuation of the oxygen content.
The sample with x = 0.2 shows two steps, reflecting the decrease of the intergrain transition. It is possible that both superconducting phases exist within one grain, in such a way that the low-T C phase forms on the surface of the high-T C one. In this case a sufficiently thin layer of the low-T C phase can play the role of a weak link.
Although increasing the Li concentration to 0.3 degraded the granular quality of the sample, it raised T C to 130 K. This could be mainly due to the strong links and increased contact areas between the grains formed during the sintering process, in other words a decrease of porosity [17]. Moreover, a broadened transition is noted, which may be due to the presence of impurities, non-superconducting regions or multiple superconducting phases in the sample; the most probable reason is the existence of multiple superconducting phases.
Increasing the Li concentration to x = 0.4 and 0.5 results in a substantial degradation of the (Bi,Pb)-2223 phase and decreases the critical temperature. Another noticeable feature is a sharp drop at the transition temperature, which is attributed to the transition within the grains and to the presence of the low-T C phase Bi-2212.
These results suggest that increasing amounts of Li in the Bi-2223 phase stabilize and promote the growth of the Bi-2212 phase at the expense of Bi-2223. This result is in good agreement with that obtained by Halim et al. [18].
It should be emphasized that in most hole-doped cuprates (but not in all), T C as a function of doping has a bell-like shape, which is common behavior in mono- or bilayer cuprates. Thus, different doping regions of the superconducting phase may be distinguished, namely the underdoped, optimally doped and overdoped regions [19] [20].
According to the above results, the relation between T C and Li concentration is almost parabolic, as given in Figure 8, and can be explained on the basis of the parabolic dependence between T C and the number of holes per CuO 2 layer [21]. Since the samples with a majority of Bi-2223 are in the underdoped state, their superconducting parameters tend towards optimal values with increasing Li concentration. The highest T C was determined at 0.3, which confirms that this sample is in the optimal doping regime, while the decrease of T C beyond this concentration appears to be due to the shift of the sample towards the over-doped region. However, increasing the Li concentration further decreases T C but does not inhibit the formation of the (BiPb)-2223 phase. These observations are confirmed by the XRD analysis.
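For reference, the parabolic T C versus hole-concentration dependence invoked here is commonly parameterized in the cuprate literature by the empirical Presland-Tallon form; the expression below is quoted as that standard relation and is not taken from the present paper:

$$ \frac{T_C(p)}{T_{C,\mathrm{max}}} \approx 1 - 82.6\,(p - 0.16)^2 , $$

where p is the number of holes per CuO 2 unit and optimal doping corresponds to p ≈ 0.16, so that underdoped samples (p < 0.16) move towards the maximum T C as the doping increases, while overdoped samples show a decreasing T C .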
Bi-2223 is very sensitive to oxygen; it has been found that increasing the sintering time causes a decrease of the oxygen content [22]. According to McKernan et al., diffusion of oxygen is more rapid at high temperature [23]. From the analysis results given in Figure 4 and Table 2, it is seen that synthesizing the samples at a higher sintering temperature within a short time by the multiple pressing and sintering process (third group) increases the oxygen content. This agrees with [24], where it was observed that the values of δ increased with the multi-step technique. The highest value of δ was determined at a Li concentration of 0.3, and δ then decreased with further increase of the substitution. It is well known that Li doping lowers the melting temperature of this phase, so the decrease in oxygen content would be expected [25].
It can easily be inferred that both δ and T C increase as x increases to 0.2 and 0.3, and then decrease for further substitution; similar behavior of the oxygen content with the transition temperature was reported for the Bi 1.6 Pb 0.4 Sr 2 Ca 3 Cu 4 O 10+δ system by Zhao et al. [26]. This is apparently due to the replacement of Bi by the Li ion, so that the presence of excess oxygen atoms in the CuO 2 layers creates more holes in the perovskite layers; the creation of holes shortens the Cu-O 2 bond length, and this leads to an improvement of T C [8] [27].
In contrast, substitution by Pb decreases the average valence of Cu and thereby the average CuO 2 -plane hole concentration in BiPb-2223. The reduction of T C is therefore argued to arise from the out-of-plane substitution of Pb for Bi, which shows a longer wavelength of structural modulation [28].
Increasing the Li concentration beyond 0.3 caused both δ and T C to decrease. This effect is related to the charge ordering phenomenon, probably induced by Li and Pb acting as pair breakers, which may be accompanied by changes in oxygen content or oxygen ordering effects that decrease the number of holes in the lattice [29]. These results suggest that, as the hole concentration of the compound increases above a critical value, superconductivity is suppressed.
From this viewpoint it is obvious that T C , δ and the c parameter all show a consistent, harmonious behavior with the doping concentration.
The surface morphology of the samples showed a noticeable change in the grain size, shape and distribution on the surface. This indicates the influence of the sintering temperature and of the varying Li concentration on the morphology of the samples.
Figure 8 illustrates the SEM images for the x = 0.2 and 0.3 samples sintered at 840˚C (second group); the formation of very small crystalline regions with no preferred orientation, within the limit of SEM, was observed. This suggests that grain growth of the superconducting phase is insufficient within this range of temperatures. Increasing the sintering temperature up to 850˚C (third group) yielded larger grains (higher crystallinity), as shown in Figure 9 for the samples at different Li doping concentrations (x = 0, 0.1, 0.2, 0.3, 0.4 and 0.5). Samples with larger grains showed superconducting behavior with higher T C ; the larger grains effectively increase the contact areas and lead to a smaller current density under the same applied current.
The SEM images of the undoped and 0.1 Li concentration specimens show more voids and small grains with thin flake-like structures, indicating poor formation of the superconducting phase, while increasing the Li concentration up to 0.3 yields larger grains with the formation of needle-like features. In addition, plate-like grains, which are the typical grain structure of (Bi,Pb)-2223, had formed; similar results were pointed out by Takano et al. [3]. A drastic change was found in the grain morphology: the grain size, shape and orientation gradually changed and decreased with increasing amount of Li substitution in the main matrix. As can be seen from Figure 9(e) and Figure 9(b), the thin grains seem to have decomposed into small grains with more pores for the samples with x = 0.4 and x = 0.5.
All the XRD, SEM and resistivity studies imply that a higher amount of Li, up to the optimum concentration of 0.3, promotes the formation of the 2223 phase and leads to higher T C values. This may be attributed to the capability of Li to facilitate oxygenation of the samples, which could be responsible for nucleating more of the 2223 grains with the desired stoichiometry.
Conclusions
The sintering temperature is considered to be critical for the high-T C phase formation, and the optimum sintering temperature appears to be close to the partial melting point (just below the melting temperature). Moreover, it has been observed that synthesizing Bi-2223 within a short sintering time by the multiple pressing and sintering process leads to larger grain growth and thus yields stronger coupling at the grain boundaries; this improved the superconducting properties and the growth of the high-T C phase.
Substitution of Li and Pb altered the electronic structure of the BSCCO system and simultaneously influenced the superconducting T C . The samples with a Li substitution of 0.3 exhibited the maximum T C value of 130 K.
Oxygen incorporation into the sample allows an uptake of the optimal oxygen content into the matrix, which leads to a prominent enhancement of T C and of the high-T C 2223 phase formation.
The different behavior of the superconducting parameters is attributed to the effect of doping with different concentrations and oxygen contents, based on the parabolic dependence between T C and the number of holes per CuO 2 layer. Therefore, the multi-step technique combined with Li doping appears to be an efficient way to bring the usually under-doped superconductor to its optimal T C .
Figure 4 .
Figure 4. Volume fraction, c parameter, δ and T C with different concentrations for samples of third group.
Figure 7 .
Figure 7. Temperature dependence of resistivity for Bi 2−x Li x Pb 0.3 Sr 2 Ca 2 Cu 3 O 10+δ with 0 ≤ x ≤ 0.5 third group.The two small figures above are partial enlarged details of the large version below.
Figure 8 .
Figure 8. SEM for second group samples.
Table 1 .
The lattice parameters and volume fraction of phases formed with concentration for third group.
Table 2 .
The sintering temperature, oxygen content and critical temperature with concentration for all groups.
Figure 9. SEM for third group samples. | 4,656.4 | 2015-03-26T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Substrate Induced Strain Field in FeRh Epilayers Grown on Single Crystal MgO (001) Substrates
Equi-atomic FeRh is highly unusual in that it undergoes a first order meta-magnetic phase transition from an antiferromagnet to a ferromagnet above room temperature (Tr ≈ 370 K). This behavior opens new possibilities for creating multifunctional magnetic and spintronic devices which can utilise both thermal and applied field energy to change state and functionalise composites. A key requirement in realising multifunctional devices is the need to understand and control the properties of FeRh in the extreme thin film limit (tFeRh < 10 nm) where interfaces are crucial. Here we determine the properties of FeRh films in the thickness range 2.5–10 nm grown directly on MgO substrates. Our magnetometry and structural measurements show that a perpendicular strain field exists in these thin films which results in an increase in the phase transition temperature as thickness is reduced. Modelling using a spin dynamics approach supports the experimental observations demonstrating the critical role of the atomic layers close to the MgO interface.
FeRh undergoes a first order metamagnetic phase transition from an antiferromagnetic (AFM) phase to a ferromagnetic (FM) phase when stimuli such as thermal or strain energy are introduced to the system. Originally discovered in studies of Fe based intermetallic alloys with Ru 1 , Ir 2 and Rh 3 , later work by Kouvel (1962) further demonstrated that a magnetic phase transition could be observed when the alloy was heated above room temperature [3][4][5][6] . Fe x Rh 1−x , when grown close to the equi-atomic (48 ≤ x ≤ 56 at% Fe) composition, forms a B2 CsCl-type chemically ordered binary alloy. When correctly grown this CsCl type structure exhibits an antiferromagnetic phase, denoted α″-FeRh, at room temperature. It has been shown that this AFM phase is a type-II or G-type antiferromagnet, such that adjacent lattice planes parallel to the {111} CsCl planes are antiferromagnetically coupled with respect to each other and ferromagnetically coupled within the plane 7 , as shown schematically in the Fig. 1(a) inset. The phase transition can be induced by subjecting the FeRh system to external stimuli including temperature 3 , pressure 8 , applied magnetic 9,10,11 and electric fields 12 , femtosecond laser pulses [13][14][15] and, more recently, spin polarized currents 16 .
The physical properties of FeRh have recently led to an intensified interest in this system due to its potential to be incorporated into functional devices. For example, manipulation and control of magnetic order in FeRh via the application of an electrical field 17 and of spin polarized currents 18 has demonstrated promise in low power spin based electronics for storage and artificial multiferroic based logic applications 19,20 . Very recent studies have also demonstrated that the first order meta-magnetic phase transition in FeRh alloys can be induced by manipulating the atomic separation using ferroelectric substrates such as (001) oriented ferroelectric PMN-PT 17 or BaTiO 3 12,21 .
In order for FeRh to be included in real world devices, traditional scaling arguments 22,23 imply that very thin layers will need to be employed. The majority of previous work published on FeRh has focused on bulk-like properties in films on the order of several 10's of nm and above 7 . However, more recent work has investigated effects in low dimensional epilayers of FeRh [24][25][26][27] where it is known that interface regions exhibit symmetry breaking and low dimensionality that can significantly modify the physics of the system 28 . However, an understanding of the role of interface effects on the properties of the phase transition and how this may limit future FeRh based devices is still lacking.
In this work we demonstrate the methodology needed to create sub-10 nm thin films of FeRh on single crystal MgO substrates, which then allows the properties of FeRh in the thin film limit to be explored systematically. Experimental results are compared with simulations to develop a deeper understanding of the MgO/FeRh interface and demonstrate how the AFM/FM transition changes with varying film thickness which potentially sets a fundamental limit on FeRh layer thickness.
Results: Effect of film thickness
In this work a set of MgO/Fe 50 Rh 50 samples, where the thickness (t FeRh ) ranged from 2.5 nm to 10 nm, was produced. Since the purpose of our work was to investigate the effect of the substrate/FeRh interface, no capping layer was used and hence the samples contained a single MgO/FeRh interface capable of exerting a crystallographic strain on the magnetic thin film. Magnetometry was used to investigate the meta-magnetic phase transition as a function of temperature. Figures 1(a) i and 1(a) ii show the thermal hysteresis loops, from which it is evident that as the film thickness is reduced the phase transition undergoes a significant change, with the nucleation and transition temperatures shifting to higher temperatures. The ferromagnetic nature of the 2.5 nm film is confirmed by Fig. 1(a) iii, which shows the hysteresis (M-H) loop measurement at T = 423 K. The change in the transition temperature and width is summarized in Fig. 1(b), which shows the normalised phase transition width σ Tr /T r , where σ Tr is the standard deviation of the fitted transition width (obtained from dM/dT) and T r is the temperature taken at the mid-point of the transition.
Figure 1. (a) i Thermal hysteresis loops performed with an in-plane field of 1 kOe showing the AFM (green) to FM (red) regions; also plotted is the maximum point of the phase transition M_t^m as a function of the temperature T m at which this occurs; (a) ii the thermal hysteresis for the 2.5 nm film along with the 10-point Savitzky-Golay filtered and drift-corrected data; (a) iii the M-H data for the 2.5 nm film taken at an average temperature of 423 K; the outset (above) shows schematically the magnetic order of the Fe sub-lattices before and after the phase transition, in addition to the induced moment on the Rh atom; (b) the extracted normalised transition temperature distribution as a function of film thickness during the heating and cooling cycles; (c) the ratio of the magnetization before (M 300K ) and after, at the maximum point of the phase transition M_t^m; and (d) M_t^m as a function of film thickness fitted with a quadratic function.
Figure 1(a) i shows that the transition moves to higher values as the thickness is reduced for both heating and cooling cycles. Figure 1(b) demonstrates the asymmetry between the heating (AFM/FM) and cooling (FM/AFM) cycles, which is similar to that typically observed for FeRh films approximately ≤ 50 nm 29 . It is also apparent from the thinnest (2.5 nm) sample that the transition to the AFM phase is incomplete, as indicated by the presence of the FM signal below the transition. Stabilized ferromagnetism at FeRh/MgO interfaces can be understood as arising from anisotropic 30 graded 31 strain fields along the substrate film normal, which act to change the magnetic order of the strained region, in addition to other interface related phenomena 30 . This is demonstrated in Fig. 1(c), where the ratio of the magnetization prior to and directly following the phase transition is plotted as a function of film thickness. This ratio is essentially zero for the films with thickness ranging from 10 nm-5 nm. However, in the thinnest sample the ratio dramatically increases to ≈ 70% due to the relatively large magnetization at 300 K and the reduced magnetization at full saturation. These data agree qualitatively with similar data in the literature 25 . Figure 1(d) shows the maximum value of the magnetisation during the M vs. T curves (phase transition), M_t^m, as a function of film thickness. These data strongly suggest the presence of a region of material that does not contribute to the measured magnetic signal, as the intercept in t FeRh is non-zero. Fitting the experimental data with a 2nd order polynomial gives a value for the thickness of this region of 1.76 ± 0.15 nm.
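As a reading aid (our own phrasing, not an equation from the paper): writing the quadratic fit as

$$ M_t^m(t_{\mathrm{FeRh}}) = c_2\, t_{\mathrm{FeRh}}^2 + c_1\, t_{\mathrm{FeRh}} + c_0 , $$

the non-contributing region corresponds to the positive root t 0 at which M_t^m(t 0 ) = 0, quoted above as t 0 = 1.76 ± 0.15 nm.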
As the magnetic properties of FeRh thin films are intrinsically linked to the crystal structure, X-ray diffraction (XRD) measurements were undertaken to investigate their crystallographic and texture properties. The XRD measurements were performed in the standard ϑ-2ϑ geometry, where the observed diffraction is due to crystallographic planes parallel to the substrate surface, i.e. a perpendicular scattering vector (q z ). Figure 2(a) shows the measured ϑ-2ϑ diffraction spectra for the thickness series.
Figure 2. Structural analysis obtained by XRD in Bragg-Brentano geometry showing: (a) ϑ-2ϑ spectra for the thickness range 10 nm-2.5 nm; the inset peak fits show the shift in the FeRh (002) diffraction peak as the thickness of the film is reduced (the (001) and (002) FeRh peaks are highlighted); (b) the calculated perpendicular strain at 300 K as a function of the film thickness relative to the 10 nm FeRh film (red line is a guide to the eye); and (c) the extracted ordering parameter S calculated from the integrated intensities of the fundamental (001) and superlattice (002) peaks; the shaded regions represent the films that exhibit AFM (green) or FM (red) behavior at 300 K (red line is a guide to the eye).
These data demonstrate a well formed B2 CsCl type structure, shown by the presence of the FeRh (001) fundamental and (002) superlattice peaks. The data also demonstrate that as the thickness of the samples is reduced the relative intensities and the positions of the peaks change. This is highlighted in the inset, where it can be seen that the peak position moves to lower angles as the film thickness is reduced (green line in the inset). The perpendicular lattice constant c (along [001], denoted c 001 ) was extracted from the position of the (002) superlattice peak and compared to the lattice parameter value of ≈ 2.997 Å for the 10 nm thick film, denoted c 0 . This allows the strain along the [001] direction normal to the film to be quantified as a function of film thickness. The strain ε 001 was calculated using equation (1).
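Equation (1) itself is not reproduced in this extract; given the definitions of c 001 and c 0 above, it presumably takes the standard form

$$ \varepsilon_{001} = \frac{c_{001} - c_0}{c_0} \times 100\% , $$

i.e. the perpendicular lattice strain of each film expressed relative to the 10 nm film.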
From Fig. 2(b) it is clear that as the film thickness is reduced, there is a corresponding increase in the perpendicular strain. This increase is monotonic between 10 nm-2.5 nm, with a significant increase in the strain of ≈0.65% for the 2.5 nm film relative to the 10 nm film.
The XRD measurements of the (001) and (002) peaks also allow the degree of chemical order to be estimated. The chemical order parameter S is a measure of the fraction of Fe/Rh lattice sites that obey the ordering conditions of the Fe/Rh atoms 32 . It can be calculated 33 from the ratio I 001 /I 002 , where I 001 and I 002 are the integrated intensities of the fundamental and superlattice peaks respectively. Figure 2(c) plots S as a function of the film thickness. These data demonstrate that as the thickness is reduced, the structural ordering S, within error, remains constant at ≈ 0.81 for films in the range 10-5 nm. However, as the film thickness is reduced to 2.5 nm, S decreases to ≈ 0.68.
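The expression from ref. 33 is likewise not reproduced in this extract. Order parameters of this kind are commonly obtained by comparing the measured superlattice-to-fundamental intensity ratio with that calculated for a fully ordered crystal, for example

$$ S^2 = \frac{\left(I_{001}/I_{002}\right)_{\mathrm{measured}}}{\left(I_{001}/I_{002}\right)_{\mathrm{fully\ ordered}}} , $$

which should be read as an illustrative standard form rather than the authors' exact expression.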
The surface structure of these FeRh thin films was also investigated using atomic force microscopy and transmission electron microscopy (TEM). Figure 3(a-d) shows atomic force microscopy data that reveal a Volmer-Weber island-type growth mode in these films, as previously observed in FeRh films in a similar thickness regime 24,26 . Furthermore, it is found that the island size varies as a function of the film thickness. Figure 3(a-d) demonstrates that initially a nano-island topography exists, where the island size is ≈ 20 nm for the 2.5 nm film, which then grows laterally until coalescence between the islands, resulting in a stripe type pattern for the 10 nm film. This evolution of the film topography can be understood in terms of the relative binding energies between the film and the MgO substrate 24,26 . The insets of Fig. 3 show TEM selected area diffraction (SAED) patterns which reveal a clear cubic epitaxial relationship between the FeRh film and the MgO substrate, where the unit cell of the FeRh is rotated 45° in-plane relative to the unit cell of the MgO, such that MgO [200] is parallel to FeRh [110], shown schematically in Fig. 3(e). As the thickness of the film decreases, the intensity of the FeRh 100 and 010 SAED spots decreases, disappearing entirely for the thinnest sample. We hypothesise that this is not simply due to the decreasing volume, but is additionally attributed to the decreasing chemical ordering of the FeRh film when close to the MgO surface and an increased presence of perpendicular lattice strain in the film.
Atomistic Simulations/Modelling
In an attempt to understand the underlying mechanism for the observed increase in the phase transition temperature with decreasing film thickness, we have performed atomistic spin dynamics simulations of FeRh films of various thicknesses. We have included within the model of the FeRh films a 1.5 nm interface layer to approximate the experimental results. By performing a systematic variation of the possible magnetic orderings within this layer we can obtain an insight into the nature of its magnetic structure. We use the atomistic spin dynamics model 34 that treats each atomic magnetic moment as a localized atomistic spin and is based on a Heisenberg Hamiltonian formalism that, in this case, can be written as a sum of bilinear exchange, four-spin and anisotropy contributions (a schematic form, eq. (2), is sketched after this paragraph). Here J ij and D ijkl are the bilinear and four spin terms that take the values given in Barker et al. 35 . We take the anisotropy constant, K i , from Mancini et al. 36 . These authors measured four anisotropy constants but here we include only the largest term, which was shown to be more than two orders of magnitude greater than the others. The largest energy contributions in the Hamiltonian, eq. (2), are the bilinear and four-spin terms. The bilinear terms have ferromagnetic nearest and second nearest neighbor contributions; the bilinear terms support the ferromagnetic state, whereas the next nearest neighbor terms support antiferromagnetic order. The four spin term gives rise to antiferromagnetic order and has a different temperature scaling to the bilinear term. This allows the phase transition to arise from the competition between the exchange interactions. The temperature dependent, quasi-equilibrium, magnetization is found by solving the many body Landau-Lifshitz-Gilbert equation of motion 37 . In order to create a feasible regime for the numerical simulations used to determine the thermal hysteresis loops, a heating/cooling rate of 50 K/ns was employed. This rate is much faster than used experimentally in magnetometry measurements. To compensate for this, we solve the Landau-Lifshitz-Gilbert equation in the critical damping regime to drive the system more rapidly to the ground state. We have simulated the thickness dependence of FeRh with an interfacial AFM layer, with an FM layer and without any interfacial layer. We determine that for the latter two simulations (with an FM layer and with no layer) the phase transition temperature always decreases as the sample thickness is reduced. This is contrary to our experimental findings; therefore we conclude that neither of these hypothetical situations represents the magnetic state of our samples. Only the simulation with the antiferromagnetic exchange layer, the 1.76 nm region that does not contribute to the magnetic signal as shown in Fig. 1(d), is able to reproduce the increase in the phase transition temperature with decreasing sample thickness seen in Fig. 1(a) i. The numerically determined thermal hysteresis loops 38 calculated for an AFM layer, Fig. 4, show good qualitative agreement with our experimental data in Fig. 1. Our XRD results demonstrate an increasing crystal strain in the thinnest films, believed to be associated with the FeRh/MgO interface. Such strain can significantly alter the magnetic ordering 39 and we propose that this is the reason for antiferromagnetic exchange in the layer near the FeRh/MgO interface.
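Equation (2) is not reproduced in this extract. A schematic form consistent with the description above (bilinear exchange, a four-spin term and uniaxial anisotropy) is, up to prefactor and neighbor-sum conventions,

$$ \mathcal{H} = -\sum_{i<j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j \;-\; \sum_{ijkl} D_{ijkl}\, (\mathbf{S}_i \cdot \mathbf{S}_j)(\mathbf{S}_k \cdot \mathbf{S}_l) \;-\; \sum_i K_i \,(S_i^z)^2 , $$

where the exact neighbor sets and normalizations follow Barker et al. 35 ; this should be read as an illustrative sketch rather than the authors' exact Hamiltonian.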
The presence of antiferromagnetic order in the AFM interface layer favors the formation of the antiferromagnetic phase in the remaining part of the FeRh film. The exchange in this layer must be sufficiently strong to overcome the partial frustration at the interface, as the FeRh moves towards the ferromagnetic state. Otherwise, the transition temperature will decrease with decreasing thickness.
Strain based two-layer model
Atomistic spin dynamics modelling provides a detailed atomic-level simulation of the ultra-thin FeRh films, and our results show that the increase in the transition temperature can be explained by assuming that there is a thin, 1.5 nm, AFM-coupled layer adjacent to the MgO substrate. This AFM coupling acts to support a strained α″ AFM B2 structure, requiring higher temperatures to overcome the phase transition as the film thickness is reduced. Whilst this modelling provides a comprehensive description of the system, for developing technological applications it is also extremely useful to have a simpler description which allows the essential features of the magnetic properties to be understood in a more intuitive manner. Here we present a simple two-layer model based on strain differences. In this model the two layers represent: i) the layer observed in the magnetometry data, approximately 1.76 nm, that did not contribute to the magnetic signal, and ii) the bulk α″ AFM B2 phase which represents t FeRh − 1.76 nm of the film, Fig. 5(a). The lattice constants of the two layers were chosen such that their total weighted average approximately equaled the measured lattice constant for that particular film; these constants were therefore set as 0.3035 nm and 0.299 nm for the α″ AFM strained layer and the α″ AFM bulk-like layer, respectively. The calculated strain was determined according to eq. (1) using the weighted-average calculated lattice constant, normalized to the calculated lattice constant at 10 nm. In this two-layer model we again assumed the AFM strained FeRh layer to be 1.5 nm, to approximate the layer measured magnetically.
Figure 4. Numerically determined thermal hysteresis loops as a function of FeRh thickness (data offset for clarity), with a 1.5 nm antiferromagnetic layer and the total thickness shown by the labels. As the thickness of the layers is decreased there is an increase in the transition temperature, consistent with the experimental measurements of Fig. 1(a). For the 2.5 nm case there is no detectable phase transition.
Figure 5(b) shows the results from the two-layer model for a 1.5 nm strained layer along with the measured strain as a function of thickness for comparison. This simple model agrees well with the experimental data, providing a useful insight into the physical origin of the magnetic behavior. In reality, we expect the situation to be more complex than can be described by just two layers. This is suggested by the increase in the width of the transition as the film thickness is reduced, indicating the presence of additional strain, as shown in Fig. 5(c), where additional peak broadening is observed. In the model, the AFM layer is interpreted as a region with an increased lattice constant where the AFM coupling is strongest. Hence, as the film thickness is reduced, the strained AFM layer makes up a larger fraction of the film, leading to a larger measured lattice parameter.
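As an illustrative arithmetic check of this two-layer picture (our own calculation, using only the layer thicknesses and lattice constants quoted above), the weighted-average lattice constant is

$$ \bar{c}(t_{\mathrm{FeRh}}) = \frac{1.5\,\mathrm{nm} \times 0.3035\,\mathrm{nm} + (t_{\mathrm{FeRh}} - 1.5\,\mathrm{nm}) \times 0.299\,\mathrm{nm}}{t_{\mathrm{FeRh}}} , $$

which gives c̄(10 nm) ≈ 0.2997 nm (matching the measured c 0 ) and c̄(2.5 nm) ≈ 0.3017 nm, so that ε 001 = [c̄(2.5 nm) − c̄(10 nm)]/c̄(10 nm) ≈ 0.7%, in reasonable agreement with the ≈0.65% strain increase reported for the 2.5 nm film.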
Discussion
In this work we have demonstrated that the magnetic properties of FeRh thin films are highly dependent on film thickness. Magnetometry revealed that as the film thickness is reduced from 10 nm to 2.5 nm the first order metamagnetic phase transition undergoes a pronounced systematic variation. We find that there is a large increase of ≈ 40 K in the transition temperature and a broadening of the thermal hysteresis. We attribute the changes in the magnetic behavior, importantly the increase in T r , to the structural changes measured by XRD, where it is shown that as the film thickness is reduced there is an increase in the perpendicular strain along the [001] direction, such that the change in the latent heat energy required to nucleate the phase transition increases. This conclusion is further supported by recent ab initio electronic simulations which show that the tetragonally expanded, body centred tetragonal (bct) strained structure is likely to be the ground state configuration for FeRh 40 . This could explain why we observe an increase in the transition temperature as the FeRh is increasingly strained as the thickness is reduced. We attribute the observed lattice expansion to the presence of a large interface strain field that acts along the [001] direction, which is more significant in the thinnest films of FeRh due to their limited perpendicular extent. The net result of this strain is a film containing a hybrid of two phases of FeRh: the α″ AFM B2 structure, as indicated by the presence of the fundamental (001) peak, and a strained FeRh layer which exhibits a larger AFM coupling, due to the larger lattice parameter, than is exhibited by the majority of the FeRh film. In the thinnest film a small (compared with fully ordered FeRh) ferromagnetic moment persists below the expected transition. The XRD data reveal an increase in the strain as the film thickness is reduced, which is accompanied by a significant reduction in the chemical order parameter S. The ground state energies of FeRh are known to be highly sensitive 30 to the termination layer element 41 , especially in thin films of the order of 5-8 monolayers. No evidence of in-plane strain accommodation, within the detectable limit of ± 0.5%, was found through analysis of TEM selected area diffraction patterns for the films spanning the range 10 nm-5 nm. The lack of detectable in-plane strain accommodation indicates that the FeRh lattice is locked into position along the [110] direction at the MgO/FeRh interface and that the film is allowed to expand only along the [001] direction, consistent with the XRD data. The Volmer-Weber type growth observed by atomic force microscopy offers an ideal platform to study the fundamental limits and physics of FeRh thin films grown in the ultrathin limit. This is demonstrated through the epitaxial growth of the B2 FeRh onto (001) MgO substrates, forming islands of highly-ordered, highly textured B2 CsCl FeRh.
Conclusions
In conclusion, we demonstrate experimentally that reduced dimensional epilayers of FeRh contain a graded perpendicular strain field at the MgO/FeRh interface. We show that this strain affects the nucleation of the first order meta-magnetic phase transition. As the thickness of the films is reduced, the strain has a greater effect, increasing the energy barrier for phase nucleation and resulting in the start of the phase transition being shifted towards higher temperatures. Accompanying this increase, we observe an increase in the transition width and a reduction in the ordering parameter S, which could set a fundamental limit on the thickness of these ordered binary alloys. Using atomistic modelling we demonstrate that the effect of the strain is to induce an antiferromagnetic layer, which supports the antiferromagnetic state in the FeRh layer, resulting in an increased transition temperature with decreasing thickness. These results provide good evidence that the transition temperature and the width of the thermal hysteresis can be controlled using appropriate choices of seed layers, where both the lattice constant and the thermal expansion coefficient must be carefully matched to FeRh. We also introduce a simpler two-layer model that allows the essential features of the interfacial layers to be described in a manner that can be easily understood and incorporated into models of larger systems, for example in simulations of heat assisted magnetic recording or spintronic devices.
Methods
Sample Growth and Preparation. The samples were grown by dc magnetron sputtering using an 11 target AJA sputter system from a Fe 50 Rh 50 alloy target, with a base pressure of better than 5 × 10 −9 Torr, a process pressure of Ar of 3 mTorr and a gun power of 100 W. All samples were deposited on 10 × 10 mm single crystal (001) oriented MgO substrates. A substrate temperature T Sub of 650 °C was used during film deposition, this temperature was then increased to 750 °C for post-deposition annealing. The films were left to cool under vacuum, until ambient conditions were established. Two samples were grown for each set of conditions, one of which was used for the Atomic force microscopy study and X-ray diffraction measurements. The other was then cut into a nominally 8mm disk using a South Bay Technology Model 360 disk cutter. The remaining material was then used to fabricate the Transmission electron microscopy wedge samples.
Magnetic Measurements.
The magnetic properties of the FeRh thin films were measured using a MicroSense model 10 vector vibrating sample magnetometer (VSM). The temperature hysteresis measurements were performed in a temperature range of 298 K-473 K with a step size of 3 K, using a soak period of 60 s and a sampling average of 50. This resulted in an effective temperature sweep rate of 1.17 K/min. A 1 kOe in-plane applied magnetic field was used to saturate the domain structure along the film plane during the measurements, and signals were measured both parallel and perpendicular to this plane. Background subtraction was performed by measuring, under the same conditions, a bare MgO substrate, which was then subtracted from the measurement of the film. A 10-point Savitzky-Golay digital filter was used to enhance the signal-to-noise ratio (SNR) whilst preserving the measured signal, and all analysis was performed on the smoothed data. Volume normalisation of the measured magnetic signal was performed using the nominal thickness and the disk dimensions as measured by a digital Vernier caliper. Error estimates were obtained from uncertainties in the film thickness, disk radius, temperature measurement, and magnetic moment. In order to extract the transition temperature T r , the transition width σ Tr , and the magnetisation at the maximum point of the phase transition, a Boltzmann growth function summed with a quadratic function was used to fit the smoothed data; parameters were extracted either directly from the fits or by differentiation of the processed data, which was then fitted with a Gaussian function. The magnetization at 300 K was extracted using a cubic fit of the data about 300 K. The magnetic hysteresis (M-H) data for the 2.5 nm sample were taken at a temperature of 423 K, using a field step size/sweep rate of 250 Oe and a sampling average of 70.
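The fitting procedure above can be sketched as follows. This is a minimal illustration, not the authors' analysis code, and the file name, initial guesses, and data layout (two columns of temperature and volume-normalised moment) are assumptions made for the sketch.

```python
# Minimal sketch of the M(T) fitting described above (assumed data layout:
# two columns, temperature in K and volume-normalised moment).
import numpy as np
from scipy.optimize import curve_fit

def boltzmann_plus_quadratic(T, A1, A2, Tr, dT, a, b, c):
    """Boltzmann (sigmoidal) growth term plus a quadratic background."""
    return A2 + (A1 - A2) / (1.0 + np.exp((T - Tr) / dT)) + a*T**2 + b*T + c

def gaussian(T, amp, Tr, sigma, offset):
    return amp * np.exp(-(T - Tr)**2 / (2.0 * sigma**2)) + offset

T, M = np.loadtxt("mt_smoothed.dat", unpack=True)   # hypothetical file name

# Fit the smoothed heating branch; p0 is a rough initial guess.
p0 = [M.min(), M.max(), 380.0, 10.0, 0.0, 0.0, 0.0]
popt, pcov = curve_fit(boltzmann_plus_quadratic, T, M, p0=p0)

# Transition temperature and width from the derivative dM/dT fitted with a Gaussian.
dMdT = np.gradient(M, T)
g0 = [dMdT.max(), T[np.argmax(dMdT)], 10.0, 0.0]
gopt, _ = curve_fit(gaussian, T, dMdT, p0=g0)
T_r, sigma_Tr = gopt[1], abs(gopt[2])
print(f"T_r ~ {T_r:.1f} K, transition width ~ {sigma_Tr:.1f} K")
```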
Structural Analysis. Structural analysis was undertaken by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The XRD data were collected on a Rigaku SmartLab X-ray diffractometer equipped with a 9 kW rotating anode and operating at the Cu Kα1 (λ = 1.540593(2) Å) wavelength obtained via a Ge (220) double-bounce monochromator. The stage used in this experiment was an Anton Paar furnace. The step size used in these measurements was 0.01 degrees and the 2ϑ range was 20-70 degrees, scanned at a rate of 0.6 degrees/minute. In order to extract the structural ordering parameter S, the lattice constant c, and peak parameters such as the FWHM, a 10-point Savitzky-Golay digital filter was applied around the (001) and (002) peak positions. A Pearson VII peak function with a linear background was then fitted to the smoothed data, from which the peak position, integrated intensity, and FWHM were determined. Lattice parameter determination, and hence the strain calculation, was carried out using the (002) FeRh peak in order to reduce the effective angular resolution limit, which varies as cot(ϑ), allowing for higher-accuracy measurements, as given by the differential form of the Bragg equation. Errors in the lattice parameter, and hence in the strain, were obtained by propagating the angular resolution, assumed to be approximately 0.03 degrees, with the spectral bandwidth uncertainties. The uncertainty in the ordering parameter was taken from the errors in the areas of the fitted diffraction peaks.
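As an illustration of the quantities extracted from the fitted (001) and (002) peaks, the short sketch below converts a (002) peak position into the lattice constant c and an out-of-plane strain, and applies the usual superlattice/fundamental intensity-ratio estimator for S. The bulk lattice constant, the calculated reference intensity ratio, and the example 2θ value are placeholders, not values taken from this work.

```python
# Hedged sketch: lattice constant c from the (002) FeRh peak via Bragg's law,
# out-of-plane strain relative to an assumed bulk value, and the common
# order-parameter estimator S^2 = (I001/I002)_measured / (I001/I002)_calculated.
import numpy as np

WAVELENGTH = 1.540593      # angstrom, Cu K-alpha1
A_BULK     = 2.99          # angstrom, assumed bulk B2 FeRh lattice constant (placeholder)
RATIO_CALC = 0.55          # calculated (I001/I002) for perfectly ordered FeRh (placeholder)

def lattice_c_from_002(two_theta_002_deg):
    """c = 2 * d_002, with d_002 = lambda / (2 sin(theta)) for the (002) reflection."""
    theta = np.radians(two_theta_002_deg / 2.0)
    d_002 = WAVELENGTH / (2.0 * np.sin(theta))
    return 2.0 * d_002

def perpendicular_strain(c):
    return (c - A_BULK) / A_BULK

def order_parameter(I_001, I_002):
    return np.sqrt((I_001 / I_002) / RATIO_CALC)

c = lattice_c_from_002(61.3)        # example 2-theta value, illustrative only
print(f"c = {c:.4f} A, strain = {100*perpendicular_strain(c):.2f} %")
```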
TEM imaging was performed with an FEI Tecnai T20 TEM using a LaB6 source and an accelerating voltage of 200 kV. The samples were thinned to electron transparency by mechanical polishing of the substrate using an Allied High Tech MultiPrep automatic polishing machine. This uses a variation on the tripod polishing technique, which is required for preserving the crystallinity of the samples and avoiding the surface amorphisation layer associated with ionic polishing techniques. The lattice parameter of FeRh was determined from the SAED patterns using the MgO substrate as a built-in standard. The measurement was optimized by employing a broad parallel electron beam over a large sample area to ensure small, sharp diffraction spots and by taking repeat measurements from each pattern along both <100> and <110> type directions, considering both low-order and higher-order spots. In this way the accuracy of the strain measurement was estimated to be better than 1%. The SAED patterns displayed in Fig. 3 have been cropped from the original data for clarity, and the MgO and FeRh spots are highlighted in green and red respectively.
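The internal-standard step can be illustrated in a few lines: in a SAED pattern the distance r of a spot from the direct beam satisfies r·d = λL (the camera constant), so an unknown d-spacing follows from a reference spot measured in the same pattern. This is a generic sketch under that assumption, not the analysis used here, and the numerical values are placeholders.

```python
# Hedged sketch of using the MgO spots as an internal standard in a SAED pattern.
def d_from_internal_standard(r_unknown, r_reference, d_reference):
    """All spot distances r in pixels (same pattern); d_reference in angstrom."""
    return d_reference * (r_reference / r_unknown)

d_mgo_200 = 4.212 / 2            # MgO a = 4.212 A -> d(200) = a/2
d_ferh = d_from_internal_standard(r_unknown=312.0, r_reference=298.0,
                                  d_reference=d_mgo_200)   # placeholder distances
print(f"d ~ {d_ferh:.3f} A")
```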
Surface Analysis. Atomic force microscopy was used to investigate the surface topography. These measurements were taken on a Bruker Dimension Icon instrument in PeakForce tapping mode, which allowed high spatial resolution imaging of the FeRh surface. The images shown in this study were acquired over a 1 μm area at a resolution of 512 × 512 pixels with a scan frequency of 1 Hz. The peak-force setpoint used in this study spanned the range 1-5 nN. Data processing was used to flatten and subtract background planes from the data. Data availability. The data that support the findings of this study are available from the corresponding author upon request.
"Physics"
] |
Metamaterial-Assisted Illumination Nanoscopy with Exceptional Axial Resolution
Recent advancements in optical metamaterials have opened new possibilities in the exciting field of super-resolution microscopy. The far-field metamaterial-assisted illumination nanoscopies (MAINs) have very recently enhanced the lateral resolution to one-fifteenth of the optical wavelength. However, the axial localization accuracy of fluorophores in the MAINs remains rarely explored. Here, a MAIN with nanometer-scale axial localization accuracy is demonstrated by monitoring the distance-dependent photobleaching dynamics of fluorophores on top of an organic hyperbolic metamaterial (OHM) substrate under a wide-field single-objective microscope. With such a regular experimental configuration, 3D imaging of various biological samples with a resolution of ≈40 nm in the lateral dimensions and ≈5 nm in the axial dimension is realized. The demonstrated imaging modality enables the resolution of the 3D morphology of nanoscopic cellular structures with a significantly simplified experimental setup.
Introduction
Super-resolution fluorescence microscopy, i.e., nanoscopy, has been widely used for examining the nanoscale structure and biological dynamics of living cells and tissues. [1] It overcomes the classical resolution barrier in optical microscopy (i.e., 200-300 nm in the lateral plane and 500-800 nm along the axial direction [2]) by exploiting different physical mechanisms. [3] To improve the lateral resolution, most state-of-the-art super-resolution nanoscopies rely on modifying the temporal dynamics of fluorophores to perform single-molecule localization, such as photoactivated localization microscopy (PALM) [4] and stochastic optical reconstruction microscopy (STORM), [5,6] or on taking advantage of the nonlinear responses of fluorophores to engineer the point spread function, such as stimulated emission depletion microscopy (STED) [7] and ground state depletion microscopy (GSD). [8] Additionally, their axial resolution can readily be enhanced by 4Pi or I5M super-resolution techniques. [9,10] In parallel with these fluorophore-engineering-based super-resolution microscopies, structured illumination microscopy (SIM) has also been developed to obtain 3D super-resolution images [11,12] by sampling the object with nonconventional illuminations. Nevertheless, each super-resolution fluorescence microscopy has its own strengths and weaknesses that need to be considered for specific biological applications.
In a different context, engineered optical materials, known as electromagnetic metamaterials, have been investigated in a wide range of research fields including super-resolution microscopy. [13] These metamaterials have provided unique insights into ways of surpassing the diffraction limit; e.g., the perfect lens made of negative-refraction metamaterials allows an ideal image to be achieved. [14] Moreover, they offer exceptional capabilities to extend the resolution of existing super-resolution techniques. For example, metamaterial-assisted illumination nanoscopies (MAINs) such as plasmonic structured illumination microscopy (PSIM) [15] and localized plasmonic structured illumination microscopy (LPSIM) [16,17] have improved the lateral resolution to ∼λ/5 and ∼λ/6, respectively, compared to the limit of ∼λ/4 in the classical SIM, where λ is the wavelength of illumination. Very recently, the lateral resolution has been further enhanced to ∼λ/15 by using either multilayered hyperbolic metamaterials (HMMs) [18,19] or organic hyperbolic metamaterials (OHMs). [20,21] While there has been significant success in improving the lateral resolution with the MAINs, accurately localizing fluorophores along the axial direction remains a major challenge.
In this work, we demonstrate a 3D MAIN with a spatial resolution of ≈40 nm in the lateral dimensions and ≈5 nm in the axial dimension. The 3D MAIN is implemented based on an OHM substrate with a wide-field single-objective speckle-illumination microscope to extract the lateral super-resolution information through frequency mixing and the axial super-resolution information through the distance-dependent photobleaching dynamics of fluorophores on top of the OHM. We verify the 3D super-resolution capability by obtaining images of biological specimens. Compared with the other 3D super-resolution methods mentioned above, the demonstrated 3D MAIN features simple implementation, low phototoxicity, and no need for special fluorophores (see Table S1, Supporting Information for a comprehensive comparison). Therefore, such an organic metamaterial-enabled super-resolution imaging technique with nanometer-scale axial localization ability could open many exciting possibilities in biological research.
MAIN with Nanometer-Scale Axial Localization Accuracy
The 3D MAIN setup is illustrated in Figure 1A. A transmission speckle-illumination fluorescence microscope system was used in this work, which was also used in the previously reported MAINs [18,21] (see Experimental Section for details). By employing a step motor to stretch the input fiber coupled with a 488-nm laser, various speckle patterns were generated on the backside of an OHM substrate. High spatial frequency (high-k) speckle illuminations on the top surface of the OHM substrate were then created due to coherent superposition of the wavelets resulting from roughness- and/or particulate-induced scattering when the input laser was passing through the OHM (Figure 1B). These dynamically controllable high-k near-field speckles provided ultrafine random sampling of the object and thus allowed for high lateral resolution beyond the diffraction limit. [24] Emission signals from the fluorophores were collected by a 40×/0.6-NA objective lens, and acquired by a scientific complementary metal-oxide-semiconductor (sCMOS) camera after a band-pass filter (520 ± 20 nm). The axial super-resolution information was obtained through the distance-dependent photobleaching decay rate [23] (Figure 1C).
To have a better physical intuition about the 3D MAIN, finite-difference time-domain (FDTD) simulations (see Experimental Section for details) were carried out. Figure 1D shows the simulated electric field intensity distribution of the illuminating speckles; the speckle result on top of a glass substrate is given in Figure S1 (Supporting Information). Compared to the conventional glass substrate, the OHM substrate is able to confine the input light into deep subwavelength volumes, which contributes to the high-k frequency mixing and thus the high lateral resolution. Such a strong light confinement ability of the OHM is due to its hyperbolic dispersion, [20,21] as shown in the inset of Figure 1D. In addition, the hyperbolic dispersion of the OHM leads to an exceptionally high Purcell factor, [22][23][24] which is the key factor for determining the axial position of fluorophores. The combination of the OHM's strong light confinement and Purcell effect presents a unique opportunity to resolve deep subwavelength features of fluorophores in 3D.
Steps to Obtain 3D Super-Resolution Image
The 3D MAIN is applied to obtain 3D super-resolution images of the morphology in close contact areas, e.g., focal adhesions or filopodia, for biological samples attached to OHM substrates, and the procedure flow is summarized in Figure 2. Cos-7 cells (Figure 1E) were grown on top of the OHM substrates and transiently transfected with a plasma membrane or actin localized fluorescent protein (see Figure S2, Supporting Information for details). Using the 3D MAIN system, 2000 different diffraction-limited images of the cells were recorded with spatially varied speckle illuminations. The Blind-SIM reconstruction [25] was then performed sequentially every 100 frames to obtain 20 super-resolution images (≈40 nm in the lateral resolution). With this time series of super-resolution images, the photobleaching lifetime of fluorophores was obtained by fitting the respective intensity decay curves, pixel by pixel. Adding knowledge about the distance-dependent photobleaching lifetime enabled the retrieval of the axial position of the fluorophores [23] (see Figure S3, Supporting Information for details). In the end, a 3D super-resolution morphology image of the cells was obtained. We provide an additional straightforward procedure flow of the 3D MAIN via numerical simulations (see Figure S4, Supporting Information for details).
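The pixel-by-pixel lifetime fitting step can be sketched as below. This is an illustrative mono-exponential version under assumed variable names and frame timing, not the authors' code; the bi-exponential separation of near and far fluorophores discussed later adds a second decay term to the same pattern, and the calibration from lifetime to axial distance is taken from ref. [23] and is not reproduced here.

```python
# Minimal sketch of the per-pixel photobleaching-lifetime step, assuming "stack" is the
# N = 20 reconstructed super-resolution images as a (20, H, W) NumPy array.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, I0, tau, bg):
    return I0 * np.exp(-t / tau) + bg

def lifetime_map(stack, frame_interval_s):
    n, h, w = stack.shape
    t = np.arange(n) * frame_interval_s
    tau = np.full((h, w), np.nan)
    for i in range(h):
        for j in range(w):
            decay = stack[:, i, j]
            if decay.max() <= 0:
                continue
            try:
                p0 = [decay[0], t[-1] / 2.0, decay[-1]]
                popt, _ = curve_fit(mono_exp, t, decay, p0=p0, maxfev=2000)
                tau[i, j] = popt[1]
            except RuntimeError:
                pass          # leave pixels that fail to converge as NaN
    return tau

# tau_map = lifetime_map(stack, frame_interval_s=20.0)   # then map tau -> axial distance
```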
3D Super-Resolution Imaging of Cos-7 Cells
Figure 3 summarizes the 3D MAIN super-resolution imaging results for the fluorescently labeled plasma membrane and actin of Cos-7 cells. Compared to the conventional wide-field fluorescence images (Figure 3A,D) and the super-lateral-resolution images (Figure 3B,E), our 3D MAIN allows for the visualization of a 3D morphology of the cell samples beyond the diffraction limit over all spatial directions (Figure 3C,F). Therefore, the contact areas of the cell membrane and actin can be investigated with greater precision using the 3D MAIN.
The resolution of the obtained 3D MAIN images of the cell samples is measured by the lateral separation and the axial localization accuracy. Figure 3G shows a ≈40-nm center-to-center lateral separation of two intensity peaks, which corresponds to a lateral resolution of ∼λ/13. In general, 3D MAIN uses a bi-exponential function for the axial localization of fluorophores and only examines fluorophores within a distance range of less than 50 nm: in the photobleaching lifetime fitting process, the faster component results from the photobleaching of fluorophores located far away (d > 50 nm) from the OHM surface, while the fluorophores in the proximity of the OHM surface contribute to the slower decaying emissions (see Figure S3, Supporting Information for details).
Estimation of Axial Localization Accuracy
The axial localization accuracy of 3D MAIN is estimated via the standard errors of the mean (s.e.m.), which can be obtained during the intensity decay fitting process. [23] Figure 4A shows the s.e.m. distribution for the 3D MAIN image of the fluorescently labeled actin in a Cos-7 cell (Figure 3F). As observed in our previous study, [23] most regions have a small s.e.m. (< 3 nm). However, the s.e.m. is large (> 3 nm) in certain areas, which is the signature of actin overlapping (Figure 4B). In this case, multi-exponential functions must be used to resolve the overlapped actins; for example, Figure 4C-E shows the 3D MAIN super-resolution image of the cell actin around the large-s.e.m. area with a tri-exponential fitting function, and now the two actins are clearly distinct with a ≈5-nm axial separation and a localization accuracy below 3 nm.
For more intricate fluorophore distributions in complex cellular environments, more photobleaching processes must be considered. However, the resulting multi-exponential fitting typically requires a much denser sampling and is known to be sensitive to noise. In the current 3D MAIN implementation, we applied the Blind-SIM algorithm for image reconstruction, which used 100 raw data frames for each super-resolution image; with a total acquisition of 2000 images before the fluorophores completely bleach, 20 reconstructed intensities per pixel can be used for the photobleaching lifetime fitting. This is the main limitation on the multi-exponential fitting as well as the axial localization accuracy. Recently, the use of deep neural networks (DNNs) for image reconstruction in super-resolution microscopies has demonstrated superior performance in many challenging cases, including reconstruction with limited numbers of input frames. [26,27] Future application of DNN-based image reconstruction could greatly reduce the required number of raw images for each super-resolution frame, and correspondingly provide a much denser intensity decay sampling for a much more accurate multi-exponential fitting.
To ensure accurate imaging, the motor shaking rate should be kept below 1 Hz to avoid image distortion, considering the 200-ms exposure time and the motor startup and braking time. Additionally, the photobleaching rate should be maintained below 0.01 Hz to balance the signal decay time against the total imaging time. The joint optimization of the motor shaking rate, photobleaching rate, and image capture rate toward practical applications is left for future work.
Conclusion
In conclusion, we have demonstrated a 3D super-resolution imaging technique, namely 3D MAIN, that reaches a lateral resolution down to ≈40 nm with an axial localization accuracy of ≈5 nm for biological imaging. By using an OHM substrate, the large-spatial-wavevector illumination overcomes the lateral resolution barrier, while the distance-dependent (with respect to the OHM substrate) photobleaching dynamics of fluorophores enables high-precision axial localization. The 3D MAIN provides a powerful tool to unveil subwavelength-scale cellular architectures and could significantly impact cell biology. We envision that this nanoscopy technology could dramatically improve our understanding in many active research areas, for example, membrane contact sites such as the endoplasmic reticulum-plasma membrane contact sites, which play important roles in cell signaling. [28,29]
Experimental Section
Optical Set-Up: An inverted fluorescence microscope (Olympus IX-83) was used with a 488-nm excitation laser (Coherent Genesis MX488-1000 STM) coupled through a multimode fiber and a microscope condenser. [18] A step motor was applied to mechanically shake the multimode fiber to generate the required random laser speckles. The speckle patterns at the fiber exit were imaged on the backside of an OHM-coated microscope slide, projecting diffraction-limited speckles. The speckle intensity onto the OHM was ≈50 W cm −2 . These diffraction-limited speckles were converted by the OHM substrate to sub-diffraction-limit speckles on top of the OHM, which were used to illuminate the fluorophores. The fluorescence signals were collected by a sCMOS camera (Hamamatsu Orca Flash 4.0 v3) after an emission filter (520 ± 20 nm).
OHM Film Fabrication and Optical Characterization: A solution was prepared by dissolving 100 mg of >98% regioregular head-to-tail P3HT (rr-P3HT) molecules (Sigma-Aldrich, average molecular weight ≈87,000 g mol −1 ) in 1 mL of chlorobenzene solvent, and then heated at 50 °C for 3 h. Subsequently, this rr-P3HT solution was spin-coated onto plasma-cleaned glass substrates. The thickness of the produced rr-P3HT OHM film was obtained from variable angle spectroscopic ellipsometry (VASE) measurements, which were also used to determine the optical permittivity of the OHM film as previously reported. [20] Cos-7 cells were cultured on the OHM film substrate and transiently transfected (see Figure S2, Supporting Information for details).
FDTD Simulation of High-k Speckle Patterns: 3D FDTD simulations were conducted (Ansys Lumerical FDTD). The experimentally obtained permittivity of the OHM was used in the simulations. [20,21] To calculate near-field intensity distributions in the vicinity of the OHM-coated glass, a 2D xy-plane power monitor and a 2D xz-plane power monitor were placed. Perfectly matched layers (PMLs) were used along the x, y, and z directions, and the simulation domain size was 2 μm × 2 μm × 600 nm. Minimum mesh sizes of 5, 5, and 2 nm were set along the x, y, and z directions, respectively. Random speckles resulting from scattering due to impurities, surface roughness, and grain boundaries on the bottom surface of the OHM were generated using 100 randomly oriented point dipole sources with a wavelength of λ = 488 nm and random initial phases at the substrate bottom within an area of 1 μm × 1 μm; therefore, the average distance between two adjacent dipoles was ≈100 nm. The electromagnetic waves generated from these randomly polarized dipole sources pass through the OHM layer (180 nm in thickness), interfering with each other to produce high spatial-frequency speckle patterns on the top surface of the OHM. The cut-off spatial frequency of the speckle patterns was determined using the fast Fourier transform in the spatial-frequency domain.
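For illustration only, the cut-off determination can be sketched as below for a simulated intensity map sampled on a uniform grid. The radial-averaging scheme and the 1% power threshold are assumptions made for this sketch, not parameters taken from the paper.

```python
# Rough sketch: radially averaged power spectrum of a 2D speckle intensity map
# ("intensity", sampled every dx metres) and a threshold-based cut-off frequency.
import numpy as np

def radial_power_spectrum(intensity, dx):
    F = np.fft.fftshift(np.fft.fft2(intensity - intensity.mean()))
    power = np.abs(F)**2
    ny, nx = intensity.shape
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    FX, FY = np.meshgrid(fx, fy)
    k = np.hypot(FX, FY)
    bins = np.linspace(0, k.max(), 100)
    idx = np.digitize(k.ravel(), bins)
    radial = np.array([power.ravel()[idx == m].mean() if np.any(idx == m) else 0.0
                       for m in range(1, len(bins))])
    return bins[1:], radial

def cutoff_frequency(intensity, dx, threshold=0.01):
    k, p = radial_power_spectrum(intensity, dx)
    p = p / p.max()
    above = np.where(p > threshold)[0]
    return k[above[-1]] if above.size else 0.0
```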
Figure 1. MAIN capable of nanometer-scale axial localization. A) Schematic drawing of the wide-field speckle-illumination microscope used in 3D MAIN. A time-varying speckle illumination was produced by shaking a multimode fiber with a step motor, and then directed to the stained bio-sample through the underlying OHM substrate. The excited fluorescence signals were collected by an objective lens and imaged with a sCMOS camera. B) High-k illumination enabled by the OHM. Optical modes with a high k in the OHM were excited by the incident speckle beam from its bottom surface, resulting in randomly distributed hot spots with a subwavelength volume on its top surface. A lateral super-resolution image with such a random high-k illumination was reconstructed based on the Blind-SIM algorithm. C) Nanometer-scale axial localization enabled by the OHM high-k illumination. Since there is a one-to-one relationship between the distance of a fluorophore (with respect to the OHM top surface) and its photobleaching dynamics, the axial super-resolution information was obtained via this distance-photobleaching relationship based on a series of lateral super-resolution images. D) 3D OHM MAIN. Randomly distributed near-field high-k speckles generated in the OHM transfer the fine structural information of fluorophores, e.g., the lateral (Δr) and axial (Δz) separations of two fluorophores, into the far-field microscope. E) Typical scanning electron microscope (SEM) image of a bio-sample (Cos-7 cells) prepared on top of an OHM substrate.
Figure 2. Procedure flow to extract both the lateral and axial super-resolution information in 3D MAIN. First, 2000 diffraction-limited images are recorded with an exposure time of 200 ms. Then, N = 20 lateral super-resolution images are reconstructed from every L = 100 successive diffraction-limited images using the Blind-SIM algorithm. Owing to the photobleaching, the emission intensity of the fluorophores drops exponentially over time; the axial super-resolution information is then obtained by fitting the intensity decay curve I ij for each pixel (i, j) of the 20 reconstructed lateral super-resolution images. Scale bars: 5 μm.
Figure 3. 3D MAIN super-resolution imaging of fluorescently labeled plasma membrane and actin of Cos-7 cells. A,B) Conventional wide-field fluorescence image (A) and MAIN super-lateral-resolution image (B) of the fluorescently labeled plasma membrane of a Cos-7 cell. C) 3D MAIN super-resolution image of the fluorescently labeled Cos-7 cell membrane. Scale bars: 2 μm. D,E) Conventional wide-field fluorescence image (D) and MAIN super-lateral-resolution image (E) of the fluorescently labeled actin of a Cos-7 cell. F) 3D MAIN super-resolution image of the fluorescently labeled actin of the Cos-7 cell. Scale bars: 1 μm. G) Intensity profiles along the line 'G (1)' in (D) and the line 'G (2)' in (E), respectively. A ≈40-nm center-to-center lateral separation of two intensity peaks is clearly visible.
Figure 4. Estimation of axial localization accuracy in 3D MAIN. A) Distribution of the standard errors of the mean of the calculated distance values, using a bi-exponential fitting function, for the 3D MAIN image of the fluorescently labeled actin in a Cos-7 cell shown in Figure 3G. Scale bars: 1 μm. B) Conventional wide-field fluorescence image (top panel) and MAIN super-lateral-resolution image (bottom panel) of the cell actin around the area with a large s.e.m. Crossing of actins is visible in the MAIN image, but how these actins overlap is unknown. Scale bars: 100 nm. C-E) 3D MAIN super-resolution images (C,D) and height profile (E) of the actin around the large-s.e.m. area using a tri-exponential fitting function. The axial positions of the overlapped actins are resolved.
"Physics",
"Materials Science"
] |
Formative E-Assessment of Schema Acquisition in the Human Lexicon as a Tool in Adaptive Online Instruction
This chapter presents a comprehensive method of implementing e-assessment in adaptive e-instruction systems. Specifically, a neural net classifier capable of discerning whether a student has integrated new schema-related concepts from course content into her/his lexicon is used by an expert system with a database containing natural mental representations of course content obtained from students and teachers for adapting e-instruction. Mental representation modeling is used to improve student modeling. Implications for adaptive hypermedia systems and hypertext-based instruction are discussed. Furthermore, it is argued that the current research constitutes a new cognitive science empirical direction to evaluate knowledge acquisition based on meaning information.
Introduction
A significant number of cognitively oriented adaptive hypermedia systems (AHSs) for learning have been developed. Due to the alternative formative character of AHSs, emphasizing learning processes during learning [1], many of these systems are developed mainly by considering users' cognitive styles, learning styles [2][3][4][5], previous knowledge before or during AHS learning [6][7][8], or intellectual [9].
Typically, an AHS approach demands two types of information processing to achieve two goals [10]. One process consists of gathering information (dependent variables typifying personal and psychological attributes of a user [6][7][8]), which is used to assign a user to one of several learner models (cognitive classification). Based on the results, a second process adapts the hypermedia instruction (e.g., adaptive content selection, adaptive presentation, and adaptive navigation [11]). Figure 1 illustrates these processes.
Note from Figure 1 that achieving the second goal depends completely on achieving the first goal, that is, selecting a learner model. Thus, any weakness in achieving proper student classification demands urgent corrective behavior within the adaptation process to accurately infer the user goals and thus offer navigation support and content adaptation during instruction. Unfortunately, more often than not, the construction of user models is based on weak data collection (descriptive and/or psychological data), and this weakness leads to the implementation of mechanisms to enhance adaptation processes by minimizing the cost of adaptive behavior and increasing user control over adaptation [12], improving [13], and addressing user variability [14], etc. In other words, this process is driven by the corrective adaptivity of the system rather than adaptability in which the user can consciously participate in the adaptation [15].
It is assumed that weaknesses in student modeling frequently stem from using cognitive tools that are controversial, either poorly structured or poorly developed, and many of these tools are notorious for lacking robust empirical support (e.g., learning style/cognitive style instruments [16]). Generally, these tools do not have a good reputation in cognitive science.
Thus, from a cognitive science point of view, there is clearly much to say about student modeling. As we will discuss in the next sections, by digital implementation of more sophisticated cognitive science tools to study human learning and by introducing a third goal regarding assessment in typical adaptive instruction systems, research directions can be expanded to provide innovation in student modeling to enhance AHS.
Considerations on cognitive science of human learning and cognitive modeling of students
Common sense in formal education assumes that the better we understand learners' cognitive functioning during learning, the more effective instruction can be. Within the educational technology field, many intelligent tutoring systems (ITSs) claim to do this by modeling the way students make decisions [17] and solve problems while they are socializing [18], or by considering users' emotional states during instruction [19]. Even though this approach has many positive implications for research and development in cognitive ergonomics and engineering psychology [20], it is our strong belief that the current state of ITSs is still far from inheriting the positive implications of cognitive science research advances. For instance, AHS innovation, instead of drawing on cognitive research to innovate student modeling and improve error-type analysis of learner performance during learning, has rested on corrective adaptability of instruction to support learning outcomes [17]. This way of evaluating a learner's performance resembles summative assessment of learning, where the goal is to specify what a student does not know at the end of a course, rather than knowing what a student knows during and after learning, as in formative assessment approaches [21]. This approach to evaluating learning can be extrapolated to many fields of digital educational technology [22,23].
In line with the goals of this chapter, and to illustrate this point in more depth, we next introduce a discussion within the context of adaptive hypermedia systems (AHSs) to emphasize how educational technology development is strengthened when contextualized by basic cognitive research. Here the main goal is to argue in favor of: a. Innovating educational technology by constantly binding basic cognitive science research advances to the development of educational technology.
b. Considering new empirical directions to integrate assessment of learning and instruction into single parallel formulations to support adaptability of instruction.
Thus, the following description of a formative-oriented AHS computational system is presented as an example of how improvement opportunities are available for ITS research and development. This is achieved by focusing our attention on the human lexicon as the starting point for developing an AHS that supports constructive learning outcomes.
The human lexicon as a potential cognitive construct to implement AHSs
The human mental lexicon is considered a memory capacity to store and meaningfully organize single concepts by connecting them through different types of semantic relations (a mental dictionary). This definition of one of our mental capacities was first put forward by Treisman in 1961 [24], and it is considered a central cognitive structure for language description and human learning (e.g., learning a language).
As has been the case for most cognitive constructs introduced to explain the human mind, considering a human lexicon as part of our cognitive architecture has not been an easy task. After heated academic debates, several views (cognitive models) regarding the lexicon have emerged, leading different research groups to subscribe to different theoretical positions, ranging from the possibility of a mental dictionary-like system to the possibility of a no-lexicon view. Thus, currently, three dominant views prevail to guide academic research on this topic [25]: the multiple-lexicons view, implying different stores for different lexical information such as sensorimotor information, emotion, or spatial information [26,27]; the single-lexicon view, where all lexical levels are integrated [28]; and the no-lexicon view (lexical knowledge without a mental lexicon [29]).
In spite of the controversy regarding this topic, the concept of a human lexicon has been appealing enough to attract attention from educational technology developers. For instance, Salcedo et al. [30] presented an adaptive hypermedia model (LEXMATH) that can be used as an opportunity to illustrate this point. Specifically, these authors argued that by considering a student's lexicon, learner modeling is optimized. In this AHS model, students' lexicons regarding general or specific topics are obtained through surveys and are maintained in a database. An ideal lexical domain is obtained from teachers, and during instruction, an expert system optimizes learning paths by adapting navigation support and teaching activities to minimize differences between students' lexicons and the provided ideal lexical domain in the field of mathematics.
These types of models point to a more robust direction for innovating student modeling, since they empower AHS technology with a developed, albeit still incomplete, theoretical framework regarding human mental representation. However, notice that LEXMATH does not subscribe to a specific view or specific model within an academic view of the human lexicon. This model seems to rest on a commonsense, dictionary-like view of the human lexicon. This excludes the system from using robust methodology to assess specific assumptions of lexical behavior (especially regarding learning) promoted by a cognitive model. Rather, LEXMATH again describes a kind of error-type analysis approach to minimize differences between an expert and a learner, where lexical knowledge acquisition (modification) uses indicators unfamiliar to robust cognitive lexicon views. As pointed out before, this is not so uncommon, since this approach of supporting cognitive-based instruction by minimizing differences is frequently used inside modern approaches to ITS or AHS.
As we will describe next, alternative new empirical research directions that impose a stronger connection between basic cognitive research and educational technology implementation empower innovation without discarding the established tools of ITS and AHS development. Specifically, to continue with our lexicon model discussion, we describe a cognitive constructive-chronometric system to assess human lexicon-oriented learning while at the same time improving student modeling to minimize corrective adaptability.
Interestingly, this model subscribes to the third view of the human lexicon, which is the no-lexicon view. As expected, whenever an academic effort subscribes to a specific view, it immediately inherits academic criticism from alternative views. However, by taking this step forward, some advantages are obtained: a. Methodology is obtained to measure specific assumptions about how lexical knowledge is acquired.
b. In contrast to other alternative lexicon views, the no-lexicon proposal is computationally plausible in light of recent advances in computer science for modeling learners, namely connectionist models of mental knowledge representation.
c. Most importantly, as will be described, the use of artificial neural net classifiers (ANNs) allows researchers to deal with cognitive theoretical developments suggesting that schemata for assimilating new knowledge do not really exist in memory; rather, knowledge schemata emerge as required for learning and thinking purposes.
Finally, by embedding these cognitive precepts about the human lexicon into AHS development, a prominent role is given to articulate dynamic assessment of learning to adapt and support digital instruction. This requires another way to explore adaptability of an ITS.
Adaptive instruction and the constructive/chronometric e-assessment approach
Instruction and assessment are integral parts of teaching to improve students' experiences [31]. For instance, learning-oriented assessment (also referred to as formative assessment or assessment for learning) requires active participation of students in using feedback and self-monitoring from instruction and assessment as keys to successfully acquiring appropriate new knowledge from a course [32]. It is assumed that assessment provides explicit and implicit messages to facilitate a student's academic performance.
Let us first present a general framework of implementing dynamic assessment inside the context of AHS development. In this proposal, assessment is assumed to exert effects at various levels, and it constitutes by itself a domain and a goal. Figure 2 illustrates this point.
An e-assessment system that complies with these evaluation requirements, implementation viability, and cognitive science principles was first presented by Morales and colleagues [33][34][35].
At the core of their assessment system (EVCOG, for cognitive evaluator), there is a neural net classifier capable of identifying students who have integrated schema-related concepts from a school course into their lexicon (these schema-related concepts are obtained by using natural semantic nets; Figure 3A). The neural net classification capacity is based on the cognitive fact that once a student has integrated new knowledge into her/his long-term memory, a semantic priming effect (in a semantic priming study) is obtained from schema-related words only if meaningful long-term learning has occurred (single-word schemata priming [36,37]). Thus, the classifier uses a student's schema-related word-recognition times to assess whether the student has integrated new knowledge into long-term memory, has retained information in her/his short-term memory (e.g., to pass a test), or has acquired no new schemata at all. Figure 3B shows the role of this net classifier within a cognitive constructive-responsive/chronometric assessment of learning [38,39].
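A hedged sketch of this classification step is given below; it is not the EVCOG implementation, and the file names, network size, and train/test split are illustrative assumptions. Each row of X is one student's vector of recognition latencies for the selected schema-related word pairs, and y marks whether long-term schema integration occurred.

```python
# Illustrative sketch of training a classifier on word-recognition latency patterns.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.load("latency_patterns.npy")   # shape (n_students, n_word_pairs), hypothetical file
y = np.load("labels.npy")             # 1 = successful integration, 0 = unsuccessful

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))
```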
To train the classifier, hundreds of successful and unsuccessful learners' schema-related word-recognition patterns are presented to it. Achieving this first requires obtaining schema-related concepts from students before and after a course (i.e., after learning). Figure 4A shows a computer system for obtaining students' and teachers' concept definers for target schema-related concepts using a technique called natural semantic net mapping. This technique produces definitions (using single concept definers such as nouns and adjectives) for represented objects based on their meanings and not on free associations or pure semantic category memberships [40,41].
In this technique, the 10 highest-ranked definers of each target concept (SAM group) can be used to draw a semantic net, if desired. Some concepts serve as definers for more than one target concept. These are common definers, and other definers and target concepts are interconnected through them. Numerous common definers tend to emerge whenever there are close links among target concepts (schemata). These concepts can be used to assess whether a student integrated new information into her/his long-term memory using digitized cognitive semantic priming techniques (chronometric evaluation; B). Word-recognition latency patterns are used by the neural net to discriminate between successful and unsuccessful learners.
A constraint satisfaction neural net (CSNN) is developed from concept co-occurrence across SAM groups, such that the probability that two concepts co-occur or do not co-occur becomes their association weight in a symmetric matrix, with k possible connections among N concepts such that k = N(N − 1)/2. Thus, the association weight between two concepts (W) is calculated using a derivative of the Bayesian formula (Eq. (1)), where X represents one concept in a pair of concepts to be associated, and Y represents the other concept. In determining association values among concepts in a natural semantic network such as the one selected earlier, the joint probability value p(X = 1 and Y = 0) can be obtained by calculating how often the definer X of a pair of concepts appears in a list of definers in which Y does not appear, and likewise for the other probability values. These association values are used as an input matrix to the CSNN to simulate schemata of interest [42] (Figure 4B and C), and a large set of metrics for concept organization and structure can be obtained [41]. From the schema simulations and the semantic net analysis, schema-related word pairs are selected to implement semantic priming studies. Thus, students' word-recognition latencies to these word pairs are presented to the classifier for student classification (Figure 4, left).
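Since Eq. (1) itself is not reproduced in this excerpt, the sketch below uses the log-odds co-occurrence weight commonly employed for constraint-satisfaction networks as a plausible stand-in; the data layout (a binary definer-presence matrix over SAM groups) and the smoothing constant are assumptions made for the illustration.

```python
# Hedged sketch of building a symmetric weight matrix from definer co-occurrence.
import numpy as np

def cooccurrence_weights(presence, eps=0.5):
    """presence: binary matrix (n_sam_groups, n_concepts); entry is 1 if the concept
    appears as a definer in that SAM group. eps is a smoothing count to avoid log(0)."""
    n_groups, n_concepts = presence.shape
    W = np.zeros((n_concepts, n_concepts))
    for i in range(n_concepts):
        for j in range(i + 1, n_concepts):
            x, y = presence[:, i], presence[:, j]
            p11 = (np.sum((x == 1) & (y == 1)) + eps) / (n_groups + 2 * eps)
            p00 = (np.sum((x == 0) & (y == 0)) + eps) / (n_groups + 2 * eps)
            p10 = (np.sum((x == 1) & (y == 0)) + eps) / (n_groups + 2 * eps)
            p01 = (np.sum((x == 0) & (y == 1)) + eps) / (n_groups + 2 * eps)
            # Positive weight if the concepts tend to co-occur, negative otherwise.
            W[i, j] = W[j, i] = np.log((p11 * p00) / (p10 * p01))
    return W
```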
Empirical support for e-assessment based on the human lexicon
To better describe these concepts, we will describe data resulting from application of constructive-chronometric assessment in an undergraduate psychology course on the computational mind. Figure 5A and B shows partial instances of definitions obtained from a set of 10 schema-related target concepts relevant to this course before learning (Figure 5, top panels) and after learning (Figure 5, bottom panels). The following target concepts were provided by the teacher of the course: mind, computation, von Neumann, Turing machine, connectionism, memory, computational mind, working memory, long-term memory, and HPI (human information processing). In developing a natural semantic net, participants are allowed 60 seconds to provide concept definers. Then, following each definition task, they rank each concept definer (between 1 and 10) in terms of how well they define the target concept. After the system has randomly presented target concepts, it calculates the 10 highest-ranking definers for each target (SAM group; Figure 5A). For later consideration in building an expert system, note that Figure 5A shows that the M value corresponds to the sum of ranks assigned by all the participants to each definer concept. This value is a measure of the definition relevance for the target concept.
Other values, such as the density of the net (G value) and the richness of the definers for each target (J value), are also calculated [40].
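The SAM-group bookkeeping described above reduces to a few lines of code, sketched below under a hypothetical data layout in which each participant's response for one target concept is a mapping from definer to assigned rank.

```python
# Minimal sketch of computing a SAM group and M values for one target concept.
from collections import defaultdict

def sam_group(responses, group_size=10):
    """responses: list of per-participant dicts {definer: rank (1-10)}."""
    m_values = defaultdict(int)
    for participant in responses:
        for definer, rank in participant.items():
            m_values[definer] += rank          # M value = sum of ranks across participants
    ranked = sorted(m_values.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:group_size]                 # the SAM group: highest-M definers

responses = [{"symbol": 10, "process": 8, "memory": 6},
             {"symbol": 9, "brain": 7, "process": 5}]
print(sam_group(responses))   # [('symbol', 19), ('process', 13), ('brain', 7), ('memory', 6)]
```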
Note also in Figure 5B that, before learning (top panel), some of the targets lacked complete definitions. Moreover, fewer common definers are obtained before learning. This lack of connectivity is reflected when a weight association matrix among concepts is calculated using Eq. (1) (Figure 5C, top). This is not the case for the symmetric weight association matrix obtained after learning, shown in Figure 5C (bottom). In turn, these connectivity matrices can be used as input to many visualization tools, as shown in Figure 5D. Before learning, the visual concept organization allows one to immediately note that all the definers were arranged in two main groups connected by a single central one (PROCESSES). In contrast, at the end of the course, the net consists of a more sophisticated concept organization resembling a small-world structure characterized by a set of highly clustered neighborhoods and a short average path length, in which a small number of well-connected nodes serve as hubs. This net is a normal result of learning when using this technique [41].
This approach to evaluating learning emphasizes two aspects. First, the semantic net focuses on identifying meaning formation. For instance, at the end of the course, students centered their meaning formation around the core concepts of the computational mind: symbol; mind and brain; and the leading figures in this academic field, Turing and von Neumann. The teacher confirmed that this was the intent.
The weight matrix is used by a CSNN to simulate schema behavior, as shown in Figure 6.
Here, there is 100% activation of the SYMBOLS input. As a result, MIND and DURATION were the only output activated concepts. When the students were asked about this result, they argued that according to what they learned from the course, a core concept in cognitive theory is that all mind activities occur in time, even symbol processing and construction. This schema acquisition was also intended by the teacher. In addition, note from the surface plot in Figure 6 that balanced positivity and negativity of weight association values (from +10 to −60) enhanced correct discrimination among the schema-related concepts.
By selecting schema-related concepts from the computer models and semantic definers relevant to meaning formation (e.g., emergence of common definers in SAM groups or concepts relevant to a schema), schema word pairs can be selected to perform a semantic word priming study.
Schema-related concepts following the course involve longer word-recognition times since a whole schema is activated (not simply a lexical association).
To illustrate this point, Figure 7 shows interaction graphs describing a frequent result for schema-related word-recognition times. Figure 7A shows that at the beginning of the course, schema-related word pairs are not significantly differentiated from other semantically related word pairs. This is not the case at the end of the course, where students required significantly more processing time to recognize schema-related words (schemata priming). This schema priming is assumed to occur when schema information is stored in long-term memory, which likely explains why the neural net (after training) is useful for discriminating between successful and unsuccessful learners. This is relevant because even when we cannot see the existence of a schema in the lexicon, we can track its footsteps as evidence that long-term learning has occurred. On the other hand, it is not necessary to specify a lexicon; it is enough to say that lexical information is obtained and organized as proposed by a no-lexicon view. Figure 7B shows that this effect might vary depending on the knowledge domain and the effect of instruction [34,37,39].
Several possibilities are introduced by considering a cognitive assessment of learning like the one just described. Let us consider a study [43] carried out with 60 first-semester bachelor engineering students who took a course on computer usability. Here, 15 students failed to pass the course, but after a post-season corrective course, they succeeded and achieved the course credit. Figure 8 shows the mental concept representations obtained by constructive-chronometric assessment of learning before and after the corrective course.
Note that at the beginning of the course, the EVCOG system shows that students have a mental representation with separated concept clusters (A). This leads to confusion regarding the meaning of a topic. After the corrective course, students presented a single unified knowledge schema in which DESIGN, INTERFACE, and USER showed meaningful centrality in the knowledge representation (B). The teacher in charge of the course argued that after looking at the system's cognitive report at the beginning of the course, she aimed for meaningful integration of topics by having the concept of DESIGN as the main reference for meaning formation. Chronometric assessment provided support for this learning process, since schemata priming was not obtained at the beginning of the learning period but appeared at the end of classes, supporting the idea that students not only successfully passed the course but also obtained long-term retention of schemata. Is it possible to obtain the same results with an ITS/AHS? Notice that now the problem is not the cognitive modeling of students but of instructors. As will be described next, current academic efforts are being made in this direction.
Adaptive e-instruction through e-assessment in e-learning environments: a proposal
Up to this point, the discussion of applying e-assessment to navigation support and content adaptation has focused only on AHSs. Note, however, that the same arguments can be applied to alternative e-instruction systems. For instance, adaptive navigation support for AHSs or in e-instruction can be implemented under the same assumptions by considering the model presented in Figure 9.
The student model
In a functional adaptive instruction system such as the one shown in Figure 8, the student model is a domain-specific well-trained classifier. Empirical research in several knowledge domains has shown that this type of classifier yields successful classification in 95-98% of instances [38].
Expert model: determining concept organization of meaning formation
While defining a target concept (in natural semantic net mapping), after a student decides which is the highest-ranking concept definer (indicated by its M value), the next-highest-ranked concept from the set of definers depends on the concept frequency (F) in the definition task and the time required to produce it, that is, its interresponse time (ITR; see the right column in the SAM group in Figure 5A). Thus, the M value of each definer can be correctly predicted (98% accuracy) using Eq. (2) [41], where A, B, C, and D are constants obtained from fit analysis. Here, word position in a SAM group is needed only to identify which definer ranks higher, since the concept frequency has already been used to filter the SAM group.
Consider the case of a user searching for information on a web page (information foraging). This page must contain linked concepts sufficient for meaning formation (obtained using a natural semantic net). Then, after calculating the M values of selected concepts (considering the time taken by the user to select an available concept; ITR), a comparison can be made to check if the M values corresponding to searching for information on a web page correspond to a proper path of optimized M values corresponding to ideal meaning formation [44].
To illustrate this point, consider Figure 9. Here, a user has an initial representation state or initial meaning of web contents. This initial user conceptual organization is not assumed to be identical to the concept organization in a web page (isomorphic) but homomorphic. Information foraging through time (R) is based on a user's cognitive strategy to obtain meaning from contents. Thus, transforming conceptual organization (T) and acquiring new concepts serve to obtain a valid homomorphic representation of contents such that T'R = RT. A transformation path can be specified as in Eq. (3), where O(t) relates to a specific conceptual organization (defined by natural semantic net parameters), which in turn defines R. Furthermore, by using some basic notation from automata theory [45], it is possible to specify a transition rule from Eq. (3) as in Eq. (4), where meaning formation implies regulation of a transition rule ∂′(q, w) = T′ (Figure 10).
For example, consider a set of the 10 highest-ranked concepts that provide most of the connectivity in a natural semantic network, [q0, q1, q2, q3, q4, q5, q6, q7, q8, q9]. Here, proper meaning formation requires going from q0 to q9. Now suppose that after information foraging, a user produces a transition set like [q1, q6, q0, q3, q5, q7, q4, q9, q8, q10], such that: natural semantic net ∩ information foraging = [q0, q1, q3, q4, q5, q6, q7, q8, q9]. Since the user's exploration of contents only missed one relevant semantic concept (q2), it is assumed that the user obtained a valid homomorphic mental representation of the meaning implied by the web page, even when the concept path position (estimated M value) is high (by considering Eq. (2)).
Figure 10. Building a mental model from web page contents through meaning formation [44].
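The coverage check just described can be expressed in a few lines; the 90% threshold used here to accept the representation as homomorphic is an illustrative assumption, not a criterion stated in the text.

```python
# Toy illustration: compare the concepts a user visited while foraging with the
# top-ranked concepts of the natural semantic net.
semantic_net_top = {f"q{i}" for i in range(10)}                # q0 ... q9
foraging_path    = ["q1", "q6", "q0", "q3", "q5", "q7", "q4", "q9", "q8", "q10"]

covered = semantic_net_top & set(foraging_path)
missed  = semantic_net_top - set(foraging_path)

coverage = len(covered) / len(semantic_net_top)
print(f"coverage = {coverage:.0%}, missed = {sorted(missed)}")   # coverage = 90%, missed = ['q2']
homomorphic = coverage >= 0.9   # assumed acceptance threshold
```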
The expert system control mechanisms adapt navigation links that minimize differences between information foraging values and meaning formation (determined by transition rules specified by Eq. (4)), as well as by using the neural net classifier information (successful vs. unsuccessful integration of information in the user's lexicon).
Expert model: inference engine
The expert system includes a PROLOG backward-chaining inference engine that allows the system to build a valid "mental representation (GOAL)" based on natural semantic net data structures in the knowledge domain (templates) by request of a decision rule. This rule system considers whether schema priming for a specific module has been achieved by consulting the neural net classifier and by comparing the obtained path M values against an ideal descending organization of M values. If a semantic effect is not obtained, then the following events occur: 1. The subsequent knowledge modules remain disabled.
2. The system instructs the inference engine to use the database to construct the closest mental representation based on the user's concept path (link set). Then, the navigation is modified based on the template that best approximates the user's initial exploration, and the user is prompted to try again.
Currently, research is being performed to achieve a dynamic optimization of search information by adapting navigational support based on minimization of differences between meaning values of the user and knowledge domains rather than waiting for the user to complete a knowledge module.
Knowledge domain
An adaptive e-instruction system (AHS/hypertext) within the present scope requires a database containing natural semantic networks similar to those described earlier. Here, templates are data structures containing SAM groups and their semantic values in which information can be accessed by a PROLOG-based inference engine. As the sample for developing these SAM groups is enlarged, better predictions for adapting navigational support can be achieved.
Interface
As shown in Figure 11A, when a student begins a learning session, she/he is presented with a menu of options of the course content.
Before and after exploring each module, a semantic priming study must be performed to provide the expert system module with information for adapting the navigation support by modifying the link structure in a module based on a meaning formation template. After a module or an entire course is completed, the user can obtain a cognitive performance report (Figure 11B). This report serves as an explicit assessment result that empowers a user to adapt the searching of content (encouraging adaptability), whereas the link modification is an implicit message of corrective adaptability to improve proper meaning formation of the content. Selection of learning activities, either using hypermedia or by modification of knowledge content, depends on the expert model's evaluation of the user's meaning formation.
Conclusions
The goal of the proposed system is to promote assessment tasks as learning tasks, student involvement in assessment, and forward-looking feedback in adaptive e-instruction systems [32]. This system reduces the enormous delay in e-assessment innovation: e-assessment has been limited to mere digitization of traditional, sometimes ancient, evaluation methods [35].
A new empirical research line is opened in which student modeling is improved by using tools of cognitive science in adaptive e-learning systems in ways that were not possible before. We believe that research exploring the human lexicon as a way to adapt instruction will be at the center of future developments in AHS/hypertext. | 6,518.4 | 2018-11-05T00:00:00.000 | [
"Computer Science",
"Education"
] |
Four classes of interactions for evolutionary games
The symmetric four-strategy games are decomposed into a linear combination of 16 basis games represented by orthogonal matrices. Among these basis games four classes can be distinguished, as was already found for the three-strategy games. The games with self-dependent (cross-dependent) payoffs are characterized by matrices consisting of uniform rows (columns). Six of the 16 basis games describe coordination-type interactions among the strategy pairs and three basis games span the parameter space of the cyclic components that are analogous to the rock-paper-scissors games. In the absence of cyclic components the game is a potential game and the potential matrix is evaluated. The main features of the four classes of games are discussed separately, and we illustrate some characteristic strategy distributions on a square lattice in the low-noise limit when the logit rule controls the strategy evolution. Analysis of the general properties indicates similar types of interactions at larger numbers of strategies for the symmetric matrix games.
I. INTRODUCTION
In evolutionary games, n × n payoff matrices are used to define the interactions among (equivalent) players, each following one of their n strategies against coplayers specified by a connectivity network [1][2][3]. The systematic analysis of the games [4] and the classification of the resultant behavior are hampered by the large number of parameters (n^2) characterizing the interaction itself, particularly if n > 3. The first classification of the 2 × 2 games was suggested by Rapoport and Guyer [5], who considered the cases where the payoffs are characterized by their rank (1, 2, 3, and 4). Using this notation, Liebrand [6] discussed the social dilemmas. The introduction of replicator dynamics [7] initiated a different classification based on the evolutionarily stable strategies [8][9][10][11] and phase portraits [12,13]. Very recently the games have been analyzed by distinguishing intragroup and intergroup interactions within the framework of population dynamics [14].
In the recent literature of evolutionary game theory the two-strategy games are frequently characterized by four payoffs (P, R, S, and T) referring to punishment, reward for mutual cooperation, sucker's payoff, and temptation to choose defection, when the two strategies are named defection and cooperation in the terminology of social dilemmas [15,16]. In that case, however, the interaction can be described by only two parameters (T and S) without loss of generality when the dynamics is controlled by payoff differences and if we use a suitable payoff unit by choosing P = 0 and R = 1 [17]. In the two-dimensional S-T parameter space four quadrants are distinguished, characterizing the harmony (T < 1 and S > 0), hawk-dove (T > 1 and S > 0), prisoner's dilemma (T > 1 and S < 0), and stag hunt (T < 1 and S < 0) games. These four types of games can be identified by the corresponding Nash equilibria. For example, in the region of the prisoner's dilemma the game has only one Nash equilibrium, dictating the choice of defection for both selfish players.
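The four quadrants quoted above can be encoded directly; the short Python sketch below simply restates the classification in the P = 0, R = 1 convention (the function name is ours).

def classify_two_strategy_game(T, S):
    """Classify a two-strategy game in the S-T plane with P = 0 and R = 1."""
    if T < 1 and S > 0:
        return "harmony"
    if T > 1 and S > 0:
        return "hawk-dove"
    if T > 1 and S < 0:
        return "prisoner's dilemma"
    if T < 1 and S < 0:
        return "stag hunt"
    return "boundary case"

print(classify_two_strategy_game(T=1.3, S=-0.2))   # prisoner's dilemma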
The existence of potential games [18,19] has raised the demand for another classification that allows us to distinguish clearly the potential games within the set of matrix (or normal-form) games. In a previous paper [20] we have shown that all the symmetric matrix games for n = 2 and 3 can be decomposed into linear combinations of elementary games represented by orthogonal basis matrices. More precisely, the payoff matrices are built up from their two-dimensional Fourier components for both n = 2 and 3. Due to the general features of the Fourier series expansion the strength of each component can be evaluated straightforwardly. For the symmetric two-strategy games (n = 2) the first Fourier component can be interpreted as an irrelevant game where the equivalent payoffs eliminate the essence of the game. The linear combinations of the first and second components represent games with self-dependent payoffs, where the player's income is independent of the opponent's strategy. Similarly, the linear combinations of the first and third components describe games with cross-dependent payoffs, when the player's income depends only on the coplayer's strategy. The direct interactions between the players are quantified by the fourth term, resembling the coordination-type (or anticoordination-type) interactions on the analogy of the ferromagnetic (or antiferromagnetic) Ising model [21] with spins oriented upward or downward. In fact, these are the reasons why the multiagent games can be mapped onto an Ising-type model if the interactions among the players are described by symmetric two-strategy games. All these games are potential games that evolve into the Boltzmann distribution [19] if the strategy reversals are controlled by the so-called logit rule, resembling the Glauber dynamics for the kinetic Ising model [22].
Similar concepts of decomposition were suggested previously in Refs. [23,24] without the introduction of a concrete set of basis games. The introduction of a suitable set of orthogonal basis games, however, gives us more sophisticated knowledge of the anatomy of matrix games. For example, the games with self- and cross-dependent payoffs are represented by the linear combinations of three orthogonal basis games characterized by matrices with uniform elements in columns and rows for n = 3. Evidently, the basis game with uniform matrix elements belongs to both of the latter classes. For n = 3, the subset of the self- and cross-dependent games can be defined as the linear combination of (2n − 1) = 5 basis matrices. Additionally, three of the nine components describe games with symmetric payoff matrices that are the linear combination of coordination-type (or anticoordination-type) interactions for the three possible strategy pairs. The ninth orthogonal component corresponds to the traditional rock-paper-scissors game, which is a zero-sum game with an antisymmetric payoff matrix, and the presence of this component prevents the existence of a potential. Now we show that the above-mentioned general features are inherited by the symmetric n-strategy games and that the "dimension" of the four classes of elementary games increases with the number of strategies. Because many relevant questions are related to the existence of a potential, the next section quantifies the necessary conditions. In Sec. III we show how the payoff matrix can be built up as the linear combination of orthogonal basis matrices representing elementary games for n = 4. The application of the Walsh-Hadamard matrices [25] simplifies the calculations and invokes the theory of directed graphs for the graphical illustration of the inherent structure of symmetric matrix games. The general properties of the four classes of elementary games are discussed separately at the level of pair interactions and also for the spatial version of multiagent evolutionary games in the consecutive sections. The summary of this analysis includes a discussion of those features that may remain valid for larger numbers of strategies for the symmetric games.
II. EXISTENCE OF POTENTIAL IN SYMMETRIC FOUR-STRATEGY GAMES
The symmetric matrix games are used to describe quantitatively the pair interactions between two equivalent players (x and y) who have four options to choose independently of each other. In the mathematical notation of game theory these (pure) strategies are denoted by the traditional unit vectors of a four-dimensional vector space, s = (1,0,0,0), (0,1,0,0), (0,0,1,0), or (0,0,0,1). The payoffs for both players depend on the strategies they choose and are expressed by the products u_x = s_x · A s_y and u_y = s_y · A s_x, where the element A_ij of the payoff matrix A defines the payoff for the first player if she chooses her ith strategy whereas the coplayer selected the jth strategy (i,j = 1,2,3,4). The present symmetric two-person game is a potential game [18,19] if we can introduce a symmetric 4 × 4 potential matrix V that satisfies the conditions A_ij − A_kj = V_ij − V_kj for all i, j, and k, that is, for a unilateral strategy modification the potential variation is equivalent to the payoff variation of the active player. The above equation expresses the case when the first player modifies her strategy from the ith to the kth while the second player uses her jth strategy. Similar requirements should be satisfied when only the second player changes her strategy. However, for the symmetric two-player games the latter condition is satisfied if the potential matrix V is symmetric (V_ij = V_ji). The potential V exists if the sum of the mentioned payoff variations of the active player is zero along all the closed trajectories in the space of strategy profiles where only unilateral changes are allowed. The large number of possible loops is illustrated in Fig. 1, which shows the dynamical graph for the present four-strategy game [26]. In this dynamical graph the nodes represent strategy profiles (microscopic states) and the edges connect those strategy profiles that can be transformed into each other if only one of the players modifies her strategy.
In Fig. 1 the nodes are arranged in the same order as they appear within the payoff and potential matrices. Here the nodes are denoted by large boxes, allowing us to give the payoffs (upper row) for both players and also the value of the potential (lower row) for all the strategy profiles. This arrangement of nodes reflects the relevant symmetries. Notice that each subgraph consisting of nodes within a single row or column is complete, and within these subgraphs the condition for the existence of the potential is satisfied because here only one player changes her strategy.
According to the Kirchhoff laws [27] we can distinguish nine independent loops (see Fig. 1) if we apply the methods used in the analysis of electric circuits [20,28]. Three of the nine loops are located along the main diagonal and each one represents a symmetric 2 × 2 subgame where the potential always exists. In the present system the dashed-dotted (green) circles represent an antisymmetric pair of loops that give the same condition for the existence of the potential, namely Eq. (4). Two additional conditions can be derived from the other two pairs of antisymmetric loops. Namely, the conditions along the dotted (blue) circles in Fig. 1 correspond to Eq. (5), which can be simplified to Eq. (6). Similarly, the third condition is related to the four-edge loops indicated by dashed (red) circles in Fig. 1 and, after some algebraic manipulation, it obeys the form of Eq. (7). In sum, the potential exists if the matrix components A_ij satisfy the three conditions defined by Eqs. (4)-(7). It is emphasized that in the deduction of these three criteria we have exploited the symmetries and the interdependence of the four-edge loops within the dynamical graph.
This method predicts (n − 1)(n − 2)/2 independent and relevant pairs of four-edge loops that should be taken into consideration when deriving similar conditions for the existence of the potential for n > 3. Monderer and Shapley [18] and Hofbauer and Sigmund [12] have proved the existence of the potential if similar conditions are satisfied for all the four-edge loops or the equivalent 2 × 2 subgames.
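The cited four-cycle criterion of Monderer and Shapley can be checked numerically. The Python sketch below is our own construction (not the authors' code): it assumes a symmetric game with payoffs u1(i,j) = A_ij and u2(i,j) = A_ji and tests the loop condition for every pair of unilateral deviations.

import itertools
import numpy as np

def has_potential(A, tol=1e-9):
    """Monderer-Shapley check: the sum of the active player's payoff changes must
    vanish along every four-edge loop of unilateral deviations."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    u1 = lambda i, j: A[i, j]          # payoff of the row player
    u2 = lambda i, j: A[j, i]          # payoff of the column player (symmetric game)
    for i, k in itertools.combinations(range(n), 2):       # row player's deviation i -> k
        for j, l in itertools.combinations(range(n), 2):   # column player's deviation j -> l
            loop = (u1(k, j) - u1(i, j)) + (u2(k, l) - u2(k, j)) \
                 + (u1(i, l) - u1(k, l)) + (u2(i, j) - u2(i, l))
            if abs(loop) > tol:
                return False
    return True

print(has_potential(np.eye(4)))                               # True: pure coordination game
print(has_potential([[0, -1, 1], [1, 0, -1], [-1, 1, 0]]))    # False: rock-paper-scissors component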
The potential matrix V can be evaluated as detailed below, and the actual value of the potential for a given strategy pair (s_x, s_y) can be expressed as s_x · V s_y.
Up to now we have studied symmetric two-player games. Due to the linear relationship between the payoff and potential matrices we can introduce multiagent potential games with N players if the interactions between the players are composed of equivalent two-player potential games. In these systems the microscopic state (strategy profile) is defined by the set of individual strategies, S = (s_1, s_2, ..., s_N), and the corresponding potential value is obtained as U(S) = Σ_{x,y} s_x · V s_y, where the summation runs over the interacting players x and y.
For the illustration of the effect of some types of interactions on the macroscopic behavior in multiagent systems, we will consider models with nearest-neighbor interactions between the players distributed on a square lattice. If the evolution of the strategy profile S is defined by random sequential application of the logit rule, then these systems will evolve into the Boltzmann distribution [19]. For an elementary step of this evolutionary process we choose a player (e.g., x) at random, and this player is allowed to select another strategy s_x', favoring exponentially her higher individual payoff. For the potential games this preference can be quantified by the potential variation for unilateral strategy changes. More precisely, the probability of choosing strategy s_x' is expressed as W(S → S') = exp[U(S')/K] / Σ_{s_x''} exp[U(S'')/K], where in the microscopic states S and S' the player at site x chooses s_x and s_x', respectively, and s_x'' runs over all the possible strategies while the strategies of all the other players are fixed. For the logit rule, K quantifies the strength of the stochastic noise. In the limit K → 0, the players choose their best strategy and the system develops into one of the pure Nash equilibria characterized by the maximum value of U(S).
In order to demonstrate the richness of the stationary states and also the close analogy to physical systems when the interaction belongs to the coordination-type games, the mentioned system will be investigated by Monte Carlo (MC) simulations on a square lattice with L × L sites under periodic boundary conditions. During these simulations we have determined the average values of the strategy frequencies (ρ_k, k = 1, ..., 4) in the stationary states. The linear system size is varied from L = 400 to 1400, and the relaxation and sampling times are chosen between t_r = t_s = 10^4 and 10^6 MCS (during the time unit of 1 MCS each player has a chance to modify her own strategy once on average). Larger system sizes and longer run times are selected when approaching the critical transition point in order to increase the statistical accuracy.
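A minimal Python sketch of the random sequential logit dynamics described above, for nearest-neighbor pair interactions on an L × L square lattice with periodic boundary conditions. This is our own toy illustration with small sizes and run times, not the production code behind the reported data.

import numpy as np

rng = np.random.default_rng(0)

def logit_mc(V, K, L=30, sweeps=100):
    """Return the strategy frequencies rho_k after `sweeps` MCS of logit updates
    driven by the potential matrix V at noise level K."""
    V = np.asarray(V, dtype=float)
    n = V.shape[0]
    s = rng.integers(n, size=(L, L))                  # random initial strategy profile
    for _ in range(sweeps * L * L):                   # 1 MCS = L*L elementary steps on average
        x, y = rng.integers(L, size=2)
        nbrs = [s[(x + 1) % L, y], s[(x - 1) % L, y], s[x, (y + 1) % L], s[x, (y - 1) % L]]
        local = V[:, nbrs].sum(axis=1)                # potential contribution of each candidate strategy
        w = np.exp((local - local.max()) / K)         # logit weights, numerically stabilised
        s[x, y] = rng.choice(n, p=w / w.sum())
    return np.bincount(s.ravel(), minlength=n) / s.size

V = np.eye(4)                                         # Potts-like coordination potential
print(logit_mc(V, K=0.4))                             # at low noise one strategy tends to dominate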
III. DECOMPOSITION OF THE SYMMETRIC FOUR-STRATEGY MATRIX GAMES
The idea of the matrix decomposition is based on the fact that an n × n matrix A can be considered as a traditional vector of dimension n^2 if the components A_ij are arranged into a column. The two-dimensional Fourier decomposition worked efficiently for three strategies (n = 3). Here we suggest a different approach. On the analogy of traditional vector calculus, the payoff matrices are now built up as linear combinations of basis matrices that are created from a set of four-dimensional orthogonal basis vectors. For later convenience we choose four orthogonal vectors e(1), ..., e(4) composed of +1 and −1 elements (the columns of the 4 × 4 Walsh-Hadamard matrix), and the dyadic (or tensor) products of these vectors, with elements g_ij(m) = e(k)_i e(l)_j for k,l = 1, ..., 4, serve as basis matrices (or elementary games) with labels m = 1, 2, ..., 16 specified below. These basis matrices satisfy the conditions of orthogonality, g(m) · g(m') = C(m) δ_mm', where C(m) = 16 as |g_ij| = 1 in the present case. In general, the payoff matrix can be expressed [25] as the linear combination A = Σ_m α(m) g(m), where the coefficients α(m) are given by the scalar product of the matrices A and g(m) (normalized by C(m)), which is defined on the analogy of the scalar product of two vectors.
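The construction can be illustrated numerically. In the Python sketch below the four basis vectors are taken as the columns of the standard 4 × 4 Walsh-Hadamard matrix and the coefficients are normalised by C(m) = 16; the labelling of e(2)-e(4) and the normalisation convention of α(m) are assumptions that may differ from the paper's.

import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)                              # columns: orthogonal +-1 vectors e(1)..e(4)
E = [H4[:, k] for k in range(4)]

G = [np.outer(ek, el) for ek in E for el in E]    # the 16 elementary games g(m) = e(k) x e(l)

def decompose(A):
    """Coefficients alpha(m) of A in the orthogonal basis (normalisation C(m) = 16)."""
    A = np.asarray(A, dtype=float)
    return np.array([np.sum(A * g) / 16.0 for g in G])

A = np.arange(16, dtype=float).reshape(4, 4)      # an arbitrary payoff matrix
alpha = decompose(A)
A_rebuilt = sum(a * g for a, g in zip(alpha, G))
print(np.allclose(A, A_rebuilt))                  # True: the 16 basis games span all 4 x 4 matrices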
IV. GAMES WITH SELF-AND CROSS-DEPENDENT PAYOFFS
Following our previous notation [20], the first basis matrix (labeled with m = 1) is defined as g(1) = e(1) ⊗ e(1), with elements g_ij(1) = 1. This all-ones matrix represents the irrelevant component of the payoffs for all cases when the decision or dynamics is controlled by payoff differences. There are three additional basis matrices, namely g(2) = e(1) ⊗ e(2), g(3) = e(1) ⊗ e(3), and g(4) = e(1) ⊗ e(4), which consist of columns with uniform values. The latter property is conserved for the linear combinations A^(cross) = α(2) g(2) + α(3) g(3) + α(4) g(4), representing the subset of games with cross-dependent payoffs. For this set of games the players cannot modify their own payoffs by choosing another strategy. Consequently, the games with cross-dependent payoffs do not contribute to the potential matrix. If this type of game defines the pair interactions in a multiagent model with a logit rule, then the players choose their strategy at random. In opposition to the cross-dependent payoffs we can distinguish games with self-dependent payoffs, when the payoff matrices are composed of uniform rows. The corresponding elementary games are defined as g(5) = e(2) ⊗ e(1), g(6) = e(3) ⊗ e(1), and g(7) = e(4) ⊗ e(1), and the subset of games with self-dependent payoffs can be given as A^(self) = α(5) g(5) + α(6) g(6) + α(7) g(7). The reader can easily check that the potential matrix V^(self) = A^(self) + [A^(self)]^T satisfies the condition Eq. (3) for the games with self-dependent payoffs.
Notice that g(m) = g^T(m + 3) for m = 2, 3, 4, and this feature can be exploited by introducing another set of basis matrices g'(m) (m = 2, 3, ..., 7) that distinguishes symmetric and antisymmetric combinations, g'(r) ∝ g(r) + g(r + 3) and g'(r + 3) ∝ g(r) − g(r + 3), where r = 2, 3, and 4. Evidently, the above basis matrices preserve the conditions of orthogonality. Accordingly, the games with self- and cross-dependent payoffs are spanned by the linear combinations of four symmetric and three antisymmetric basis matrices. The relevant properties of the above antisymmetric basis matrices can be illustrated by g'(5) ∝ g(2) − g(5), which possesses two pairs of identical columns and rows. The mentioned symmetries characterize g'(6) and g'(7) as well.
V. COORDINATION GAMES
Among the 16 dyadic products of the vectors in Eq. (11) there are three symmetric matrices that we use to define the following three basis matrices: g(8) = e(2) ⊗ e(2), g(9) = e(3) ⊗ e(3), and g(10) = e(4) ⊗ e(4).
Notice that these three basis games are composed of only +1s and −1s in a way ensuring their orthogonality to g(m) for m = 1, ..., 7, as the sum of payoffs is zero within each row and column. From the rest of the nonsymmetric dyadic products we can derive three additional symmetric basis matrices, g(11), g(12), and g(13), as suitably normalized combinations e(k) ⊗ e(l) + e(l) ⊗ e(k) (k < l = 3, 4), because (e(k) ⊗ e(l))^T = e(l) ⊗ e(k). As all these basis matrices are symmetric, the contribution of their arbitrary linear combination A^(coord) = Σ_{m=8}^{13} β(m) g(m) to the potential matrix is V^(coord) = A^(coord). This six-dimensional subspace of matrices is closed under transformations that exchange the same two rows and columns subsequently. The latter transformations realize the exchange of strategy labels without introducing fundamentally new behaviors. The knowledge of the potential matrix V can be utilized to determine the preferred Nash equilibrium for both the two-player games and the spatial multiagent evolutionary games. For example, if max(V_ij) = V_kk, then the strategy pair (k,k) is a Nash equilibrium of the two-player game and the homogeneous distribution of the kth strategy is a stable state of the evolutionary games (introduced in Sec. II) in the limit K → 0. In such systems the K-dependence of the strategy frequencies has a typical behavior, plotted in Fig. 2. The plotted MC data are obtained for a system where β(m) = 1/m if 8 ≤ m ≤ 12 and β(m) = 0 otherwise. In this model the homogeneous distribution of strategy 1 dominates the system behavior in the low-noise limit.
As V is a symmetric matrix, its maximum values can occur in pairs, e.g., max(V_ij) = V_kl = V_lk. In the latter cases the two-player game has two equivalent pure Nash equilibria, namely the strategy pairs (k,l) and (l,k). For the multiagent evolutionary games the system has two equivalent ordered strategy arrangements in the low-noise limit on a square lattice that can be divided into two sublattices (denoted as X and Y) on the analogy of the white and black boxes of a checkerboard. In the zero-noise limit, the players choose strategy k in one of the sublattices while they follow strategy l within the opposite sublattice. This situation resembles the antiferromagnetic spin arrangements of the Ising-type models [21].
It is noteworthy that if the players in one of the two sublattices exchange their strategy labels l and k (this transformation is realized by exchanging the corresponding two columns or rows in the payoff matrix), then the resultant system has two equivalent homogeneous ordered states, as occurs in the ferromagnetic Ising model in the absence of an external magnetic field. The mentioned transformations can be used to justify similar K-dependence (including phase transition(s), thermodynamic derivatives, responses to perturbations, etc.) in a group of systems satisfying certain symmetries. Similar phenomena are illustrated in the three-strategy potential games [20] and are expected to be present in systems with a larger number of strategies.
Within this six-dimensional subspace of matrix games one can find directions clearly realizing the coordination-type 2 × 2 subgames with payoff matrices f(pq) (p < q = 2, 3, and 4) if both players are constrained to choose either their pth or their qth strategy. The components of the matrices f(pq), with attractive Ising-type (or coordination-type) interactions between the strategy pair (p,q), can be defined as f(pq)_ij = +1 if i = j ∈ {p,q}, f(pq)_ij = −1 if i ≠ j and i,j ∈ {p,q}, and f(pq)_ij = 0 otherwise. These matrices contain two rows and columns composed of 0s, and each one can be obtained from f(12) by exchanging the same two rows and columns subsequently.
The above six matrices f (pq) are not orthogonal to each other in the sense defined by Eq. (13). At the same time these components span the whole subspace of coordination-type games.
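A short numerical illustration of these coordination components (our own Python sketch, using the ±1 form given above with 0-based strategy labels). It checks that the f(pq) matrices are symmetric with zero row and column sums, and that their equal-weight sum reproduces the four-state Potts-type interaction mentioned later in the text.

import itertools
import numpy as np

def f(p, q, n=4):
    """Coordination component between strategies p and q (0-based labels)."""
    m = np.zeros((n, n))
    m[p, p] = m[q, q] = 1                  # reward for coordinating on p or on q
    m[p, q] = m[q, p] = -1                 # penalty for anti-coordination
    return m

F = [f(p, q) for p, q in itertools.combinations(range(4), 2)]   # the six f(pq) matrices
A_potts = sum(F)                                                # equal weights: Potts-type game
print(np.allclose(A_potts, A_potts.T), A_potts.sum(axis=0))     # symmetric, zero column sums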
In light of the above feature the coordination-type interactions can be considered as linear combinations of symmetric two-strategy subgames where the strength of coordination is defined for each symmetric strategy pair. This set of games includes cases when some of the f(pq) basis games are present with negative weight factors, which refer to anticoordination-type interactions. The anticoordination interactions enforce sublattice ordering in which the players on sublattice X follow the first strategy and choose the opposite one within sublattice Y. There exists another equivalent sublattice-ordered state where the strategies are exchanged.
In the physics literature the most frequently investigated system within this subset of games is the four-state Potts model (for a survey see [29]), which represents a universality class of critical phase transitions [30]. The Potts models [31] were introduced to study systems with n equivalent (homogeneous) ordered states that are transformed into a disordered strategy distribution above a critical noise level (K > K_c). The resultant order-disorder transition is continuous and the frequency ρ_i of state i converges algebraically to 1/4 when approaching the critical point K_c. More precisely, |ρ_i − 1/4| ∝ (K_c − K)^β, where β = 1/12 and (K_c − K) → +0. In the present notation the four-state Potts model is composed of all the f(pq) matrices with equal weight factors.
For the two-state magnetic Ising model the equivalence between the ferromagnetic and antiferromagnetic ordering phenomena on the square lattice is related to the fact that the spin reversal on one of the sublattices transforms the attractive interactions into repulsive ones. Here f(12) is transformed into −f(12) if the strategy labels 1 and 2 are exchanged on one of the sublattices. Evidently, a similar exchange of a strategy pair on one of the sublattices for the four-state Potts model creates an additional set of models exhibiting equivalent order-disorder phase transitions. The analogous behavior of these models is illustrated in Fig. 3, which draws a parallel between the snapshots obtained during domain growth for the four-state Potts model and for one of its relatives. Similar equivalence between the three-state Potts model and its relatives was reported and discussed in detail for the evolutionary three-strategy spatial games [20].
The above-mentioned family members of the four-state Potts model are located along disjunct directions (half-lines) within the six-dimensional subspace of the coordination-type games. Within this subset of games, however, there exist many other combinations of elementary coordination games that exhibit different symmetries and order-disorder phase transitions. For example, the game represented by the matrix f(12) performs an Ising-type order-disorder phase transition, as shown in Fig. 4. The numerical results show an Ising-type order-disorder critical transition for the strategy frequencies ρ_1 and ρ_2, whereas ρ_3 = ρ_4 vary smoothly from 0 to 1/4 when K is increased. Our numerical data are consistent with the theoretical expectation for the Ising universality class. Evidently, a similar K-dependence of the strategy frequencies occurs within the sublattices X and Y when the pair interactions are given by −f(12).
On the analogy of the above model, Fig. 5 shows another universal critical transition occurring when the pair interaction is defined by the matrix A = f(12) + f(23) + f(13). In that case the corresponding potential matrix has three equivalent maximal values (V_11 = V_22 = V_33) that prescribe the existence of three (equivalent) homogeneous ordered states in the limit K → 0 on the square lattice. (Fig. 5 plots the Monte Carlo data for the strategy frequencies versus noise on the square lattice when the pair interaction is defined as A = f(12) + f(23) + f(13); the symbols agree with those used in Fig. 2, and here the circles and diamonds coincide.) The MC results show that ρ_4 increases smoothly with K from 0 to 1/4 and that ρ_2 = ρ_3 while ρ_1 → 1 in the limit K → 0. Preliminary MC simulations indicate more complex behavior for a payoff matrix A = f(12) + f(34) that ensures four equivalent ordered strategy arrangements in the zero-noise limit. Evidently, the latter system is equivalent to those defined, for example, by A = f(13) + f(24). Similar richness in the behavior and phase diagrams is reported for the Ashkin-Teller model [32] and for other systems exhibiting fourfold degenerate ground states [33].
VI. CYCLIC GAMES
In this section we discuss the last three basis games, g(14), g(15), and g(16), characterized by antisymmetric matrices; they are obtained as the (suitably normalized) antisymmetric combinations e(k) ⊗ e(l) − e(l) ⊗ e(k) (k < l = 3, 4) of the remaining dyadic products. The example of g(16) illustrates their general properties. First we emphasize that g(16) can be interpreted as a straightforward extension of the rock-paper-scissors game. Here the four strategies cyclically dominate each other, which can be illustrated by the directed graph c in Fig. 6, because its adjacency matrix is identical to g(16). In graph theory [34], the simple directed graphs with n nodes are characterized by an n × n adjacency matrix C, where C_ij = 0 if the nodes i and j are not connected and C_ij = −C_ji = 1 if there exists a directed edge from node i to j. Figure 6 shows three directed graphs representing Hamiltonian cycles (directed loops including all nodes). In fact, there exist three additional Hamiltonian cycles that can be obtained by reversing all edge directions simultaneously and are described by the matrices −g(m) for m = 14, 15, and 16. Notice that the eight linear combinations of the cyclic basis games, namely e = ±g(14) ± g(15) ± g(16), define rock-paper-scissors-type three-strategy subgames in which the use of one of the four strategies is prohibited. The corresponding directed graphs are given by the eight (possible) directed three-edge loops with one isolated node. Notice that all antisymmetric n × n matrices can be described as combinations of the adjacency matrices of the possible directed graphs of n nodes with only a single directed edge. Within this subset of matrices the cyclic games are orthogonal to both the self- and cross-dependent matrices, which restricts the analysis to those directed graphs where the numbers of outgoing and ingoing edges are equal for each node. The latter requirement is satisfied for graphs with a directed loop and also for those that are composed of directed loops without common edges. The above-mentioned three- and four-edge directed loops represent graphically some inherent properties of the cyclic basis games for n = 4.
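The structure of these cyclic components can be made explicit numerically. In the Python sketch below they are built as antisymmetric combinations of the remaining dyadic products; the sign and labelling conventions of g(14)-g(16) are assumptions that may differ from the figure, but the Hamiltonian-cycle structure and the three-strategy loop obtained from their sum are generic.

import numpy as np

H4 = np.kron([[1, 1], [1, -1]], [[1, 1], [1, -1]])
E = [H4[:, k] for k in range(4)]

cyc = [(np.outer(E[k], E[l]) - np.outer(E[l], E[k])) // 2       # antisymmetric combinations
       for k, l in [(1, 2), (1, 3), (2, 3)]]                    # pairs among e(2), e(3), e(4)

for m in cyc:
    # each matrix is antisymmetric with entries 0, +1, -1: the adjacency matrix of a
    # directed Hamiltonian cycle over the four strategies (two nonzero entries per row)
    print(np.array_equal(m, -m.T), sorted(np.abs(m).sum(axis=1)))   # True [2, 2, 2, 2]

print(cyc[0] + cyc[1] + cyc[2])   # twice a three-strategy rock-paper-scissors loop; one strategy is isolated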
The presence of cyclic basis games [nonzero β(14), β(15), or β(16)] prevents the existence of a potential. In fact, for the symmetric four-strategy games we can distinguish three pairs of four-state loops where the first player uses either strategy i or j, whereas the second player can select one of the other two strategies, namely either strategy k or l (for the situation discussed above, i = 1, j = 2, k = 3, and l = 4). In these cases the two players use two different pairs of strategies.
Among the four-state loops of the dynamical graph (see Fig. 1) we can distinguish two additional classes. The first class includes those loops where both players are constrained to use the same strategy pair. Three of the possible six symmetric 2 × 2 subgames are indicated by solid (yellow) circles in Fig. 1. Along these loops the symmetry of the game ensures that the sum of payoff variations vanishes, as happens for the symmetric two-strategy games [20].
Within the second class of loops the players have one common strategy and the second ones are distinct. The investigation of these cases can be mapped onto the analysis of three-strategy games where potential can exist in the absence of the corresponding rock-paper-scissors component that can be built up as a suitable linear combination of the three four-state cyclic basis games mentioned above. More precisely, Eq. (6) is equivalent to the orthogonality condition A · [g(14) + g(15) + g(16)] = 0 where the second term describes rock-paper-scissors-type cyclic dominance between the strategies 2, 3, and 4.
The three cyclic basis games g(14), g(15), and g(16) can be mapped onto each other by relabeling the strategies. This is the reason why the corresponding games exhibit similar behaviors, resembling those observed for the rock-paper-scissors games on the square lattice under different evolutionary rules. Due to the "cyclic" symmetries, for all three basis games the four strategies are present with the same frequency [35][36][37][38][39]. The strategy frequencies can be tuned by varying the strength of these components [40][41][42]. In the spatial systems the small domains invade each other cyclically along irregular interfaces. The size of the domains can be enhanced by introducing additional coordination-type interactions [43].
VII. GENERALIZATION FOR SYMMETRIC n-STRATEGY GAMES
Most of the above features remain valid for all symmetric matrix games with n > 4 strategies. Namely, the all-ones matrix as well as the basis games with self- and cross-dependent payoffs exhibit properties similar to those described in Sec. IV. Within this [(2n − 1)-dimensional] parameter space of games there are no direct interactions between the players. The potential is determined by the self-dependent components on the analogy of Eq. (25), and we can distinguish (n − 1) antisymmetric basis matrices.
The parameter space of the coordination-type interactions is spanned by n(n − 1)/2 symmetric f(pq) matrices defined on the analogy of Eq. (37) for p < q = 2, ..., n. The space of the whole symmetric n × n matrices is spanned by the linear combination of the coordination-type games and the n symmetric matrices derived from the self- and cross-dependent components, as happened for n = 4. The rest of the [(n − 1)(n − 2)/2-dimensional] subspace of the antisymmetric matrices involves the linear combinations of four-strategy cyclic subgames. The latter cyclic components can be derived from g(16) by adding all-zeros column(s) and row(s), as represented by the matrix e^(c-1234) for n = 5, which is the adjacency matrix of a directed graph with one directed four-edge loop (through the strategies 1, 2, 3, and 4) and one isolated node (representing strategy 5). On the analogy of e^(c-1234) we can introduce many other cyclic games e^(c-ijkl) for any four of the n strategies that are orthogonal to all f(pq) as well as to A^(self) and A^(cross). The relevance of the basis matrices e^(c-ijkl) is justified by the fact that the scalar product A · e^(c-ijkl) = 0 is equivalent to the general conditions required for the existence of the potential (see Refs. [12,18,20,24]). More precisely, the condition A · e^(c-ijkl) = 0 ensures that the Kirchhoff law is satisfied along the four-edge loop (i,j) → (k,j) → (k,l) → (i,l) → (i,j) in the space of pure strategy profiles. Among the general properties of the n × n symmetric matrix games we have to underline the importance of the symmetric component of the payoff matrix A. Collaborating players can agree to share the accumulated payoff equally, as happens between fraternal players, friends, or family members [1,17,44,45,46]. In that case the effective payoff matrix A^(eff) = (A + A^T)/2 does not contain the contribution of the antisymmetric components. In other words, the decision of the collaborating players is affected by neither the cyclic components nor the antisymmetric portion of the self- and cross-dependent components. Evidently, A^(eff) is a potential game that always has at least one pure Nash equilibrium, and the preferred one provides the maximal value of the potential when V^(eff) = A^(eff). The resultant payoff can serve as a reference when considering the additional effect of social dilemmas or cyclic dominance.
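A minimal numerical illustration of the payoff-sharing argument above (the matrix entries are arbitrary illustrative values): symmetrising a payoff matrix removes the antisymmetric components, so the result is a potential game with V^(eff) = A^(eff).

import numpy as np

A = np.array([[0., 2., -1., 0.],
              [1., 0.,  3., 1.],
              [0., 1.,  0., 2.],
              [2., 0.,  1., 0.]])          # an arbitrary (illustrative) payoff matrix
A_eff = (A + A.T) / 2                       # payoff-sharing players feel only the symmetric part
print(np.allclose(A_eff, A_eff.T))          # True: A_eff is its own potential matrix
i, j = np.unravel_index(np.argmax(A_eff), A_eff.shape)
print(i, j)                                 # strategy pair with maximal potential (a preferred Nash equilibrium)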
VIII. SUMMARY
We have studied the decomposition of the symmetric 4 × 4 matrix games into linear combinations of elementary games defined by dyadic products of the column vectors of the Walsh-Hadamard matrices. For n = 2^k (k an integer) these matrices are composed of elements +1 or −1 and clearly indicate the inherent properties of the matrices representing possible pair interactions in evolutionary games. Using this formalism we could distinguish four classes of elementary interactions from which any symmetric matrix game can be built up. The first class of interactions involves games with self-dependent payoffs that are defined by matrices with identical elements within each row. The direct interaction between the players is also missing for the second class of interactions, called games with cross-dependent payoffs, as here the player's income depends only on the decision of her coplayer and the payoff matrix contains columns of identical values. The third class of interactions defines the strength of coordination for each possible strategy pair, which may be either positive (attractive) or negative (repulsive). The fourth class of games summarizes the effects of cyclic dominance and can be considered as the extension of the traditional rock-paper-scissors game.
The research on the decomposition was originally motivated by the identification of potential games and by developing a method for the evaluation of the potential if it exists. It is found that the potential exists in the absence of the cyclic components, and the potential matrix itself can be expressed by a simple formula in terms of the first three classes of components.
One of the main advantages of the potential games is the fact that in multiagent evolutionary games the maximal value of the potential is achieved by a strategy profile that resembles the ground state in physical systems. In several cases, the four-strategy games are constructed from a two-strategy game by adding a new option (punishment, reward, reputation, etc.) and payoffs to each strategy via the introduction of a few parameters [47][48][49]. These latter models become potential games if the set of payoff parameters satisfies only one equation.
For n = 4 the application of the present dyadic products has highlighted some hidden features of the interactions described by payoff matrices. It turned out that the most relevant cyclic basis games can be illustrated graphically by directed graphs with a single directed loop. This picture supports the extension of the above-described properties to symmetric games with n > 4 strategies.
The systematic analysis of spatial evolutionary games with logit rules goes beyond the scope of the present work. Here our investigations are restricted to some particular combinations of a few basis games within the subset of coordination-type interactions. The Monte Carlo simulations have indicated a richness in the stationary behaviors. More complex behaviors are expected when considering more general evolutionary games, including the other three classes of interactions and modifying the dynamical rule and/or the connectivity structure.
"Economics"
] |
9 Development, Differentiation and Derivatives of the Wolffian and Müllerian Ducts
The Wolffian ducts (pro- and mesonephric ducts) are the most important and earliest structures formed during the development of the urogenital system in vertebrates including humans. The Wolffian ducts originate in the prospective cervical region of the young embryo but later migrate caudally, inducing the development of the pronephric and mesonephric tubules along their migratory route. In addition to being the inducers of the first two generations of the kidney, namely the pronephros and mesonephros, the Wolffian ducts also give rise to the ureteric buds which drive the growth and differentiation of the permanent kidneys, the metanephroi. The paired ureteric bud arises as an outpouching from the caudal end of the Wolffian duct and induces the epithelialisation of the metanephric blastema leading to the formation of the renal corpuscles and the tubular part of the nascent metanephric kidney, while the entire collecting system, consisting of the ureter, the renal pelvis, the calyces and the collecting ducts, takes its origin from the ureteric bud.
Introduction
The Wolffian ducts (pro- and mesonephric ducts) are the most important and earliest structures formed during the development of the urogenital system in vertebrates including humans. The Wolffian ducts originate in the prospective cervical region of the young embryo but later migrate caudally, inducing the development of the pronephric and mesonephric tubules along their migratory route. In addition to being the inducers of the first two generations of the kidney, namely the pronephros and mesonephros, the Wolffian ducts also give rise to the ureteric buds which drive the growth and differentiation of the permanent kidneys, the metanephroi. The paired ureteric bud arises as an outpouching from the caudal end of the Wolffian duct and induces the epithelialisation of the metanephric blastema leading to the formation of the renal corpuscles and the tubular part of the nascent metanephric kidney, while the entire collecting system, consisting of the ureter, the renal pelvis, the calyces and the collecting ducts, takes its origin from the ureteric bud.

Gender-specific contributions of the Wolffian ducts amount to the induction and development of the Müllerian (paramesonephric) ducts, the anlagen of the female genital ducts, while in males the Wolffian ducts elongate to form the epididymal ducts and the vasa deferentia. The seminal vesicles are formed during regression and transformation of the mesonephroi.

The developmental significance of the Wolffian duct for the development of the excretory and genital systems can be drawn from extirpation experiments in vertebrate embryos, in which the absence of the Wolffian ducts showed that neither kidneys nor male or female genital ducts develop.

Human embryos shown in this article were collected by the late Prof. K.V. Hinrichsen during the years 1970 to 1985. They are from legally terminated pregnancies in agreement with German law and following the informed consent of the parents. For the description of human embryos the Carnegie stages (CS) are used.
Wolffian ducts
The Wolffian ducts are named after the German anatomist Caspar Friedrich Wolff (1733-94), who first described the paired mesonephros, also called the Wolffian body, and its duct. The mesonephroi represent the second kidney generation of vertebrates. The first generation, the pronephroi, precedes the mesonephroi in a temporal and spatial sequence. The pronephric ducts are continuous caudally with the mesonephric ducts. Therefore, we use the term Wolffian ducts for the common pro- and mesonephric ducts.
Development of the Wolffian ducts
The Wolffian duct anlagen arise from the right and left intermediate mesoderm between the somites and the somatopleure. They first appear as continuous ridges caudal to the sixth pair of somites at CS 10 in embryos with ten somites. Since the developmental steps are comparable with other vertebrates, one can see how the Wolffian duct anlage, shown here as a mesenchymal ridge in the scanning micrograph of a chick embryo (Fig. 1a), segregates from the dorsal part of the intermediate mesoderm. The mesenchymal ridges segregate from the dorsal part of the intermediate mesoderm (Fig. 1b). In human embryos at CS 11, the Wolffian ducts undergo mesenchymal-epithelial transitions and form two epithelial canalized ducts (Fig. 1c), one on either side of the somites and segmental plate; however, their caudal tips maintain their mesenchymal identity, which helps them to migrate caudad on the intermediate mesoderm to join the cloaca at CS 12 (3 to 5 mm, 26 days, 26-28 somites).

The Wolffian duct anlagen can be identified by the expression of the Pax2 gene (Fig. 1d), a transcriptional regulator of the paired-box gene family (see Torres et al., 1995) that controls the development of the different kidney generations. During urogenital development of vertebrates, Pax2 appears first within the Wolffian duct anlagen and in successive order in the other kidney generations and even in the Müllerian ducts. Pax2 seems to induce the mesenchymal-epithelial transformation of the intermediate mesoderm (Dressler et al., 1990). Pax8 has synergistic effects, but knockout animals reveal no kidney defects.

Other genes have also been found to be important in early kidney development. Kobayashi et al. (2004) documented the expression of the LIM-class homeodomain transcription factor Lim1 in the Wolffian ducts, and knockout animals fail to develop Wolffian ducts. The homeobox gene Emx2 was proposed to regulate the epithelial function of Pax2 and Lim1 (Miyamoto et al., 1997).
Migration of the Wolffian ducts
Experimental and morphological data suggest that the extension of the Wolffian ducts along their caudal path is not only the result of proliferation, but of active migration of the cells at their posterior tips (Jacob et al., 1992). Furthermore, experiments performed in chicken embryos (Grünwald, 1937; Jacob and Christ, 1978) show the significance of the mesenchymal tip of the Wolffian duct: following its extirpation, migration of the duct stops and the mesonephros fails to develop on the operated side (Fig. 2). The metanephros and the genital ducts of both genders eventually fail to develop. The gonad, although appearing normal, is considerably reduced in size on the manipulated side.

The cells at the duct tip extend long cell processes, which are in contact with the extracellular matrix (Fig. 3a). Also required for the caudally directed migration of the Wolffian duct are the special properties of the extracellular matrix through which the duct migrates. Epithelial parts of ducts implanted at the place of the tip cells were able to migrate towards the cloaca even if their cranial end was rotated. Only the intermediate mesoderm caudal to the duct tip induces and guides this migration. Matrix molecules like fibronectin are supposed to be an important component of the special substrate needed for the migration of the Wolffian ducts (Jacob et al., 1991). Although fibronectin may be a prerequisite for cell migration, its nearly ubiquitous occurrence rules out a specific role of this molecule in directed cell migration in this context (see also Bellairs et al., 1995). It is suggested that polysialic acid plays a more specific role in the migration of chicken Wolffian ducts. NCAM polysialic acid had a similar distribution to fibronectin, and treatment of the living embryo with Endo-N, which specifically degrades polysialic acid, stops the caudal extension of the duct. A guidance cue identified in the axolotl is GDNF, which activates signaling through the GFRα1/c-Ret receptor (Drawbridge et al., 2000). In chicken embryos, the receptor CXCR4 was shown (Fig. 3b and Yusuf et al., 2006) to be expressed in the region of the developing mesonephros anlage. Furthermore, Grote et al. (2006) suggested that Pax2/8-regulated Gata3, which itself controls Ret expression, is necessary for Wolffian duct guidance.
Development of pro-and mesonephros
During the caudad extension and migration of the Wolffian ducts they induce the formation of nephric tubules within the right and left intermediate mesoderm, starting with the pronephros in the cervical region. The characteristics of the pronephroi are that they form external glomeruli and their tubules drain into the coelomic cavity via openings called nephrostomata. These structures exist also in human embryos, though in higher vertebrates the pronephroi are only rudimentary with no significant excretory function.

Early in the fourth week follows the successive induction of mesonephric tubules extending from the thoracic to the lumbar region. These tubules drain into the Wolffian ducts (Fig. 4a) and their blind ends form typical renal corpuscles with Bowman's capsule and glomerulus (Fig. 4a-c). Each tubule may be divided into a secretory and a more faintly stained collecting part (Fig. 4c). The secretory part resembles the proximal tubule of the permanent kidney with well-established microvilli.

The formation of tubules is terminated by CS 14, with a total number ranging between 35 and 38.
Development and differentiation of the ureteric buds
The permanent kidneys, the metanephroi, develop by interaction of the ureteric bud with the metanephric blastema in the lumbosacral region of the intermediate mesoderm.
Early in the fifth week (CS 14), ureteric buds branch from the posterior ends of the Wolffian ducts at the level of the first sacral segment (Fig. 5a). According to Chi et al. (2009), the epithelium in the caudal part of the Wolffian duct converts, prior to budding, from a simple epithelium into a pseudostratified one. The exact position and outgrowth in the dorso-cranial direction of the ureteric buds is critical for joining the metanephric blastema and thus for the development of the permanent kidneys.

The ampulla-like blind end of each ureteric bud is surrounded by a cap of dense mesenchyme, forming the metanephric blastema (Fig. 5a and b). Reciprocal interactions between the ureteric bud and the metanephric mesenchyme are necessary for the outgrowth and branching of the ureteric bud on the one hand and the mesenchymal-epithelial transformation and tubulogenesis of the metanephric blastema on the other hand (Fig. 5b and c).
Branching of the ureteric buds
The contact point of the ureteric bud and the metanephrogenic blastema represents the coming together of two functionally distinct kidney parts, namely the urine-conducting and the urine-producing system, respectively. An appropriate outpouching site of the ureteric bud from the Wolffian duct, followed by its dichotomous patterning, enables not just the formation of a functional urinary tract, but also ensures the viability of the metanephric kidney. Extensive research over the last decades in this field underlines the significance of appropriate ureteric bud outgrowth and patterning, as urinary tract malformations are amongst the most common congenital defects, accounting for around 1% of all congenital defects. Faulty ureteric bud branching also affects the absolute nephron number in the kidney, which may play out as a predisposition to chronic renal failure.

The correct outgrowth of the ureteric buds and their dichotomous budding is controlled by a network of genes (see for review Costantini and Kopan, 2010) with GDNF/RET signaling as a main factor. GDNF is expressed in the metanephric mesenchyme, and the Ret receptor tyrosine kinase and its co-receptor Gfrα1 in the tip of the ureteric bud. It has been experimentally shown that it is not the expression, but the activity of RET that is decisive for the selection of the ureteric bud outpouching site. The Wnt signaling transducer β-catenin (Marose et al., 2008) and Gata3 (Grote et al., 2008), a zinc finger transcription factor, act together and are pivotal in modulating RET activity at the prospective ureteric bud formation site in the caudal Wolffian duct.

The mode of branching was shown by Osathanondh and Potter (1963) using microdissection. At CS 16 the ampullated tip divides into two branches determining the cranial and caudal pole of each metanephros.
The mode of branching is unique to the kidney, with lateral and terminal bifid branches (Al-Awqati and Goldberg, 1998). The terminal branch can no longer divide since it induces the formation of nephrons.

At CS 19 four to six generations of branching can be observed. Within the metanephrogenic blastema, vesicles form adjacent to the ampulla. Each vesicle eventually differentiates into a tubule and the glomerulus (Fig. 6).

The first three generations of division dilate and fuse to form the renal pelvis, the fourth and fifth form the calyces. The further divisions (6 to approximately 15) generate the collecting ducts. By the 22nd to 23rd weeks of gestation branching is completed. Fig. 6. Branching of the ureteric bud. Semithin section through the metanephros of a 25 mm (CS 22) embryo. Dichotomous branching of the ureteric bud (UB). The ampulla-like blind end (A) induces the formation of vesicles (V) within the metanephrogenic blastema (MB). The vesicles differentiate into tubules and glomeruli (G). Bar = 100 µm.
Differentiation of the Wolffian ducts in male fetuses
The stabilization and further growth of the Wolffian ducts depend on androgen that is produced in the testes of male embryos starting from the eighth week of gestation. Active stimulation is necessary to prevent regression of the ducts and the mesonephric tubules and to induce the differentiation of the epididymides and vasa deferentia. The androgen receptor is first found in the mesenchyme surrounding the duct epithelium and interacts with different growth factors like EGF (epidermal growth factor) (see for review Hannema and Hughes, 2007). Expression of EGF and its receptor can be increased by androgen treatment and vice versa. EGF modulates sexual differentiation by enhancement of AR-mediated transcriptional activity and not by enhancement of AR gene expression (Gupta, 1999).
Development of the epididymis
Epididymal development depends on a cascade of molecular and morphological events controlling the transformation and regression of the mesonephric nephrons and the persistence of the Wolffian duct (Kirchhoff, 1999). In the male, some of the mesonephric tubuli eventually form the ductuli efferentes located in the caput epididymidis, while the Wolffian ducts differentiate into the right and left ductus epididymidis and the vas (ductus) deferens. During the transformation of the mesonephroi into the paired epididymides, two waves of regression are observed. The first wave of regression occurs in the most cranial nephrons, starts before the caudal parts of the mesonephroi are fully developed, and is correlated with an inner descent of the mesonephroi and gonads. Felix (1911) found this wave to be terminated in 21 mm embryos (about CS 20). The second wave of regression includes the caudal part of the mesonephroi persisting as the rudimentary paradidymis. In the third month, the surviving mesonephric tubuli unite with the anlage of the rete testis.

The process of transformation from nephrons into ductuli efferentes remains poorly understood. In some vertebrates, a special mode involves the de novo formation of ductuli efferentes from the Bowman's capsules in the chicken (Budras and Sauer, 1975) or from the dorsal part of the giant nephric corpuscle of the bovine embryo (Wrobel, 2001). The appearance of the apoptotic p53 protein and the antiapoptotic bcl-2 in the mesonephros from the seventh week on (Carev et al., 2006) coincides with the regression on one hand and the survival of some tubules on the other hand.

We investigated the development of the epididymis in human embryos from 14.8 mm (CS 18) to a 170 mm fetus. The CS 18 embryo reveals well-developed mesonephroi (Fig. 4c). The structure of the glomeruli is similar to that of the metanephroi, with a thin capillary endothelium and podocytes. The structure of the proximal tubules resembles that of the proximal tubules of the permanent kidney; however, the distal parts of the mesonephric tubules seem to have only a collecting function.

Shortly thereafter, already in the first wave of regression or at the beginning of the second period, degeneration of the glomeruli and proximal tubuli starts. According to Felix (1911), only the distal parts of the tubuli adjacent to the testis (epigenital tubules) survive. They elongate to form coiled ductuli, which eventually join the rete testis. In our 26 mm embryo (eighth week) long and straight tubules are found near the developing rete testis. More medial sections (Fig. 7a) show nephric corpuscles with fused or degenerating glomeruli and a thickened Bowman's capsule at the testicular side.

The 32 mm fetus (ninth week) exhibits condensed and small glomeruli. Near the anlage of the testicular rete, tubules with narrow or obliterated lumina are visible (Fig. 7b). However, their morphological features do not elucidate whether they are remnants of degenerating proximal tubules or new outgrowths from Bowman's capsules. A mesenchymal sheath forms around the wide Wolffian ducts, a prerequisite for the subsequent elongation and coiling of the Wolffian ducts, since androgens are supposed to act via mesenchymal androgen receptors.

In a 45 mm embryo the paired Wolffian ducts have increased in length, and transformation into the ductus epididymidis starts in the proximal region with the characteristic coiling. During the enormous elongation, up to six meters in the adult epididymis, the duct twists into another direction and folds onto itself. Constraints imposed by the surrounding mesenchymal tissue and the narrow space, which forces the duct to compact in the anterior region, especially in the later corpus of the epididymis, are the supposed driving forces for the coiling (Joseph et al., 2009).

In a 68 mm fetus, the epididymal duct assumes an increasingly coiled arrangement. The blind ends of the prospective ductuli efferentes are dilated.

The 88 mm fetus, 13th week, shows the rete testis on either side fused with the ductuli efferentes. The epididymal duct is highly convoluted, but the distal part remains straight (Fig. 7c). It is lined by a non-specific cylindrical epithelium and is surrounded by many concentric layers of mesenchyme.

In the 170 mm fetus, 21st week, a dense coiling and an increased mesenchymal sheath are found in the anterior region. The mesenchyme is essential for the maturation of the ducts and especially for the formation of the vasa deferentia. At this stage, testis and epididymis establish contact with the jelly-like gubernaculum (Barteczko and Jacob, 2000).
The differentiation of the specific sections of the Wolffian ducts is regulated by the regional expression of Hox genes. In mouse embryos Hoxa9 and Hoxd9 are expressed in the epididymis and vas deferens, Hoxa10 and Hoxd10 in the caudal epididymis and the vas deferens, Hoxa11 in the vas deferens, and Hoxa13 and Hoxd13 in the caudal portion of the Wolffian duct and the seminal vesicles (Hannema and Hughes, 2007).
Differentiation of the distal part of the Wolffian duct
The secretion of testosterone also stimulates the differentiation of the distal parts of the Wolffian ducts into the vasa (ductus) deferentia and the seminal vesicles. As already mentioned, the regional differentiation of the Wolffian duct is related to Hox genes. For example, the Hoxa-10 domain of expression in male mouse embryos has a distinct anterior border at the junction of the cauda epididymidis and the ductus deferens and extends to the sinus urogenitalis (Podlasek et al., 1999).

The development of the straight ductus deferens is characterized by the formation of a thick coat of circular smooth muscle cells.

The seminal vesicles sprout out of the Wolffian duct close to its entrance into the urethral part of the urogenital sinus between the tenth and twelfth week. The common ducts of the vasa deferentia and seminal vesicles are called the ductus ejaculatorii (ejaculatory ducts).

At around the ninth week of gestation, the ureteric buds have separated from the Wolffian ducts and their openings lie superior to those of the Wolffian ducts in the bladder part of the sinus urogenitalis. The common view that the Wolffian ducts are incorporated into the posterior bladder wall to form the trigone is now questioned, and lineage studies have shown that the trigone mesenchyme derives from the bladder musculature (Viana et al., 2007). Apoptosis seems to play a role in ureter transposition. The regulation of the different growth and insertion of the ureteric bud and the distal Wolffian duct (vas deferens) is unknown.
Differentiation of the Wolffian ducts in female fetuses
In the female, the Wolffian ducts degenerate due to the absence of testosterone. Only rudiments persist as Gartner's ducts or cysts running in the broad ligament to the wall of the vagina.
Müllerian ducts
The Müllerian (paramesonephric) ducts are named after the German physiologist Johannes Peter Müller (1801–1858). The formation of the human Müllerian ducts starts in the sixth week, when other organs are already functional. They develop in close proximity to, and by induction of, the Wolffian ducts. In male embryos they regress shortly after their formation under the influence of the anti-Müllerian hormone produced by the Sertoli cells of the testes. In female embryos they differentiate further and form the oviducts (uterine tubes), the uterus and the vagina.
Development of the Müllerian ducts
The formation of the human Müllerian ducts starts in the sixth week (CS 16) as drop-like aggregations of cells beneath the cranial part of the Müllerian or tubal ridges, which correspond to the thickened stripes of coelomic epithelium located near the Wolffian ducts (Jacob et al., 1999). Here, at the transition zone between pro- and mesonephros, a discrete population of cells within the coelomic epithelium gives rise to the epithelium of the paired Müllerian ducts, as shown by lineage tracing studies (Guioli et al., 2007). In the 11.5 mm embryo, ostium-like indentations of the coelomic epithelium were observed, marking both so-called funnel regions. Placode-like thickenings and deepenings of the coelomic epithelium form the anlagen of the Müllerian ducts. In some vertebrates, including humans, these cranial parts of the Müllerian duct are supposed to contain remnants of nephrostomes from the last pronephric and the first mesonephric tubules (for discussion see Jacob et al., 1999; Wrobel and Süß, 2000). A solid cord of cells forms from each ostium or funnel region and rapidly grows caudally in close vicinity to the Wolffian ducts. It has been shown experimentally that in vertebrates the Wolffian ducts are required for the induction of Müllerian duct formation, and that in their absence no Müllerian duct can develop.
Labeling of dividing cells with BrdU in chick and mouse embryos provides evidence that the high cell proliferation of the Müllerian duct epithelium and of the coelomic epithelium of the funnel region can be regarded as the motor of the caudal extension of the Müllerian ducts (Jacob et al., 1999; Guioli et al., 2007), which range from 330 µm in length in a 12.5 mm embryo to between 1220 and 1440 µm in a 17 mm embryo (Felix, 1911). Interestingly, in chick and human embryos two to four accessory openings into the coelomic cavity were observed in the cranial part of the Müllerian ducts (Felix, 1911; Jacob et al., 1999). Since these accessory funnels exist only during the robust expansion phase of Müllerian duct development, they probably supply additional cells from the coelomic epithelium at the funnel field. Whether they are remnants of pronephric tubules remains to be elucidated.
The major part of each Müllerian duct anlage canalizes (Fig. 8a), with the exception of the caudal tip. In 17 to 21 mm human embryos, at the point where the ducts are in close contact, no basal lamina is present between each Müllerian and Wolffian duct (Fig. 8b). In this way the Wolffian ducts guide the Müllerian ducts to the lumbar region. However, Müllerian and Wolffian duct epithelium can be distinguished by their distinctive morphological features (Laurence et al., 1992). The Müllerian ducts are more pseudostratified and display a larger number of elaborate microvilli at their luminal surfaces. The Wolffian ducts exhibit abundant intracytoplasmic glycogen. Furthermore, immunohistochemical investigations in chick and mouse embryos have shown that the Müllerian ducts differ from the Wolffian ducts in their expression of the mesenchymal marker vimentin, characterizing them as mesothelial, while the Wolffian ducts are true epithelial tubes expressing cytokeratin (Jacob et al., 1999; Orvis and Behringer, 2007). From these and other experimental and molecular data, any cell contribution from the Wolffian ducts to the Müllerian ducts could be excluded (Jacob et al., 1999; Guioli et al., 2007; Orvis and Behringer, 2007).
While the Müllerian ducts grow caudally, their cranial parts are separated from the Wolffian ducts by circular mesenchymal layers (Fig. 8c) derived from the dissolved tubal ridges.
In the 22 mm embryo (CS 21), the Müllerian ducts cross the Wolffian ducts to extend medially and join each other. In the midline, the two Müllerian ducts run caudally and fuse into a single tube, the uterovaginal canal. At CS 22 to 23 this canal inserts at the separated anterior part of the cloaca, the sinus urogenitalis. Here the mesenchyme of the ducts proliferates and protrudes the wall of the sinus urogenitalis, forming the Müllerian tubercle.
At first the fusion of the Müllerian ducts is only external, with the formation of a common basal lamina, because a septum still separates the fused ducts. A single lumen was found at CS 23 (Hashimoto, 2003).
Genes that are required for Müllerian duct formation are also found in kidney development: Lim1 specifies cells in the coelomic epithelium (Kobayashi et al., 2004) and Wnt-4 induces the invagination of these cells (Vainio et al., 1999). The Wolffian duct also induces the expression of Pax2 in the Müllerian duct, although the anterior part of the Müllerian ducts is initially formed in Pax2 mutants (Torres et al., 1995), indicating an autonomy of the funnel region (for review see also Massé et al., 2009).
Regression of the Müllerian ducts in male fetuses
Regression of the Müllerian ducts in males is due to the production of anti-Müllerian hormone (AMH), also named Müllerian-inhibiting substance (MIS), by the Sertoli cells of the testes. A hormone, distinct from testosterone, governing the development of the male reproductive ducts was first postulated by Jost (1953). Secretion of AMH, a member of the transforming growth factor-β (TGF-β) family, starts in the eighth week of gestation and provokes the irreversible regression of the Müllerian ducts in the eighth and ninth week (Rey, 2005).
AMH induces apoptosis within the Müllerian duct epithelium through a paracrine mechanism, binding to the AMH type 2 receptor (AMHRII) expressed in the mesenchyme around the epithelial tube. Allard et al. (2000) found a cranio-caudal gradient of AMHRII in the peritubal mesenchyme followed by a wave of apoptosis. They furthermore suggest that β-catenin, which plays a role in Wnt signaling, mediates apoptosis. They also described that, besides apoptosis, an epithelio-mesenchymal transformation is important for regression.
Apoptosis requires the disruption of the basal lamina, which correlates with the loss of fibronectin and the expression of the metalloproteinase 2 gene (Mmp2) (for review see Massé et al., 2009).
The Müllerian ducts do not completely disappear.The most cranial parts are supposed to persist as appendices testis (see below) and the caudal parts as prostatic utricles.However, Shapiro et al. (2004) concluded from their immunohistochemical studies that the utricle forms as an ingrowth of specialized cells from the dorsal wall of the sinus urogenitalis.
Differentiation of the Müllerian ducts in female fetuses
In the female, where AMH is lacking, the uniform Müllerian ducts differentiate into very specific segments that give rise to the uterine tubes (oviducts), the uterus, the cervix and the vagina. This specification along the antero-posterior axis is due to a specific Hox code. As during the differentiation of the Wolffian duct in male fetuses, Hox genes are expressed along a spatial and temporal axis. In the female reproductive tract the HOXA/hoxa genes 9, 10, 11 and 13 are expressed (Taylor et al., 1997), and their pattern is highly conserved between mouse and human. Furthermore, members of the Wnt family are necessary for the correct patterning and differentiation of the female reproductive tract. For example, loss of function of Wnt-7a was reported to result in a partial posteriorization of the female reproductive tract; specifically, the oviduct acquired characteristics of the uterus and the uterus characteristics of the vagina (Miller and Sassoon, 1998).
Differentiation of the uterine tubes
The non-fused cranial parts of the Müllerian ducts form the uterine tubes (oviducts), reaching from the abdominal ostium, with the anlage of the fimbriae, to the insertion of the gubernacula Hunteri (later the round ligaments) (Fig. 9a). During the twelfth week of gestation, the simple columnar epithelium grows more than the surrounding mesenchyme, thus forming the characteristic folding of the epithelium lining a stellate lumen (Wartenberg, 1990). The paratubal mesenchyme proliferates and differentiates into the smooth muscle layers and the lamina propria.
Differentiation of uterus and cervix
The uterus develops from the fused upper parts of the Müllerian ducts, but the fusion is at first not complete, since a thick septum remains at the fundus of the uterus between weeks 13 and 20 of gestation (Figs. 9a and b). According to Muller et al. (1967), fusion of the ducts and resorption of the septum begin at the region of the isthmus and proceed simultaneously in the cranial and caudal directions. Incomplete fusion of the Müllerian ducts or incomplete resorption of the septum gives rise to many malformations. Any form of duplication of the uterovaginal canal may be found, from uterus bicornis to complete duplication of uterus and vagina (uterus didelphys with double vagina).
The differentiation of the mesenchymal wall of the uterus into smooth muscle starts, as in the uterine tubes, during the third month (Figs. 9b and c). Initially the epithelium does not proliferate as strongly as the oviduct epithelium, and the uterine lumen is lined by a smooth surface without folds.
The region-specific differentiation of the epithelium within the oviducts (uterine tubes), uterus, cervix and vagina seems to occur perinatally, also under the influence of the above-mentioned Hox genes. The last step in the differentiation of the epithelium is the formation of the uterine glands. According to studies of Mericskay et al. (2004) in mice, Wnt5a from stromal cells provides a specific signal that permits the luminal epithelium to form glands.
Differentiation of vagina
The development of the vagina remains a matter of controversy. The solid caudal end of the uterovaginal canal inserts at the sinus urogenitalis between the openings of the Wolffian ducts and forms the Müllerian tubercle. Within this tubercle, the tissues of the Wolffian and Müllerian ducts intermingle, making it difficult to define their origin. Since the classic study of Koff (1933) on the development of the human vagina, it has generally been believed that the cranial part of the vagina derives from the Müllerian ducts and the caudal part from the sinus urogenitalis. Morphological studies of Forsberg (1965) argue for this view, since the so-called Müllerian vagina initially has a pseudostratified columnar epithelium. Furthermore, human males with complete androgen insensitivity syndrome but with functional AMH develop a shortened vagina. New genetic and experimental studies contradict this view (for review see Cai, 2009). In the case of the shortened vagina it has been shown that the caudal part of the Müllerian duct is insensitive to AMH and, under the influence of androgen, contributes to prostate development (Cai, 2009). Analysis of the vagina in testicular feminization mutant mice by Drews et al. (2002) demonstrated in male embryos that the entire vagina arises from the Müllerian ducts, which grow caudally along the sinus urogenitalis together with the Wolffian ducts. In the male embryo, androgens binding to androgen receptors in the mesenchyme of the caudal Wolffian duct soon stop this caudal migration. Cai (2009) reviewed morphological, genetic and molecular studies and presented a model of the formation of the caudal vagina. The caudal ends of the Müllerian ducts insert into the wall of the sinus urogenitalis, in which BMP4 is strongly expressed after induction by Shh (Sonic hedgehog). BMP4 mediates the caudal extension of the uterovaginal canal.
The distal part of the uterovaginal canal, which extends caudad, is at first a more or less solid cell plate known as the vaginal plate. However, Terruhn (1980) found by injection technique that already at the 14th week of gestation the vagina, as well as the uterus, has a lumen (compare Fig. 9c). Later, at the 26th week of gestation, a functional plugging of the endocervical canal was observed, presumably due to the secretory activity of the epithelium.
Remnants of the Müllerian or Wolffian ducts (hydatids) in adults
Hydatids of the genital organs were first discovered by Morgagni. They are remnants of the cranial parts of the Müllerian ducts and Wolffian ducts. In males, the frequent appendices of the testes develop from the funnel region of the Müllerian ducts. Due to a cranial crossing-over of the Müllerian and Wolffian ducts, the Müllerian ducts come to lie close to the upper poles of the testes (see Fig. 11a). They are not pedunculated and contain connective tissue and many blood vessels.
Different types of appendices epididymides are found.They all arise from the cranial ampullated ends of the Wolffian ducts and are in most cases pedunculated (Jacob & Barteczko, 2005).They may be vesicular or solid and often reveal a twisted stalk (Fig. 10a).
Likewise, in females, hydatids are found that are often pedunculated. They may occur at the fimbriae of the Fallopian tubes, deriving from the Müllerian ducts, or as paratubular appendices vesiculosae or hydatids of Morgagni, deriving their origin from the Wolffian ducts or mesonephric tubules (Fig. 10b).
The clinical relevance of these structures lies in torsion of the pedicle, which presents as an acute syndrome within the scrotum or abdomen. Tumors deriving from these vestigial structures have also been described.
Summary and conclusion
The Wolffian ducts are the first structures of the urogenital system to appear, and their migration and inductive properties are critical for the development of the permanent kidneys and of the genital ducts in males and females. The development of the gonads, however, occurs independently of the Wolffian ducts.
Shortly after the onset of somite differentiation, the Wolffian duct anlagen separate from the intermediate mesoderm. During their caudal migration they induce the pro- and mesonephroi within the ventral part of the intermediate mesoderm. Near the caudal entrance into the cloaca (sinus urogenitalis), a ureter bud sprouts from each Wolffian duct and grows dorso-cranially to join the metanephric blastema (Fig. 11a). Each ureter bud divides in a characteristic dichotomous manner and forms the ureter, pelvis, calyces and collecting ducts of the permanent kidney (Fig. 11b and c).
The Wolffian ducts need androgens for further differentiation. In males, each duct forms a coiled ductus epididymidis and the straight vas deferens (Fig. 11b). Sprouting of the seminal vesicles occurs near the urogenital sinus.
The ductus epididymidis, together with some persisting tubules of the mesonephros, differentiates into the epididymis, which via the rete testis is in close connection with the testis, enabling transport and maturation of spermatozoa. Shortly before birth, the epididymis descends into the scrotum together with the testis.
In females, the Wolffian ducts do not differentiate further but persist as rudimentary Gartner's ducts in the broad ligament lateral to the uterus. The Gartner's ducts are generally found lateral to the uterus, but they may also reach down to the wall of the vagina and can give rise to cysts (Fig. 11c). The Müllerian ducts appear later in organogenesis, but like the Wolffian ducts they develop at first in a similar manner in male and female embryos (Fig. 11a). The Müllerian ducts need induction by the Wolffian ducts, with the exception of the most cranial funnel region, which is supposed to include some nephrostomes from the regressing pronephros. The cells of the Müllerian ducts derive from the splanchnopleure, more precisely from the bilateral thickened stripes of coelomic epithelium, the Müllerian ridges. No cellular contribution from the Wolffian ducts was observed. The Müllerian ducts use the Wolffian ducts as a guide to grow caudad, but in the lumbar region they cross the Wolffian ducts to meet in the midline and form the uterovaginal canal (Fig. 11a). In females, they then differentiate into the uterine tubes, and the fused part forms the uterus and vagina (Fig. 11c). New concepts of vaginal development contradict the classic view that the caudal part of the vagina derives from the urogenital sinus and instead support the view that the origin of the vagina can be traced solely back to the Müllerian ducts.
In males, AMH induces apoptosis and epithelio-mesenchymal transformation of the Müllerian ducts. Only the most cranial and the most caudal parts frequently persist as the appendix testis and utriculus prostaticus (Fig. 11b), demonstrating special properties of these regions.
The differentiation of the indifferent ducts into their specific adult structures is listed in Table 1. From this it becomes clear that all anlagen of the urogenital system are identical at the indifferent stage of development. Female differentiation is more passive, while male differentiation requires genetic and hormonal factors. Early developing structures may leave their trace in the adult as vestigial organs, which can be of clinical interest.
A better understanding of the organogenesis of the genital ducts, under their well-orchestrated genetic control during critical periods of development, would greatly help in diagnosing congenital malformations early and would serve as a guideline for designing therapeutic modalities for the treatment of disorders of the urogenital system.
Table 1. Anlagen of the genital ducts and their differentiation into male and female organs and vestigial structures (in parentheses).
Fig. 1. Formation of the Wolffian ducts. a) Dorsal view of the Wolffian duct at stage 10 HH after extirpation of the ectoderm on the right side. Note the anlage of the Wolffian duct (arrows) adjacent to the last somite (S) and the anterior part of the segmental plate. N, neural tube; SP, somatopleure. Bar = 100 µm. b) Transverse semithin section through the mesenchymal anlage of the Wolffian duct (arrow). IM, intermediate mesoderm. Bar = 25 µm. c) Canalized epithelial duct (W) on the ventral part of the intermediate mesoderm (IM). E, ectoderm; S, somite; SP, somatopleure. Bar = 25 µm. d) Pax2 expression in the Wolffian duct anlagen of a stage 10 HH chick embryo (ten somites) as shown by in situ hybridization. Bar = 0.3 mm.
Fig. 2. Extirpation of the caudal tip of the Wolffian duct. Transverse section through a chick embryo sacrificed two days after extirpation of the caudal part of the Wolffian duct. Control side with well-developed mesonephros (M), Wolffian duct (W), and gonadal anlage (G). On the operated side (left) the mesonephros shows only rudimentary tubules (vv) and a smaller gonadal anlage (arrow). Bar = 100 µm.
Fig. 3. Migration of the Wolffian ducts. a) Scanning electron micrograph of the tip region of the Wolffian duct from a stage 13 HH (Hamburger–Hamilton) chick embryo. Note the cell processes, which are in contact with the extracellular matrix. Bar = 10 µm. b) In situ hybridization of a stage 9 HH chick embryo showing the CXCR4 expression domain in the posterior half of the most caudal somites and in the intermediate mesoderm (white arrow). Bar = 0.2 mm. Since the migration of the Wolffian ducts is a crucial step in the development of the urogenital system, the search for the molecules that guide migration and regulate insertion of the ducts is still ongoing. Research over the years has brought some molecules to light.
Fig. 4. Development of the mesonephros. a) Sagittal section through the cranial part of the mesonephros of a 7.5 mm (CS 14/15) embryo with the longitudinal Wolffian duct (W) on the left side. Arrows, openings of tubules (T) into the duct; G, glomerulus. Azan staining. Bar = 50 µm. b) Sagittal section of the whole mesonephros (Ms) of a 10 mm (CS 16) embryo. Note the serially arranged nephrons with tubules and mesonephric corpuscles. W, caudal dilated part of the Wolffian duct. HE staining. Bar = 500 µm. c) Transverse semithin section of a 14.8 mm (CS 18) embryo. W, Wolffian duct; asterisk, secretory part of a tubule; T, collecting part; G, glomerulus. Bar = 50 µm.
Fig. 9. Formation of oviduct, uterus and vagina. a) Scanning electron micrograph of the broad ligament of a 78 mm CRL (crown–rump length, about 12 weeks) female fetus. The upper border of the ligament is formed by the uterine tubes (UT) and the fundus of the uterus (FU). The insertions of the gubernacula (G) mark the transition from oviducts to uterus. O, ovary; R, rectum. Bar = 1 mm. b) Uterus anlage of a 4-month fetus. Note the thick layer of smooth musculature (SM) and the remnant of the septum (arrow). Gartner's ducts (G) are found within the uterine wall. Bar = 100 µm. c) Uterus and vagina anlage of a 150 mm (4-month) fetus. CU, corpus uteri; Ce, cervix; V, vagina; R, rectum; B, bladder; U, urethra; arrow, excavatio rectouterina. Bar = 1 mm.
| 9,095.8 | 2012-03-02T00:00:00.000 | [
"Biology",
"Medicine"
] |
Alternate architecture for the Origins Space Telescope
Abstract. We report on our investigation into adapting the design of the James Webb Space Telescope (JWST) to the needs and requirements of the Origins Space Telescope. The modifications needed to the equipment and insulation of the JWST design to achieve the 4.5-K design temperature for Origins are introduced and detailed. The Webb thermal model is modified to the Origins design and used to predict the heat loads at 18 and 4.5 K. We also describe the development of the JWST Mid-Infrared Instrument’s cryocooler needed to reach the temperature necessary for Origins. The capabilities of the various modified cryocoolers are discussed. We show that three modified coolers are needed to achieve the performance required for Origins. Finally, we show that the baseline instruments and needed coolers can be accommodated for volume, mass, and power in the Webb architecture.
Introduction
In order to perform its scientific mission, the Origins Space Telescope must have optics that operate at 4.5 K with instruments at or below this temperature. 1 Given the challenges, real and perceived, of large cryogenic missions such as Spitzer and the James Webb Space Telescope (JWST), realizing a mission like Origins might sound like engineering or science fiction. We do not subscribe to this view and believe that large cryogenic missions are indeed possible. The primary objective of our report is to show that, as exotic and hard as it sounds, a 4.5-K telescope of the size needed for Origins is realizable and technologically feasible. From the other articles in this special section, it is clear that the Origins mission has a strong baseline design for the Origins observatory, telescope, and instruments. This report is intended to reinforce the conclusion of these studies, namely that achieving a 4.5-K, 6-m class observatory is possible, by showing that there is another viable architecture. It should be noted that our study concentrates mainly on the thermal performance of the telescope optics and is not a complete mission study.
We provide an argument reinforcing that of the mission study team by analyzing a modified design for the Webb telescope and showing that operational temperatures of 4.5 K can be reached with a small, finite number of coolers, similar to what the Origins study found. 1,2 We also define a path to modify current state-of-the-art coolers to provide the necessary lift to reach 4.5 K.
The foundation of our Origins telescope conceptual design uses the design of the JWST as a point of departure (see Fig. 1). We detail the modifications needed to transform the Webb telescope's design into a concept for Origins telescope.
We describe in some detail how to modify an existing mature cryocooler to meet the cooling requirements of Origins, underscoring the argument that reaching 4.5 K is a challenge of engineering design and not one of technology. This report concludes with a discussion of packaging the hardware for our JWST-derived Origins design into the mass and volume allocation of JWST. A secondary objective of this work is to demonstrate that a well-considered telescope architecture, such as JWST, can be used with small changes to meet different science missions. The motivation here is quite simple: reuse the JWST to the extent possible, resulting in a 6.5-m diameter primary to satisfy the needs of Origins. A mission requiring a cryogenic payload such as Origins can reuse the investments made in realizing the JWST sunshield. If the basic architecture and some of the hardware are reused, the JWST-derived Origins mission would avoid much of the non-recurring expense involved in the design of the original mission. Large-scale design reuse is one means of achieving the increase in engineering productivity necessary to maintain the viability of strategic missions under a fixed budget. 3,4 In some respects, this study is déjà vu. A previous decadal study called Single Aperture Far Infra-Red (SAFIR), another large-aperture cryogenic mission, pursued the same path of concept development as we are investigating. This path is adapting the JWST design for a far-IR mission with the addition of active cooling. 5,6 The SAFIR study was performed in the early 2000s, long before JWST's Critical Design Review, which was held in 2010. The SAFIR study made two seminal assumptions: the first is the maturation of the planned cryocoolers for JWST's Mid-Infra-Red Instrument (MIRI), and the second is the success of the JWST design. At the time of this writing, in 2020, both of these design assumptions have been validated; JWST is in final system preparations for launch and the MIRI cryocooler is integrated and on board. This paper is a more detailed and nuanced repetition of the analysis performed for SAFIR, based on validated flight JWST thermal models and the defined Origins instruments.
In adapting the Webb telescope design for Origins, we have taken a step-by-step approach. The first step in this process is to realign the Webb design to the challenge of the Origins mission, while changing as little of the existing design as possible. The modifications to the Webb architecture and design are discussed and detailed in Sec. 2. Our Webb-based Origins telescope design is then implemented into a system-level thermal model derived from the validated Webb system thermal model that is used to calculate temperatures and heat loads. The locations in the design where heat will be actively removed are identified and the heat loads at those points calculated. Summing over those loads informs the total heat loads to be removed and determines the number of coolers needed.
The Webb cryocooler, originally designed for the MIRI instrument, achieves ∼6.2 K at the cold head. 7 We extensively discuss the clear maturation path to extending Northrop Grumman's MIRI cooler to 4.5 K operation in Sec. 3. In Sec. 4, we show the packaging needed to accommodate the Origins equipment, coolers, cooler radiators, and instruments in the existing JWST volumes. Finally, we show that the mass properties of the redesigned Webb continue to meet launch and momentum management needs, implicitly making the argument that the resulting design can be launched and operated in the Sun Earth L2 halo orbit as planned for Webb and Origins.
Changes to Transform Webb Telescope into the Origins Telescope
This section describes the changes made to the Webb telescope configuration to allow it to meet the thermal requirements of the Origins telescope. This reconfiguration was also done with the subordinate objective of making the fewest changes to the Webb design. The "genetic proximity" of the two designs maximizes design reuse and is at the foundation of demonstrating that a well-considered space observatory architecture can be (largely) re-employed to serve other science missions, thus providing a path to increased engineering productivity and reduced mission development times. 4 Figure 2 shows a picture of the proposed configuration and identifies the major changes to the Webb architecture needed to transform it into a design for Origins.
Architectural Changes Transforming Webb into Origins
The primary change from JWST to Origins is the addition of active cooling, the result of which is telescope optics at 4.5 K. The low operating temperature of the Origins telescope is the foundation of a vast increase in sensitivity that is the hallmark of the Origins concept. Thus, achieving these mirror temperatures is our design's chief objective. 1 The 4.5-K operating temperature cannot be achieved through a passive design. Achieving 4.5 K for the mirrors and instrument interfaces requires the addition and accommodation of a number of cryocoolers. A collateral goal of the Origins design effort is to limit the number of cryocoolers needed, thus minimizing the resources needed to make Origins feasible.
To provide an image of the thermal system, consider Fig. 2, a thermal schematic representation of a cooled system. It illustrates the design parameters of the cooling problem: discrete thermal stages or zones characterized by a specific temperature T_i, the amount of heat to be removed at each stage Q_i^r, and the heat conducted and radiated into the i'th node or stage of the system Q_i^ext. The specific power (SP) and specific mass (SM) are defined as the power or mass of a cooler needed to extract a watt of power at a given temperature T. For cryocoolers, SP and SM are typically represented as power laws and have the form
SP(T) = A T^α   (1)
for specific power. Figure 3 shows a plot of the known cryocoolers for space and shows the SP curve. SM is represented by
SM(T) = B T^β + γ   (2)
In Eq. (2), γ represents the irreducible mass for any cooler regardless of temperature. The system optimization is to minimize system power and mass while achieving the required temperature at T_4. Writing out the cooling system mass M and power P yields
M = Σ_i δ_i SM(T_i) Q_i^r   (3)
P = Σ_i δ_i SP(T_i) Q_i^r   (4)
In Eqs. (3) and (4), the sums run over the thermal stages, and δ_i has the value of 0 if there is no active cooling for the i'th stage and 1 if there is active cooling. Equations (1)-(4) express the thermal system design problem. The designer must determine the number of stages and the temperatures of these stages to minimize the overall power P and mass M of the cooling system. Figure 3 shows that α ∼ −2.25, implying that simply removing heat only at the coldest point is not a good option for the design of an active cooling system like Origins. System optimization is not simply a matter of increasing Q_4^r until T_4 is reached; it requires a careful strategy of removing heat at higher temperatures, where it can be removed more efficiently. The Origins architecture seeks to thermally isolate the coldest stage, the mirrors, limiting the heat that must be removed by the cryocoolers at the lowest temperature, thereby minimizing their number and the power, mass, and volume needed for cooler accommodation.
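As a rough numerical illustration of the point about α, the sketch below (in Python, with an arbitrary prefactor, so only the ratio between temperatures is meaningful) evaluates the specific-power scaling implied by the α ∼ −2.25 fit at 4.5 and 18 K.

```python
# Illustrative only: the exponent alpha ~ -2.25 is quoted in the text for the
# specific-power fit; the prefactor A is arbitrary, so only ratios are meaningful.
ALPHA = -2.25

def specific_power(T, A=1.0, alpha=ALPHA):
    """Input power required per watt of heat lifted at temperature T (arbitrary units)."""
    return A * T ** alpha

ratio = specific_power(4.5) / specific_power(18.0)
print(f"Lifting 1 W at 4.5 K costs about {ratio:.0f}x the input power of lifting it at 18 K")
# (4.5/18)^-2.25 = 4^2.25 ~ 22.6, which is why parasitics should be intercepted at 18 K
# wherever possible.
```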
The key design change from JWST to Origins is moving the warm (near-room-temperature) instrument electronics from the cold side of the observatory. On Webb, the warm instrument electronics are located on the cold side of the observatory, proximate to the optics. This architecture presented a major challenge for the thermal design of JWST. Moving these warm electronics to the spacecraft is a critical design modification of the Webb design needed for Origins. Our Origins design places the warm-side instrument electronics in the spacecraft bus, physically well removed and isolated from the coldest areas of the observatory, thus removing the major source of conducted and radiated parasitic heat on the cold side of the Webb design. We also removed from the parent Webb design the harness radiator, instrument harnesses, and instrument radiators, both fixed and deployed, as they are not required in the Origins design. To minimize parasitic heat loads to the coldest stage, the JWST insulation on the instrument enclosure and backplane, which is a mixture of MLI and black Kapton SLI, is replaced in our Origins design. The insulation for the Origins design completely covers the instrument volume with high-performance MLI to minimize refrigerant loads for the instruments. For Origins, the backplane is covered with MLI with low-ε surfaces facing the mirrors, limiting radiative coupling between the structure and mirrors. The JWST insulation on the Aft Optics Subsystem (AOS) is also replaced. For Webb, the AOS is covered with black Kapton SLI with Kevlar mesh; in our Origins design, the AOS is insulated with MLI to better thermally isolate it. Origins cools the fine steering mirror (FSM) and the tertiary mirror using heat straps connected to a 4.5-K cold finger, replacing the AOS radiators employed in the Webb design.
The JWST wiring harness, which runs the full extent of the telescope from the room temperature spacecraft to the mirrors, is attached to the backplane and makes use of copper conductors. In the Webb design, the use of low-thermal-conductance phosphor bronze (PhBr) is largely restricted to the transition harness running from the bottom to the top of the deployable tower assembly (DTA), where harness temperature gradients are the largest on Webb. The Origins harness would employ only PhBr or other low-thermal-conductance material throughout the harnesses. The exclusive use of PhBr in the harness will drastically lower the conducted parasitic heat load throughout our Origins design.
On JWST, the DTA surfaces are bare; in the Origins design the surfaces are covered with MLI and low-ε surfaces to prevent heat from the 300-K vibration isolator assembly, near the base of the DTA, from radiating into the cold side.
Thermal Stages and Loads
The validated JWST thermal model has over 118,000 nodes and was modified as described in Sec. 2.1 to represent our Origins design. With this model, we calculated the temperatures and heat flows through the Origins design. Our Origins design has two temperature stages, or temperature intercepts, at 18 and 4.5 K. At each intercept, the heat is removed by active refrigeration by the cryocoolers; the loads at each of these locations are determined by the thermal model.
The 18-K heat loads are considered first. The surfaces shown in yellow on the backplane support fixture in Fig. 4 are cooled to 18 K to reduce the required cooling load for the telescope optics and science instrument interfaces. This is done by transferring heat from these locations to a limited number of cold fingers using high-purity aluminum straps. The total heat to be removed from all locations on the telescope structure at 18 K is 203.9 mW with no additional margin (as yet) included.
Several other locations are also in the 18-K stage. On Webb, all thermally significant harnesses that run from the warm spacecraft bus to the cold telescope must pass through an electrical interconnect panel called ICP6. These harnesses passing through ICP6 include those for controlling the primary mirror segment assemblies (PMSA), SM, and FSM, as well as telescope deployment mechanisms and heaters, temperature sensors, etc. Cooling ICP6 to 18 K requires 339 mW of power to be removed, larger than the telescope structure loads. Origins also calls for the cooling of the Cold Junction Box (CJB), located near ICP6 on the underside of the floor of the backplane support. The harnesses for controlling the PMSAs, SM, and FSM pass through the CJB. Cooling the CJB to 18 K helps augment cooling at the 18-K locations and further reduces the heat load on the mirrors operating at 4.5 K by removing potential parasitics at a higher temperature, where they are more efficiently removed. The predicted 18-K heat load on the CJB is 38 mW.
Summing up all of the heat loads at 18 K, including those from the structure (204 mW), ICP6 (339 mW), and the CJB (38 mW), produces a total of 581 mW.
Let us now examine the 4.5-K loads. The thermal loads on the optics are also calculated directly from the Origins thermal model. Figure 5 shows the heat loads on the primary mirror.
The total 4.5-K load from the primary mirror, without margin, is 118.8 mW; the loads on the individual mirror segments are shown in Fig. 5. The very noticeable "hot spot" in the mirror is due to an asymmetry in the design of the Webb harness. This hot spot is clearly an area that can be investigated for further improvement, especially given the highly non-linear dependence of radiated stray light. We have investigated the performance of the PM as shown; Lightsey et al. 8 provided an assessment of the stray light performance of this configuration.
The 4.5-K loads on the other telescope optics are 9.9 mW for the SM, 1.7 mW for the TM, and 4.7 mW for the FSM. This gives an unmargined total heat load of 135.1 mW at 4.5 K for the telescope optics.
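For convenience, the short sketch below simply totals the stage-by-stage loads quoted in this section; the numbers are copied from the text and carry no added margin.

```python
# Stage-by-stage heat loads quoted in the text (mW), with no margin applied.
loads_18K = {"telescope structure": 203.9, "ICP6": 339.0, "CJB": 38.0}
loads_4p5K = {"primary mirror": 118.8, "SM": 9.9, "TM": 1.7, "FSM": 4.7}

print(f"Total 18 K load:  {sum(loads_18K.values()):.0f} mW")    # ~581 mW
print(f"Total 4.5 K load: {sum(loads_4p5K.values()):.1f} mW")   # 135.1 mW
```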
Cryocoolers
The changes in configuration described in Sec. 2 are necessary for the passively cooled ∼40-K Webb telescope to be capable of being actively cooled to the temperatures needed for Origins. Reaching the 4.5-K operating temperature requires active cooling via cryocoolers. This section details the developments, needed and possible, to extend the current state of the art to the levels of performance needed for Origins. The maturation of coolers is also the subject of a companion paper in this special issue. 9 Following the description of cryocooler development paths, we present an analysis that determines the number of cryocoolers needed to meet the required loads defined in the previous section.
Path to a 4.5-K Cooler
The cryocoolers needed for Origins do not yet exist; their development is part of the planned activities. 9,10 In this section, we detail our thoughts on how the MIRI cooler developed for JWST could be further developed to meet the needs of Origins.
Our design paradigm of maximum reuse of the Webb architecture for Origins suggests that the cryocooler used for the MIRI instrument will be re-employed for Origins, meaning that no entirely new cooler development is needed for Origins as was the case for Webb. The Webb MIRI cooler is designed to reach 6.25 K, not 4.5 K, and cannot be used unmodified to meet the needs of the Origins design. The required lift for the Webb MIRI cooler is 55 mW at 6.25 K and 232 mW at 18 K. The MIRI cryocooler is the state of the art and is our point of departure for a cooler for Origins.
Northrop Grumman has developed and manufactured a large number of space flight cryocoolers covering the temperature range from 6 to 200 K. 11 The cryocooler that we modeled in this study is based on modifications to the MIRI flight cooler manufactured by Northrop Grumman Space Systems for NASA's JWST shown in Fig. 6. [12][13][14] As a result, the modeled cooler is a relatively mature Technology Readiness Level 6 (TRL 6) and includes major subsystems that are TRL 8. The MIRI cryocooler was designed to cool a sensor and its shields to 6.2 and 18 K, respectively. We will show that with minor modifications this cooler will meet the needs of Origins. Figure 7 shows a block diagram of the MIRI cooler. The cooler is a hybrid pulse tube/Joule Thomson (JT) cooler. A three-stage pulse tube cooler precools the circulating helium gas for the lowest temperature JT stage. The three-stage pulse tube cooler was originally designed as a 10-K cooler that can also provide additional cooling at each of its three stages for other components including thermal shields. 15 The pulse tube and JT coolers have independent working gases and independent gas volumes. For this study, no changes were made to the pulse tube precooler-all changes were made to the JT cooler.
Fig. 6. The JWST MIRI cooler and its main components.
The existing MIRI JT cooler uses the TRL 9 Northrop Grumman High Efficiency Cryocooler (HEC) compressor with the addition of rectifying reed valves to pressurize and circulate the 4He JT gas. It acts as a vibrationally balanced single-compression-stage compressor. The JT recuperators (R2, R3, and R4), through which the JT stage's pressurized helium flows, are precooled at each of the three pulse tube stages. The precooled helium then passes through
the JT recuperator (R1) and is further cooled to 6 K at the JT expander. The bypass valve can be used to precool a large heat capacity load, if necessary. The fact that the JT cooling stage can be located many meters from the basic cooler as it is on MIRI is especially relevant to the cooling of a large reflector.
The MIRI cooler can be modified to operate efficiently at lower temperatures, as previously described, by adding another JT compressor to act as an additional compression stage, increasing the recuperator tube size modestly, and optimizing the length of the JT restriction. 16 These modifications are indicated in the red text in Fig. 7. In the previous report, a point design for a large telescope cooled to 4.5 K was presented. 16 In this current study, we extend this modeling to provide predictions of the performance of the modified MIRI cooler over a broad range of conditions suitable for supporting a range of potential future missions such as Origins.
For this modeling effort, we changed the operating conditions and/or changed the working fluid. In addition, we modeled changing the single-stage JT compressor to a multi-stage higher pressure ratio JT compressor. In all cases, we changed the operating conditions of the PT precooler in order to optimize precooling for the JT cooler but made no hardware changes to the PT precooler. For the JT cooler working fluid, we modeled the use of 4 He for temperatures above 4 K and the use of 3 He for temperatures below 4 K where it becomes advantageous.
The model used in this study was previously used in the development of the MIRI cooler and has been compared against measurements from that cooler. 14 Both a detailed SAGE model and a reduced model were developed and compared against the measured performance of the MIRI cooler. For this work, the reduced model and NIST REFPROP 9 fluid properties were used. 17 The current MIRI cooler was designed for cooling a primary load at 6.2 K with an additional upper-stage heat exchanger for remotely intercepting parasitic loads at 18 K. Figure 8 shows the predicted cooling versus input power of this configuration with its single-compression-stage JT compressor when 4He gas is used as the working gas for cooling at 6 and 18 K. The model includes the power dissipation of the MIRI flight electronics, hence we report the power going into the cooler's drive electronics, i.e., the spacecraft bus power. Based on the ∼5% to 10% difference between the model and measurements over the range of measurements shown in Fig. 8, we expect that the modeling results for the proposed configurations have a similar ∼±10% uncertainty band for the cases that use 4He, with higher uncertainty for the configurations that use 3He.
The curves with circle symbols are model predictions; the square symbols are measurements made with the MIRI cooler at an intercept load of 335 mW and correspond to the dashed-line model predictions. Next, we show the predicted performance of the cooler when we use 4He and augment the existing single-stage JT compressor with an additional compressor stage with larger swept volume. This allows achieving the same mass flowrates as the MIRI cooler but with lower input pressures and a larger compression ratio, thus allowing lower temperature operation. For this configuration, we can use a design variant of the MIRI compressor that has larger pistons. We have previously demonstrated cooling at 4.5 K with the development model of the MIRI cooler, utilizing a laboratory compressor for the additional lower compression stage. 18 Subsequently, we have developed to Critical Design Review level a flight version of the compressor with pistons that are approximately twice the diameter of the MIRI JT compressor. This new design variant was originally developed for use in a heat engine application, then prototyped and advanced to released-drawing status on subsequent NASA programs. 19 Because of the development of this compressor for a thermoacoustic power convertor (TAPC), we call this the TAPC compressor. It uses the same motor and flexure design as the existing MIRI compressor; the only significant difference is the larger piston diameter. The TAPC compressor would need reed valves added to it to make it a JT compressor, similar to the prior addition of reed valves to the HEC pulse tube cryocooler compressor to make it a JT compressor. The MIRI reed valve design would be scaled to the larger area needed for the lower pressure operation. This is not expected to be a difficult development. The use of this new design variant of the JT compressor for the additional lower stages is the basis for the model predictions that follow.
There are two configurations in which we can utilize the TAPC design as an additional compressor. The simplest of these is to add it as a lower stage in the usual back-to-back configuration, which then results in a two-stage compression of the helium. Referring to Fig. 7, the first stage of compression is from the TAPC compressor sides "C" and "D" and the second stage is from the MIRI compressor sides "A" and "B." The predicted lift at 4.5 K for this two stage configuration is given in Fig. 9. The predicted lift shown in Fig. 9 is about 20% greater than the measurements and predictions in the prior demonstration, which is because the current predictions allow the use of a JT restriction length that is optimized to the design point, as opposed to previously having to use the JT restriction hardware that was optimized for 6.2 K operation. With this configuration, the cooler is predicted to lift 157 mW at 4.5 K with no intercept load, or 97 mW lift at 4.5 K with 250 mW of load intercepted at 15 K.
For additional lift at 4.5 K, we can make each half of the TAPC compressor act as an independent compression stage, similar to the configuration flown by ESA on the Planck mission. 20,21 Referring to Fig. 10, the first stage of compression is then from the TAPC compressor sides "D," the second stage of compression comes from the TAPC compressor side "C," and the third stage of compression is from the back-to-back MIRI JT compressor sides "A" and "B" This provides three stages of compression but reduces the swept volume of the lowest stage by a factor of two. Figure 10 shows that this is an acceptable trade that results in a significant increase in lift at all bus powers. With this configuration, the cooler can lift 200 mW at 4.5 K with no intercept load, or 140 mW lift at 4.5 K with 250 mW of load intercepted at 18 K.
Since the power split between the pulse tube precooler and the JT compressor can be varied by command, there are many possible combinations of intercept temperature, intercept load, and lift at 4.5 K. To illustrate a representative set of conditions, we have modeled the case in which the intercept load is always four times the load at 4.5 K, both for the case of a constant 18-K intercept temperature and for the case where the intercept temperature is allowed to vary so as to minimize the total power needed. These predicted results are shown in Fig. 11.
Fig. 9. Predicted heat lift at 4.5 K using the MIRI cooler with two-stage compression implemented via a back-to-back TAPC compressor with 4He.
Fig. 10. Lift at 4.5 K for the configuration in which the TAPC compressor is set up as two stages, the MIRI compressor is used as the third stage, and the gas used is 4He.
Lower temperatures can be reached by switching to 3 He from 4 He and reducing the operating pressure. Efficient operation at these low temperatures is achieved by splitting the two halves of the MIRI JT compressor into independent stages, which in conjunction with the split TAPC compressor provides four-stage compression. Referring to Fig. 7, the first stage of compression is from the TAPC compressor sides "D," the second stage from the TAPC compressor side "C," the third stage is from the MIRI compressor sides "B," and the fourth stage is the MIRI compressor sides "A". This four-stage compression not only accommodates the lower pressure, the resulting high compression ratios also mean that substantial cooling power is achieved at mass flowrates substantially lower than in the MIRI cooler application. This reduced mass flow rate allows recuperator lengths to be reduced by a factor of four, which helps to counteract the pressure drop in the recuperators that results from the lower gas density and lower operating pressures. To further accommodate the reduced pressure, the tubing diameter in the lower recuperator is increased by approximately two times relative to the MIRI cooler dimensions. The predicted lift in this configuration at 2.5 and 1.7 K is shown in Fig. 12.
The addition of the second compressor and its electronics is estimated to increase the mass of the cooler by ∼10 kg.
A hybrid pulse tube/JT cryocooler based on the MIRI cooler can provide an effective solution for large cryogenic space missions that require significant cooling in the 4-to 2-K range like Origins. The required modifications are confined to the addition of an existing design compressor with its drive electronics and the resizing of the throttle. This approach reduces the hardware development needed for achieving the lift requirements of next generation space telescopes and can guide future demonstrations at correlated conditions in the lab.
How Many Cryocoolers are Needed for Origins?
In Sec. 2, we developed an estimate of the load required to cool the telescope to 4.5 K. The required lift is 581 mW at 18 K and 136 mW at 4.5 K. However, our analysis is not complete: the cooling needed to provide the 4.5-K interface to the instruments has not been accounted for. At the time we performed this quick conceptual study, the instrument loads were not well known. A companion paper in this issue by DiPirro and colleagues provides a basis for an estimate of the missing load at 4.5 K. 2 DiPirro reports that the total load for the baseline Origins design is 90 mW. Examination of the heat flow diagram in the reference shows that much of this 90 mW, namely 53 mW, comes from the structure and telescope, leaving 37 mW as the estimated instrument load missing from our analysis.
Consulting Fig. 11, we can see that one three-stage split TAPC + MIRI configuration will lift 100 mW at 4.5 K and 400 mW at 18 K for 500 W of bus power. Our design calls for three of these coolers, with a total lift of 300 mW at 4.5 K and 1200 mW at 18 K (Table 1).
The margins for this configuration are significant: over 100% for the 18-K load and 73% for the 4.5-K load. We believe that these are acceptable margins for a conceptual demonstration of this type. Typically, margins are decreased as the design and models mature; since our design paradigm is to reuse the Webb design, it is proper to consider it more mature than other conceptual models. Second, in our quick study the target mirror temperature was 4 K, not 4.5 K, which introduces some degree of conservatism: since the heat load on the optics is primarily conduction-dominated, the load on 4.5-K optics is about 0.96 of that on 4.0-K optics for that portion of the load. Finally, these margins are for a system with three coolers; the baseline Origins design has four. The addition of a fourth cooler to this design would significantly overachieve insofar as thermal margin is concerned.
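A minimal sketch of the margin arithmetic described above, assuming the per-cooler lift of 100 mW at 4.5 K and 400 mW at 18 K read from Fig. 11 at 500 W of bus power, and the loads derived earlier (136 mW of telescope load plus the 37 mW instrument estimate at 4.5 K, and 581 mW at 18 K):

```python
# Per-cooler lifts read from Fig. 11 at 500 W of bus power (representative operating point).
n_coolers = 3
lift_4p5K = n_coolers * 100.0   # mW at 4.5 K
lift_18K = n_coolers * 400.0    # mW at 18 K

load_4p5K = 136.0 + 37.0        # mW: telescope optics + estimated instrument load
load_18K = 581.0                # mW: structure + ICP6 + CJB

for name, lift, load in [("4.5 K", lift_4p5K, load_4p5K), ("18 K", lift_18K, load_18K)]:
    margin = 100.0 * (lift - load) / load
    print(f"{name}: lift {lift:.0f} mW vs load {load:.0f} mW -> margin {margin:.0f}%")
# -> roughly 73% margin at 4.5 K and just over 100% at 18 K, matching the text.
```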
Packaging
The first objective of this packaging study was to demonstrate that the volume within the existing Webb design can accommodate the equipment for Origins in terms of volume, mass, and power.
The first task is to find volume in the spacecraft for the relocated instrument electronics. In the Webb configuration, the instrument electronics are located on the cold side of the sunshield. In the Webb design, heat from the single MIRI cooler is rejected on a radiator panel forming a side of the bus. This is the location planned for the instrument electronics.
The second objective of our packaging study is to find a means to reject the 1.5 kW of heat coming from the three cryocoolers needed for Origins. The needed power rejection is accomplished using a deployable room-temperature radiator made of panels. Each panel measures 1.25 m × 0.7 m, with one panel per cooler. The ensemble of radiators is capable of rejecting nearly 1.6 kW at 290 K. This radiator is located in the space now used for MIRI on JWST, which is also the initial location for the relocated instrument electronics. A sketch of this layout is given in Fig. 13. Figure 13 also indicates that there is room to grow the radiators in case increased margin is desired.
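As a rough plausibility check on the quoted rejection capability, the sketch below applies the Stefan–Boltzmann law to the three panels; the emissivity of 0.8 and the assumption that both panel faces radiate to a cold background are ours, not values from the text.

```python
# Rough Stefan-Boltzmann sanity check; emissivity and two-sided radiation are our
# assumptions, not values given in the text.
SIGMA = 5.670e-8                       # W m^-2 K^-4
n_panels, panel_area = 3, 1.25 * 0.7   # m^2 per panel
T_rad = 290.0                          # K
emissivity = 0.8                       # assumed
sides = 2                              # assume both faces of each deployed panel radiate

q_reject = n_panels * panel_area * sides * emissivity * SIGMA * T_rad ** 4
print(f"Rough radiator capability: {q_reject / 1e3:.2f} kW")   # ~1.7 kW vs ~1.6 kW quoted
```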
Volumetric accommodation of the science instruments was assessed by taking volume footprints of the existing Origins instruments and fitting them to the available volume in the JWST ISIM and instrument electronics area. The results of the instrument packaging study are shown in Fig. 14. Basically, all of the instrument envelopes can be accommodated. There are a few areas of interference between the current backplane struts and the Origins instrument envelopes. These small interferences are not viewed as "show stoppers," as the instruments are represented by their envelopes and these envelopes were developed for the Origins baseline, not this Webb-derived architecture. Given the small size of the interferences, we are quite confident a deliberate effort to accommodate the Origins instruments would be successful and not upset the architecture. We also examined the electrical power budget, which is higher for Origins than for JWST, requiring the addition of two extra panels to the current five-panel array. This larger array can be packaged for launch as shown in Fig. 15. This thicker stack does encroach on the keep-out zones defined by the Ariane 5 launch vehicle. This small envelope violation is not a "show stopper" for two reasons. The first is that the envelope violations are small and in family with others that we know of that have been waived for JWST. Second is that by the time this JWST-derived Origins design might launch, the Ariane 5 will be retired, so a small violation is no reason to reject a concept at this level of development.
We also considered that the longer length of the solar array may have potential interference with the inviolate sunshield deployment envelopes. To accommodate the longer solar array, the angle of the deployed solar array had to be adjusted down by 10 deg, as shown in Fig. 16.
Finally, a mass assessment was made. This mass assessment is based on the JWST mass properties for the reused Webb hardware and study values for the instruments. All told, the Origins wet mass is ∼6505 kg, with small mass margins on the JWST legacy hardware and the same mass growth allowance used by the Origins study and reported there for that portion of the hardware. 1 Our study also indicates that the center of gravity of the alternate architecture is compliant with the requirements of the JWST momentum management solution, allowing us to conclude at the conceptual level that once the alternate architecture has reached operating temperature, it can be operated in the same efficient manner as JWST, allowing reuse of all the apparatus and software of Webb planning and operations.
Summary
This brief report has shown that the concept of using a modified Webb architecture can meet the thermal needs of Origins. Our results reconfirm the findings of the SAFIR studies and the Origins study that a large cold telescope is viable. Our alternate architecture for Origins employs three modified MIRI coolers, with well-understood and demonstrated modifications, to achieve the 4.5 K required for Origins. It is worth noting that the baseline Origins design requires a small number of state-of-the-art coolers, four, similar to our findings, which call for three. 1,2 The Origins team approached the design problem in a very different way and came to a similar conclusion about the cryothermal design, namely, that Origins requires only a small number of existing coolers to achieve its mission. In short, 4.5-K class large observatories are viable and possible. The challenge to realizing the thermal performance of Origins is one of architecture, design, and workmanship, not technology. By repurposing the Webb architecture for Origins, we have also demonstrated that there are observatory architectures that can accommodate telescopes of various spectral ranges and operating temperatures. This exciting possibility opens the way for planned reuse of architecture, design, and hardware. Such a program of programs could provide the needed increase in productivity and launch tempo that could enable a sustainable future for flagship missions such as Webb and Origins. 3,4
Mark Michaelian received his BS, MS, and PhD degrees in aerospace engineering from the University of Southern California. He has more than 19 years of experience in the aerospace industry and has contributed to JWST using his background in cryocooler technologies and propulsion.
Tanh Nguyen is a retired Northrop Grumman Technical fellow for cryocoolers and thermophysics devices. | 8,535.8 | 2021-01-01T00:00:00.000 | [
"Physics"
] |
Tauray: A Scalable Real-Time Open-Source Path Tracer for Stereo and Light Field Displays
Light field displays represent yet another step in continually increasing pixel counts. Rendering realistic real-time 3D content for them with ray tracing-based methods is a major challenge even accounting for recent hardware acceleration features, as renderers have to scale to tens to hundreds of distinct viewpoints. To this end, we contribute an open-source, cross-platform real-time 3D renderer called Tauray. The primary focus of Tauray is in using photorealistic path tracing techniques to generate real-time content for multi-view displays, such as VR headsets and light field displays; this aspect is generally overlooked in existing renderers. Ray tracing hardware acceleration as well as multi-GPU rendering is supported. We compare Tauray to other open source real-time path tracers, like Lighthouse 2, and show that it can meet or significantly exceed their performance.
INTRODUCTION
We introduce an open-source, cross-platform real-time 3D rendering tool called Tauray. Tauray focuses on using ray tracing techniques for generating real-time content for multi-view displays, such as VR headsets and light field displays. It includes multiple different rendering modes, such as forward path tracing and DDISH-GI [Ikkala et al. 2021], and supports using multiple GPUs.
Even though some existing renderers do support rendering on multiple GPUs, their focus tends to be on increasing the total throughput, leveraging alternate frame rendering or focusing on high-spp dynamically scheduled tiled rendering. Tauray's multi-GPU support instead aims to minimize latency and only splits the workload in ways that also benefit low-spp rendering and do not introduce additional latency beyond necessary memory transfers. Figure 1 shows Tauray rendering a scene in real-time on a light field display (Looking Glass Portrait). The left photograph pair shows 1 spp path tracing with 5 light bounces denoised with SVGF, running at around 50 ms per frame. The right-side pair shows DDISH-GI with an 8 × 8 × 8 probe volume, with 256 8-bounce rays per probe, running at ~15 ms per frame. Both sides are rendering 64 different views at 512 × 683.
While there are already several open-source rendering tools available (a comparison table is included in supplementary material), the novelty of Tauray is in its unique combination of the following features:
• Open-source code (https://github.com/vga-group/tauray).
• Hardware-accelerated path tracing.
• Multi-GPU support for real-time rendering.
• Real-time light field & virtual reality (VR) display output.
IMPLEMENTATION
Tauray uses the Vulkan API for rendering. Many Vulkan extensions are used in Tauray, but none of them are vendor-specific. These choices keep Tauray from being locked to just one GPU vendor or operating system. Tauray runs on both Linux and Windows operating systems, though multi-GPU support is limited to Linux. Tauray records command buffers and assigns descriptor sets only at the start of the program or when models are dynamically added to or removed from the scene; this reduces CPU overhead and provides the GPU drivers more opportunity for optimization. This approach is not optimal for rasterization-based rendering, because it prevents per-frame culling of drawcalls. Since ray tracing-based methods do not issue draw calls for individual scene objects but rather one command to start tracing rays, there is no need to modify the contents of command buffers for each frame.
Scenes are specified using Khronos' glTF 2.0 format. Keeping shading complexity low is crucial for ray tracing performance [Dunn 2019], so we keep our material model simple by limiting it to the core metallic-roughness workflow in glTF, plus transmission for transparent objects. Specifically, Tauray currently uses the isotropic GGX/Trowbridge-Reitz [Trowbridge and Reitz 1975;Walter et al. 2007] BSDF and Lambertian diffuse BRDF. Additionally, spherical lights with a radius and directional lights with non-zero angular diameter are supported through a custom plugin that extends Blender's glTF exporter.
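To make the material model above concrete, the sketch below evaluates the isotropic GGX/Trowbridge-Reitz normal distribution function that such a metallic-roughness BSDF is built around. It is an illustrative Python reimplementation rather than Tauray's actual shader code, and the function name, parameters, and roughness-squared remapping are our own assumptions for the example.

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """Isotropic GGX/Trowbridge-Reitz normal distribution function.

    n_dot_h   -- cosine between the surface normal and the half vector
    roughness -- perceptual roughness in [0, 1]; alpha = roughness^2 is the
                 remapping commonly used with the metallic-roughness workflow
    """
    alpha = roughness * roughness
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Example: a fairly rough surface, half vector 20 degrees off the normal.
print(ggx_ndf(math.cos(math.radians(20.0)), roughness=0.5))
```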
Rendering modes
While Tauray supports many rendering modes for debugging, dataset generation and comparison purposes, its primary focus is on methods aiming for photorealism: a forward path tracer and DDISH-GI [Ikkala et al. 2021] are available. Both methods use the crossvendor Vulkan extensions VK_KHR_ray_tracing_pipeline and VK_KHR_acceleration_structure for ray tracing.
The forward path tracer supports Next Event Estimation, Hash-based Owen scrambling [Burley 2020] and Russian Roulette sampling [Arvo and Kirk 1990; Kahn 1955]. SVGF [Schied et al. 2017] and BMFR [Koskela et al. 2019] are available for real-time denoising. Box and Blackman-Harris filters are available for primary ray sampling, in order to achieve anti-aliasing. Temporal Anti-Aliasing [Karis 2014] is also supported.
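As a quick illustration of the Russian roulette termination mentioned above, the Python sketch below shows the standard unbiased formulation: a path is terminated with some probability and surviving paths are reweighted. This is a generic sketch, not Tauray's implementation; the throughput-based survival heuristic and minimum depth are assumptions.

```python
import random

def russian_roulette(throughput, depth, min_depth=3):
    """Standard Russian roulette path termination.

    throughput -- current path throughput per RGB channel (list of three floats)
    Returns (terminated, new_throughput); surviving paths are reweighted by
    1/p_survive so the Monte Carlo estimator stays unbiased.
    """
    if depth < min_depth:  # always let short paths continue
        return False, throughput
    # Survival probability from the maximum throughput channel (a common
    # heuristic; the renderer's exact choice may differ), clamped for safety.
    p_survive = max(min(max(throughput), 0.95), 0.05)
    if random.random() > p_survive:
        return True, [0.0, 0.0, 0.0]  # path terminated
    return False, [c / p_survive for c in throughput]

# Example: a dim path after 4 bounces is likely to be terminated.
print(russian_roulette([0.10, 0.08, 0.05], depth=4))
```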
The DDISH-GI renderer supports locally rendered or streamed spherical harmonics probes. Both the client-side renderer and the probe server are available in Tauray.
Figure 2: Diagram of the cross-GPU memory transfer as implemented in Tauray. Resources tied to the secondary GPU are marked with the green color, while the resources of the primary GPU are in purple. Host process resources are in red. When more than two GPUs are used, all non-primary GPUs form a similar pair with the primary GPU.
The streaming mode uses
ZeroMQ [Hintjens 2013] and is resilient to poor bandwidth, high network latency and unstable connections. This method is well suited for light field rendering, because the probes can be reused for all views. The ray tracing workload is independent of the number of views and pixels, and multi-view rendering scales practically as well as plain rasterization does.
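The probe-streaming mode described above is essentially a publish/subscribe problem. The hedged Python sketch below shows how a probe server could push spherical-harmonics coefficient buffers over ZeroMQ to any number of clients; it only illustrates the transport pattern, and the probe grid shape, message layout, and use of PUB/SUB sockets are our assumptions, not Tauray's actual wire format.

```python
import time
import zmq
import numpy as np

PROBE_SHAPE = (8, 8, 8, 9, 3)  # hypothetical 8x8x8 grid, 9 SH (L2) coefficients per RGB channel

def serve_probes(endpoint="tcp://*:5555"):
    """Publish the latest spherical-harmonics probe grid to all subscribers."""
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    probes = np.zeros(PROBE_SHAPE, dtype=np.float32)
    while True:
        # ... update `probes` from the path tracer here ...
        pub.send(probes.tobytes())
        time.sleep(1 / 30)  # throttle to roughly 30 probe updates per second

def receive_probes(endpoint="tcp://localhost:5555"):
    """Client side: only the newest probe grid matters, stale updates are dropped."""
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.CONFLATE, 1)     # keep only the most recent message
    sub.setsockopt(zmq.SUBSCRIBE, b"")  # no topic filtering
    sub.connect(endpoint)
    payload = sub.recv()
    return np.frombuffer(payload, dtype=np.float32).reshape(PROBE_SHAPE)
```

A conflating subscriber naturally drops stale grids when bandwidth is poor, which is consistent with the resilience to weak and unstable connections described above.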
Multi-GPU rendering
Multi-GPU rendering is implemented so that devices from different Vulkan device groups can co-operate. Since there is currently no DMA extension that enjoys cross-vendor support, we pass the GPU-to-GPU memory transfers through host memory. This memory transfer is done in a way that avoids synchronizing the host process with the GPUs, as shown in Figure 2. We exploit two Vulkan extensions that are typically used for cross-API interoperation: VK_KHR_external_memory_host and VK_KHR_external_semaphore.
We use the external memory extension to create buffers with the same host-provided memory access for both GPUs taking part in the memory transfer. Then, we issue commands to write the transferred data from the sender GPU to this host buffer. Once that transfer is finished, we can issue a read command on the corresponding buffer on the other GPU. Because regular Vulkan semaphores do not work across devices, external semaphores are used to synchronize the read after writing on another GPU. During this process, while the OS and GPU driver on the host CPU most likely are involved, the host process (Tauray) itself does not need to synchronize with the GPUs for this memory transfer.
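The write-signal-wait-read ordering described above can be modeled on the host with ordinary threads and events. The Python sketch below is only a conceptual analogue of the Vulkan external-memory/external-semaphore handshake (the event stands in for an external semaphore and the bytearray for the shared host-visible buffer); it is not Vulkan code and the payload is hypothetical.

```python
import threading

PAYLOAD = b"scanlines 342..682 from the secondary GPU"  # stand-in for the transferred image region

staging = bytearray(len(PAYLOAD))  # host buffer visible to both devices (external-memory analogue)
write_done = threading.Event()     # stands in for the external semaphore signaled after the write

def secondary_gpu():
    """Secondary GPU: write its rendered region into the shared host buffer, then signal."""
    staging[:] = PAYLOAD
    write_done.set()

def primary_gpu() -> bytes:
    """Primary GPU: wait for the signal, then read the region back for compositing."""
    write_done.wait()
    return bytes(staging)

t = threading.Thread(target=secondary_gpu)
t.start()
print(primary_gpu())
t.join()
```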
The ray tracing workload of each view is split between every GPU taking part in rendering. Tauray provides a way to fairly easily program new workload splitting methods; a scanline-based approach is included as an example, which is adequate for when the GPUs have matching performance. Due to our low-latency real-time aim, splitting the workload by alternate frame rendering (AFR) is not considered, as it does not reduce latency beyond a single GPU [Monfort and Grossman 2009]. Certain short tasks, such as scene data refreshes (typically in the order of 0.1 ms in total) and acceleration structure updates are duplicated on each GPU. This is done when transferring their results would incur greater latency or there is no guarantee of data compatibility between the GPUs.
Figure 3: Typical multi-view, multi-GPU rendering pipeline diagram of Tauray; the details can change depending on parameters. The "G-Buffer" needed by many post-processing steps (like denoising) is rasterized on the primary output GPU in Tauray whenever possible. In our case, rendering the G-Buffer separately on the primary GPU is generally marginally faster than distributing it to multiple GPUs.
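For the scanline-based splitting mentioned above, a minimal sketch of how image rows might be partitioned across equally fast GPUs is shown below. An even, contiguous split is assumed for illustration; Tauray's actual distribution strategy may differ.

```python
def split_scanlines(height: int, num_gpus: int):
    """Assign a contiguous band of image rows to each GPU.

    Returns a list of (first_row, row_count) tuples, one per GPU. A contiguous
    band keeps each GPU's copy-back region a single block of memory.
    """
    base, extra = divmod(height, num_gpus)
    bands, row = [], 0
    for gpu in range(num_gpus):
        count = base + (1 if gpu < extra else 0)
        bands.append((row, count))
        row += count
    return bands

# Example: a 683-row light field view split across 2 matching GPUs.
print(split_scanlines(683, 2))  # [(0, 342), (342, 341)]
```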
Stereo & light field rendering
Both stereo and light field rendering use the same rendering architecture for multi-view rendering. Views are stored in an image array instead of separate images. This lets all viewport-related rendering stages of Tauray operate on multiple views in a single pass, which minimizes the overhead involved with launching and synchronizing shaders. Rasterization-based render passes are accelerated for multi-view rendering using the VK_KHR_multiview extension. Compute and ray tracing render passes operate on all views in one pass. In an example 128-view case, doing this allows the path tracing renderer to roughly halve the total frametime and go from about 90% GPU utilization to 99-100% utilization. Figure 3 shows an overview of the multi-view rendering pipeline in Tauray.
As an alternative for brute-force rendering of all viewports, Tauray also has a simple, real-time capable spatial reprojection implementation for quickly generating more viewports from only a few rendered viewports, though it does not fill in the disocclusions intelligently yet.
VR is supported with the OpenXR API. As an example platform for real-time light field rendering, Tauray also supports rendering to the Looking Glass light field displays [Looking Glass Factory, Inc. 2022]. Content for arbitrary multi-view displays can be generated with offline rendering by setting up a grid of cameras. Spatial and temporal reprojection modes are also available, enabling the reuse of samples across different views and frames.
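Setting up the camera grid mentioned above amounts to offsetting a reference camera along a baseline (and, for 2D grids, vertically as well). The sketch below computes such offsets in Python; the symmetric, evenly spaced layout and the parameter names are assumptions for illustration, not Tauray's configuration format.

```python
def camera_grid(cols: int, rows: int, baseline_x: float, baseline_y: float = 0.0):
    """Return (x, y) offsets of each view's camera relative to the central view.

    cols, rows            -- grid dimensions (e.g. 64 x 1 for a single quilt row)
    baseline_x/baseline_y -- total horizontal/vertical extent of the camera array
    """
    offsets = []
    for r in range(rows):
        for c in range(cols):
            x = (c / (cols - 1) - 0.5) * baseline_x if cols > 1 else 0.0
            y = (r / (rows - 1) - 0.5) * baseline_y if rows > 1 else 0.0
            offsets.append((x, y))
    return offsets

# Example: 8 horizontal views spread over a 0.3 m baseline.
for xy in camera_grid(8, 1, baseline_x=0.3):
    print(xy)
```

For an actual light field display the projection matrices would additionally be sheared so that all views share a focal plane; that step is omitted here.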
COMPARISON TO RELATED WORK
We compare Tauray to three other renderers: Falcor, Lighthouse 2, and Blender (Cycles). The first two renderers were chosen because they are also open-source renderers with a similar real-time path tracing focus. All scenes are lit by one punctual directional light. This choice was limited by each renderer having somewhat different feature sets, and this was the lowest common denominator for an identical lighting setup.
All benchmarks in Tables 1 and 2 are measured when path tracing at 1920 × 1080 with 2 ray bounces (effectively 3, as all compared renderers implement next event estimation). Because the renderers do not provide identical denoising schemes, denoising is disabled. RTX 2080 Ti GPUs are used for the measurements. For all renderers, the timing measurements were full frame times as measured on the CPU. For Table 1, performance is averaged over 5 separate runs of 50 frames each. For Table 2, performance is averaged over 10 runs.
Lighthouse 2 was modified to do two light bounces instead of just one. Furthermore, the offline rendering benchmarks are done by using accumulation of 8 spp frames due to higher per-frame spp counts causing Lighthouse 2 to run out of memory on our setup.
In both online and offline cases, Tauray is consistently as fast as or faster than the compared renderers. The dual-GPU setup in Lighthouse 2 seemed to be poorly supported: GPU utilization was low (30-40% on both GPUs) and its self-reported "frametime overhead" was on the order of ten milliseconds. Unfortunately, CUDA runs out of memory with Lighthouse 2 while loading the Emerald Square scene. Figure 4 shows how Tauray scales linearly when path tracing multiple views simultaneously for real-time light field rendering. These measurements are done on a single RTX 2080 Ti. The Looking Glass Portrait is used as the output light field display, so timings include compositing the views into the format the display expects. The blue line represents performance with multiple views (512 × 512 each), while the red line represents single-view performance with an equivalent total number of pixels.
Other than resolution and view count, settings are the same as earlier.
The multi-view rendering overhead depends greatly on the scene: at 128 views, Emerald Square and Sponza were about 26-29% slower to render than the single-view equivalent with the same total number of pixels, while this same metric for the Breakfast room scene is only around 4%.
CONCLUSIONS
We introduced a scalable cross-platform real-time 3D rendering tool called Tauray. To our knowledge, it is the first open-source hardware-accelerated path tracer optimized for real-time rendering on light field and stereo displays. We demonstrated the optimized and scalable performance of Tauray: In both online and offline cases, Tauray's speed consistently matches or exceeds all compared renderers (Blender, Lighthouse 2, Falcor) and GPU setups. Tauray also scales efficiently for rendering multi-view content for VR and light field displays, roughly linearly with the number of views. | 2,734.2 | 2022-12-06T00:00:00.000 | [
"Computer Science"
] |
Al-Kitab
: In this era, where business execution is primarily reliant on information
Introduction:
A blockchain is a distributed system composed of growing collections of information known as blocks that are securely linked using cryptography [1]. Each block additionally carries a timestamp and the previous block's cryptographic hash. Although blockchain records are not immutable, they are secure by design and represent a distributed computational system with a high degree of fault tolerance. Figure 1 below depicts how a blockchain works [2].
Figure 1. How blockchain works[3]
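As a minimal illustration of the block structure described above (each block carrying a timestamp and the previous block's cryptographic hash), the Python sketch below chains a few records with SHA-256. It is a toy example with hypothetical payroll entries, not a production ledger or the scheme proposed later in this paper.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block that embeds a timestamp and the hash of the previous block."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a tiny chain of (hypothetical) payroll entries.
genesis = make_block({"note": "genesis"}, prev_hash="0" * 64)
block1 = make_block({"employee": "E-1001", "net_pay": 2500}, genesis["hash"])
block2 = make_block({"employee": "E-1002", "net_pay": 2800}, block1["hash"])

# Each block references the previous block's hash, so tampering with an
# earlier block breaks every later link in the chain.
assert block1["prev_hash"] == genesis["hash"]
assert block2["prev_hash"] == block1["hash"]
```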
Payroll necessitates the processing of massive amounts of personal data about employees, such as names, addresses, bank account information, Social Security numbers, and payroll information. All this private information needs to be protected from theft, loss, snooping eyes, hackers, and denial-of-service attacks. Given that it involves processing personal data, payroll is one of the important human resources areas that data protection regulations have an impact on. As a result, appropriate technologies must be employed to safeguard payroll and affect it positively. Blockchain is a fundamental technology for protecting the payroll system and will help usher in the second phase of the Internet, one in which value may be traded as opposed to only information. The General Data Protection Regulation requires the implementation of organizational and technical protections to protect personal data [4]. These measures could include, for example, the following:
• Workstations, servers, and storage areas must be kept secure.
• Encryption protocols should be established to secure data in transit and at rest.
• Specific security policies should be developed and implemented to protect confidential data.
• Confidentiality requirements must be established to develop best practices for data protection.
Payroll management software may have features (such as password protection, access control, secure storage, and so on) that comply with certain sections of the General Data Protection Regulation (GDPR) security regulations. To protect private payroll information, a risk assessment can help identify whether users, procedures, and systems put that information in danger. Once possible risks have been identified, internal controls and policies can be put in place to mitigate them.
Blockchain adoption will make it possible to process employee payments and all associated deductions and deposits in real time using a payroll application on the distributed ledger of the blockchain.
All employee records and employer-matching payments will be immediately accessible at the audit level to different government entities. A payroll application on the blockchain will enable immediate worldwide payroll compliance at a fraction of the cost of existing payroll compliance using a fiat cryptocurrency [5]. Given the results, no proposed solution was presented to improve the efficiency of the salary system, and the matter was limited to a general proposal for blockchain technology. This study proposed an encryption technology with a data ingestion method to protect data and improve system efficiency.
Helliar et al. investigate the prevalence of both unauthorized and authorized blockchain
technology in organizations. This study surveyed 67 businesses and discovered that unlicensed blockchains were less likely to be adopted due to security, scalability, and regulatory compliance concerns. Permissioned blockchains, on the other hand, were seen as more appealing due to their greater access control, lower risk of attack, and ability to integrate with existing systems. The authors do, however, point out that the level of control over licensed blockchains can limit their potential for innovation and growth. Overall, the study suggests that organizations carefully weigh the benefits and drawbacks of various types of blockchain before deciding on adoption. 3. R. T. Ainsworth and V. Viitasaari were interested in payroll tax and blockchain technology: what will it take to get the blockchain up and running; how will regulatory compliance be handled; will businesses be able to use the blockchain; and how can awareness be raised across organizations so that everyone can see its advantages? Some locations might only need to upgrade to the most recent technology, depending on where it is typical to create value through technology. A second consideration is that several legacy systems need to be migrated, and adopting blockchain technology will cost time and money [9]. They concluded that it would not be difficult to locate businesses willing to move their payroll to the blockchain. Of course, smaller IT firms will probably want to test the new technology on their payroll systems first. The issue was that no practical application intended to address the problems of wage systems was offered; as in other studies, the work was restricted to field surveys without putting forward fresh concepts or methods.
4. P. M. Madhani focuses on blockchain research in the industrial sphere and the key benefits of interest to firms, researchers, and practitioners alike. He concludes that blockchain technology can aid in the simplification of work operations and the resolution of numerous process challenges. The report outlines many blockchain qualities and derived benefits that can be used as building blocks for HR managers in various businesses to utilize blockchain solutions [10]. The report detailed the following benefits of using blockchain in HR in several industries: • It aids in the improvement of several HR operations. The blockchain allows HR managers to focus more on strategic HR duties while reducing time, cost, and administrative work, allowing them to improve overall process efficiency and effectiveness.
• Blockchain deployment allows HR managers to spend more time on other strategies by more successfully managing some essential resource procedures.
• Blockchain technology improves the performance of human resources in firms where process automation is allowed by the smart contract system. • The blockchain's state of development, prioritizing and choosing a particular human resource procedure.
• Blockchain use in the HR sector is being hampered by organizational and change resistance. The challenges facing applications of the technology were discussed, including human resources' unfamiliarity with the workings of blockchain technology and its functions.
• A shortage of blockchain developers and of qualified resources to manage the technology.
• The inflexibility of smart contracts is another barrier because it could result in unfavorable outcomes (lack of accountability) in unanticipated circumstances (i.e., scenarios that are not accounted for in the computer code). The highest administrative support, organizational technological readiness, employee motivation, and training of HR specialists in blockchain adoption are the major forces behind a successful blockchain application in HR [11]. Because of these results, the treatment of the blockchain system was confusing and lacked a coherent conclusion for applying the system in human resource management, and the contributions were modest, without proposing or implementing new blockchain technologies.
5. H. Demirhan discusses the efficiency of the tax collection system and how to collect taxes at the lowest possible cost. It is critical to ensure the effectiveness of the tax collection system by providing clear, controllable, secure, and real-time information. Changes and advancements in information and communication technology have driven governments to seek innovative methods of revenue collection. Debates have focused on the application of blockchain technology (or, more colloquially, cryptocurrencies) to the public sector, as well as the application of blockchain technology in a tax system. In terms of data and transparency, the qualities and benefits of several blockchain systems were examined.
It was determined that blockchain technology can be used in a variety of contexts to lessen the overhead expenses and administrative burden of tax collecting. Therefore, the researcher attempted to explain how blockchain technology could be used concerning taxation. Some points were made, including how blockchain technology represents a new approach to taxation, how it decreases tax spending, how it increases transparency and accountability, how it can be used to reduce tax evasion, and how it can lighten the administrative burden of collecting taxes [12]. In this study, the characteristics and benefits of many blockchain systems were examined, and it was found that they benefit the tax system, but there was no direct contribution or testing of such a system. The present work aims to add a contribution related to the actual application of the cryptocurrency system with an encryption algorithm, to obtain tangible results that support the hypotheses. 6. D. Hanggoro et al. discussed data storage and how it may provide privacy, security, and data integrity to keep sensitive data safe. The researchers planned to create a blockchain application to store employee attendance data from the company's human resources department. The results of the implementation reveal that the blockchain may be utilized functionally as a data storage system for attendance management and payroll systems. Furthermore, a new blockchain framework called Hyperledger Composer has been proposed, which has a rapid validation time and a representational state transfer API (REST API) called composer-rest-server that allows the Hyperledger blockchain to connect with other components [13]. Block transaction times are used to assess Hyperledger Composer's performance. The blockchain was assessed in three ways: 1. Directly inside Hyperledger Composer.
2. Using Angular web application through REST API.
3. Using JMeter through the REST API.
Consequently, the time for constructing transaction blocks varies from 1 to 17 milliseconds in a live experiment in Hyperledger Composer, from 5 to 296 milliseconds when using JMeter tools with the REST API, and from 1 to 4270 milliseconds when using the Angular web application. The outcome demonstrates that the composer-rest-server's REST API performs better than Ethereum in terms of speed. Given the typical transaction time, it was found through these results that the composer-rest-server can manage systems that need quick transaction times, like voting systems, health monitoring, and Internet of Things (IoT) applications. It has been concluded from this study that there are techniques and a practical application that show the success of the blockchain system in managing the payroll system, but there was no contribution in the aspect of protection and data security, as no algorithm was proposed to protect and encrypt the data. The aim is to propose an encryption algorithm such as ECC to ensure high security in the payroll system. 7. Huaqun Wang et al. are interested in the secure storage of remote data in cloud computing and in verifying the integrity of the data remotely. A special model based on a provably secure blockchain and relying on RSA encryption technology is proposed. At the same time, the system performance is analyzed in two parts: theoretical analysis and model implementation.
The results show that the proposed PDP scheme is safe, effective, and practical. The conclusion from this study is positive in the field of building a system based on blockchain technology to store and protect data. The weakness of this study lies in the use of the RSA algorithm to protect data, as its encryption is weak and takes a long time owing to the key sizes involved. 8. Chen, Lanxiang, et al. stated in this article that electronic health records (EHRs) suffer data leaks that jeopardize patient privacy (e.g., health conditions). Since EHR data often does not change after being uploaded to the system, the blockchain can be utilized to make it easier to share this data. A searchable, blockchain-based encryption scheme for electronic health records was developed. The results showed that the proposed technology ensures the integrity, anti-tampering, and traceability of the electronic health record index. From the conclusions, we find that there is a shortcoming in securing data content and that the encryption methods are weak, not being based on advanced protection algorithms. Protecting data content is not only about preventing access to it; the content must be encrypted to prevent unauthorized persons who have succeeded in accessing the data from reading its content.
The study concluded that blockchain-based searchable encryption is a promising approach for improving the security and privacy of EHR sharing, and the proposed scheme offers good performance and scalability for practical applications [15]. 9. Owoh, N.P., and M.M. Singh proposed a study concerned with implementing the blockchain on a large scale. It proposed securing sensitive sensor data in a mobile (client/server) blockchain. To this end, an integrated mobile blockchain framework was proposed that guarantees key agreement between clients and edge nodes. For efficient encryption of the sensor data, the Diffie-Hellman algorithm was used. Finally, the processes in the framework were analyzed, and the results showed that the key pairing between the blockchain client and the edge node works well and that the data encoded in the framework file is secure, as an attacker cannot compromise its privacy [16]. The conclusion from this study is that mobile applications with blockchain did not give convenient and strong connections because of the edge nodes (mining). In addition, the encryption process is heavy and not fast, and it requires a small key size to obtain the maximum level of security using a shared secret. 10. Feng, Qi, et al. [17] addressed the protection of cryptocurrencies and observed that key protection alone is not sufficient to protect currencies, as hacks have relied on fraudulent keys or key theft. They focused on the Edwards-curve Digital Signature Algorithm (EdDSA), which underlies many cryptocurrencies (such as Cardano, Zcash, and Decred), and designed EdDSA's first effective two-party signature protocol. The security of the proposed protocol is mathematically proven. Results from a performance evaluation of the protocol show that it performs well for the Ed25519 curve, with a single signature operation in the malicious setup taking about 3.32 ms between two devices. It was concluded that, because more processing and more devices are involved, the proposed two-party EdDSA protocol incurs fairly large computational and communication costs. Therefore, one possibility is to design an optimized version that is compatible with many devices. The contribution of this study lies in the development of a practical and secure two-party EdDSA signature scheme with key protection, which can enhance the security of cryptocurrency systems and other applications that use EdDSA signatures.
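To make the EdDSA discussion above concrete, the hedged Python sketch below signs and verifies a message with Ed25519 using the cryptography package. It is a plain single-party signature shown for illustration only, not the two-party protocol proposed by Feng et al., and the payroll message is hypothetical.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key generation and signing. In a wallet or payroll ledger the private key
# would itself be protected (e.g. split between two parties, as in the
# two-party scheme discussed above).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"pay 2500 units to employee E-1001"  # hypothetical payroll transaction
signature = private_key.sign(message)

# Verification raises InvalidSignature if the message or signature was altered.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```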
11. Liang, Yifei, and others concluded that a blockchain-based DSA test platform should be built for spectrum management. They state that deploying blockchain in future networks has the advantage of addressing problems exposed in traditional centralized spectrum management systems, such as high-security risks and low allocation efficiency. It was found that the blockchain-based spectrum reference architecture can be employed in the next generation of mobile communications, 6G [18]. The problems of this research are related to testing the proposed mechanisms and system evaluation for each form in various 5G and/or 6G mobile communications. A series of mechanisms need to be developed to support the proposed blockchain-based spectrum management architecture, which includes a capacity generation mechanism, an incentive mechanism, and a pricing mechanism, among many others.
Discussion
As previously stated, blockchain technology can boost the effectiveness of payroll systems, a critical operation for every government, industry, and other organization, because it ensures that employees are paid on time and accurately. Furthermore, payroll systems face challenges such as centralization, ghost workers, cybercrime, and other human manipulations, which decentralized blockchain technology can help address. Based on Table 1, the blockchain's capabilities can be used to overcome problems with payroll systems in developing countries. Proof of authority together with the encryption algorithm used gave the best study result; moreover, the security requirement was the most logical and successful in security applications in terms of verification and scalability, as well as auditability, privacy, and anonymity. The remainder of the paper demonstrates how blockchain technologies are a promising approach to minimizing payroll system issues; the challenges were explained by an examination of the current literature.
The significance of blockchain technology in general, as well as its application in payroll, has been explained.
Table 1 lists the requirements considered for the reviewed studies. From the security requirement field, several criteria were used to measure the efficiency and safety of the systems in the studies presented.
• Verifiable means that an Information Security Program is in place to secure the confidentiality, integrity, and availability of information assets while also meeting regulatory, industrial, and contractual obligations [19].
• Security Compliance is a process that an organization undergoes to ensure that it complies with the set standards and regulations [20].
• System Capability is the ability of a system to execute a particular course of action or achieve a desired effect, under a specified set of conditions [21].
• Expense Management and Processing ensures that every expense claim is accounted for and reimbursed as quickly as possible while keeping tabs on all activities [22].
• Payroll Tax Management software mitigates the organization's and payroll system's responsibilities for payroll tax rates [23].
• Scalable security is a security approach and toolset that may grow or decrease capacity to handle a greater or smaller load, based on demand changes [24].
• Completeness is to ensure that a comprehensive set of requirements has been produced and documented that describes all security system functions required to meet needs, as well as their associated performance, environmental, and other non-functional requirements [25].
• Privacy is the right to decide how information is viewed and utilized, and it includes skills like using tools and managing information shared online [26].
• Eligibility is a determination that a system is able and willing to safeguard classified security information. The three security clearance eligibility levels are: Confidential, Secret, and Top Secret [27].
Challenges of Blockchain for Payroll System
Relying on previous studies and conclusions, the major challenges of blockchain technology were diagnosed as mentioned below: • Non-adoption
• Interoperability
One of the key problems that needs to be solved is interoperability, as this is one of the main reasons why businesses have been slow to adopt blockchain technology. Due to their inability to send and receive data from other blockchain-based systems, most blockchains are kept in isolation and do not interact with peer networks [32].
Other difficulties are caused by the absence of a universal standard. The lack of a global standard led to interoperability issues that increased costs and complicated procedures. The lack of a specific version of blockchain technology discourages new investors and developers from entering the market.
Given the number and complexity of these blockchain issues, many of the blockchain's biggest hurdles nevertheless reflect the typical growing pains of any new technology. We conclude that the above-mentioned difficulties, as evidenced by the list of challenges to blockchain adoption, point to the need for technological improvements, and payroll systems are busy dealing with them. Things will undoubtedly become more interesting if these issues are fixed and the many bottlenecks that currently prevent widespread adoption are narrowed down; the technology can then be used in different storage areas, as these problems do not affect the business fundamentally.
Blockchain Benefits in Payroll Processes
In a decentralized system, blockchain transaction ledgers may be easily tracked. It contributes to a more transparent and indisputable transaction history. The employee payroll management system must adhere to numerous rules, and blockchain will aid in reducing inconsistencies and saving time for the HR department.
Payroll management software today aids in the tracking of time, attendance, benefits, payroll, fraud prevention, and schedule management. Business HR leaders are investigating the use of blockchain in the aforementioned processes to streamline and power them [33].
The use of blockchain technology will improve the payment procedure for contract workers. Organizations, institutions, and universities will have a workforce that functions on a contract basis in addition to full-time personnel. Because the bills must be verified, these contract workers must wait longer to be paid. Companies that use blockchain payroll processing software may automate the verification process and pay contract workers as soon as the work is performed [34].
Blockchain will offer precise time and attendance information. Blockchain technology protects the accuracy of the employee database and prevents tampering with it. This means that payroll software with blockchain for small enterprises, schools, and colleges will ensure that the time, attendance, and departure data recorded by the system is true and that no one has tampered with the database [35].
Businesses and governmental entities will be able to use cryptocurrency for payment.
Decentralization is the best feature of cryptocurrencies, so it will be advantageous for many reasons if the payroll processing software pays the employee in cryptocurrencies. The first step will be the adoption of uniform remuneration for the whole workforce, which will end global inequity. Many governments throughout the world either do not accept or outright forbid cryptocurrencies. These nations must pay in local currency because using cryptocurrency is not an option for them. Blockchain-enabled payroll software will guarantee the accuracy, speed, and transparency of payroll processes [36].
We determined that incorporating blockchain into payroll software will completely transform the labor and payment processes. Blockchain can transform many facets of human resources and payroll operations.
Conclusion
It was concluded that the blockchain is solution-driven for the issues of the payroll system and for managing the business efficiently and effectively. It is a technology that has the ability and capability to solve problems related to payroll systems, such as centralization, data manipulation and inconsistency, cybercrime, phantom workers, and seamless auditing. An authorized blockchain will establish decentralization, data integrity, data availability, transparency, and space for data auditing. In this paper, we have outlined the capabilities of the blockchain and how it can fit into solving salary issues, as well as the challenges in applying the technology. It was concluded that applying blockchain technology to the payroll system will make all records of employees and matching payments of employers immediately available at the audit level to various government agencies. Implementing payroll on the blockchain will enable instant payroll compliance around the world at a fraction of the cost of current payroll compliance using a fiat cryptocurrency.
Future research is required to design and evaluate a blockchain framework that is authorized for payroll systems and is encrypted by the high-efficiency block encryption algorithm to ensure their high security. | 5,041.4 | 2023-08-20T00:00:00.000 | [
"Computer Science",
"Business"
] |
Investigation of microstructure and selected properties of Al2O3-Cu and Al2O3-Cu-Mo composites
The scope of work included the fabrication of ceramic-metal composites from the Al2O3-Cu and Al2O3-Cu-Mo systems and examination of their microstructure and selected properties. The composites were fabricated by the slip casting method. The rheological behavior, microstructures, X-ray analysis, and mechanical properties were investigated. The rheological study demonstrated that all of the obtained slurries were non-Newtonian shear-thinning fluids and were stable over time. In both slurries, the flow limit is close to 0 Pa, which is very beneficial when casting the suspensions into molds. The X-ray analysis reveals Al2O3, Cu, and Mo phases in all specimens. No new phases were found in either type of composite after the sintering process. The results showed that the hardness for the Al2O3-Cu-Mo composites was equal to 10.06 ± 0.49 GPa, while for Al2O3-Cu it was equal to 6.81 ± 2.08 GPa. The K1C values, measured with the use of the Niihara equation, for composites with and without the addition of Mo were equal to 6.13 ± 0.62 MPa m0.5 and 6.04 ± 0.55 MPa m0.5, respectively. It has been established that the mean specific wear rates of the Al2O3-Cu and Al2O3-Cu-Mo samples were 0.35 × 10–5 ± 0.02 mm3 N−1 m−1 and 0.22 × 10–5 ± 0.04 mm3 N−1 m−1, respectively. It was found that the molybdenum addition improved the wear resistance of the composites.
Introduction
One of the most challenging objectives in modern engineering is the continuous design and development of advanced materials for high-performance applications. Advanced ceramics due to the wide range of current and future applications can be considered one of the main directions in the advancement of this challenge. Despite remarkable mechanical properties, such as high hardness and wear resistance, high Young's modulus, chemical stability, and thermal shock resistance of the advanced ceramic materials, their use in engineering is limited. Low toughness and plasticity, in comparison to the metals, in combination with high sensitivity for the presence of flaws in the structure effectively exclude ceramics from high-performance applications. Considering the above, the necessity for new ceramic-based materials with enhanced properties has been clearly demonstrated [1][2][3].
Composites and especially ceramic-metal composites may be considered a possible solution. In the last decade, a lot of attention has been paid to composites [4][5][6][7]. This group of composites successfully combines the advantageous properties of individual components-ceramic and metal-allowing to obtain new material with completely different, promising properties with a wide range of possible applications. For this reason, fabrication of ceramic-metal composites still constitutes a research field addressing a lot of interest [1][2][3].
Alumina is one of the most widely used engineering ceramic materials. Its low density, high hardness, and thermal and chemical stability, combined with the ductility of metal particles, result in composite materials with promising properties [8,9]. Moreover, the literature data show that the application of ceramic nanostructures in composites is an important factor for environmental remediation [10-13]. Nowadays, it is important to pay attention to ecological aspects in the selection of components. Multiple studies have proved that the incorporation of even a small amount of ductile metal particles into the ceramic matrix significantly enhances the toughness of the final composite. The dispersion of metal particles such as Ni [14-16], Mo [17,18], Ag [19], W [20], and Cu [21-28] in the ceramic alumina matrix was investigated and in all cases was found to improve the fracture toughness of the obtained composites.
Due to the high ductility combined with the good thermal and electrical conductivity of Cu particles, the alumina/copper system attracts continued scientific interest. It is believed that the favorable properties of both components enable the fabrication of a composite with enhanced mechanical properties and conductivity. Improvement of the ceramic/metal properties, based on the reinforcement models, indicated the need for a homogeneous distribution of small metal inclusions in the ceramic matrix in order to fabricate a composite material with the desired properties [21].
The review of the specialist literature on this system indicated that Al2O3-Cu ceramic-metal composites can be successfully obtained with the use of different manufacturing methods such as hot pressing [22-25], uniaxial pressing with pressureless sintering [27], or slip casting [28]. A research study by Oh et al. on the alumina/copper system characterized composite samples fabricated with the use of a hot-pressing procedure [22-25]. Three different fabrication processes were applied to obtain the powder mixtures. In the first, commercially available Cu powder was used as the source of Cu particles in the metal phase. The other two were based on obtaining copper particles as a result of CuO powder reduction [22,26] or calcination and reduction of Cu nitrate [26]. It was noted that powder mixture preparation influences the microstructure and properties of hot-pressed Al2O3-Cu composites, and thus mechanical property enhancement can be obtained more effectively by using Al2O3-CuO and Al2O3-Cu nitrate. J.G. Miranda-Hernandez et al. analyzed the correlation between microstructure and mechanical properties in alumina/copper composites with different copper contents in the metal phase. The composites were manufactured by a combination of mechanical alloying and pressureless sintering with the use of commercially available alumina and copper powders. The research revealed that liquid copper penetrates the bulk of the sample during the sintering process, resulting in densification enhancement and porosity elimination. An increase in copper content also caused a decrease in hardness, with fracture toughness and conductivity improving at the same time [27]. In the research made by M. Stratigaki et al., alumina/copper composites with different metal particle contents were fabricated with the use of the slip casting technique. Commercially available powders and a deflocculant enabled the authors to obtain alumina/copper samples with good relative density and a homogeneous spatial distribution of metal particles. It was observed that increasing the Cu amount in the composite results in enhanced impact resistance with a limited loss of stiffness and hardness. The scientific knowledge to date indicates that the optimum combination of properties was observed for composites with a low Cu particle content (under 5 vol%) [28].
Available specialist literature has proven that the addition of the small amount of copper to the ceramic Al 2 O 3 matrix results with mechanical property enhancement [22,27,28] and higher thermal shock resistance in comparison to the monolith Al 2 O 3 [24,27]. Metal particles were also observed to cause significantly inhibited grain growth of the Al 2 O 3 matrix [22]. Significant for this particular composite system is the fact that the copper melting point (1084°C [29]) during the consolidation process is below the usual sintering temperature of the matrix. Due to the poor wettability between copper and alumina, molten copper shows a tendency to remain spherical in shape. A combination of the above with the difference in thermal expansion coefficients of the components may interfere with the proper adhesion at the Al 2 O 3 -Cu interface. This can also cause part of the reinforcement to flow out from the material during the sintering [30][31][32].
The research in this manuscript focused on the fabrication of ceramic-metal composites from the ternary Al2O3-Cu-Mo system and examination of the influence of the second metal phase addition on selected physical and mechanical properties of the samples. The slip casting method was used to prepare Al2O3-Cu-Mo composites with 15 vol% of metal content. This technique is widely used in ceramic manufacturing due to its effectiveness, good initial particle packing in green bodies in comparison to conventional dry-pressing methods [33], and the possibility to obtain complex-shaped and large-sized materials [34-36]. The present study attempts to evaluate the influence of Mo particle addition, as a second component of the metal phase in the composite, on the microstructure and selected properties. The rheological behavior of the prepared suspensions used to fabricate the composites was studied. Selected physical and mechanical properties, such as hardness, fracture toughness, and tribological behavior of the obtained composites, and the correlation between the microstructure and mechanical properties were examined. First of all, we believe that the investigation presented in this manuscript will give a starting point for other scientists to produce composites from the Al2O3-Cu-Mo system. In addition, we hope that this research regarding the addition of Mo to Al2O3-Cu composites will lead to improvements in the physical and mechanical properties of ternary composites from the ceramic-metal system for a broad range of applications. In summary, the results obtained will allow determining the relation between the microstructure, phase structure, and basic properties of Al2O3-Cu in comparison to Al2O3-Cu-Mo.
Experimental procedure
The research has been carried out on alumina, copper, and molybdenum powders. The Al2O3 TM-DAR (Tamei Chemicals, Japan) powder was of high purity, with an average particle size of 126 ± 13 nm (measured on a Zetasizer Nano ZS, Malvern Instruments), a density of 3.80 g cm−3 measured on an AccuPyc II 1340 pycnometer (Micromeritics, USA), and a specific surface area of 10.85 ± 0.07 m2 g−1 (calculated from BET). The SEM image (Fig. 1a) reveals that the Al2O3 particles are characterized by spherical morphology. It was found that the alumina powders were firmly agglomerated. The mean particle size determined by dynamic light scattering (DLS) for copper (Sigma-Aldrich, Poland) was estimated to be 15.4 ± 5 μm, whereas for the molybdenum (Createc, Poland) powder it was 13.4 ± 5 μm. The metal powders are characterized by a high purity of 99.99%. Analysis of the morphology images reveals that the copper powder has a close-to-spherical shape (Fig. 1b). As can be seen in the SEM images, the Mo powder has an irregular shape (Fig. 1c). The specific surface area of the copper powder was 0.29 ± 0.01 m2 g−1, whereas the molybdenum powder is characterized by a specific surface area equal to 0.24 ± 0.01 m2 g−1 (calculated from BET). Di-ammonium hydrogen citrate (DAC) and citric acid (CA), delivered by POCH SA, were used as organic additives. Distilled water was used as the solvent in the experiment to produce the composites.
Two series of composites, Al2O3-Cu (series I) and Al2O3-Cu-Mo (series II), were prepared with the use of the slip casting method. The composite specimens were obtained according to the slip casting technique shown schematically in Fig. 2. In this study, suspensions with 50 vol% of solid content were fabricated. The total metal share in the composite system was 15 vol%. In the case of the three-component system, the ratio of Cu to Mo was 1:1.
Fig. 1 The morphology of the base powders: a Al2O3, b Cu, c Mo.
The mixture of the slip casting components was fabricated in several steps. In the first step, the dispersing agents were dissolved in distilled water. The addition of 1.5 wt% of the dispersing agents (DAC and CA) allows obtaining slurries that are flowable and stable over time. Then, the ceramic and metallic powders were added. The prepared slurries were mixed and degassed in a planetary centrifugal mixer THINKY ARE-250. The suspensions were homogenized for 8 min at a rate of 1000 rpm and then degassed for 2 min at a rate of 2000 rpm. In the next stage, the mixture was cast into identical gypsum molds. The use of porous molds permitted the removal of water from the prepared slurries as a result of the capillary action force, which resulted in the green body. The obtained specimens were dried for 48 h at 40°C in a laboratory dryer. The dried and shrunk samples could be removed from the gypsum molds easily. The samples were sintered at 1400°C in a reducing atmosphere (80% N2 and balance H2). The dwell time of the composites was 2 h. During sintering, the heating and cooling rate was 2°C min−1. The selection of the sintering temperature in the experiment was determined by the low melting point of copper, equal to 1084°C [29], as one of the metal phase components. The use of a lower sintering temperature prevents liquid metal from leaking out during the sintering process.
Several methods were used to establish the properties of the slurries and microstructure and selected properties of the obtained composites.
The stability of the suspension during fabrication of the samples by the slip casting method is significant. Therefore, macroscopic observation of the two slurries, Al2O3-Cu (series I) and Al2O3-Cu-Mo (series II), was made during a sedimentation test. Slurry samples with volumes of 15 mL were poured into individual test tubes. The experimental time was 48 h. The sedimentation test was conducted at room temperature (25°C), and images were captured with a digital camera every 2 h.
Rheological measurements are a highly useful technique for investigating shear stress as a function of shear rate and viscosity as a function of the applied shear rate [33,37]. Rheological tests enable the characterization of the tested fluids [38,39]. Such measurements make it possible to determine what kind of fluid it is [40-42]. Furthermore, the results of rheological tests indicate which fabricated slurry is characterized by the highest or lowest viscosity. For this purpose, the viscosity of the slurries was studied with a Kinexus Pro rheometer (Malvern Instruments, UK) equipped with plate-plate geometry. The diameter of the rotating geometry was 40 mm, with the gap between the plates equal to 0.5 mm. During the investigation, the value of the shear rate was increased from 0.1 to 300 s−1. The measurements were done at 25°C.
An examination of the essential characterization of the produced composites was conducted.
A scanning electron microscope (JSM-6610 SEM) was employed to characterize the microstructure of the composites. The samples were ground and polished to see the metal phases clearly in SEM investigation and to eliminate the effect of surface flaws on mechanical properties. The composites mounted in resin were ground with abrasive paper in the range of 80 to 4000 gradations and polished using diamond pastes: 3 μm and 1 μm gradation. Before observations, samples were carbon coated using Quorum Q150T ESS coating system. The chemical compositions were characterized by using energy-dispersive X-ray spectroscopy detector. EDX chemical composition analyses of the specimens were done at acceleration voltage 15 kV.
The phase analysis was carried out by X-ray diffractometry (XRD) at a sweep rate of 0.02° and a counting time of 0.5 min−1, with a 2θ range from 20 to 100°, using a Rigaku MiniFlex II diffractometer. The analysis was performed with Cu Kα: λ = 1.54178 Å, 30 kV, 15 mA.
The selected physical properties of the sintered specimens were measured by the Archimedes method.
In this study, the hardness and fracture toughness were measured with a Vickers hardness tester (WPM LEIPZIG HPO-250) using the indentation method under a load of 196 N, with the fracture toughness calculated from the length of the cracks that developed during the test using the Anstis and Niihara equations [43-46]. In the presented work, the fracture toughness has been calculated based on two formulas [43-46]: the Anstis equation, applicable for l/a > 1.5, and the Niihara equation, applicable for 0.25 < l/a < 2.5.
Friction is one of the most common physical phenomena in nature and one of the main processes occurring between moving elements, leading to energy losses and wear-related losses. Wear usually creates the need to replace elements, assemblies, or entire machines. Research on friction processes and the implementation of its results is of great importance for improving the economic and energy efficiency of enterprises using machines in the production process. Wear resistance tests allow better prediction of the behavior and durability of elements in the working environment. They are a basic tool in the mechanical design of machine components.
The ball-on-disc tribological tests were performed using Bruker's Universal Mechanical Tester (UMT) TriboLab™. Its two-dimensional force sensor DFH-20 enables measurements in the load range from 2 to 200 N with 10-mN resolution. Measurements of the friction force and normal force allowed obtaining the coefficient of friction (COF) of the specimens. A WC ball of 9.5-mm diameter was used as the counterbody. The ball holder was stationary during the whole experiment. After being fixed rigidly to the rotating holder, unlubricated samples were tested under a constant load of 100 N for 10 h at ambient conditions. Circular wear tracks with a 5-mm diameter were formed on the tested samples. Tests were carried out at a revolution speed of 300 rpm, which corresponds to about 0.16 m s−1. The sliding distance equaled 5760 m. The measured data were recorded every 0.01 s.
All samples were weighed before and after the tribological testing. The change in mass Δm of the tested samples was determined with a Mettler Toledo analytical balance with readability down to 0.1 mg.
The wear volume V (mm3) was calculated as the quotient of the specimen's weight loss Δm (g) and its density ρ (g cm−3). The specific wear rate w was defined, according to the standard ISO 20808:2016, as the volume loss V per sliding distance L (m) and applied load Fn (N), i.e., w = V/(Fn·L). The values of the coefficient of friction, the wear volume, and the specific wear rate were calculated as a mean of the results collected for 5 samples.
Figure 3 shows the photos collected during the sedimentation test. This experiment demonstrates that even after 4 h, both slurries were stable and no phase separation was noticed. During the investigation, slight sedimentation was observed in the Al2O3-Cu-Mo suspension after 8 h, in contrast to the Al2O3-Cu suspension, which was stable throughout the experiment. The conceivably worse stability over time of the slurry containing molybdenum is due to the density of Mo, which promotes sedimentation of the molybdenum particles in the suspension. These results demonstrate that the particles were well dispersed in the Al2O3-Cu slurry even after 48 h. Nevertheless, the obtained results show that both suspensions are suitable for forming composites by the slip casting method, since they are stable for a minimum of 2 h from the moment of preparation.
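As a worked example of the specific-wear-rate definition given earlier in this section (w = V/(Fn·L) with V = Δm/ρ), the short Python sketch below reproduces the calculation. The mass loss and density values are hypothetical placeholders chosen only for illustration, while the load (100 N) and sliding distance (5760 m) follow the test conditions described above.

```python
def specific_wear_rate(delta_m_g, density_g_cm3, load_N, distance_m):
    """Specific wear rate per ISO 20808:2016: volume loss per applied load and sliding distance.

    delta_m_g      -- mass loss of the specimen in grams
    density_g_cm3  -- specimen density in g/cm^3 (1 cm^3 = 1000 mm^3)
    Returns w in mm^3 N^-1 m^-1.
    """
    volume_mm3 = delta_m_g / density_g_cm3 * 1000.0  # V = delta_m / rho
    return volume_mm3 / (load_N * distance_m)        # w = V / (F_n * L)

# Hypothetical mass loss and density with the test conditions used in this work (100 N, 5760 m):
print(specific_wear_rate(delta_m_g=0.009, density_g_cm3=4.5, load_N=100, distance_m=5760))
# -> about 3.5e-06 mm^3 N^-1 m^-1, i.e. the 10^-5 order of magnitude reported in the abstract
```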
The results of the rheological experiments are collected in Fig. 4 and Fig. 5. On the basis of the viscosity results shown in Fig. 4, it was found that both suspensions are shear-thinning fluids. The Al2O3-Cu slurry (marked on the chart as A) has a maximal viscosity equal to 0.421 Pa s at the minimal shear rate of 1.3 s−1. It was observed that when the shear rate increases to 260 s−1, the viscosity decreases to about 0.0112 Pa s. In turn, on the basis of Fig. 4, it was found that the Al2O3-Cu-Mo suspension (marked on the chart as B) has a maximal viscosity equal to 0.26 Pa s at the minimal shear rate of 1.3 s−1. When the shear rate increases to 260 s−1, the viscosity decreases to about 0.0027 Pa s. These results lead to the conclusion that the addition of molybdenum particles reduces the viscosity of the suspension compared to the slurry containing only alumina and copper.
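As a rough check on the shear-thinning interpretation, the two reported (shear rate, viscosity) pairs for each slurry can be fitted to a power-law (Ostwald-de Waele) model, eta = K * (shear rate)^(n-1). The Python sketch below performs this two-point estimate; it is only an illustration based on the endpoint values quoted above, not the analysis performed in this work.

```python
import math

def power_law_index(rate1, visc1, rate2, visc2):
    """Two-point estimate of the flow behavior index n for eta = K * rate**(n - 1).

    n < 1 indicates shear thinning; n > 1 would indicate shear thickening.
    """
    n_minus_1 = math.log(visc2 / visc1) / math.log(rate2 / rate1)
    return 1.0 + n_minus_1

# Endpoint values quoted above for the two slurries (Pa*s at 1.3 and 260 1/s).
print(power_law_index(1.3, 0.421, 260, 0.0112))  # Al2O3-Cu:    n ~ 0.32
print(power_law_index(1.3, 0.260, 260, 0.0027))  # Al2O3-Cu-Mo: n ~ 0.14
```

Both indices are well below 1, consistent with the shear-thinning behavior described above, and the lower index for the Mo-containing slurry matches its stronger viscosity drop.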
The flow curves of the suspensions shown in Fig. 5 provide evidence that both slurries exhibit antithixotropic behavior (rheopexy). This suggests that a structure builds up in the suspensions over time, causing the viscosity to increase; this is attributed to electrostatic interactions between the metal particles (Cu, or Cu and Mo) and the alumina particles. The flow curves in Fig. 5 show that for the Al2O3-Cu suspension, the shear stress was 0.547 Pa at the minimum shear rate of 1.3 s⁻¹ and 2.9 Pa at the maximum shear rate of 260 s⁻¹. For the Al2O3-Cu-Mo slurry, the shear stress was 0.164 Pa at 1.3 s⁻¹ and 0.7 Pa at 260 s⁻¹. The resulting suspensions were distinguished by high fluidity. The measurements showed that the yield stress is close to 0 Pa in both suspensions, which is very beneficial when casting the slurries into molds.
The microscopic observations were carried out on cross-sectioned fragments of the samples, and the images are shown in Fig. 6. The distribution of the metallic phase in the examined specimens is homogeneous. The darker regions in the micrographs correspond to the ceramic matrix, while the bright areas are the metal phases. The scanning electron microscopy observations revealed changes in the microstructure of the samples depending on the composition of the composites. In the case of the Al2O3-Cu composites, the metal phase particles have gentle edges and their morphology did not change significantly compared to the initial Cu powder. In the case of the ternary composite, however, the metallic phase particles distributed in the ceramic matrix have uneven edges. From the SEM images, it may be concluded that in the Al2O3-Cu-Mo composite two types of metallic particles (lighter and darker) are present. Based on microscopic observations alone, it is not possible to determine clearly whether the lighter or the darker bright areas correspond to the Cu or the Mo phase, so a chemical analysis of the Al2O3-Cu-Mo composite composition was carried out. Figure 7 presents the SEM pictures with the microanalysis of the chemical composition of the Al2O3-Cu-Mo composite made by the energy-dispersive spectroscopy (EDS) technique. The presence of aluminum, oxygen, copper, and molybdenum was found. Characteristically, the copper atoms and the molybdenum atoms (blue and pink, respectively) occupy different areas. It must be remembered that they were added and mixed with alumina as separate phases, so the distribution of phases is relatively homogeneous. Direct EDS measurements allowed us to confirm two separate metallic phases in the final material, suggesting that during synthesis the copper and molybdenum particles did not combine or react with each other. To examine the phase composition, X-ray analysis was carried out.
The phase composition of the Al2O3-Cu and Al2O3-Cu-Mo composites after sintering is presented in Fig. 8. The XRD patterns reveal the presence of aluminum oxide and copper in the Al2O3-Cu materials. For the composites from the Al2O3-Cu-Mo system, the X-ray analysis showed that this system contains three phases: aluminum oxide, copper, and molybdenum. From the XRD patterns, it may be concluded that the reducing atmosphere used for sintering prevents the formation of new phases and reactions between the components during sintering.
The Al2O3-Cu composites exhibited a relative density of 96.60 ± 0.74%, while a higher relative density of 98.93 ± 0.59% was measured for the Al2O3-Cu-Mo composites. The higher density may be attributed to the addition of molybdenum particles to the composites.
The tests of the mechanical properties and wear behavior of the obtained composites allowed the effect of adding molybdenum particles to the Al2O3-Cu material to be assessed relative to composites without them.
Vickers hardness and fracture toughness analysis revealed that the results depend on the specimens' chemical composition. The Vickers hardness results, shown in Table 1, correlate very well with the obtained relative densities. Samples from the ternary Al2O3-Cu-Mo system, prepared with equal amounts of Cu and Mo in the metallic phase, were characterized by higher hardness values than the reference Al2O3-Cu samples. The average hardness measured for specimens with a metallic phase containing both Cu and Mo was 10.06 ± 0.49 GPa, while for the reference samples with Cu it was 6.81 ± 2.08 GPa. The results in Table 1 provide evidence that molybdenum particles improve the hardness of the composites.
The changes observed in the sintered samples' fracture toughness are presented in Table 2. Although the Al2O3-Cu-Mo composites show increased Vickers hardness compared to the Al2O3-Cu materials, the molybdenum addition does not visibly affect the fracture toughness. In addition to the microstructure analysis and its impact on the mechanical properties, an initial tribological test was carried out. The results of the ball-on-disc dry friction test for the Al2O3-Cu and Al2O3-Cu-Mo composites are presented in Fig. 9. Direct measurements showed that the mean value of the coefficient of friction of the tested materials was similar: the result obtained for the Al2O3-Cu samples was 0.58 ± 0.05 and the COF of Al2O3-Cu-Mo was 0.57 ± 0.08. The COF of Al2O3-Cu stabilized after approximately 1 h, while the COF of Al2O3-Cu-Mo increased slightly between the 30th minute and the 7th hour of the test, and then reached a plateau.
The results of the average wear volume calculations are presented in Fig. 10a. The average wear volume obtained for the Al2O3-Cu composite samples was 1.99 ± 0.14 mm³, while for the Al2O3-Cu-Mo composites it was 1.28 ± 0.21 mm³. These values were used to determine the specific wear rate; since this coefficient is normalized by the applied load and sliding distance, it is more universal and makes comparison of samples tested under different conditions more reliable. The data collected in our experiment are presented in Fig. 10b. The mean specific wear rates of the Al2O3-Cu and Al2O3-Cu-Mo composite samples were 0.35 × 10⁻⁵ ± 0.02 mm³ N⁻¹ m⁻¹ and 0.22 × 10⁻⁵ ± 0.04 mm³ N⁻¹ m⁻¹, respectively. The molybdenum addition improved the wear resistance of the material, but even without this addition the tribological test results are very satisfactory.
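As a consistency check, the short R sketch below applies the wear-volume and specific-wear-rate definitions from the methods section to the mean wear volumes reported above, using the 5760 m sliding distance and 100 N load of the test; the mass loss and density in the first call are hypothetical placeholders, not measured data.

```r
# Wear volume V = delta_m / rho; specific wear rate w = V / (L * F_n) per ISO 20808:2016.
wear_volume_mm3 <- function(delta_m_g, rho_g_cm3) 1000 * delta_m_g / rho_g_cm3
specific_wear_rate <- function(V_mm3, distance_m = 5760, load_N = 100) {
  V_mm3 / (distance_m * load_N)   # mm^3 N^-1 m^-1
}

wear_volume_mm3(delta_m_g = 0.008, rho_g_cm3 = 4.0)  # hypothetical inputs -> 2 mm^3
# Applying the reported mean wear volumes reproduces the reported wear rates:
specific_wear_rate(1.99)  # ~0.35e-5 (Al2O3-Cu)
specific_wear_rate(1.28)  # ~0.22e-5 (Al2O3-Cu-Mo)
```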
Conclusions
This work presents the fabrication of alumina matrix composites reinforced with copper (series I) and with copper and molybdenum (series II). The correlation between the studied microstructures and selected measured properties of the obtained composites was analyzed, and the tribological behavior of the fabricated materials was examined. The obtained results confirm the suitability of the slip casting method for producing composites from the Al2O3-Cu and Al2O3-Cu-Mo systems. Based on the sedimentation test, it can be concluded that both suspensions examined in this investigation are suitable for forming composites by slip casting, since they are stable for a minimum of 2 h from the moment of preparation. The rheological investigations revealed that all of the obtained slurries were non-Newtonian, shear-thinning fluids. The fabricated composites were subjected to XRD analysis, microstructural observations, and mechanical property measurements. The X-ray investigations provided evidence that no new phases formed in the Al2O3-Cu and Al2O3-Cu-Mo systems after the sintering process. The scanning electron microscopy observations revealed a good interface between the metal particles and the alumina matrix in both types of composites, and showed that the distribution of the metallic phase in the obtained specimens is rather homogeneous. The experiments demonstrated that the addition of molybdenum to the Al2O3-Cu material increased the hardness without a significant impact on the fracture toughness of the obtained composites, and improved the wear resistance of the material.
The results obtained provide new insight into the correlation between the parameters of the production process and the microstructure and properties of Al2O3-Cu-Mo materials. The research methodology presented here provides basic knowledge about producing ceramic-metal composites from this ternary system. Furthermore, the presented results on Al2O3-Cu-Mo composites constitute a new scientific advance in the fundamental understanding of ternary systems in ceramic matrix composite science.
Funding The study was accomplished with the help of the funds allotted by The National Science Centre within the framework of the research project "OPUS 13" no. 2017/25/B/ST8/02036. This investigation is supported by the Foundation for Polish Science (FNP) -START 2019 scholarship.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Materials Science"
] |
Understanding heterogeneous mechanisms of heart failure with preserved ejection fraction through cardiorenal mathematical modeling
In contrast to heart failure (HF) with reduced ejection fraction (HFrEF), effective interventions for HF with preserved ejection fraction (HFpEF) have proven elusive, in part because it is a heterogeneous syndrome with incompletely understood pathophysiology. This study utilized mathematical modeling to evaluate mechanisms distinguishing HFpEF and HFrEF. HF was defined as a state of chronically elevated left ventricle end diastolic pressure (LVEDP > 20mmHg). First, using a previously developed cardiorenal model, sensitivities of LVEDP to potential contributing mechanisms of HFpEF, including increased myocardial, arterial, or venous stiffness, slowed ventricular relaxation, reduced LV contractility, hypertension, or reduced venous capacitance, were evaluated. Elevated LV stiffness was identified as the most sensitive factor. Large LV stiffness increases alone, or milder increases combined with either decreased LV contractility, increased arterial stiffness, or hypertension, could increase LVEDP into the HF range without reducing EF. We then evaluated effects of these mechanisms on mechanical signals of cardiac outward remodeling, and tested the ability to maintain stable EF (as opposed to progressive EF decline) under two remodeling assumptions: LV passive stress-driven vs. strain-driven remodeling. While elevated LV stiffness increased LVEDP and LV wall stress, it mitigated wall strain rise for a given LVEDP. This suggests that if LV strain drives outward remodeling, a stiffer myocardium will experience less strain and less outward dilatation when additional factors such as impaired contractility, hypertension, or arterial stiffening exacerbate LVEDP, allowing EF to remain normal even at high filling pressures. Thus, HFpEF heterogeneity may result from a range of different pathologic mechanisms occurring in an already stiffened myocardium. Together, these simulations further support LV stiffening as a critical mechanism contributing to elevated cardiac filling pressures; support LV passive strain as the outward dilatation signal; offer an explanation for HFpEF heterogeneity; and provide a mechanistic explanation distinguishing between HFpEF and HFrEF.
Introduction
Heart failure (HF) occurs when the heart is unable to maintain sufficient cardiac output (CO) to supply the metabolic needs of the body at normal cardiac filling pressures [1], resulting in the signs and symptoms of heart failure, including dyspnea, edema, fatigue, and reduced exercise capacity.Heart failure affects more than 64 million globally [2].It is often accompanied by reduced ejection fraction (HFrEF, EF < 40%), but about half of incident heart failure cases have a preserved ejection fraction (HFpEF, EF > 50%) [3].
Management of HFrEF has greatly improved and survival rates have increased over the past 20 years due to the advent of effective therapies [4].However, outcomes in HFpEF remain poor, and therapies effective in HFrEF have not shown the same benefits in HFpEF clinical trials [5,6].Sodium-glucose cotransporter-2 (SGLT2) inhibitors are the first class of drug to demonstrate clinical benefit [7,8], but the reasons for their success while others were not effective remain unclear, highlighting our limited understanding of HFpEF.
A major challenge to improving outcomes in HFpEF is that the underlying mechanisms are heterogeneous and not well understood [9].While HFrEF patients typically have a history of ischemic heart disease and a predictable pattern of progressive ventricle dilatation, HFpEF patients often have cardiac geometries and risk factor profiles that are similar to non-HF patients with left ventricle (LV) hypertrophy [10].However, the distinguishing difference is that they have higher cardiac filling pressures [11][12][13].A subset of HFpEF patients have a history of ischemic heart disease and impaired cardiac contractility [14,15], but many have reported normal systolic function [16].Hypertension is the most common comorbidity in HFpEF, but blood pressures in HFpEF patients are also not different from those with LV hypertrophy (LVH) [10].Increased myocardial passive stiffness is observed in nearly all studies of HFpEF [17][18][19].This stiffening was originally thought to be due to increased collagen deposition and fibrosis, but more recently, evidence has emerged that the myocytes themselves become stiffer due to hyperphosphorylation of titin filaments [20][21][22].Still, increased cardiac stiffness alone does not seem to explain the signs and symptoms of HFpEF, as elevated LV stiffness is also observed in LVH patients who do not have heart failure [15,23].
Multiple other abnormalities in diastolic, systolic, and vascular function have been reported in HFpEF, but again none seem to be sufficient to explain the syndrome.Delayed ventricle relaxation during diastole is frequently but not always observed in HFpEF [24], as indicated by higher isovolumic relaxation times (IVRT) [19,25] and relaxation time constants (tau) [19].Vascular stiffness is increased in HFpEF [17] and is strongly associated with aging and obesity-both risk factors for HFpEF [10,[26][27][28][29][30].Impaired ventricular-vascular coupling [31] may play a role; both arterial elastance (Ea) and ventricular end-systolic elastance (Ees) are increased in HFpEF relative to healthy controls.Venous compliance and venous capacitance are decreased in HFpEF patients relative to patients with non-cardiac dyspnea, and the severity is correlated with obesity [32].Renal dysfunction is a co-morbidity in more than half of HFpEF patients [33,34].However, each of these mechanisms also occur in many patients who never ultimately develop HFpEF [10,17].
While none of these mechanisms individually seem to explain HFpEF, each likely plays some role in at least a portion of the HFpEF population.However, the relative contribution of each is not known.In addition, some observed functional changes may be causal, while others may be adaptive or maladaptive consequences of other causal mechanisms.A better quantitative understanding of the relative contributions of potential mechanisms of HFpEF is needed in order to better treat and prevent this disease.
In addition to the uncertainty around causal mechanisms of HFpEF, another mystery of HFpEF is why elevated cardiac filling pressures do not cause outward remodeling and ventricle dilatation in these patients.Abnormally high filling pressure is a key distinguishing feature between patients with HFpEF and non-symptomatic subjects [11][12][13].Elevated filling pressure generally is associated with volume-overload, and other conditions of volume overload (e.g.impaired systolic function, mitral regurgitation) are characterized by progressive eccentric or outward dilatation [35].However, HFpEF is the anomaly-filling pressures are elevated, but outward remodeling is limited.
Mathematical modeling has been utilized extensively to improve our understanding of cardiac growth and remodeling laws under normal and pathologic conditions [36][37][38][39][40][41], and can be a tool for evaluating potentially multi-factorial causes of HFpEF. Most previous models treat the cardiovascular system as a closed system and do not account for the role of natriuretic control of fluid status and blood volume/pressure by the kidney. However, because elevated cardiac filling pressure is a key feature of heart failure, hemodynamic feedback control of fluid status provided by the kidney likely plays a critical role. We have previously developed a model that links cardiac and kidney function [39], and used this model to investigate pathophysiology and pharmacologic responses in HFrEF [42]. A key feature of this model, compared to closed-system cardiac models that treat fluid volume as constant rather than controlled by the kidney, is that the effects of alterations in cardiac and vascular function on fluid status can be simulated mechanistically; congestion and elevated cardiac filling pressures emerge mechanistically from fluid retention as an adaptive renal response to maintain CO. Similarly, changes in afterload are simulated mechanistically; e.g., hypertension can be induced mathematically through renal mechanisms that cause sodium/water retention and/or neurohormonal feedback on peripheral resistance [43]. The effects of changes in preload and afterload on cardiac remodeling over time can then be simulated through growth and remodeling laws that link myocardial loading to changes in myocyte diameter/length, and subsequently to LV wall thickness and chamber volume [39]. The model has been validated by reproducing clinical trials in LV hypertrophy [39] and HFrEF patients [44][45][46].
In this study, we utilized this model to investigate mechanisms of HFpEF.We first evaluated the relative contributions of the commonly postulated HFpEF mechanisms in producing a resting state of HFpEF.We then investigated how these mechanisms alter mechanical stress and strain in the myocardium, and how these mechanical signals may produce the distinctly different cardiac remodeling patterns in HFpEF versus HFrEF, even while cardiac filling pressure is similarly elevated in both.Thirdly, we evaluated the role of titin stiffening, compared to changes in collagen type and content, on mechanical stress and strain felt by the myocardium in HFpEF.Mathematically understanding the mechanisms of HFpEF is a step toward better understanding the disparate response to therapies between HFpEF and HFrEF, and may improve our ability to design therapies for HFpEF in future.
Mathematical model
We utilized a previously published cardiorenal model, summarized schematically in Fig 1, that integrates a cardiac-ventricular function model [42,43], originally developed by Bovendeerd et al. [47] and Cox et al. [48] (Fig 1A and 1C), and a model of renal function and volume homeostasis [49][50][51] (Fig 1D-1G). Model equations, parameters, and initial conditions have been described in detail previously [39] and are given in the supporting material, S1 Text and S1-S6 Tables.
Mathematical definition of HFpEF
In order to evaluate mechanisms that contribute to HFpEF, a minimal set of criteria was specified to define HFpEF mathematically. By definition, HFpEF requires an ejection fraction greater than 50%. In addition, HFpEF is differentiated from other conditions by the presence of elevated cardiac filling pressures [10,52,53]. Clinical diagnostic guidelines for HFpEF [54] recommend using a diagnostic score that takes into account multiple echocardiographic measurements and natriuretic peptide (NP) levels (after first ruling out other causes). The key components of this score - elevated E/e', increased left atrial volume index (LAVI), and elevated N-terminal pro-brain natriuretic peptide (NT-proBNP) or BNP - are all surrogate indicators of elevated cardiac filling pressure. If this score is inconclusive, invasive measures of resting LVEDP or pulmonary capillary wedge pressure (PCWP) are used to confirm the diagnosis. Thus, elevated filling pressures are a critical defining feature of HFpEF. While some HFpEF patients may have reduced cardiac output, CO is often normal [55], especially at rest, and is not a key component of the diagnostic criteria [54].
Thus, the minimal criteria for HFpEF were considered as: 1) left ventricle end diastolic pressure (LVEDP) ≥ 20 mmHg and 2) EF ≥ 50%. For this analysis, we focused on elevation of LVEDP at rest. In the early stages of HFpEF, filling pressures may be normal at rest and may only become elevated with exertion [9], but this group has much better (near age-normal) prognoses compared to patients with elevated filling pressures [56], and we did not consider them in this analysis.
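For concreteness, this classification can be expressed as a simple predicate; the minimal R sketch below only restates the criteria above and the EF ranges quoted in the Introduction, and is not code from the published model.

```r
# Classify a simulated steady state from LVEDP (mmHg) and EF (%).
# HFpEF: LVEDP >= 20 and EF >= 50; HFrEF: LVEDP >= 20 and EF < 40;
# HFmrEF: LVEDP >= 20 and 40 <= EF < 50; otherwise "no HF".
classify_hf <- function(lvedp, ef) {
  if (lvedp < 20) return("no HF")
  if (ef >= 50) "HFpEF" else if (ef >= 40) "HFmrEF" else "HFrEF"
}

classify_hf(lvedp = 22, ef = 58)  # "HFpEF"
classify_hf(lvedp = 25, ef = 35)  # "HFrEF"
```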
Modeling potential mechanisms of HFpEF
To determine the relative contributions of proposed mechanisms to producing a state of HFpEF, the effect of each potential pathophysiological mechanism on LVEDP and EF was evaluated by changing model parameters associated with that mechanism, as summarized in Table 1 and described below.
Increased LV passive stiffness
As described previously, the myocardium was modeled as a homogeneous orthotropic material, defined by a fiber stiffness and a radial stiffness parameter [42,47]. The passive stress along the fiber, σ_f, is a nonlinear function of the fiber stretch, parameterized by a scaling constant β, the stiffness constant along the fiber direction c_f, and the myocardial fiber stretch λ_f (the explicit expressions are given in S1 Text). As described previously [47], the myocardial fiber stretch is related to the dynamic LV volume through V_lv and V_lv,zero-p, the LV chamber volumes (i.e., volume of blood in the chamber) under pressurized and zero-pressure conditions respectively, and V_w, the LV wall volume (i.e., the volume of LV myocardial tissue, which is the sum of the myocyte volume and the extracellular matrix [ECM] volume). In the first part of this analysis (sensitivity analysis), the effect of increased myocardial stiffness was evaluated by increasing c_f, without differentiating between myocyte and ECM stiffness. Later, we revisit the homogeneous assumption and update the model to distinguish the contributions of myocyte stiffness, ECM stiffness, and their relative volume fractions.
LV contractility
Contractility c is defined as an intrinsic property of the myocardium. This parameter was decreased to represent reduced myocyte contractile ability (Table 1). As developed originally by Bovendeerd et al. [47], contractility is used to calculate the active fiber stress in the ventricle wall during systole as a function of the chamber contractility c, a scaling constant σ_ar, a sigmoidal function f(l_s) of the sarcomere length l_s, a sinusoidal signal g(t_a) describing cardiac excitation as a function of the time elapsed since activation t_a, and a function h(v_s) of the sarcomere fiber shortening velocity v_s. All equations for determining sarcomere length, time elapsed since cardiac excitation, and shortening velocity have been described previously [39,47] and are given in S1 Text.
LV outward dilatation
While concentric remodeling rather than outward dilatation is more often observed in HFpEF patients, some patients may have some degree of outward dilatation.In addition, understanding differences in outward dilatation may be important in distinguishing between HFpEF and HFrEF mechanisms.Thus, we also sought to evaluate the effect of outward dilatation on EF and LVEDP.As described previously [39], outward dilatation is represented in the model as elongation of myocytes resulting in enlargement of the LV cavity volume V lv,zero-p .
Myocytes are modeled as cylinders, so the total myocyte volume is V_myo = N_myo · π(D_myo/2)² · L_myo, where N_myo, D_myo, and L_myo are the number, average diameter, and average length of the myocytes, respectively. D_myo is the sum of the normal (healthy) diameter D_myo,0 and any change in diameter ΔD_myo resulting from remodeling by addition of sarcomeres in parallel. L_myo is the sum of the normal (healthy) length L_myo,0 and any change in length ΔL_myo resulting from remodeling by addition of sarcomeres in series.
As the myocytes elongate, the unpressurized cavity volume V_lv,zero-p increases above V_lv,0, the unpressurized LV cavity volume under normal conditions before any remodeling (the explicit relation is given in S1 Text). The effect of outward dilatation was evaluated by increasing the change in myocyte length, ΔL_myo.
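A minimal R sketch of this myocyte geometry bookkeeping (cylindrical myocytes with baseline dimensions plus remodeling increments) is given below; all numerical values are placeholders, and the mapping from myocyte elongation to V_lv,zero-p is not reproduced here.

```r
# Total myocyte volume for N cylindrical myocytes of diameter D and length L,
# where D and L are baseline values plus remodeling increments (parallel / series growth).
myocyte_volume <- function(N_myo, D0, dD, L0, dL) {
  D <- D0 + dD
  L <- L0 + dL
  N_myo * pi * (D / 2)^2 * L
}

# Placeholder values: 10% elongation with no change in diameter.
myocyte_volume(N_myo = 5e9, D0 = 20e-6, dD = 0, L0 = 100e-6, dL = 10e-6)  # m^3
```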
Slowed LV relaxation
Cardiac excitation is modeled as a sinusoidal function g(t), adapted from the form used in Bovendeerd et al. [47] to allow the rise time and fall time of excitation to be specified separately; here t is the elapsed time since the beginning of activation, t_twitch is the duration of LV contraction, t_r is the duration of the rising part of the contraction, and t_beat is the duration of a single heart beat (1/heart rate). To model slowed relaxation, the twitch time is defined as the sum of the rise time t_r and the fall time, where the fall time is the normal fall time t_f plus an incremental increase Δt_f: t_twitch = t_r + t_f + Δt_f. When Δt_f is zero, the rise and fall are symmetric, and the expression simplifies to the same form used in [47]. Increasing Δt_f causes the excitation signal g(t) to fall more slowly, as illustrated in Fig 2.
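To make the idea in Fig 2 concrete, the sketch below implements an excitation signal whose rise and fall durations are specified separately. The half-sine-squared shape is an assumption for illustration only; the paper's exact sinusoidal expression (Eq 8) is not reproduced here.

```r
# Illustrative excitation signal: rises over t_r and falls over (t_f + dt_f),
# so increasing dt_f lengthens only the falling arm of the twitch (cf. Fig 2).
# The specific half-sine-squared shape is assumed, not taken from the paper.
excitation <- function(t, t_r = 0.020, t_f = 0.020, dt_f = 0) {
  t_fall <- t_f + dt_f
  ifelse(t < 0, 0,
         ifelse(t <= t_r, sin(0.5 * pi * t / t_r)^2,
                ifelse(t <= t_r + t_fall,
                       cos(0.5 * pi * (t - t_r) / t_fall)^2, 0)))
}

t <- seq(0, 0.25, by = 0.001)
g_normal <- excitation(t)                # symmetric twitch, t_twitch = 40 ms
g_slowed <- excitation(t, dt_f = 0.130)  # fall lengthened by 130 ms (t_twitch = 170 ms)
```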
Isovolumic relaxation time (IVRT) is normally around 80 ms, and the relaxation time constant τ is typically 35-48 ms, but both are often increased in pressure-overload subjects and in most but not all HFpEF subjects, with IVRT values ranging from 85ms-160ms [10,25] and τ ranging from 45-90 ms [19].In the model, increases in IVRT and τ (which can be calculated from simulated LV pressure waveforms) can be induced by increasing the parameter Δt f .Thus, to evaluate the effect of slowed relaxation, Δt f was increased over a wide range, ranging from 0 to 130 ms, which corresponds to a t twitch of 40 to 170 ms (0-325% increase).This range produced increases in IVRT and τ ranging from their baseline values of 80 ms and 35 ms, respectively, up to extremely large values of 432 ms and 220 ms, respectively.Note that t twitch represents the total time during which the ventricle is contracting and relaxing, while the outputs IVRT and τ represent calculated features of a portion of the LV pressure waveform, and thus do not scale linearly with t twitch .
Systemic vascular stiffening
For each segment i of the vasculature, the compliance C_i defines the linear relationship between pressure and volume, P_i = (V_i - V_i,0)/C_i, where V_i and P_i are the dynamic volume and pressure in the segment, and V_i,0 is the volume under zero-pressure conditions. Vascular stiffening was modeled as a decrease in compliance C, and was evaluated for four vascular segments i: systemic arterial, systemic venous, pulmonary arterial, and pulmonary venous.
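A one-line R sketch of this relation, with placeholder numbers, shows how a reduced compliance raises the pressure generated by the same stressed volume; it is an illustration only, not part of the published model code.

```r
# Pressure in vascular segment i from the linear compliance relation
# P_i = (V_i - V_i0) / C_i; stiffening is modeled as a decrease in C_i.
segment_pressure <- function(V, V0, C) (V - V0) / C

# Placeholder values: same stressed volume, normal vs. 40% reduced compliance.
segment_pressure(V = 3.2, V0 = 2.5, C = 1.4)        # baseline segment
segment_pressure(V = 3.2, V0 = 2.5, C = 1.4 * 0.6)  # stiffened segment: higher pressure
```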
Reduced venous capacitance
Reduced venous capacitance was modeled as a decrease in the venous unpressurized volume V ven,0 in Eq 10 above.
Hypertension
As described previously [43], hypertension was induced mathematically by changing renal functional parameters that result in sodium and water retention: increasing renal vascular resistance, decreasing glomerular permeability, increasing tubular sodium reabsorption rates, and increasing the set-point for renal interstitial hydrostatic pressure (RIHP) that controls the kidney's pressure-natriuresis response.Each of these mechanisms alone cause small increases in MAP, but when combined, can increase blood pressure over a wide range.Hypertension is a heterogeneous disease, and the mechanisms used here are one way to induce hypertension, but are not intended to represent all forms of hypertension that may exist in HFpEF patients.
Sensitivity analysis
To evaluate the contributions of each mechanism in producing a state of HFpEF, a Sobol global sensitivity analysis was conducted. The Sobol method decomposes the variance of a given model output into the fractions attributable to each input. The first-order Sobol indices quantify the fraction of the variance attributable to each parameter alone, and the total-order indices quantify the fraction of the variance due to each parameter jointly with other parameters. The parameters in Table 1 were used as inputs, and LVEDP and EF were considered as model outputs. For each parameter combination, the model was simulated for 60 days (about 1 minute of computation time; see S2 Text), a duration sufficient to allow the model to settle to a new stable state. During this time, the process of outward remodeling of the myocardium over time in response to elevated wall stress (see Eq 11 later) was turned off by setting its rate constant K_l to zero. The Sobol analysis was conducted using the sensobol package in R [57]. An initial sampling of N = 2^9 was used, and the total number of parameter sets tested was 2N(1+p), or 9216 for p = 8 parameters. Sobol indices were estimated using the Monte Carlo estimator of Azzini et al. [58], and confidence intervals were determined by bootstrapping with 1000 replicates.
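The structure of the Sobol procedure can be illustrated with a self-contained sketch. Note that this uses the classic Saltelli (first-order) and Jansen (total-order) Monte Carlo estimators on a toy linear model, not the Azzini estimator, sensobol calls, or the cardiorenal model used in the study; all names below are illustrative.

```r
# Generic Sobol sensitivity sketch: A/B sample matrices, column-swapped AB_i
# matrices, Saltelli first-order and Jansen total-order estimators.
set.seed(1)
p <- 3; N <- 2^12
f <- function(X) 4 * X[, 1] + 2 * X[, 2] + 1 * X[, 3]  # toy additive model

A <- matrix(runif(N * p), N, p)
B <- matrix(runif(N * p), N, p)
yA <- f(A); yB <- f(B)
V  <- var(yA)

S_first <- S_total <- numeric(p)
for (i in seq_len(p)) {
  ABi <- A; ABi[, i] <- B[, i]                  # A with column i taken from B
  yABi <- f(ABi)
  S_first[i] <- mean(yB * (yABi - yA)) / V      # Saltelli (2010) first-order estimator
  S_total[i] <- mean((yA - yABi)^2) / (2 * V)   # Jansen (1999) total-order estimator
}
round(rbind(S_first, S_total), 2)  # analytic first-order indices: 16/21, 4/21, 1/21
```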
Simulating HFpEF over time: Evaluating mechanical drivers of cardiac remodeling
After evaluating combinations of mechanisms that can produce a state of HFpEF (elevated LVEDP with normal EF), we next investigated mechanisms of LV remodeling over time that may differentiate HFpEF from HFrEF.Specifically, outward dilatation is generally thought to be a response to increased preload.We sought to understand why, once preload becomes elevated in HFpEF patients, it does not cause outward dilatation, chamber enlargement, and progressive decline in EF that is seen in HFrEF.
Cardiac remodeling occurs as a result of cellular and tissue adaptations to different loading conditions.Although the different cellular or tissue adaptations for cardiac growth and remodeling have been well studied [59], the specific mechanical stimuli that evoke a remodeling response is still debated [60].It has long been recognized, based on work by Grossman et al [61], that ventricle wall thickening appears to normalize peak systolic wall stress during systole, while outward dilatation appears to occur in response to elevated passive wall stress during diastole [61,62].Grossman proposed that the myocardium remodels in response to changes in peak systolic stress σ f,peak by adding or removing sarcomeres in parallel, thus increasing myocyte diameter and wall thickness, while it remodels in response to an increase in passive diastolic stress σ f,ED by adding sarcomeres in series, increasing myocyte length and causing outward dilatation [61].We previously implemented this hypothesis by allowing myocyte diameter and length to change over time in response to LV wall stress (Fig 1B ) and demonstrated that with these two simple remodeling laws, the model could reproduce the distinct remodeling responses to pressure and volume overload [39], including the progressive outward remodeling in HFrEF [42] and the regression of remodeling with anti-hypertensive therapies in LV hypertrophy and HFrEF patients [43,44].
However, more recent studies propose that diastolic wall stretch is the actual stimulus for outward dilatation [63][64][65][66].Here, we revisited the assumptions regarding the driver of myocyte lengthening.Specifically, we alternatively considered passive wall stress and passive stretch as the possible signals for outward dilatation.Using two different versions of the model described below, we evaluated the remodeling response over time to reduced LV contractility (the mechanism most commonly associated with HFrEF) versus increased LV stiffness (the mechanism most commonly associated with HFpEF).
Model 1: Stress-driven remodeling
In this version of the model, as we have described previously [39,61], myocyte length increases when end diastolic longitudinal wall stress σ f,ED exceeds the upper limit of normal σ f,ED,0 .Increases in myocyte length were approximated as irreversible (i.e. the change in myocyte length does not return to normal if σ f,ED is returned to normal levels), since previous work has suggested that regression of myocyte elongation may occur much more slowly than progression [39,42].
Here ΔL myo is the change from baseline in average myocyte length and K l is the rate constant governing the speed of elongation.
Model 2: Stretch-driven remodeling
In this version of the model, myocyte length increases when end diastolic wall longitudinal stretch λ f,ED exceeds the upper limit of normal λ f,ED,0.
Where ΔL myo is the change from baseline in average myocyte length, and K l is the rate constant governing the speed of elongation.
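Since Eqs 11 and 12 themselves are given in the supporting material, the sketch below shows one plausible reading of the two remodeling laws as described: myocyte length grows at a rate proportional to the excess of the end-diastolic stress (model 1) or stretch (model 2) over its normal upper limit, and the elongation never regresses. The specific functional form is an assumption for illustration.

```r
# One plausible form of the irreversible elongation laws described in the text
# (assumed for illustration; the paper's exact Eqs 11-12 are in S1 Text).
# dL/dt >= 0: elongation occurs only while the driving signal exceeds its
# normal upper limit, and never regresses.
dLmyo_dt_stress <- function(sigma_ED, sigma_ED_0, K_l) {
  K_l * max(sigma_ED - sigma_ED_0, 0)    # model 1: stress-driven
}
dLmyo_dt_strain <- function(lambda_ED, lambda_ED_0, K_l) {
  K_l * max(lambda_ED - lambda_ED_0, 0)  # model 2: stretch-driven
}

# Setting K_l = 0 "turns off" outward remodeling, as done during equilibration.
dLmyo_dt_stress(sigma_ED = 6, sigma_ED_0 = 5, K_l = 0)            # 0
dLmyo_dt_strain(lambda_ED = 1.25, lambda_ED_0 = 1.20, K_l = 1e-3) # positive growth rate
```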
Simulation procedure
Because a history of hypertension is common in both HFpEF and HFrEF, parameters were first changed to induce mild hypertension (see parameters in Table 1).After an equilibration period that allowed the model to reach a stable state, during which outward remodeling was turned off (rate constant K l = 0 in Eqs 11 and 12), LV contractility was decreased by 15% to simulate HFrEF, and myocardial stiffness c f was increased by 60% to represent HFpEF.Outward remodeling was "turned on" (K l set to > 0) and the simulation was run for 1 year.This simulation was performed with two versions of the model described above, with outward remodeling driven either by LV end diastolic stress (model 1 -Eq 11) or by LV end diastolic stretch (model 2 -Eq 12).
Evaluating contribution of myocyte and collagen stiffness and volume fraction
To enable the effects of myocyte stiffening, collagen stiffening, or increased volume fraction on the development of HFpEF to be evaluated separately, we updated the model to approximate the myocardium as a layered composite of unidirectional fibers of myocytes and extracellular matrix in parallel, each component defined by a stiffness constant and a volume fraction. This is an iso-strain condition, in which a force applied to the composite produces the same strain but different stresses (if the stiffness constants are different) in each component material. When the chamber is pressurized, the total force on a segment of myocardial tissue is the sum of the forces on the myocytes and extracellular matrix, F = F_myo + F_ecm. Since F = σ × area, the composite passive stress can be shown to be σ_f = (A_myo/A_total)·σ_myo + (A_ecm/A_total)·σ_ecm, where σ_myo and σ_ecm are the passive stresses in the myocytes and ECM, and A_myo/A_total and A_ecm/A_total are the area fractions for myocytes and ECM respectively, which are the same as the volume fractions of each component.
As a simplifying assumption, the stress-stretch relationships of the two materials are assumed to follow the same nonlinear relationship, with β a fitting constant, c_f,myo and c_f,ecm the stiffness constants for myocytes and ECM respectively, and λ_f the LV chamber stretch. Under homogeneous conditions, c_f,myo and c_f,ecm were assumed to be the same (c_f in Table 1), and the values for c_f and β were taken from [47]. Thus, increased myocyte stiffness (e.g., due to titin hyperphosphorylation) was modeled by increasing c_f,myo; increased stiffening of the extracellular matrix (e.g., due to increased collagen crosslinking or deposition of amyloid complexes) was modeled by increasing c_f,ecm; and increased collagen volume fraction was modeled by increasing A_ecm and A_total by the same amount.
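A minimal sketch of the iso-strain composite rule follows. The exponential component law σ = c·(exp(β(λ − 1)) − 1) and all parameter values are assumptions chosen only to illustrate the three interventions described above; they are not the paper's calibrated expressions.

```r
# Iso-strain composite: total passive stress is the volume-fraction-weighted
# sum of myocyte and ECM stresses at the same stretch.
# The component law sigma = c * (exp(beta * (lambda - 1)) - 1) and all
# numbers below are illustrative assumptions only.
passive_stress <- function(lambda, c_myo, c_ecm, f_ecm, beta = 5) {
  sigma <- function(stiff) stiff * (exp(beta * (lambda - 1)) - 1)
  (1 - f_ecm) * sigma(c_myo) + f_ecm * sigma(c_ecm)
}

lam <- 1.15  # illustrative end-diastolic stretch
passive_stress(lam, c_myo = 1.0, c_ecm = 1.0, f_ecm = 0.25)  # "normal" composite
passive_stress(lam, c_myo = 2.0, c_ecm = 1.0, f_ecm = 0.25)  # myocyte (titin) stiffening
passive_stress(lam, c_myo = 1.0, c_ecm = 2.0, f_ecm = 0.25)  # ECM (collagen) stiffening
passive_stress(lam, c_myo = 1.0, c_ecm = 1.0, f_ecm = 0.35)  # larger collagen fraction
```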
Technical implementation
The model was implemented in R v4.1.1 and utilizes the RxODE package [67].More information on the technical implementation is given in S2 Text.
Contribution of potential mechanisms in producing a state of HFpEF
Sobol sensitivity analysis identified LV stiffness, hypertension, and LV contractility as the most important determinants of LVEDP (Fig 3A).Arterial compliance, outward dilatation, venous capacitance, and venous compliance had much weaker relative effect.Slowed LV relaxation was not found to be important by itself (first order indices not different from zero), but had a small interactive effect with other parameters (non-zero total order index).EF was most strongly influenced by outward dilatation (Fig 3B, first order index > 75%).LV contractility, hypertension, and arterial compliance had much weaker effects.
Fig 4 further illustrates the effect of changing the parameters identified as most important, alone and in combination, on LVEDP and EF (A and B). The bottom left corner of each panel in these figures shows the reference state, when all parameters are at their normal values. The bottom panel (C) shows the resulting heart failure state, based on the defined LVEDP and EF criteria. First, a state of HFpEF only occurred when LV stiffness was increased and when outward dilatation was limited. Outward dilatation of even 10% tended to decrease EF below 60%, and larger increases resulted in ejection fractions below 50%, even without changes in other parameters. This is perhaps unsurprising, since outward dilatation represents an increase in LV chamber volume, and LV end diastolic volume is the denominator of EF. We will revisit the factors determining outward dilatation later.
When outward dilatation was limited (first two rows of each panel), a large increase in LV stiffness alone (75%) was sufficient to produce a state of HFpEF, but HFpEF could also occur at lower degrees of LV stiffening, when combined with either hypertension, reduced arterial compliance, or reduced LV contractility.However, when large reductions in contractility were combined with large reductions in arterial compliance plus hypertension, ejection fraction tended to decrease into the intermediate range (heart failure with midrange ejection fraction, HF-mEF), even if LV stiffness was normal.
Effect of potential mechanisms of HFpEF on LV myocardial stress and strain
The analysis above identified sets of mechanisms that can produce a state of elevated LVEDP without lowering ejection fraction, and indicated that LV stiffness must be at least mildly increased while outward dilatation must be minimal in order to produce a state of HFpEF.In those simulations, the degree of outward dilatation was set to a constant and not allowed to change over time.However, elevated LVEDP necessarily alters the mechanical state felt by the myocardium, with consequences for subsequent ventricular remodeling.Increased preload is generally thought to be associated with outward remodeling and chamber enlargement, yet in HFpEF, preload is elevated while outward dilatation is minimal.Thus, we next sought to understand how potential HFpEF mechanisms may affect mechanical signals for remodeling differently than HFrEF mechanisms, potentially resulting in different remodeling patterns over time.
The two signals commonly proposed to drive outward dilatation are diastolic LV wall stress and strain.As shown in Fig 6A, LV end diastolic stress followed the same pattern as LVEDP, increasing with increased LV stiffness, reduced LV contractility, reduced arterial compliance, and worsening hypertension.LV end diastolic strain (Fig 6B) also increased with reduced LV contractility, reduced arterial compliance, and worsening hypertension.However, the effect of LV stiffness on strain followed a different pattern.When LV contractility and arterial compliance were normal, increased LV stiffness caused small subtle increases in LV strain.But interestingly, when combined with reduced contractility and/or arterial compliance, increased LV stiffness mitigated the rise in LV strain, helping to keep strain at near normal levels.
LV outward remodeling in response to LV end diastolic stress or strain over time
Given that increased LV stiffness has opposite effects on LV passive stress versus strain, and given that minimal outward dilatation is required to maintain a state of HFpEF, we next investigated the consequences of assuming LV passive stress versus LV passive strain as the signal for outward dilatation. When remodeling was driven by stress, both increased LV stiffness and decreased LV contractility caused EF to progress steadily downward into the HFrEF range. On the other hand, when remodeling was driven by strain, EF still progressed toward HFrEF in response to decreased contractility, but remained stable above 50% (in the HFpEF range) in response to increased LV stiffness. The difference in the EF trajectories between the stress and strain models following increased LV stiffness was a consequence of different degrees of myocyte elongation, which were in turn due to differences in the underlying stress or strain driver (Fig 7, bottom row).
Taken together, the results of the strain-driven model are consistent with the different cardiac remodeling patterns in response to increased LV stiffness (HFpEF) vs. decreased contractility (HFrEF), while the stress-driven model cannot explain the maintenance of normal EF in HFpEF.
Effect of increased LV myocyte stiffness, ECM stiffness, and collagen volume fraction on cardiac hemodynamics in strain-driven remodeling
Given the importance of LV stiffness, we next sought to understand the impact of changes in intracellular vs extracellular components of the myocardium.While we initially modeled the myocardium as a homogeneous material with a single material stiffness, it is better modeled as a composite material of myocytes and ECM (Eqs 13-16).Myocardial stiffness has been found to increase as much as 2-fold in HFpEF.In addition, insoluble collagen, which is stiffer than soluble collagen, as well as collagen volume fraction, are increased in HFpEF [20].
Fig 8 shows the effect of independently increasing myocyte stiffness (representing titin hyperphosphorylation), collagen stiffness (representing increased collagen cross-linking), and collagen volume fraction, alone and in combination, on LVEDP, LV passive strain, and LV passive stress felt in the myocytes and ECM.For comparison, the effects are shown when systolic function is normal (normal contractility-bottom row in each panel) and impaired (reduced contractility-top row in each panel).While parameters were varied independently, it should be noted that in reality changes in collagen or myocyte stiffness could also impact contractility.Consider first the case in which systolic function is normal (bottom row in each panel) and collagen volume fraction is normal (left column in each panel).As myocyte stiffness increases, LVEDP (A) and myocytes stress (C) go up, while LV strain (B) increases but to a lesser degree.If collagen stiffness also increases, LVEDP is raised further, but there is minimal effect on strain or myocyte stress, as the additional load from the higher pressure is borne by the ECM (D) rather than the myocytes.If collagen volume fraction is increased, there is a very weak reduction in all four measures.
When systolic function is impaired while all else is unchanged (top row, bottom left corner in each panel), LVEDP is somewhat elevated, but the increase in LV strain is much larger-to a degree that would likely be untenable for myocytes.However, if myocyte stiffness is also increased, the rise in strain is much smaller (although the myocyte stress is increased further).And if collagen stiffness is also increased, the rise in both LV strain and myocyte stress are mitigated, as more of the stress is borne by the ECM.This suggests that while titin stiffening increases LVEDP, it may protect against excess LV strain.It also supports increased collagen stiffness (potentially as a result of increased crosslinking) as a protective mechanism that further reduces the strain, and unloads the myocytes as well.
In addition, if LV strain is the driver of outward remodeling, then an increase in LV titin and/or collagen stiffness prior to ischemic or other injury that impairs systolic function may allow LV strain to remain low enough to prevent outward remodeling and the decline in ejection fraction observed in HFrEF, while not preventing elevation of LVEDP, thus resulting in a state of HFpEF. In other words, preexisting LV stiffness may protect against outward dilatation and reduced ejection fraction, but not against elevated filling pressures.
Discussion
HFpEF remains a challenging condition to manage and treat, in part because its underlying mechanisms are heterogenous and not fully understood.Better understanding the role and interaction of the various mechanisms that contribute to HFpEF and differentiate its progression from that of HFrEF may facilitate better treatment and drug development for this patient population.In this study, we utilized a mathematical model of cardiorenal function to quantify the relative contribution of commonly proposed pathophysiologic mechanisms to the development of a state of HFpEF.Our simulations support existing evidence for elevated LV passive stiffness as the critical mechanism contributing to elevated filling pressure in HFpEF, and arterial stiffening (reduced compliance), hypertension, and impaired contractility as exacerbating mechanisms.
We also investigated how these factors affect mechanical signals of cardiac remodeling, and evaluated which combinations of mechanisms and mechanical signals can explain the different remodeling patterns in HFpEF vs. HFrEF over time.This analysis adds to the growing support for LV passive strain, rather than LV passive stress, as the signal for outward dilatation.Further, our simulations suggest that because a stiffer myocardium experiences less strain at a given filling pressure, sustained elevation in filling pressure does not trigger outward remodeling.This means that, when other factors that exacerbate LVEDP, such as impaired contractility, hypertension, or arterial stiffening, occur in an already stiffened heart, outward dilatation may be minimal.Thus, all the signs and symptoms of heart failure resulting from increased filling pressures can occur without a progressive decline in ejection fraction.
HFpEF as a consequence of LV passive stiffening plus additional insult(s)
The critical role of LV passive stiffening in HFpEF identified here is consistent with the many clinical studies that have shown that passive myocardial stiffness is elevated in HFpEF patients [17][18][19].The finding that a severe increase in LV stiffness alone may be sufficient to elevate LVEDP and cause HFpEF is consistent with previous simulations by others with the CircAdapt model [68] (which uses the same foundational cardiac mechanics model [47], but uses fixed CO in a closed system, i.e. no renal control of fluid volume).Here, we further showed that HFpEF can also occur if milder LV stiffening is coupled with another insult: hypertension, arterial stiffening, or a mild reduction in LV contractility.
The deleterious consequences of adding insults such as arterial stiffening or hypertension to a stiffened myocardium are illustrated in Fig 9 .The back-up of LV filling pressure into the normally low-pressure venous and capillary beds is responsible for many of the signs and symptoms of heart failure.Increased LV stiffness alone shifts the LV diastolic pressure curve upward (A) and increases stressed blood volume and capillary and venous pressures (C, E, and F).Alone, neither hypertension nor decreased arterial compliance have much effect on LV diastolic pressure, stressed blood volume, or venous and capillary pressures (although reduced arterial compliance increases arterial pulse pressure (B), while hypertension increases peak systolic pressure and mean arterial pressure (D)).However, when decreased arterial compliance is added to LV stiffening, the elevations in stressed blood volume and venous and capillary pressures are nearly doubled.And they are further exacerbated if hypertension is also added.
LV passive stiffening mitigates excessive LV wall strain and explains different remodeling patterns between HFpEF and HFrEF
Our analysis demonstrated that LV passive strain, as opposed to stress, can better explain the different remodeling patterns between HFpEF and HFrEF.While there is still much debate regarding the signals for cardiac tissue growth and remodeling, modeling analyses have increasingly suggested strain as the critical signal for tissue growth during volume-overload [36,60,64].Most recent analyses use strain or a combination of stress and strain, and our analysis adds further support for strain as the key signal [60,[64][65][66].
These findings support a new hypothesis for the pathophysiology distinguishing HFpEF from both LVH and HFrEF: namely, that LV passive stiffening contributes to a rise in LVEDP but prevents excess LV wall strain (since a stiffer heart stretches less for a given change in pressure), thus limiting outward remodeling in response to other insults to the myocardium and preventing progressive decline in EF.Thus, a state of HFpEF occurs, in which filling pressures are elevated but EF is normal and stable.
This effect can be further understood by considering the mathematical relationships between filling pressure, stress, and strain. For a spherical, strain-stiffening (i.e., stiffness increases with increasing stretch) vessel, the passive wall stress σ is linearly proportional to the filling pressure P_i (Eq 17), while the passive wall strain ε can be shown to be log-linearly proportional to pressure and inversely proportional to stiffness (Eq 18; see S3 Text for further explanation). Thus, when filling pressure increases, wall stress will always increase proportionally (Eq 17), and for a given stiffness, strain will also increase. But if stiffness is increased as well, then strain will only increase if the log-increase in pressure is greater than the increase in stiffness (Eq 18).
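The argument can be illustrated numerically. Assuming, purely for illustration, an exponential wall-material law σ = c·(exp(βε) − 1) with wall stress proportional to filling pressure (rather than the exact forms of Eqs 17 and 18), the strain needed to carry a given pressure grows only logarithmically with pressure and is blunted when the stiffness c is raised:

```r
# Illustrative only: assumed exponential law sigma = c*(exp(beta*eps) - 1) for a
# thin-walled sphere, with wall stress taken proportional to filling pressure.
# Inverting gives eps = log(sigma/c + 1) / beta: strain rises roughly
# log-linearly with pressure and is blunted by a larger stiffness c.
strain_at_pressure <- function(P, c, beta = 5, k = 1) {
  sigma <- k * P                       # stress proportional to pressure
  log(sigma / c + 1) / beta
}

P <- c(8, 20)  # illustrative "normal" vs. elevated filling pressures (mmHg)
round(strain_at_pressure(P, c = 1), 3)  # normal stiffness
round(strain_at_pressure(P, c = 2), 3)  # doubled stiffness: lower strain, smaller rise
```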
So, when LV passive stiffness is normal, any factor that increases filling pressures will increase both stress and strain proportionally [42,44]. On the other hand, when LV passive stiffness is elevated, LVEDP rises (Fig 4) but the rise in LV passive strain is minimal (Fig 6). Any additional insult that elevates filling pressures further (e.g., hypertension, arterial stiffening, reduced contractility) will increase strain less than it would at normal stiffness. If strain is the signal for outward dilatation, then outward dilatation will be much less, and ejection fraction will not decline. The systemic consequences of congestion and elevated filling pressure will still occur, leading to the signs and symptoms of heart failure, but ejection fraction will remain normal.
HF-pEF is associated with multiple co-morbidities (kidney disease, metabolic syndrome, coronary artery disease, atrial fibrillation, valvular disease) [9], and each of these comorbidities may contribute insults such as the ones evaluated here (hypertension, impaired contractility, vascular stiffening, etc.).Our simulations suggest that comorbidities that produce cardiac or vascular changes that are not severe enough to cause HFrEF in a heart of normal stiffness may cause HF-pEF as LV stiffness is increased.Understanding HFpEF mechanisms in this way, as additional insults added to an already stiffened myocardium, provides an important step forward in the path to understanding and reproducing the heterogeneity within the HFpEF population.We have previously used the same cardiorenal model to simulate some of these comorbidities, including hypertension, chronic kidney disease, diabetes, and mitral/aortic stenosis.Going forward, these comorbidities can be added to a stiffened myocardium to generate virtual HFpEF populations.Indeed, because the model mechanistically couples renal and cardiac function, it is well suited for these types of simulation.In addition, the majority of therapies that are used to manage heart failure act through renal mechanisms.Accounting for the heterogeneity of HFpEF populations is a well-recognized challenge in improving management or treatment of HFpEF, as different patient types may respond differently.Going forward, we can utilize this model both to investigate differential responses to existing therapies (e.g.Why do SGLT2i and MRA improve outcomes while RAAS blockers have failed?Why are these treatments more effective at reducing hospitalizations than mortality?etc.), to understand and predict differential subpopulation responses (Why do some subpopulations see greater benefits than others?), and particularly to suggest new approaches or to predict patient-specific or subpopulation-specific effects of new mechanistic targets under development.
Limitations and other considerations
This study defined HFpEF as a state of normal ejection fraction and chronically elevated LVEDP. However, some HFpEF patients, especially early on, have impaired exercise tolerance but normal resting filling pressures [9]. These patients tend to have much better prognoses than patients with elevated LVEDP. This study did not explicitly model exercise tolerance. However, both experimental studies [69,70] and previous modeling studies [68] have shown that increased LV stiffness is associated with poorer exercise tolerance.
While slowed LV relaxation is a common finding in HFpEF, the model predicted it does not contribute significantly to elevated LVEDP, consistent with previous simulations by others [68].Reduced venous capacitance and compliance both had relatively small effects on resting LVEDP alone, suggesting that the system is normally able to accommodate wide variability in venous function, including increases in stressed blood volume, without increasing cardiac filling pressures.However, these mechanisms did influence LVEDP sensitivity to other parameters, suggesting that they at least mildly augment the effects of LV stiffening, arterial stiffening, and hypertension.These mechanisms may have meaningful effects on ventricular filling pressures with acute exercise, when there is less time for natriuretic/diuretic adjustments to help accommodate increases in stressed blood volume.This analysis does not exclude these mechanisms as contributing factors in exercise intolerance.It only suggests that these mechanisms are not primary causes of chronically elevated filling pressure.These mechanisms should still be considered in future studies of exercise intolerance, as well as in potential responses to therapeutic interventions.
Heart rate was treated as a constant in this study. Most clinical studies do not report different resting heart rates between HFpEF and non-HFpEF patients, and thus heart rate is likely not a determining factor in elevated LVEDP. However, sympathetic control of heart rate, and particularly chronotropic incompetence, may play an important role in exercise tolerance among HFpEF patients, and should be considered in future studies.
While we considered the effect of ventricular and arterial stiffening on filling pressures, arterial stiffening also increases end systolic elastance. By increasing end systolic pressure (see Fig 9A), impaired ventricular-arterial (VA) coupling may limit contractile reserve and thus limit exertional capacity [31]. Even at rest, more energy may be required to maintain normal stroke work [71]. This could contribute to cardiac ischemia, ROS, cellular stress, and fibrosis that over time contribute to myocardial stiffening. However, these potential downstream effects of LV stiffening were not evaluated here.
This model does not attempt to represent the molecular and cellular mechanisms that govern contraction and relaxation, and thus contraction-relaxation coupling is not enforced. The activation signal is prescribed by Eq 8, and the strength of contraction is prescribed by a single parameter. This allows relaxation time and contractile strength to be altered separately so that the hemodynamic consequences of each can be determined. But these mechanisms are likely not independent in reality.
While this analysis provides further support for elevated LV stiffness as the primary mechanism of HFpEF, it does not address the underlying factors that lead to elevation in LV stiffness. Understanding why LV stiffness becomes increased, and how to prevent it, may be key in preventing development of HFpEF.
Lastly, while the states of HFpEF generated here are consistent with clinical definitions of HFpEF and progress in ways consistent with clinical observations, the behavior of these HFpEF virtual patients in response to perturbations (e.g., therapeutic interventions) should be tested against clinical data. This is a necessary next step before using this model to make predictions.
Conclusions
This modeling analysis shows the important role of LV stiffness, coupled with myocardial strain as the signal for outward myocardial remodeling, in explaining the distinctly different remodeling patterns in HFpEF vs. HFrEF. It suggests that LV stiffening is both a critical contributing factor to elevation of cardiac filling pressures in HFpEF and the mechanism by which outward remodeling is limited and EF remains normal, since a stiffer myocardium experiences less strain and thus less outward dilatation at a given filling pressure. It supports the concept that heterogeneity within the HFpEF population is a consequence of one or more cardiac or vascular insults occurring concomitant with or following myocardial stiffening. Because increased myocardial stiffness contributes to increased filling pressures and puts the patient closer to the levels needed to become symptomatic, a range of additional cardiac or vascular insults, and/or their combination, may be sufficient to push filling pressures into symptomatic levels, but not sufficient to stretch the stiff myocardium enough to trigger dilatation.
(Fig 1D-1G). Briefly, the cardiac portion of the model describes the dynamics of the cardiac cycle (Fig 1A), adaptation of myocytes and remodeling of the left ventricle in response to changes in mechanical loading (Fig 1B), and cardiovascular hemodynamics (Fig 1C). The renal and volume homeostasis portion of the model describes blood flow and pressure through the renal vasculature (Fig 1D); renal filtration, reabsorption, and excretion of sodium, water, and glucose (Fig 1F); whole-body fluid/electrolyte distribution (Fig 1E); and key neurohormonal and intrinsic feedback mechanisms (Fig 1G). The cardiac and renal components of the model are coupled through mean arterial pressure (MAP), venous pressure, and blood volume (BV). MAP and venous pressure calculated from the systemic circulation are inputs to the kidney and are determinants of renal blood flow and glomerular hydrostatic pressure in the kidney model (Fig 1D). The renal model determines sodium and water excretion (Fig 1F), which then determines blood and interstitial fluid volume (Fig 1E), and blood volume feeds back into the circulation model (Fig 1C).
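Purely as a schematic illustration of this cardiorenal coupling (the sub-model functions below are hypothetical placeholders, not the published model's equations), the feedback loop can be sketched as:

```python
def simulate(n_steps, state, cardiac_step, renal_step, update_volumes):
    """Illustrative coupling loop: circulation -> kidney -> fluid volumes -> circulation.
    cardiac_step, renal_step, and update_volumes are hypothetical placeholder functions."""
    for _ in range(n_steps):
        # Circulation model yields mean arterial and venous pressures from the current blood volume
        map_mmHg, venous_mmHg = cardiac_step(state)
        # Kidney model converts those pressures into sodium and water excretion
        na_out, water_out = renal_step(state, map_mmHg, venous_mmHg)
        # Excretion updates blood and interstitial volumes, which feed back into the circulation
        state = update_volumes(state, na_out, water_out)
    return state
```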
Fig 2 .
Fig 2. Slowed relaxation modeled as an elongation of the falling arm of the contraction signal. https://doi.org/10.1371/journal.pcbi.1011598.g002 The trends in interstitial fluid volume were very similar to the trends in LVEDP (Fig 5A), and tended to increase as each mechanism worsened, alone and in combination. Cardiac output tended to decrease following the same pattern (Fig 5B).
Fig 7 .
Fig 7. Effect of increased LV stiffness or reduced contractility (representative of HFpEF and HFrEF, respectively) over time under two different models of remodeling-strain-driven and stress-driven.With both models, reduced contractility caused EF to decline into the HFrEF range.However, only the strain-driven model allowed EF to remain in the normal range over time with increased LV stiffness.Gray dashed lines are threshold levels of stress/strain above which outward remodeling occurs.https://doi.org/10.1371/journal.pcbi.1011598.g007
Fig 9 .
Fig 9.The deleterious consequences of arterial stiffening and/or hypertension in a stiffened myocardium.A) LV stiffness shifts the diastolic pressure curve upward; decreased arterial compliance shifts the end systolic pressure upward, thus increasing end systolic elastance (E es ); hypertension increases both end systolic and peak systolic pressure; Reduced arterial compliance alone increases arterial pulse pressure (B), but has minimal effect on stressed blood volume (C), MAP (D), mean capillary pressure (E), or mean venous pressure (F).However, it has a strong exacerbating effect on stressed blood volume and capillary and venous pressures when combined with a stiff myocardium.Hypertension has a similar exacerbating effect.https://doi.org/10.1371/journal.pcbi.1011598.g009 | 10,521 | 2023-11-01T00:00:00.000 | [
"Medicine",
"Mathematics"
] |
Exploration and demonstration of explainable machine learning models in prosthetic rehabilitation-based gait analysis
Quantitative gait analysis is important for understanding the non-typical walking patterns associated with mobility impairments. Conventional linear statistical methods and machine learning (ML) models are commonly used to assess gait performance and related changes in the gait parameters. Nonetheless, explainable machine learning provides an alternative technique for distinguishing the significant and influential gait changes stemming from a given intervention. The goal of this work was to demonstrate the use of explainable ML models in gait analysis for prosthetic rehabilitation in both population- and sample-based interpretability analyses. Models were developed to classify amputee gait with two types of prosthetic knee joints. Sagittal plane gait patterns of 21 individuals with unilateral transfemoral amputations were video-recorded and 19 spatiotemporal and kinematic gait parameters were extracted and included in the models. Four ML models—logistic regression, support vector machine, random forest, and LightGBM—were assessed and tested for accuracy and precision. The Shapley Additive exPlanations (SHAP) framework was applied to examine global and local interpretability. Random Forest yielded the highest classification accuracy (98.3%). The SHAP framework quantified the level of influence of each gait parameter in the models where knee flexion-related parameters were found the most influential factors in yielding the outcomes of the models. The sample-based explainable ML provided additional insights over the population-based analyses, including an understanding of the effect of the knee type on the walking style of a specific sample, and whether or not it agreed with global interpretations. It was concluded that explainable ML models can be powerful tools for the assessment of gait-related clinical interventions, revealing important parameters that may be overlooked using conventional statistical methods.
Introduction
Clinicians commonly use gait indicators or parameters-such as the step length, stride velocity, and joint angles-to diagnose gait issues and establish a suitable course of rehabilitation [1].Gait metrics also play an important role in research, facilitating the assessment of interventions that are most effective in improving gait [2,3].Gait parameters are typically obtained via instrumented gait analysis techniques, such as optical cameras calibrated to track the body in three-dimensional space or wearable motion sensors (i.e.inertial measurement units).These technologies allow for large sets of gait parameters (spatiotemporal and kinematic) to be accurately measured and analyzed.
The two main techniques for analyzing interventional differences in gait data are traditional statistical methods and machine learning (ML) [4]. Statistical methods, such as analyses of variance, are most commonly used in clinical research as they are easily understood and applied, in addition to providing measures of association with the dependent variable. However, statistical methods rely on strong assumptions, such as the type of residual distribution, linear dependency of the variables with the dependent variable, and independence of the predictors [4][5][6]. Further, these methods are also not ideal for applications with small sample sizes and large sets of variables, as is common in rehabilitation research using quantitative gait analysis methods [4,7]. Traditional statistical methods can result in underpowered analyses prone to type I errors or the potential discovery of gait changes that may not be important [8]. A common way of dealing with this issue is hypothesis-based gait parameter reduction, where the researcher selects a sub-group of parameters based on the anticipated changes in gait (associated with the investigated intervention); however, this can result in the loss of important parameters and insights into unforeseen performance effects [9].
Recently, there has been a growing interest in the application of ML models for gait analysis methods, as they can address some of the shortcomings of traditional statistical methods and provide flexibility, scalability, and independence from inferred assumptions [10][11][12][13][14]. ML methods can also be applied to different data types (e.g., images, text, and tabular data), and the outcomes can be merged into predictions for diagnosis, prognosis, and possible treatments [15,16]. ML approaches, such as principal component analysis, can reduce the volume of data and improve visualization, which can help determine similarities and differences between samples [17]. Other ML methods, including support vector machine (SVM) algorithms, were successfully used for the automatic classification of prosthetic components, despite relatively small sample sizes [10]. In [11], a k-nearest neighbour method was successfully used to classify the abnormal gait patterns caused by knee osteoarthritis and Parkinson's disease. In [12], four different ML models, including partial least square-discriminant analysis, SVM, random forest (RF), and artificial neural networks, were used to extract and use informative features for classifying gait characteristics according to certain subject characteristics. Hence, it is clear that ML models can leverage large amounts of data in an automated manner to ascertain the nonlinear and complex interactions between predictors and dependent variables. However, a challenge with ML techniques is their black-box nature. It can be difficult to understand the complex underlying mechanisms of ML models and the results that they yield [18]. This has led to significant research interest in explainable ML models for gait analysis [19].
Few studies have investigated explainable ML for clinical gait classification. For example, the work in [20] used explainable ML with a gait dataset of ground reaction force measurements to successfully recognize significant gait parameters and quantify their contribution to the diagnosis of anterior cruciate ligament injury. Similarly, the authors of [21] used tree-based explainable ML models to identify the most important factors affecting the gait speed of elderly people. In [22], the feature importance characteristics of two tree-based ML methods (XGBoost and RF) were used to determine the significance of statistical gait parameters on osteopenia and sarcopenia, which could ultimately lead to better management measures for these issues in the elderly. A fuzzy logic-based feature selection for knee osteoarthritis was developed in [23] and Shapley Additive exPlanations (SHAP) [24] was used to determine the importance of the chosen features and the rationale behind the decision-making process of the model. In [25], local interpretable model-agnostic explanation (LIME) was employed for identifying the most important gait parameters used in foot disorders and surgery planning. Although LIME successfully recognized the significance of the gait parameters for several ML models in [25], unlike SHAP, its results could not be easily generalized to a wider population.
Permutation feature importance (PFI) [26], LIME, and SHAP are three different techniques in the realm of explainable machine learning that have been used for interpreting and understanding the outputs of ML models.PFI works by permuting (randomly shuffling) the values of a single feature and measuring the change in model performance.Then, the feature importance is determined by the decrease in model performance when the feature's values are randomly permuted.A larger drop in performance indicates a more important feature.PFI provides only a global interpretation of feature importance, i.e., how each feature contributes to the overall model performance.On the other hand, LIME focuses on explaining the predictions of an ML model by approximating the model's decision boundary locally using a simpler, interpretable model (e.g., linear model).LIME provides local explanations for a specific instance or a small set of instances.This method creates perturbed versions of the input data, observes the corresponding predictions, and fits an interpretable model to approximate the complex model locally.LIME only offers instance-specific explanations and may not capture global patterns in the data.However, SHAP values are based on cooperative game theory and Shapley values.SHAP methodology assigns a value to each feature, representing its contribution to the prediction for a particular instance.SHAP values provide a way to allocate the prediction value among the features.SHAP analysis can be used for both local and global interpretability.In conclusion, PFI offers a global view of feature importance based on model performance, LIME provides local, instance-specific explanations using simplified models, and SHAP values offer a comprehensive approach to understanding feature contributions at both the local and global levels.
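As an illustration of permutation feature importance (a sketch using scikit-learn with synthetic stand-in data, not the study's dataset or code):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 19 gait parameters and a binary knee-type label
X, y = make_classification(n_samples=200, n_features=19, random_state=0)
X = pd.DataFrame(X, columns=[f"gait_param_{i}" for i in range(19)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: mean drop in test score when each feature is shuffled
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```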
In the domain of explainable ML, local and global interpretation refer to different scopes or levels at which the interpretation or explanation of a model's predictions is conducted.Local interpretation aims to explain the predictions of a model for a specific sample in the dataset.For local interpretation with SHAP, sample SHAP values are computed for each feature for a given sample.In the case of clinical research, this may be the observations of a particular patient.These values represent the contribution of each feature to the difference between the model's prediction for that instance and the expected model prediction which usually is the average prediction for the dataset.Local interpretation is useful for understanding why a particular prediction was made for a specific instance.It provides insights into the factors influencing the model's output at a sample level.On the other hand, global interpretation looks at the overall behavior of the model across the entire dataset.For global interpretation with SHAP, aggregate statistics such as feature importance or summary plots are used to understand the general patterns and trends in feature contributions across all instances in the dataset (overall sampled population).Global interpretation is valuable for gaining insights into the overall behavior of the model, identifying important features across the entire dataset, and understanding which features consistently contribute to predictions.Unlike the local interpretation which is useful for explaining a specific prediction or providing instance-specific justifications, global interpretation is employed when understanding the model's behavior at a broader level is required to identify important features across the dataset and make generalizations about feature contributions.Overall, local interpretation in SHAP analysis involves explaining sample predictions, while global interpretation focuses on understanding the overall behavior of the model across the entire dataset.Both perspectives contribute to a comprehensive understanding of a model's predictions and can be valuable in different contexts, depending on the specific goals of the analysis.
Owing to its ability to analyze both population and sample-based outcomes, along with its high performance in terms of explaining ML models, SHAP is utilized to analyze gait changes related to physical rehabilitation in this study.The objective is to explore and demonstrate the use of ML-based classifiers to differentiate between two types of prosthetic interventions using gait parameters as model inputs, and to apply explainable ML techniques to interpret ML models and identify the most influential gait parameters.A complementary objective of this study is to develop both population-and sample-based explainable models for clinical prosthetic gait analysis using explainable ML.
Methods
The dataset comprised gait patterns for two prosthetic knee joints captured using sagittal-plane video recordings of 21 individuals with transfemoral amputations. The participants were recruited between Dec 2016 and Sept 2017. The study was approved by the National Ethics Committee for Health Research, Cambodia (002NECHR), and Bloorview Research Institute Ethics Board (REB16-686). Written consent was obtained from all participants prior to data collection. The author (JA) has access to information that could identify individual participants during or after data collection.
Experimental protocol
In a crossover study design, participants conducted walking trials during two separate sessions with two prosthetic knee joints: the All-Terrain knee (Legworks, Inc., Toronto, ON, Canada) and the International Committee of the Red Cross knee joint (CREquipments, Coppet, Switzerland), hereinafter referred to as the ATK and ICRC, respectively. The participants were instructed to walk along a 10-meter walkway at their normal and fast walking speeds in both directions to capture videos of at least one full stride bilaterally. A total of 19 features were extracted from the video files using open-source software (Kinovea), including both spatiotemporal and kinematic gait parameters, as listed in Table 1. Symmetry indices (SI) were computed for the swing time, step length, and maximum knee flexion to enable the direct comparison of the intact and prosthetic limbs.
where V represents the gait parameter recorded on the prosthetic or intact limb.
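A commonly used form of such a symmetry index (which may differ from the paper's exact Eq 1) is

```latex
\mathrm{SI} = \frac{V_{\mathrm{prosthetic}} - V_{\mathrm{intact}}}{\tfrac{1}{2}\,(V_{\mathrm{prosthetic}} + V_{\mathrm{intact}})} \times 100\%.
```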
Analytical methods
Logistic/Logit regression (LR) [27] is a parametric classification algorithm used to determine the likelihood of a particular data input belonging to a binary class. This method multiplies each feature X = (x0, x1, x2, ..., xn) by a regression weight W = (w0, w1, w2, ..., wn) and calculates their summation. Then, it adds a bias term 'b' and feeds the result to a sigmoid function (i.e., an S-shaped function), given in Eq (2). A gradient-based numerical algorithm is usually used to find the (sub)optimal coefficients.
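The standard logistic model corresponding to this description is

```latex
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad P(y = 1 \mid X) = \sigma\!\left(\sum_{i} w_i x_i + b\right).
```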
SVM [2] is a simple but powerful ML modelling approach for both linear and nonlinear classification and regression purposes. In the case of linearly separable binary classification problems, a hyperplane separates the two classes. For a given dataset {(x1, y1), (x2, y2), ..., (xN, yN)} with N instances, SVM establishes a hyperplane by finding a weight vector W and bias b. The goal is to minimize the term ||W||^2, as follows.
Minimize: (1/2) W^T W, subject to: y_i (W^T x_i + b) >= 1, for i = 1, ..., N.

RF [28] implements a supervised learning strategy for both classification and regression problems. In an RF algorithm, a forest of decision trees is created and combined to construct a model with predictions. The average (or median) of the outcomes and a majority vote strategy are used across the different trees to forecast or classify data. RF models are able to evaluate the relative importance of applied features by investigating the average amount of impurity reduction in the tree nodes that deal with those features.
LightGBM [28] is a newer ML model, which adopts a gradient-boosting decision tree algorithm for higher-speed execution, lower memory usage, and higher precision than conventional tree-based methods. The LightGBM algorithm uses histogram-based binning of feature values to speed up the computation of split gains. Similar to RF, LightGBM has the ability to assess the importance of features.
SHAP [24] is a library that helps compute Shapley values and visualize these values in understandable and comparable ways.It is used for explaining the outputs of ML models by calculating and examining the contribution of input variables.SHAP decomposes the output into a sum of contributions from each variable feature of an input and presents the predictions as the sum of SHAP values added to a fixed base value.From an analytical standpoint, SHAP attempts to explain sample predictions (i.e.outputs) based on the game theory optimal Shapley values [24].In general, SHAP analysis can determine the contribution and importance of each feature in the predictions of ML models for both the entire population as well as each sample.
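A minimal sketch of this additive decomposition using the shap library (synthetic data; a regressor is used here only because its single output makes the additivity easiest to verify, whereas the study's classifiers decompose per class):

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in data (not the study's gait parameters)
X, y = make_regression(n_samples=200, n_features=19, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one row of per-feature contributions per sample

# Prediction for one sample = base (expected) value + sum of its SHAP values
i = 0
reconstructed = float(explainer.expected_value) + shap_values[i].sum()
print(reconstructed, model.predict(X[i:i + 1])[0])   # approximately equal
```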
Various preprocessing methods were carried out to prepare the raw data for the ML models.To determine and exclude outliers stemming from the video recording process and/or gait parameter extraction from the videos, the IQR algorithm was used [29].Accordingly, first lower and upper boundaries of quantiles were calculated via L f = Q1 − 1.5 × IQR and U f = Q3 + 1.5 × IQR, where IQR represents interquartile range (i.e.upper quartile-lower quartile).Then, all samples outside of the range [L f , U f ] were identified and removed as outliers.For excluding redundant gait parameters, Pearson's correlation analysis was used.The prosthetic ROM knee, intact ROM knee, and stride velocity IC were highly correlated (R 2 > 0.9) and, therefore, removed from the dataset.As an essential step in building ML models, the data was shuffled to ensure that the samples were identically and independently distributed.More specifically, this shuffling could assist the gradient-based optimization algorithms in finding accurate optimal weights for the ML models.Since the gait parameters had different units and ranges of values, to avoid the dominance of some gait parameters over others, the Min-Max-Scaler function was used to map the range of the data into [0, 1].This preprocessing technique removes the scale of the data and increases the stability of the ML models and decreases the odds of approaching nonoptimal weights of the ML models [27].Since the data were complete, no missing data imputation was carried out.Also, as there was no categorical feature in the data, no encoding method was employed for converting categorical features into numerical values.Finally, 75% of the data was randomly selected as the training dataset, and the remainder was left for the test set.The four classifiers were assessed against three performance indicators: the accuracy, receiver operating characteristic curve area under the curve (ROC-AUC), and F 1 score.The accuracy measures how many predictions, including both positive and negative cases, were correctly classified.The F 1 score computes the harmonic mean between precision and recall.The Precision is defined as the proportion of positive predictions that was actually correct.The Recall represents the proportion of actual positives that was predicted correctly.The ROC plots true positive rate against false positive rate and, therefore, the ROC-AUC score measures the area underneath the ROC.The following equations were used to calculate the metrics.
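In their standard forms, these metrics are defined as

```latex
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \quad
\mathrm{Precision} = \frac{TP}{TP + FP}, \quad
\mathrm{Recall} = \frac{TP}{TP + FN}, \quad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},
```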
where TP, TN, FP, and FN stand for true positive, true negative, false positive, and false negative samples, respectively.Finally, to determine the best hyperparameters, a K-fold grid search cross-validation algorithm was implemented.First the training dataset was split into five folds.Then, the first fold was used as the test dataset and the rest of the folds were applied for training the ML models.This was repeated with the second fold as the testing dataset and the remainder for training and so on.In each iteration, an exhaustive search over specified hyperparameter values for each classifier was performed (i.e.all the possible combinations of the hyperparameter values were considered and applied).The scores were computed to find the best hyperparameters as given in Table 2.
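A sketch of the described 75/25 split, [0, 1] scaling, and five-fold grid search using scikit-learn (synthetic data and a hypothetical parameter grid, not the study's actual settings):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=200, n_features=16, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=True, random_state=0)   # 75/25 split, shuffled

pipe = Pipeline([("scale", MinMaxScaler()),                # map features to [0, 1]
                 ("clf", RandomForestClassifier(random_state=0))])

param_grid = {"clf__n_estimators": [100, 300], "clf__max_depth": [3, 5, None]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")   # 5-fold CV
search.fit(X_train, y_train)

print(search.best_params_, search.score(X_test, y_test))
```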
Remark 1. Initially, the analysis was performed with seven models, namely LR, SVM, LightGBM, RF, XGBoost, CatBoost, and a dense neural network. Since the results of the tree-based methods XGBoost and CatBoost were similar to the results of LightGBM, their results were removed from the analysis in order to keep the report concise without losing informative results. Also, the results of the developed dense neural network were not as good as those of the other methods, which could be due to the low number of samples in the dataset, so the neural network model was not included in the report either. It is noted that the experiments were conducted with diverse hyperparameters for all the models, and the experiments were repeated multiple times to provide a benchmark and a solid analysis.
Remark 2. To examine the type of relationship between the gait parameters and the two knees, both (semi)linear and nonlinear ML models were employed. The first option was to use logistic regression (considering that its outputs are passed through a sigmoid function). Then the complexity of the model was increased by implementing an SVM model with both linear and nonlinear kernels. To investigate the relationship between the gait parameters and the knees further, the third model was chosen as an RF, which is known to detect nonlinear relationships between dependent and independent variables. The gradient-boosting decision tree LightGBM model was also used as a potential alternative to RF, as it is known to be fast and accurate. However, tuning LightGBM was more complicated compared to the RF model, and it tended to overfit in the experiments. After running the models at least 15 times, the mean of the results was chosen as the statistical indicator for the performance evaluation of the models.
Classification results
Since randomness usually exists when generating different initial parameters for ML models (e.g.initialization of weights, shuffling the data, etc.), the developed models were run 15 times to find the best results.All ML models performed well (>90%, except for ROC-AUC of LR and SVM) when classifying the knee joints; the RF model performed the best as shown in Table 3.Also, the corresponding ROC curves were illustrated in Fig 1.
Feature importance for tree-based models
Tree-based algorithms, such as RF, XGBoost, CatBoost, and LightGBM, can provide an analysis of the importance of features. This can reveal how influential each feature (e.g., gait parameter) is in determining the output of the model. In the global SHAP interpretability plots (Fig 3), the features were ordered using the mean absolute value of the SHAP values for each feature. In all four models, the prosthetic max knee flex and knee flex SI were identified as the most influential/top-ranked gait parameters. ATK led to lower values of prosthetic max knee flex, knee flex SI, swing time SI, prosthetic swing time, and intact max knee flex compared to ICRC. On the other hand, cadence, prosthetic max hip, and stride velocity ankle parameters were higher for ATK. Remark 3. In order to find the most important features of the models by another method, the recursive feature elimination (RFE) algorithm was adopted. RFE is a feature selection technique used to iteratively remove less important features from a model based on their importance ranking. RFE starts by training the model on the entire set of features. Then, it ranks the features based on their importance scores. These scores indicate the contribution of each feature to the model's performance. Afterward, the least important feature (according to the ranking) is removed from the feature set. The model is then retrained using the remaining features. These steps are repeated iteratively until a specified number of features is reached or until the model's performance (e.g., accuracy) stabilizes or starts to degrade. The final set of selected features, obtained after the recursive elimination process, represents the subset that the algorithm determined to be the most relevant for the given ML task. In this work, RFE was implemented for all four models, assuming that the final number of features to select was six. The results of the RFE are shown in Table 4. It is noted that RFE does not provide the rank and amount of contribution of the selected features. Also, the results of the RFE method could change based on the number of features to be selected.
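A sketch of recursive feature elimination retaining six features, as described in Remark 3 (scikit-learn, synthetic stand-in data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=200, n_features=16, random_state=0)

# Iteratively drop the least important feature until six remain
selector = RFE(RandomForestClassifier(random_state=0), n_features_to_select=6)
selector.fit(X, y)

print(selector.support_)   # boolean mask of the six selected features
print(selector.ranking_)   # rank 1 = selected; larger ranks were eliminated earlier
```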
Dependence plots: Explaining single feature impact
SHAP dependence plots are used for exploring the impact of a single feature on the model's predictions while accounting for the effects of other features. In this paper, they provide a clear visual representation of how the output of the developed ML model changes with variations in a specific gait parameter while considering the effects of other features. To depict the SHAP dependence plots, first the SHAP values for each instance in the dataset are computed to represent the contribution of each feature to the difference between the model's prediction for that instance and the expected model prediction. Then a target feature for the dependence analysis is chosen. Subsequently, a scatter plot is depicted with the x-axis representing the values of the chosen gait parameter and the y-axis representing the corresponding SHAP values for that parameter. Each point on the scatter plot corresponds to an instance in the dataset. Finally, analyzing the shape and trend of the dependence plot reveals how changes in the chosen feature's values are associated with changes in the model's predictions.
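A sketch of producing a SHAP dependence plot for one gait parameter (synthetic data and hypothetical feature names; the per-class indexing of SHAP values varies between shap versions, which the code below handles defensively):

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=19, random_state=0)
X = pd.DataFrame(X, columns=[f"gait_param_{i}" for i in range(19)])
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Older shap versions return a list (one matrix per class); newer ones a 3-D array
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]

# Scatter of one feature's values (x-axis) against its SHAP values (y-axis)
shap.dependence_plot("gait_param_0", sv_pos, X)
```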
According to the global interpretability results presented in Fig 3, SHAP dependence plots were generated for the best-performing ML model (i.e., the RF model). Fig 4 depicts dependence plots for the four most influential gait parameters (the four top features in Fig 3 for the RF model). In Fig 5, the dependence plots are illustrated for the next four most influential gait parameters (features ranked five to eight in Fig 3 for the RF model). The dependence plots for the four least influential gait parameters (the last four features at the bottom of the global interpretability plot in Fig 3 for the RF model) are in Fig 6.
SHAP local interpretability: Explaining sample-based predictions
Local interpretability focuses on explaining classification outputs for sample participants. SHAP was used to explain how sample-based outputs (i.e., the label of each sample in the dataset) were reached through the contributions of each gait parameter. The results of local SHAP analysis obtained from the RF model on three different participants are illustrated in Fig 7. It was observed that in most cases the most contributing parameters were similar for both the ICRC and ATK knees but in opposite directions. In other words, if the value of a gait parameter was reduced by one knee, its value was increased by the other knee, with small discrepancies being attributed to randomness in the execution of the models. These mirrored results are expected in the simple example of two knee joints, while more sophisticated models (two or more interventions, as may be relevant in many cases of prosthetic rehabilitation) could be expected to yield more complex relationships in the analyses.
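A sketch of a sample-level (local) explanation of the kind shown in Fig 7, using a SHAP force plot (synthetic data, hypothetical feature names):

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=19, random_state=0)
X = pd.DataFrame(X, columns=[f"gait_param_{i}" for i in range(19)])
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]   # positive-class contributions
ev = explainer.expected_value
base = ev[1] if hasattr(ev, "__len__") else ev           # positive-class base value

# Arrows push the prediction from the base value toward one knee type or the other
i = 0
shap.force_plot(base, sv_pos[i], X.iloc[i], matplotlib=True)
```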
Remark 4. In general, the predicted output of the developed models is a (semi)linear or nonlinear weighted aggregation of all the input features (i.e., gait parameters). Each model tries to minimize a loss function that is a function of the total prediction errors. Since the target values are +1 and -1 for ATK and ICRC, respectively, the models will focus on those input features that have high variation between the two knees so that they can discriminate between them (i.e., minimize the loss function or error) based on those features. Otherwise, a feature having a similar value for both knees will most probably not be useful for the models, as it will give a similar prediction output for both knees and, therefore, the error will increase and the loss function will not be minimized. For example, for knee flex SI, which is one of the important features, there are only two possibilities, whereby one knee will be high and the other must be low. If they were both similar, then inherently the feature would not come out as being important at distinguishing the knee and, hence, the model loss function would not be minimized. Accordingly, the larger the difference between a parameter's values for the two knees, the more important it is in the model. So, SHAP analysis will detect similar most contributing parameters for both ICRC and ATK knees in opposite directions to minimize the errors of the labels (as the labels are opposite each other). It is noted that the most contributing parameters could be different if the labels of the knees were similar (i.e., if the models were trying to predict similar labels for the different knees). In Fig 7, the length of each arrow indicates how much the corresponding gait parameter influences the prediction result (+1 or -1). For example, it is seen that the step length SI appears with a blue color for P1 in the ATK class, indicating that this parameter is recognized as an influential parameter for ICRC rather than ATK. This means that although the step length SI holds third place among the gait parameters, its contribution leans toward the ICRC knee. But since its effect is low (the size of the blue arrow is small), the knee type is recognized as ATK. Also, a positive SHAP value (the number inside the arrows) indicates a feature's positive contribution toward an ATK knee, as it is labeled with +1, while a negative value indicates a positive contribution toward an ICRC knee, as it is labeled with -1. In the meantime, red arrows point to the right (ATK knees) and blue arrows point to the left, targeting ICRC knees. A similar inference can be drawn from Fig 8, where for one participant the knee flex SI was the most important parameter while the same model identified the prosthetic max knee flex as the main dominant gait parameter for another person. However, in Fig 8(c), the cadence was identified as the most influential gait parameter for both participants wearing the ATK prosthetic knee. This shows that regardless of the results of global SHAP analysis, each person could have had disparate dominant gait parameters with local SHAP analyses and different ML models. Remark 6. In the plots of Fig 8, where the results were depicted for ATK knees, it is observed that the plots
include a mixture of both red and blue color arrows. To elaborate on this pattern, some points should be taken into account as follows: (1) The length of an arrow (regardless of its color) shows the amount of the contribution of the corresponding gait parameter to the predicted class type. For example, the prosthetic max knee flex, with the largest length, is placed at the top of the plot in Fig 8(a), meaning that it is the most important parameter contributing to the model prediction; (2) For ATK knees, the red arrows with larger lengths are placed in higher ranks, indicating that the corresponding gait parameters are most influenced by ATK knees; and (3) Although the knee types in Fig 8 were all recognized as ATK, some parameters that negatively contributed to the prediction of the ATK are shown by blue arrows in Fig 8. However, the effects of these blue-arrow parameters are smaller compared to the parameters indicated by red arrows. In conclusion, the shorter lengths of the blue arrows in the plots for ATK knees in Fig 8 indicate that their contributions to the model predictions are minor, and hence the model leans toward ATK knees (i.e., the model predicted +1 as the final result, implying an ATK knee).
Discussion
This study aimed to apply and demonstrate explainable ML-based models to gait analysis while identifying the most influential features (i.e.gait parameters).All models performed with an accuracy over 90%, with RF models performing slightly better.This could be due to RF models representing the data using nonlinear parallel decision trees, which are known to reduce both bias (through hyperparameter tuning) and variance [27].This suggests that nonlinear tree-based ML models may better deal with the data in comparison to linear/semi-linear models.
Building upon previous work, this study explored the feasibility of using ML models to determine the influence of interventions on input variables (gait parameters) in a prosthetic application. Through global SHAP analysis, it was determined that the parameters related to knee flexion (prosthetic max knee flex and knee flex SI) and its timing (prosthetic swing time and swing time SI), along with the walking speed (stride velocity ankle and cadence), were the features that most affected classification. Stride length, step length SI, and double support time were found to have minimal influence on the predictions of the classifiers. This suggests that the two knee types had similar outcomes for these gait parameters. These findings were consistent with those of [3], where the same two prosthetic knees were analyzed using traditional linear statistical methods. However, owing to the nonlinear nature of the adopted methodologies, the current work discovered certain nonlinear relationships between the gait parameters and the knee type. For example, it was shown that the ATK results in higher values of cadence, while the aforementioned study [3] obtained only a marginally statistically significant effect (p-value = 0.02) for cadence. This might be due to an existing nonlinear relationship between cadence and the prosthetic interventions that could not be captured by linear methods.
Unlike traditional statistical analyses or black-box ML models, the presented explainable ML models were able to determine the amount of variation of each gait parameter affected by a specific prosthetic device using SHAP analysis. For example, the feature importance graphs (Fig 2) of the tree-based methods determined the relative effects of the different prosthetic devices on each gait parameter. Hence, a multi-way comparison between the affected gait parameters could be performed using SHAP analysis. Based on the importance values provided by SHAP analysis (e.g., the values associated with the arrows in Figs 7 and 8), the magnitude of change in the various gait parameters could be ascertained for each intervention. Conventional statistical methods (e.g., ANOVA) do not generally provide the corresponding amounts of parameter variation caused by nonlinear interactions via different interventions. Therefore, the properties of SHAP analysis could complement the results of existing conventional statistical methods, specifically in cases with nonlinear and complex big data.
Incorporating SHAP dependence plots revealed the impact of a single gait parameter on the classification results of the models. From the analysis, it was observed that the trend of the dependence plots differed across gait parameters (decreasing in Fig 4, increasing in Fig 5, and flat in Fig 6; see the figure captions below). The local interpretability (sample-based) analysis of the explainable ML methods presented in this study provides powerful insight into the effect of knee type on the walking style of a specific sample, whether or not it is in agreement with global (population) interpretations. An example of this incongruency was seen in the rank of the cadence in Figs 3(b) and 8(b). It could be observed that the rank of the cadence changed compared to the global interpretability results, moving from third place in the global analysis to first in the local SHAP plots for the SVM model. This reveals the importance of sample-based interpretability analysis, as the local results did not align with the global results for the given sample. This could be useful for analyzing and developing rehabilitation processes for a specific sample (i.e., personalized health care). Moreover, local explanations could also be beneficial in critical cases, where the analysis of waterfall plots can identify false positive or negative outcomes as misclassified and misleading situations [30]. For instance, the local SHAP analysis for a participant wearing an ICRC prosthesis obtained from the SVM model is illustrated in Fig 9. This figure shows prosthetic max knee flex with a blue arrow as the most important gait parameter affected by the worn prosthesis, and the prosthesis type was determined as ATK (the label on the vertical line was +1). However, although the most effective gait parameter for the identified prosthesis type (i.e., ATK prosthesis for the SVM model in Fig 3) was prosthetic max knee flex, the color of the arrow should have been red, indicating that high values of this parameter were associated with ATK knees. The same discrepancy could be observed for the knee flex SI, which was associated with an ATK knee by a blue arrow in Fig 9, while the global plot in Fig 3 determined that the blue color for this gait parameter should be associated with ICRC knees. Thus, it could be concluded that this particular prediction was a false positive case where the model misclassified the ICRC prosthesis as an ATK knee and, therefore, incorrect importance results were found through SHAP. In such circumstances, and to capture meaningful sample or sub-group differences, more accurate classifiers-such as deep learning methods (if sufficient data is available) or a combination of models-can be used to improve the reliability of the results.
This study had certain limitations. First, the accurate capture and calculation of certain gait parameters were difficult to achieve using the implemented video-based gait analysis method [31]. Some minor discrepancies existed in the obtained interpretability results. For example, the ranks of less important gait parameters in the SHAP plots in Fig 3 (i.e., the parameters located in the lower places in the plot) were different for the various models. In general, authentic characteristics of the interpretability experiments (i.e., SHAP analysis) can be established if different methodologies result in similar conclusions in terms of both accuracy and interpretability. On the other hand, the results of the SHAP analysis might differ across runs of the same ML classifier due to either the randomness in the ML algorithms (e.g., weight initialization) or the different hyperparameters chosen for the model. In order to handle the randomness of the implementations, the random seed was set to a unique value to reproduce the same results in different runs. In cases where setting the seed was insufficient for obtaining reproducible results, multiple runs with averaged results were used. Also, the ML model that obtained the best accuracy for the intervention classification was selected as the final decisive model, and the corresponding feature importance results could be considered as the indicators of the significance of the gait parameters. Additionally, the parameters related to the hip (including Max/ROM Hip) were removed from the dataset to examine the accuracy and interpretability of the revised models. However, it was found that tuning the hyperparameters of the models yielded similar results. Thus, since these parameters were dropped from the dataset, the effects of the different interventions on them could not be verified. Hence, as part of future work, the inclusion and exclusion of other gait parameters, as well as sensitivity and robustness analysis, should be conducted.
Conclusion
Explainable ML techniques can enable researchers and clinicians to better understand the effects of interventions on important patient outcomes, such as gait quality, as presented in this study. This study showed that both global and local interpretability analyses can be useful, providing generalizable information relating to a certain population and also facilitating the assessment of sample patients who may present differently from the population. This research has demonstrated the potential use of explainable ML models in the assessment of clinical interventions, particularly the influence of a prosthetic knee joint on gait, which can be extended to and utilized in other clinical and non-clinical applications.
The feature importance diagrams of the developed RF and LightGBM models are shown in Fig 2(a) and 2(b), respectively. It can be seen that the two most important gait parameters were prosthetic max knee flexion and knee flexion SI, implying that these two parameters were most affected by the type of prosthetic intervention. The least influential gait parameters for both models were identified as double support time, stride length, and prosthetic step length. These gait parameters did not change significantly due to the given intervention. SHAP global interpretability: Explaining population predictions. SHAP global interpretability yields the rank of the influence of each gait parameter [24]. It also determines the direction of influence. Global SHAP values for the RF, SVM, LR, and LightGBM algorithms are shown in Fig 3(a)-3(d), respectively. In these plots, by default, the features were ordered using the mean absolute value of the SHAP values for each feature.
Fig 3 .
Fig 3. Global interpretability for (a) RF, (b) SVM, (c) LR, and (d) LightGBM.The blue dots and red dots on the right side of the vertical line indicate low and high values, respectively, of corresponding gait parameters that have an influence on ATK.Similarly, the dots on the left side of the vertical line are for ICRC.Features at the top are of the most importance.For example, for Prosthetic Flex SI, the ATK has lower values (i.e. less prosthetic knee flexion) compared to ICRC.The changes in the gait parameters are due to the ATK swing-phase control mechanism.The ATK mechanism better controls (compared to ICRC) the kinematics at the knee joint during gait to increase symmetry between the prosthetic and non-prosthetic sides, specifically related to swing times, step lengths as well as heel-rise (swingphase knee flexion excursions) across different walking speeds.For a more detailed explanation please refer to[3].
Remark 5 .
The existence of red arrows for ICRC knees in Fig 7 means that the corresponding gait parameter does not positively contribute toward the ICRC knee.
Fig 4 .
Fig 4. SHAP dependence plots for four most influential decreasing gait parameters via RF model.For the low values of prosthetic max knee flex (up to 73 degrees), the likelihood of wearing an ATK was high.After passing 73 degrees, the probability of wearing an ATK started to decline meaning that the ICRC knee was likely worn.A similar pattern was observed for knee flex SI, swing time SI, and prosthetic swing time with the critical points of roughly 7, 33, and 0.52 sec, respectively.
Fig 5 .
Fig 5. SHAP dependence plots for gait parameters ranked five to eight for the RF model.In general, all four gait parameters started from low values for ICRC knees and increased for the ATK knees.However, the trend and rate of the curves are different for different features.For example, stride velocity ankle shows a flat trend for the values between 0.9 m/sec and 1.1 m/sec.https://doi.org/10.1371/journal.pone.0300447.g005
Fig 6 .Fig 7 .Fig 8 .
Fig 6. SHAP dependence plots for the four least influential gait parameters via the RF model. The general trend for the dependence plots of these features was flat, indicating that these gait parameters had minimal impact on the model predictions. This means that neither ATK nor ICRC prosthetic knees affected these gait parameters. The dependence plot for the prosthetic step length parameter demonstrates a nonlinear pattern between its varying values and the likelihood of wearing the prosthetic knees, implying that both ATK and ICRC knees could result in either low or high values of prosthetic step length (but the impact is low in any case). https://doi.org/10.1371/journal.pone.0300447.g006
The trend of the dependence plots in Fig 4 is decreasing, increasing in Fig 5, and flat in Fig 6. The direction of the trend (increasing or decreasing) shows the direction of the impact of the chosen feature on the model's output. For example, the decreasing trend of knee flex SI in Fig 4 indicates that large values of this gait parameter were associated with ATK knees and lower values belonged to the ICRC knees. On the other hand, the increasing trend of prosthetic max hip in Fig 5 revealed that ATK knees decreased the amount of maximum hip compared to the ICRC prosthesis. Also, the flat trend in Fig 6 suggested that knee type had little to no impact on the corresponding gait parameters. The nonlinear relationship between a gait parameter and the type of prosthetic knee could also be identified via SHAP dependence plots. For instance, Fig 4 suggests a nonlinear relationship between swing time SI and the adopted prosthetic knee. A (semi)linear relationship for stride velocity ankle is seen in Fig 5. The presence of both (semi)linear and nonlinear relationships between gait parameters and the knee types, as revealed by the SHAP dependence plots, suggests the use of a nonlinear model for classifying the knees.
Fig 9 .
Fig 9. SHAP analysis for a false positive example of classification for a single participant wearing an ICRC prosthesis obtained from the SVM model.Since the label -1 was assigned to the ICRC, one expected to see a vertical line labelled with -1.However, the class label was marked +1, meaning that the model had identified this sample with an ATK prosthesis.https://doi.org/10.1371/journal.pone.0300447.g009
Table 1 . Descriptions of the extracted gait parameters.
Prosthetic Swing Time: Time between foot-off and foot-strike on prosthetic side (sec)
Intact Swing Time: Time between foot-off and foot-strike on intact side (sec)
ROM Y Displacement: Vertical displacement of the iliac crest markers (m)
Swing Time SI: Symmetry index of swing time (unitless), see Eq 1
Step Length SI: Symmetry index of step length (unitless), see Eq 1
Knee Flex SI: Symmetry index of maximum knee flexion (unitless), see Eq 1
https://doi.org/10.1371/journal.pone.0300447.t001 | 9,751 | 2024-04-02T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Sums of Products of Cauchy Numbers, Including Poly-Cauchy Numbers
Takao Komatsu Graduate School of Science and Technology, Hirosaki University, Hirosaki 036-8561, Japan Correspondence should be addressed to Takao Komatsu<EMAIL_ADDRESS>Received 24 July 2012; Accepted 24 October 2012 Academic Editor: Gi Sang Cheon Copyright © 2013 Takao Komatsu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We investigate sums of products of Cauchy numbers including poly-Cauchy numbers: T_m(n) = Σ_{i_1+⋯+i_m=n, i_1,…,i_m≥0} (n; i_1,…,i_m) c_{i_1} ⋯ c_{i_{m−1}} c^{(k)}_{i_m} (m ≥ 1, n ≥ 0), where (n; i_1,…,i_m) denotes the multinomial coefficient. A relation among these sums T_m(n) shown in the paper and explicit expressions of sums of two and three products (the case of m = 2 and that of m = 3 described in the paper) are given. We also study the other three types of sums of products related to the Cauchy numbers of both kinds and the poly-Cauchy numbers of both kinds.
Introduction
The Cauchy numbers (of the first kind) are defined by the integral of the falling factorial: (see [1,Chapter VII]). The numbers / ! are sometimes called the Bernoulli numbers of the second kind (see e.g., [2,3]). Such numbers have been studied by several authors [4][5][6][7][8] because they are related to various special combinatorial numbers, including Stirling numbers of both kinds, Bernoulli numbers, and harmonic numbers. It is interesting to see that the Cauchy numbers of the first kind have the similar properties and expressions to the Bernoulli numbers . For example, the generating function of the Cauchy numbers of the first kind is expressed in terms of the logarithmic function: (see [1,6]), and the generating function of Bernoulli numbers is expressed in terms of the exponential function: (see [1]) or (see [9]). In addition, Cauchy numbers of the first kind can be written explicitly as (see [1,Chapter VII], [6, page 1908 (see e.g., [10]). Bernoulli numbers (in the latter definition) can be also written explicitly as where { } are the Stirling numbers of the second kind, determined by 2 Journal of Discrete Mathematics (see, e.g., [10]). Recently, Liu et al. [5] established some recurrence relations about Cauchy numbers of the first kind as analogous results about Bernoulli numbers by Agoh and Dilcher [11]. In 1997 Kaneko [9] introduced the poly-Bernoulli numbers ( ) ( ≥ 0, ≥ 1) by the generating function where is the th polylogarithm function. When = 1, (1) = is the classical Bernoulli number with (1) 1 = 1/2. On the other hand, the author [12] introduced the poly-Cauchy numbers (of the first kind) ( ) as a generalization of the Cauchy numbers and an analogue of the poly-Bernoulli numbers by the following: In addition, the generating function of poly-Cauchy numbers is given by where is the th polylogarithm factorial function, which is also introduced by the author [12,13]. If = 1, then (1) = is the classical Cauchy number.
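For reference, the standard definitions of these quantities in the literature on Cauchy and poly-Cauchy numbers (which may differ in notation from the paper's displayed formulas) are:

```latex
c_n = \int_0^1 x(x-1)\cdots(x-n+1)\,dx, \qquad
\frac{x}{\log(1+x)} = \sum_{n=0}^{\infty} c_n \frac{x^n}{n!}, \qquad
c_n = \sum_{m=0}^{n} \frac{s(n,m)}{m+1},
```
```latex
\frac{x}{e^x - 1} = \sum_{n=0}^{\infty} B_n \frac{x^n}{n!}
\quad\text{or}\quad
\frac{x\,e^x}{e^x - 1} = \sum_{n=0}^{\infty} B_n \frac{x^n}{n!}, \qquad
\frac{\mathrm{Li}_k(1-e^{-x})}{1-e^{-x}} = \sum_{n=0}^{\infty} B_n^{(k)} \frac{x^n}{n!},
```
```latex
c_n^{(k)} = \sum_{m=0}^{n} \frac{s(n,m)}{(m+1)^k}, \qquad
\mathrm{Lif}_k\bigl(\log(1+x)\bigr) = \sum_{n=0}^{\infty} c_n^{(k)} \frac{x^n}{n!},
\quad\text{where } \mathrm{Lif}_k(z) = \sum_{m=0}^{\infty} \frac{z^m}{m!\,(m+1)^k},
```

where s(n, m) are the (signed) Stirling numbers of the first kind; for k = 1 the poly-Cauchy numbers reduce to the classical Cauchy numbers.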
The following identity on sums of two products of Bernoulli numbers is known as Euler's formula: The corresponding formula for Cauchy numbers was discovered in [8]: In this paper, we shall give more analogous results by investigating a general type of sums of products of Cauchy numbers including poly-Cauchy numbers: ( 1 , . . . , whose Bernoulli version is discussed in [14]. A relation among these sums and explicit expressions of sums of two and three products are also given.
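Euler's classical identity for Bernoulli numbers (with the convention B_1 = -1/2), presumably the formula labeled (14) later in the text, reads:

```latex
\sum_{i=0}^{n} \binom{n}{i} B_i B_{n-i} = -n\,B_{n-1} - (n-1)\,B_n \qquad (n \ge 1).
```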
Main Results
We shall consider the sums of products of Cauchy numbers including poly-Cauchy numbers. Kamano [14] investigated the following types of sums of products: ( 1 , . . . , where Bernoulli numbers are defined by the generating function (3) and poly-Bernoulli numbers ( ) are defined by the generating function (9) and Li ( ) is the th polylogarithm function defined in (10). It is shown [14] that Consider an analogous type of sums of products of Cauchy numbers including poly-Cauchy numbers: ( 1 , . . . , Then we show the following result.
Theorem 1. For an integer and a nonnegative integer , one has
Note that the generating function of ( ) is given by Since Journal of Discrete Mathematics 3 we have Since is equal to We need the following lemma in order to prove Theorem 1.
Lemma 2. For an integer and a positive integer , one has
Proof of Lemma 2. Since we have By induction, we can show that for ≥ 1 where Thus, by using the inversion relationship (see e.g., [10,Chapter 6]), the left-hand side of the identity in the previous lemma is equal to which is the right-hand side of the desired identity.
where (1) where ( ) ( ) are poly-Cauchy polynomials of the first kind, defined by the generating function ( ) ( ) are expressed explicitly in terms of the Stirling numbers of the first kind [13, Theorem 1]: Hence, the identity (39) holds because Next, by (30) and 0 ( ) = 1 + we have Hence, Therefore, we get the identity (40). Finally, by we have Hence, Therefore, we get the identity (41).
Journal of Discrete Mathematics 5 Putting = 1 in (40), we have the following identity, which is also found in [8]. This is also an analogous formula to Euler's formula (14).
Corollary 5.
One has (see also Table 1) is also given.
Poly-Cauchy
wherê( ) is poly-Cauchy number of the second kind [12], whose generating function is given by Journal of Discrete Mathematics 7 (see [12]). In this sense, ( ) is called poly-Cauchy number of the first kind. When = 1,̂=̂( 1) is the classical Cauchy number of the second kind, whose generating function is given by By using the corresponding lemma to Lemma 2, where ( ) is replaced bŷ( ) = Lif (− ln(1 + )), we can obtain the following result.
Theorem 7. For an integer and a nonnegative integer , one has
Putting = 1 in Theorem 7, one has the following.
Consider the case = 2. Note that the generating function of poly-Cauchy polynomial of the second kind ( ) ( ) [13] is given by Hence,̂( On the other hand, Thus,̂( Table 2). Theorem 9. For ≥ 0 and ≥ 1 one haŝ Putting = 1 in (80), we have the following identity. This is also an analogous formula to Euler's formula (14).
Theorem 13. For an integer and a nonnegative integer , one has
Putting = 1 in Theorem 13, we have the following.
Theorem 17. For ≥ 0 and ≥ 1 one has Define Then we obtain the following. (102)
Further Study
Kamano [14] mentioned that explicit formulae of these sums of products for m ≥ 4 seem too complicated to describe. We will give explicit formulae for any m ≥ 2 elsewhere. In addition, one may consider sums of mixed products of Cauchy numbers and poly-Cauchy numbers. It would be interesting to establish explicit expressions for such summations. | 1,661.8 | 2013-01-30T00:00:00.000 | [
"Mathematics"
] |
Efficiency Analysis of Excavator Nut Inventory Using Economic Order Quantity Method at PT. ABCDE Bekasi-Jawa Barat
Abstract: Inventory management is one of the keys to the success of the production process in any company, including manufacturing companies. Determining the optimal amount of inventory is therefore crucial. In practice, a condition is sometimes found in which the amount of inventory far exceeds what is needed, so that implementing a specification change takes a long time because the obsolete inventory must be used up first. It is therefore necessary to calculate the optimal inventory values for the company. In this research, the Economic Order Quantity and Reorder Point analysis methods in the POM-QM software are used to determine the optimal order quantity, Safety Stock, Reorder Point, and required inventory costs of J950010 nuts. The results show that, although there are differences between the POM-QM calculations and what is applied in the field, the management of J950010 nut stock is optimal.
INTRODUCTION
Inventory is a necessity for every company, including manufacturing companies that process raw materials into ready-to-use goods with selling value. PT. ABCDE, one of the worldwide heavy equipment manufacturers, is no exception. One of its main products is a medium-class hydraulic excavator. Every excavator unit consists of thousands of parts, so good inventory management is needed to support production activities. One of the most important parts of an excavator is the nut, especially when the nut is used on all production units. If the nut is stocked out, the whole production process stops, but too much stock burdens the company's finances. Good inventory management is therefore needed to avoid both conditions. An example case is the inventory of J950010 nuts at PT. ABCDE. In the 2019 fiscal year, there was a problem in changing the nut specification: implementing the change from the old specification to the new one took months. According to the warehouse PIC, the replacement had to wait until the inventory of the old specification was used up first, so as to avoid scrapping parts. This is supported by the data on the supply of J950010 nuts for the period April 2019 - March 2020 shown in Table 1.
Based on the information in Table 1, it can be seen that there was a significant excess supply of nut J950010, which caused the part change to take a very long time. Therefore, this research aims to find out what the optimal supply of J950010 nuts should have been in the 2019 fiscal year in order to shorten the time needed to change parts. In addition, optimal inventory is expected to prevent stock-out conditions and to minimize the required inventory cost.
From the core problems identified, and in order to achieve the goals and objectives of this research, it is necessary to set boundaries for the problem to be solved. Because the scope of inventory management is very broad, the researcher limits the problem to a few issues. The problems to be solved in this research are the values of the Economic Order Quantity, Total Inventory Cost, Safety Stock, and Reorder Point of the J950010 nut inventory.
This research was conducted in a private company located in Cibitung, Bekasi, Indonesia. The object investigated is the J950010 nut. This part is the nut used by all medium-class excavator units produced by PT. ABCDE. The data source in this research is inventory and logistic data for nut J950010 from April 2019 until March 2020. These data consist of a summary of medium excavator production in the 2019 fiscal year, the Purchase Order number list of nut J950010, Import Declarations, and Inland Service documents.
The purpose of this research is to serve as input for PT. ABCDE to improve the inventory efficiency of the J950010 nut, especially since there have been many problems related to inventory management in the company. It is hoped that this research can also be a reference for improving inventory efficiency for other parts. Based on the explanation above, the main problems that must be solved in this research are: 1. What are the optimal values that should be applied at PT. ABCDE in terms of Economic Order Quantity, Reorder Point, and Safety Stock for the J950010 excavator nut? 2. Is the inventory planning currently applied at PT. ABCDE for the case mentioned in the previous point optimal?
LITERATURE REVIEW Research Review
In the manufacturing industry, inventory management is the key to the success of the production process. In terms of costs, inventory management also plays an important role in the operational costs of a company. If inventory management is not carried out properly, then operating costs can increase and affect a company's profit margin. Usually, companies try to minimize the costs that arise from all activities related to inventory management.
In general, inventory can be defined as organizational resources that are kept to meet demand. Inventory is one of the most expensive assets of many companies, and represents as much as 50% of the total invested capital (Heizer & Render, 2014:512). An inventory system is a set of policies and controls that monitors inventory levels and determines the level of inventory that must always be there, when inventory should be replenished, and how much of an order should be ordered (Jacobs & Chase 2014: 209).
One of the methods used in inventory management is the Economic Order Quantity. Heizer and Render (2010) explain that the Economic Order Quantity is an inventory control technique that minimizes the total of ordering and storage costs. Meanwhile, according to Bambang Riyanto (2013: 78), it is the quantity of goods that can be obtained with minimal cost, often described as the optimal purchase quantity.
In the article "The EOQ Inventory Formula" written by James M. Cargal, the basic theory of the Economic Order Quantity is explained, including how to use each variable appropriately. According to him, EOQ can be found using the following formula: Q = √(2DS/H) (1) Where: Q = EOQ, or the order quantity to be optimized; D = Demand for goods per year; S = Cost of ordering goods per order placed; H = Inventory holding cost per unit per year. According to Heizer and Render (2014: 561), this technique is relatively easy to use, but it is based on the following assumptions: a. The number of demands is known, fairly consistent, and independent. b. The lead time for receipt of orders is known and constant. c. No shortage is allowed. d. Each order is received all at once. e. The purchase price of the item is constant, and there are no discounts. A worked sketch of this calculation is given below.
In addition to the Economic Order Quantity method, the Reorder Point method is also used in inventory management. According to Sofjan Assauri (2008), the reorder point or reorder level is the inventory level at which a new order must be placed.
The reorder point can be affected by several factors, including: a) Lead Time, which is the time needed between placing an order and the ordered goods arriving at the company. b) The average level of use of the supplied raw materials to make a product in a certain time unit. c) Safety Stock, namely the minimum amount of raw material inventory that must be available as a precaution against possible delays in the arrival of raw materials. From these three factors, the Reorder Point value can be found using the following formula: Reorder Point = (LT x AU) + SS (2) Where: LT = Lead Time; AU = Average Usage of inventory per unit time; SS = Safety Stock. Along with the times, manual calculations can now be replaced by computer calculations. The computer calculations in this study use the POM-QM module. POM-QM stands for Production and Operations Management/Quantitative Methods; it is a computer program used to process data and solve various quantitative problems in the field of production and operations management. According to the POM-QM for Windows Ver. 3 manual, this program provides modules for business decision-making, one of which is inventory. Within this inventory module there are several methods, of which only the Economic Order Quantity and Reorder Point methods were used in this study.
Research Framework
Many studies discuss the Economic Order Quantity and Reorder Point. However, of the many studies, none has researched the J950010 nut part at PT. ABCDE. The closest related research is the overstock analysis of the YA40003084 Engine Cover, one of the components of the middle-class excavator. Therefore, this study first describes the research framework regarding the efficiency of the J950010 nut supply, as shown in Fig. 1.
Figure 1. Research Framework
This research started with the discovery of a problem related to changing the J950010 nut specification, which took a very long time. From here, the researchers investigated why the change took so long and found that it was because parts with the old specification had to be used up first.
After the reason was known, it was concluded that there was a large accumulation of inventory that had to be used up before the J950010 nuts with the latest specification could be used. Therefore, it is necessary to analyse what the optimum inventory of the J950010 nut should be. To find out, the EOQ and ROP methods are used, calculated with the help of the POM-QM software.
To obtain the EOQ and ROP results, several input variables are needed. The EOQ calculation requires the annual demand for J950010 nuts in Fiscal Year 2019, the set-up cost incurred each time an order is placed, the carrying cost of one nut for one year, and the price of the nut per unit. The input variables for the ROP are the daily average demand for J950010 nuts, the service level, the lead time, and the standard deviations of the daily average demand and the lead time.
RESEARCH METHODS
This research was conducted at PT. ABCDE Plant 1 in the Cibitung area, Bekasi Regency, West Java, Indonesia, which produces middle-class hydraulic excavators. Several variables are used in this study for the calculation of EOQ and ROP; details of these variables can be seen in Table 2. The object of research used in this paper is the general part J950010 nut. This part was chosen as the only part in this research because in FY2019, i.e. the April 2019 - March 2020 period, it experienced problems in changing from the old specification to the latest specification, and the change took months. In addition, the part is used by all finished products produced by PT. ABCDE, so problems with its inventory would interfere with the company's production activities as a whole. From there, the researcher intends to find out whether the nut inventory planning has been carried out correctly.
The research conducted is an analysis of inventory management at PT. ABCDE using the Economic Order Quantity (EOQ) and Reorder Point methods, which determine the quantity that needs to be ordered per order, the Reorder Point, the Safety Stock, and the Total Inventory Cost (TIC). The values of these variables are calculated with the POM-QM software.
The data used in this study are secondary data taken from the company's database, especially from the MRP system and from documents in each section at PT. ABCDE. The MRP system referred to is the SMAP V3 system used by PT. ABCDE. In addition, data were obtained by directly contacting employees in the relevant departments according to the desired data.
The data taken concern the supply of J950010 nuts from April 2019 until March 2020. This period was chosen because the problems related to changing the J950010 nut arose during that period.
To obtain information on the annual demand, a data search was carried out in the PPIC section of PT. ABCDE, which provided the Excavator Production Amounts for one Fiscal Year. These data were processed first to obtain the production of middle-class excavators in one year. From these data, the daily average demand can also be obtained.
To find out the price of the nut per unit, a data search was conducted in the Procurement section of PT. ABCDE, which provided the Purchase Order number list. Apart from the price of the nut per unit, information on the Lead Time was also obtained from these data.
To find out the set-up cost, a data search was conducted in the Logistics section of PT. ABCDE. From this search, data were obtained in the form of PIB (Import Declaration) documents as well as the Inland Trucking Bills Recap for 2019 and 2020. From these two documents, the set-up cost can be obtained by adding up all cost items arising from placing an order, such as transportation, insurance, import duty, Value Added Tax, Income Tax and administrative costs.
To find the carrying cost, a simple calculation is carried out on the inventory value of J950010 nuts by dividing the value of the supplied inventory by the average inventory value per year. The result of the calculation is a percentage.
After all the required inputs are known, the next step is to enter the input values into each column in the POM-QM software. The resulting output is the optimal amount of inventory ordered in each order, the number of orders during the time period studied, as well as the overall inventory cost during the research period.
After the EOQ value is known, the next thing to find out is the Reorder Point value. Reorder Point is a variable that can be time or units of goods, where when it is reached, inventory orders must be made to avoid shortage conditions. To be able to determine the value of the Reorder Point, it takes several inputs in the calculation operation using POM-QM software. The required inputs are daily demand, daily demand standard deviation, Service Level, Lead Time, and Lead Time standard deviation.
To be able to obtain the standard deviation value of daily demand and Lead Time, calculations are carried out on the data contained in Excavator Production Amounts for one Fiscal Year and Purchase Order number list using formulas in Microsoft Excel. The next step is to enter the input variable in the appropriate column in the Reorder Point calculation menu. The resulting output is Safety Stock, and Reorder Point.
After all the variables of interest have been calculated with the POM-QM software, the last step is to draw conclusions by comparing the results of the inventory planning calculations using the EOQ method, carried out with the help of the POM-QM software, with what the company currently applies. In addition, several recommendations are given to help solve the problems formulated previously in this study.
FINDINGS AND DISCUSSION Determine EOQ
The first step in determining the EOQ is to find the annual demand for the J950010 nut. From the data collection carried out in the PPIC section, the Excavator Production Amounts for one Fiscal Year were obtained, which contain information about the annual demand. The demand value in a year is 115776 pieces.
From the search for data in the Procurement section, a document was obtained containing the list of PO numbers for nut J950010. From that document, the part price per unit is obtained. The part price per unit is calculated as the average price of the J950010 nut during the April 2019 - March 2020 period, which is 0.0823 USD. The next step in determining the EOQ is to find the set-up/ordering cost for each order. These costs arise from every activity performed in ordering supplies, such as shipping, insurance, administration, taxes and receipt. In this research, set-up/ordering costs are divided into six categories, namely import duties, Value-Added Tax, Income Tax, insurance, transportation, and administration. From the search and data processing carried out, the ordering cost for one unit of J950010 nuts is 0.035 USD.
The final step in determining the EOQ is to find the value of the holding/carrying cost. In this study, the holding/carrying cost is obtained by comparing the value of the inventory supplied with the existing inventory for one year. From the calculation, a value of 43.82% was obtained as the holding/carrying cost.
After all the input variables are known, the EOQ calculation using POM-QM can be done. By entering the input variable in the appropriate column in the POM-QM software, the calculation results can be seen in Fig. 2 as follows:
Figure 2. EOQ calculation with POM-QM
From Fig. 2, it can be seen that the optimal order quantity for the J950010 nut is 507 pieces per order. With this quantity, orders are placed 229 times per year. The annual inventory cost is 9547 USD. This inventory cost can be seen in Fig. 3, which shows the overall cost of the J950010 nut inventory for the one-year period April 2019 - March 2020 at PT. ABCDE. The x-axis of the graph shows the order quantity, while the y-axis shows the cost. The graph shows that the larger the order quantity, the lower the ordering cost incurred; however, the larger the order quantity, the higher the holding cost, so the intersection point between the holding cost line and the ordering cost line gives the optimal order quantity.
Determine ROP
As mentioned earlier, the ROP calculation requires input variables such as the daily demand, the service level, the lead time, and the standard deviations of the daily demand and the lead time. The data used for the daily demand can be obtained from the Excavator Production Amounts for one Fiscal Year and the Purchase Order number list. The data in these documents were processed first using Microsoft Excel, giving a daily demand value of 483 pieces.
For information related to the Lead Time, data processing was carried out on the Purchase Order number list. From the Fixed Date and Received Date columns, it can be seen that the average Lead Time for the procurement of the J950010 nut is around 66 days. In addition, it was found that during FY2019 PT. ABCDE made 18 deliveries of the J950010 nut.
To determine the standard deviations of the daily demand and the Lead Time, the researcher used the formulas provided in Microsoft Excel. Using the data in the Excavator Production Amounts for one Fiscal Year and the Purchase Order number list and the STDEV.S formula in Microsoft Excel, the standard deviations are as follows: daily demand standard deviation = 108.14; Lead Time standard deviation = 9.56. The last variable that must be known is the service level. In this research, the service level used is 95%. This value is obtained from the inventory grouping based on turnover. Because the J950010 nut is the part most widely used by all hydraulic excavator models produced by PT. ABCDE, this nut is included in the high service level category, so the value is 95%. This means that, in meeting the need for J950010 nuts in the excavator assembly process per year, only 5% of stock-outs are allowed. A sketch of the corresponding calculation is given below.
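As an illustration of how these inputs combine, the sketch below is not the POM-QM implementation: it uses the standard textbook safety-stock formula for variable demand and variable lead time, so small rounding differences from the reported 7709 and 39587 pieces are to be expected.

```python
from statistics import NormalDist

# Inputs reported in the text
daily_demand = 483          # pieces per day
lead_time = 66              # days
sd_daily_demand = 108.14    # pieces
sd_lead_time = 9.56         # days
service_level = 0.95

# z-value for the chosen service level (about 1.645 for 95%)
z = NormalDist().inv_cdf(service_level)

# Variability of demand during lead time when both demand and lead time vary
sd_demand_during_lt = (lead_time * sd_daily_demand**2
                       + (daily_demand**2) * sd_lead_time**2) ** 0.5

safety_stock = z * sd_demand_during_lt
reorder_point = daily_demand * lead_time + safety_stock

print(f"Expected demand during lead time: {daily_demand * lead_time:,.0f} pieces")
print(f"Safety stock                    : {safety_stock:,.0f} pieces")
print(f"Reorder point                   : {reorder_point:,.0f} pieces")
```

The expected demand during the lead time (483 x 66 = 31878 pieces) matches the value reported in the findings, and the safety stock and reorder point come out close to the POM-QM figures.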
After all the input variables needed to perform the ROP calculation are known, the next step is to enter these variables into the appropriate columns in the POM-QM software. The method is almost the same as for the EOQ calculation, but uses a different module in the Inventory menu. The ROP calculation with POM-QM can be seen in Fig. 4.
Figure 4. Reorder Point Calculation with POM-QM
From Figure 4 it can be seen that the safety stock for the J950010 nut is 7709 pieces, and the Reorder Point is 39587 pieces. During the Lead Time period, the number of J950010 nuts expected to be used is 31878 pieces.
CONCLUSION AND RECOMMENDATION Conclusions
From the calculations with the POM-QM software discussed in the previous chapter, the following conclusions can be drawn: 1) The EOQ values of nut J950010 at PT. ABCDE based on data for the period April 2019 - March 2020 are as follows: the number of nuts that should be ordered is 507 pieces per order, and with this quantity 229 orders are placed per year. The Reorder Point is 39587 units and the Safety Stock that should be applied based on the software calculation is 7709 units. 2) The excavator part control system currently applied at PT. ABCDE is already consistent with the EOQ values calculated using the POM-QM software. This is based on the results mentioned previously: the order quantity based on the POM-QM calculation is 507 pieces, while the quantity applied at PT. ABCDE during the research period was 500 pieces; the number of orders based on the software calculation is 229, while PT. ABCDE placed 222 orders; and the Safety Stock from the software calculation is 7709 pieces, while the actual condition is 7099 pieces. Although there are differences, inventory management at PT. ABCDE can be said to have been implemented according to the EOQ and Reorder Point methods.
Recommendation
Based on the conclusions above, even though the company has implemented controls in line with the EOQ, there are several recommendations that can be considered to further improve the efficiency of the J950010 nut inventory: 1) Because the Lead Time is very long and inventory control is carried out automatically by the system, discipline and alertness are needed from the inventory system PIC to stop issuing POs to suppliers immediately after receiving information about a specification change, so that the stock of the existing nut does not keep growing. 2) Discipline is needed in complying with the nut inventory allocation determined by the system, so that there are no significant differences between the stock recorded in the system and the actual stock in the warehouse that could affect the production process. 3) To speed up the process of changing the nut, one option is to increase sales of the final product so that the nut to be replaced is used up faster. 4) Improve the communication system so that it is fast and accurate, especially regarding the distribution of nuts from suppliers and shipping parties to the company. Any change in distribution will affect the Lead Time and can also affect the availability of nuts in the warehouse. In addition, a fast and accurate communication system between departments must also be implemented, so that all information can be followed up properly. 5) Finally, the company should use the results of these calculations for the inventory planning of the J950010 nut, that is, by ordering 507 pieces each time an order is placed, placing 229 orders per fiscal year, and increasing the safety stock to 7709 pieces. Although the differences are small, none of the results in this study is less efficient than what has already been applied in the company.
"Business",
"Engineering"
] |
One-step synthesis and upconversion luminescence properties of hierarchical In2O3:Yb3+,Er3+ nanorod flowers
In2O3:Yb3+,Er3+ nanorod flowers (NRFs) were prepared by a simple hydrothermal method in which sucrose was used as a ligand. The obtained In2O3:Yb3+,Er3+ NRFs were carefully characterized by scanning electron microscopy (SEM), powder X-ray diffraction (XRD), transmission electron microscopy (TEM) and steady-state/transient spectroscopy. The dependence of the upconversion luminescence (UCL) of the In2O3:Yb3+,Er3+ NRFs on morphology, Yb3+ concentration and excitation power is discussed in detail. It is found that the luminescence intensity ratio of the red emission to the green emission (R/G) depends on the morphology and the Yb3+ concentration, and that the green emissions result predominantly from a three-photon populating process at higher Yb3+ doping concentrations. More importantly, concentration quenching in the In2O3:Yb3+,Er3+ NRFs is greatly suppressed due to boundary effects, which is beneficial for lighting and photon energy conversion devices.
Introduction
The inorganic host is the carrier of UC and is very important for improving the efficiency of UCL, which is affected by the morphology of the UCNPs [18,19]. At present, the controllable preparation of rare earth (RE) doped In2O3 NCs with highly effective UCL is still difficult. Several strategies have been proposed to solve this problem. For example, Chen's group synthesized Er3+-doped cubic In2O3 nanoparticles via a simple solvothermal method and first demonstrated their UC and downconversion emissions under 808 and 980 nm excitation, respectively [20]. Xu et al. successfully fabricated In2O3:RE nanotubes by the electrospinning method for the first time, and the sensitivity of the In2O3 nanotubes to H2S was significantly improved by the RE ion doping [21]. Dutta's group also prepared Eu/Dy doped In2O3 nanoparticles by a sonochemical technique. Relatively weak Eu3+ emission could be detected in the In2O3:Eu nanoparticles, which is attributed to the distorted environment around the Eu ions in the In2O3 lattice [22]. So far, although a few works on In2O3:RE nanoparticles have been performed, the mechanism of the UC processes of In2O3 NCs doped with Yb3+ and Er3+ ions is not fully understood. In particular, the dependence of the UCL on cross relaxation, population processes, excitation power density and RE doping concentration is not clear. Therefore, it is necessary to investigate the UCL properties of In2O3:Yb3+,Er3+ NCs in detail.
In this article, we prepared hierarchical In2O3:Yb3+,Er3+ NRFs by a facile hydrothermal method using sucrose as a surfactant [23]. The formation, morphology and structure of the In2O3:Yb3+,Er3+ NRFs were examined by SEM, TEM and XRD, and the influence of sucrose on the hydrothermally prepared In2O3:Yb3+,Er3+ NRFs is further elucidated. The investigation of the UCL properties of the In2O3:Yb3+,Er3+ NRFs demonstrated that the UCL intensity and R/G ratio depend on the particle size, the excitation power and the Yb3+ concentration. The as-obtained In2O3:Yb3+,Er3+ NRFs exhibit highly effective UC emission, making them promising candidates for displays and solar cells.
Synthesis of In2O3 nanorod flowers
All the reagents (analytical-grade purity) were used without any further purification. In the preparation of (10 mol%)Yb3+, (1 mol%)Er3+ co-doped In2O3 NCs, InCl3·4.5H2O (266 mg), YbCl3·6H2O (33 mg), ErCl3·6H2O (3 mg), urea (15 mg) and a moderate amount of sucrose (342 mg) were dissolved in deionized water (36 mL) with adequate stirring for 30 min. In this reaction, sucrose acts as a template and linker to bind the nanoparticles, which leads to the formation of flower-like nanostructures. The presence of both sucrose and urea is a prerequisite for the formation of this porous hierarchical In2O3. The mixture solution was then transferred into a Teflon-lined stainless steel autoclave and placed in a 160 °C oven for 12 h. After the autoclave was cooled to room temperature naturally, the resulting product was collected by centrifugation, washed with deionized water and ethanol three times, and then dried at 80 °C for 1 day. The In2O3:Yb3+,Er3+ NCs were obtained by calcining the precursor at 500 °C for 2 h. The same route was adopted for the preparation of (10 mol%)Yb3+ and xEr3+ (x = 2, 3, 5, 8 mol%) doped In2O3 NCs. The preparation process of the In2O3:Yb3+,Er3+ NRFs is presented in Scheme 1.
Characterization and measurements
The SEM images were obtained on a JEOL JSM-7500F microscope operating at 15 kV. The TEM, HR-TEM and selected-area electron diffraction (SAED) images were measured on a JEOL JEM-2100 microscope operated at 200 kV. The XRD pattern was recorded on a Rigaku D/max-rA powder diffractometer. The UC emission spectra and luminescence dynamics of the rare earth ions were measured on an Edinburgh fluorescence spectrometer (FLS980). Figure 2 shows the SEM images of In2O3:Yb3+,Er3+ NCs prepared with varying amounts of sucrose. In the absence of sucrose, the product is a mixture of irregular nanocubes and flower-like nanostructures (Fig. 2a); these serve as the reference (REF) samples. When a moderate amount of sucrose is added, NRFs appear. Fig. 2b shows a low-magnification SEM image of the In2O3:Yb3+,Er3+ NRFs, demonstrating that the NRFs are highly uniform. Higher-magnification SEM images are presented in Fig. 2c and d, where it can be seen that each nanorod is uniform. Fig. 3 records the detailed structural analysis of the samples by TEM. In Fig. 3a, it can be seen that each nanorod of the NRFs is homogeneous, and the size of these nanorods is about 100 nm in diameter and about 1 μm in length (Fig. 3b). From the TEM images, we can deduce that the NRFs are self-assembled from many single nanorods. The inter-planar spacing, measured from the lattice fringes, is about 0.292 nm (Fig. 3c), matching the distance of the (2 2 2) plane of cubic In2O3. The SAED pattern of the In2O3:Yb3+,Er3+ NRFs (Fig. 3d) confirms that the obtained products are polycrystalline. (Scheme 1: Schematic illustration of the synthesis of In2O3:Yb3+,Er3+ NRFs.) In order to further determine the structure and dopant concentrations of the In2O3:Yb3+,Er3+ NRFs, energy-dispersive X-ray (EDX) analysis of In2O3:(10 mol%)Yb3+,(2 mol%)Er3+ NRFs was carried out (Fig. S1†). It shows the presence of the In, Yb, Er and O elements. Furthermore, the spectrum confirms that the measured mole percentages of Yb and Er in the NRFs are 8.47 and 1.7, respectively. From this result, we can deduce that the actual doping concentration of Er3+ ions is 0.85 mol%, 2.55 mol%, 4.24 mol% and 6.79 mol% in the In2O3:(10 mol%)Yb3+,xEr3+ (x = 1, 3, 5, 8 mol%) NRFs.
Effect of morphology, Yb 3+ concentration and excitation power on UCL
The typical UCL spectra of the In2O3:Yb3+,(1 mol%)Er3+ REF and NRF samples were investigated (Fig. 4). In the two samples, the dependence of the UCL intensity on the Er3+ doping concentration differs markedly; in the REF samples the emission weakens as the Er3+ concentration increases (Fig. 4). This phenomenon is known as concentration quenching and is quite common in RE-doped NCs, being due to an energy migration process among the same type of RE ions [24]. In the In2O3:Yb3+,Er3+ NRFs, each nanorod is only about 100 nm in diameter and the nanorods are well separated from each other, which can be clearly recognized in the HR-TEM images (as shown in Fig. 3). Boundary effects, as in other nanostructured materials, therefore unavoidably occur. For example, in Y2O3:Eu3+, Y2SiO5:Eu3+ and LaPO4:Ce3+,Tb3+ inverse opal photonic crystals [25][26][27], the quenching concentration of the RE emission increases relative to the corresponding bulk phosphors. In other words, the concentration quenching is suppressed considerably due to boundary effects.
To further understand the UCL mechanism, the UCL dynamics of the 4F9/2-4I15/2 transition of the In2O3:Yb3+,(1 mol%)Er3+ REF and NRF samples were measured (Fig. 5), and all the decay curves of the Er3+ ions were fitted with a double-exponential function [28]. From the lifetime constants, we can conclude that: (1) at lower Er3+ doping concentrations, the lifetimes in the REF samples are longer than those in the In2O3:Yb3+,Er3+ NRFs. As is known, the decay time constant is the inverse of the sum of the non-radiative and radiative decay rates. The non-radiative decay rate is significantly affected by large phonon groups at the surface, the crystal phase, lattice defect states and the phonon energy. In the In2O3:Yb3+,Er3+ NRFs, the non-radiative channels near the Er3+ ions, such as lattice defect states and surface large-phonon bands, are increased because of the increased surface-to-volume ratio [29]. (2) In the In2O3:Yb3+,Er3+ NRFs, the lifetime constants decrease more gradually with increasing Er3+ concentration than those of the REF samples (as shown in the inset of Fig. 5), implying that concentration quenching in the NRFs is suppressed. In traditional In2O3:Yb3+,Er3+ phosphors, long-range resonant energy transfer (ET) among Er3+ ions is very severe. Inevitably, luminescence quenching occurs because of ET from Er3+ ions to defect states, and this quenching depends on the doping concentration. In the In2O3:Yb3+,Er3+ NRFs, these ET processes should be largely restrained by the NRF structure, because the size of each nanorod is about 100 nm and the nanorods are well separated. In this case, ET among Er3+ ions can only happen within one nanorod; the photons are then emitted into the air instead of being captured by defect states.
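The double-exponential fitting mentioned above can be illustrated with a short sketch (assuming SciPy is available; the decay trace and starting parameters below are synthetic placeholders, not the measured curves), where the average lifetime is computed with the usual intensity-weighted formula τ_avg = (A1·τ1² + A2·τ2²)/(A1·τ1 + A2·τ2):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2):
    """I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay trace standing in for a measured 4F9/2 -> 4I15/2 curve
t = np.linspace(0, 2000, 400)                        # microseconds
true = double_exp(t, 0.7, 120.0, 0.3, 450.0)
signal = true + np.random.normal(0, 0.005, t.size)   # add noise

p0 = (1.0, 100.0, 0.5, 500.0)                        # initial guesses
(a1, tau1, a2, tau2), _ = curve_fit(double_exp, t, signal, p0=p0)

tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau1 = {tau1:.1f} us, tau2 = {tau2:.1f} us, tau_avg = {tau_avg:.1f} us")
```

Comparing such fitted lifetimes across Er3+ concentrations is what underlies the trends summarised in the inset of Fig. 5.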
Fig. 6 shows the Yb3+ concentration dependent UCL spectra of the In2O3:Yb3+,Er3+ NRFs. From the spectral analysis in Fig. 6, the integrated intensity of the red emission gradually increases, whereas that of the green emissions gradually diminishes, as the Yb3+ concentration increases. The emission spectra of the In2O3:Yb3+,Er3+ NRFs at different excitation powers were also measured (Fig. 7). The red emission is dominant at low excitation power, and the R/G ratio is as high as about 35 when the excitation power is 330 mW. The blue and green emissions increase gradually with increasing excitation power. The R/G ratio versus excitation power is shown in the inset of Fig. 7; it is clearly seen that the R/G ratio decreases with increasing excitation power.
The ln-ln plots of the integrated intensity versus excitation power for the red and green emissions of the In2O3:(15 mol%)Yb3+,xEr3+ (x = 1, 3, 5, 8 mol%) NRFs were investigated (Fig. 8). As is well known, the visible output intensity (I_V) is proportional to the n-th power of the infrared excitation power (I_IR) if the saturation effect can be neglected [30]: I_V ∝ (I_IR)^n, where n is the number of IR photons absorbed per visible photon emitted. The slopes n are determined to be 2.68, 2.62, 2.65 and 2.4 for the 2H11/2-4I15/2 transition, 2.87, 2.62, 2.73 and 2.02 for the 4S3/2-4I15/2 transition, and 1.17, 1.71, 1.79 and 1.86 for the 4F9/2-4I15/2 transition in the In2O3:Yb3+,Er3+ NRFs at Er3+ concentrations of 1 mol%, 3 mol%, 5 mol% and 8 mol%, respectively. The slopes n are >2 for the green emissions and >1 for the red emission of Er3+. Thus the 2H11/2/4S3/2 levels are populated through three-photon processes, and the 4F9/2 level is populated through two-photon ET processes, for all the In2O3:Yb3+,Er3+ NRFs. As far as we know, three-photon population of the 2H11/2/4S3/2 levels has rarely been reported in Er3+/Yb3+ co-doped materials. From the above analysis, the population mechanisms of the 4S3/2 and 4F9/2 levels in the In2O3:Yb3+,Er3+ NRFs are quite different from those in other UC nanocrystals. Fig. 9 presents the UC population and emission diagram of the Er3+/Yb3+ doped system under 980 nm excitation. The 4I11/2 level is populated from the ground state 4I15/2 via ET from neighbouring Yb3+ ions. Subsequently, the 4I13/2 level is populated through the non-radiative relaxation 4I11/2-4I13/2 with the assistance of large phonon groups, which widely exist in NCs prepared via the hydrothermal method [31]. In the second step, the UC population of 4I11/2-4F7/2 and 4I13/2-4F9/2 occurs via ET and excited-state absorption, generating the red emission (4F9/2-4I15/2). The cross relaxation 4F7/2 + 4I11/2 → 4F9/2 + 4F9/2 is significant for populating the 4F9/2 level when the excitation power or the Yb3+ concentration is sufficiently high. When the Yb3+ concentration is lower in the In2O3:Yb3+,Er3+ NRFs, the ET process 4I13/2-4F9/2 becomes more effective. This is a main reason for the increased R/G ratio in the In2O3:Yb3+,Er3+ NRFs.
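The slope n quoted above is simply the gradient of a straight-line fit to ln(intensity) versus ln(excitation power). A minimal sketch of that fit (with made-up data points standing in for the integrated intensities read from Fig. 8) might look like this:

```python
import numpy as np

# Placeholder data: excitation power (mW) and integrated emission intensity (a.u.)
power = np.array([330, 400, 500, 620, 750, 900])
intensity = np.array([1.0, 1.7, 3.1, 5.6, 9.3, 15.2])

# I_V ~ I_IR^n  =>  ln(I_V) = n * ln(I_IR) + const
n_slope, intercept = np.polyfit(np.log(power), np.log(intensity), 1)
print(f"fitted slope n = {n_slope:.2f}")  # ~ number of IR photons per visible photon
```

A slope near 2 or 3 obtained in this way is what distinguishes the two- and three-photon populating routes discussed above.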
The relation between the R/G ratio and the Yb3+ concentration can be explained as follows. The population of the 4F9/2 level mainly originates from ET upconversion processes. The ET from Yb3+ ions to Er3+ ions increases with increasing Yb3+ concentration, so the red emission (4F9/2-4I15/2) is enhanced because of the increased ET. The quenching of the green emission with increasing Yb3+ concentration is due to the back ET from Er3+ to Yb3+ ions (4S3/2(Er3+) + 2F7/2(Yb3+) → 4I13/2(Er3+) + 2F5/2(Yb3+)), which depopulates the 4S3/2 level. As is well known, the phonon energy of In2O3 is larger than that of Yb2O3, which may increase the non-radiative relaxation; however, non-radiative relaxation would quench all the emissions (both red and green). The quenching of the green emissions is therefore mainly attributed to the back ET from Er3+ ions to Yb3+ ions, which is confirmed by the fluorescence decay of the 4S3/2 level. Finally, the power-dependent UCL in the In2O3:Yb3+,Er3+ NRFs can be explained. The three-photon population process 4F9/2-4G11/2 becomes more effective with increasing excitation power, leading to depopulation of the 4F9/2 level and enhanced population of 4G11/2. The populations of the 4F3/2 and 4S3/2/2H11/2 levels also increase due to multi-phonon relaxation, so the R/G ratio decreases as the excitation power increases.
Conclusion
In2O3:Yb3+,Er3+ NRFs have been successfully prepared by a simple hydrothermal method combined with a subsequent calcination process. It is found that concentration quenching is dramatically suppressed in the In2O3:Yb3+,Er3+ NRFs. The R/G ratio changes depending on the NP morphology, the Yb3+ concentration and the excitation power. The green emissions mainly originate from three-photon excitation in In2O3:Yb3+,Er3+ NRFs with higher Yb3+ concentration, which is different from other host materials. The In2O3:Yb3+,Er3+ NRFs are potential candidates for lighting and solar cell applications.
3.1
Fig. 1 shows the XRD patterns of the calcined In2O3:Yb3+,Er3+ products and the corresponding reference samples (REF). It can be seen that the products have the pure cubic structure according to JCPDS card no. 06-0416, with space group Ia-3 (no. 206) and a lattice parameter of a = 10.118 Å. No diffraction peaks from any impurities were observed, indicating the high purity of the products.
The 4F3/2-4I15/2 (blue), 2H11/2, 4S3/2-4I15/2 (green) and 4F9/2-4I15/2 (red) transitions can be clearly identified. The intensity ratio of the red (4F9/2-4I15/2) emission to the green (2H11/2, 4S3/2-4I15/2) emissions (R/G) in the In2O3:Yb3+,Er3+ NRFs is higher than that in the REF sample. The UCL properties can be influenced by three main factors: the nanocrystal size, the crystallinity and the morphology. From the XRD and SEM analyses, we can deduce that the crystallinity and size of the In2O3:Yb3+,Er3+ NRFs and the REF samples are nearly the same, indicating that size and crystallinity are probably not the dominant reasons for the change in the R/G ratio. The remarkable change in the R/G ratio between the In2O3:Yb3+,Er3+ NRFs and the REF samples can therefore be attributed to the change of morphology. The In2O3:Yb3+,Er3+ NRFs, consisting of well-separated nanorods, have a large specific surface area, which introduces more surface ligands and surface defects. These groups and defects can bridge some non-radiative relaxation channels, such as 4I13/2-4I11/2 and 4S3/2-4F9/2, favouring the generation of the 4F9/2-4I15/2 red emission. In order to verify this conclusion, In2O3:(10 mol%)Yb3+,(1 mol%)Er3+ powder was prepared by the sol-gel method, and the UCL spectrum of this In2O3:Yb3+,Er3+ powder (size > 10 μm) was measured under 980 nm excitation, shown in Fig. S2.† We find that the R/G ratio in the powder is also dramatically lower than that in the In2O3:Yb3+,Er3+ NRFs. The integrated UCL intensity of the In2O3:Yb3+,Er3+ NRFs is lower than that of the REF sample. However, the UCL intensity of the In2O3:Yb3+,Er3+ NRFs exceeds that of the REF samples on further increasing the doping concentration of Er3+ (shown in the inset of Fig. | 4,348.4 | 2017-11-27T00:00:00.000 | [
"Materials Science"
] |
Identification of influential probe types in epigenetic predictions of human traits: implications for microarray design
Background CpG methylation levels can help to explain inter-individual differences in phenotypic traits. Few studies have explored whether identifying probe subsets based on their biological and statistical properties can maximise predictions whilst minimising array content. Variance component analyses and penalised regression (epigenetic predictors) were used to test the influence of (i) the number of probes considered, (ii) mean probe variability and (iii) methylation QTL status on the variance captured in eighteen traits by blood DNA methylation. Training and test samples comprised ≤ 4450 and ≤ 2578 unrelated individuals from Generation Scotland, respectively. Results As the number of probes under consideration decreased, so too did the estimates from variance components and prediction analyses. Methylation QTL status and mean probe variability did not influence variance components. However, relative effect sizes were 15% larger for epigenetic predictors based on probes with known or reported methylation QTLs compared to probes without reported methylation QTLs. Relative effect sizes were 45% larger for predictors based on probes with mean Beta-values between 10 and 90% compared to those based on hypo- or hypermethylated probes (Beta-value ≤ 10% or ≥ 90%). Conclusions Arrays with fewer probes could reduce costs, leading to increased sample sizes for analyses. Our results show that reducing array content can restrict prediction metrics and careful attention must be given to the biological and distribution properties of CpG probes in array content selection. Supplementary Information The online version contains supplementary material available at 10.1186/s13148-022-01320-9.
Background
DNA methylation (DNAm) involves the addition of methyl groups to the fifth carbon of cytosine bases, typically in the context of cytosine-guanine dinucleotides (CpG sites). There are approximately 28 million CpG sites across the human genome [1,2], of which 60-80% are methylated [3]. Illumina DNAm arrays are popular technologies for profiling genome-wide DNAm. The probe content on these arrays has been selected by experts to optimise the balance between gene coverage and array size. The Infinium HumanMethylation 450K and HumanMethylationEPIC (EPIC) arrays cover 99% of RefSeq genes and contain probes that interrogate 485,577 and 863,904 CpG sites, respectively [4,5].
There are two primary methods to quantify the amount of methylation at CpG sites interrogated by Infinium probes. First, the Beta-value (or B value) is a ratio of the methylated probe intensity to the overall measured intensity (sum of methylated and unmethylated probe intensities) [6,7]. The Beta-value ranges from 0 to 100% where 100% implies complete methylation across all copies of the site in a given sample. Second, M values reflect the log2 ratio of methylated probe intensities versus unmethylated probe intensities. Positive M values mean that the site is likely more methylated than unmethylated in a given sample, and a value close to zero indicates that the site is equally methylated and unmethylated. It has been found that approximately 0.5% of Illumina probes show significantly different estimates for methylation intensities when measured by other methods, such as Methylation Capture bisulfite sequencing [8]. Here, we focus on the Beta-value (derived from Illumina arrays) as it has a simpler biological interpretation and therefore allows us to intuitively categorise probes into hypo-and hypermethylated sites (mean Beta-value ≤ 10% or ≥ 90% across individuals), which may reflect invariant probes.
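As a concrete illustration of these two quantities, the sketch below is an assumption-laden toy example rather than the pipeline used in the study; the offsets follow the commonly used defaults of 100 for Beta-values and 1 for M-values, which is an assumption and not something stated in the text.

```python
import numpy as np

def beta_value(meth, unmeth, offset=100):
    """Beta = M / (M + U + offset); bounded between 0 and 1 (0-100%)."""
    return meth / (meth + unmeth + offset)

def m_value(meth, unmeth, alpha=1):
    """M-value = log2((M + alpha) / (U + alpha)); unbounded, ~0 when M == U."""
    return np.log2((meth + alpha) / (unmeth + alpha))

# Toy intensities for three probes: hypomethylated, intermediate, hypermethylated
meth = np.array([300.0, 5000.0, 9000.0])
unmeth = np.array([9000.0, 5200.0, 250.0])

print(np.round(beta_value(meth, unmeth), 3))  # e.g. [0.032, 0.485, 0.963]
print(np.round(m_value(meth, unmeth), 2))     # e.g. [-4.9, -0.06, 5.16]
```

The bounded Beta-value is what allows the hypo/hypermethylated cut-offs (≤ 10% or ≥ 90%) used throughout the study.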
Illumina DNAm data are routinely utilised in health outcomes research. First, the arrays are employed in association studies to uncover individual genomic loci associated with disease states and other phenotypes [9]. Second, the total array content (450K or EPIC) can be used to estimate the contribution of DNAm to inter-individual variability in human traits [10,11]. Third, machine learning algorithms can be applied to DNAm data to identify weighted linear combinations of probes that predict numerous phenotypes, including chronological age, smoking status and body mass index [12][13][14].
Genetic, demographic and environmental factors contribute to inter-individual variability in CpG methylation [15]. Common genetic factors that correlate with CpG methylation are termed methylation quantitative trait loci (mQTLs) and explain on average 15% of the additive genetic variance of DNAm [16]. Variation in CpG methylation might also reflect technical artefacts, including heterogeneity in sample preparation and batch effects [17]. A large number of probes exhibit low levels of inter-individual variation in a given tissue, including blood [18][19][20][21][22]. Several methods have been proposed to remove sites that are non-variable in diverse tissue types. The methods include mixture modelling, principal component analyses and empirically derived data reduction strategies [23][24][25]. In the context of locus discovery, these methods reduce the severity of multiple testing correction and might improve power to detect epigenetic associations with phenotypes. However, it is unclear if low-variability CpG probes affect the amount of phenotypic variance captured by DNAm. There is also a lack of studies that examine the influence of probe intensity characteristics on DNAm-based predictors.
Probes with high inter-individual variation in DNA methylation might be more informative for capturing variance in human traits compared to those that are invariant (i.e. low inter-individual variation). Here, we tested the hypothesis that invariant probes do not influence the amount of variance in phenotypes captured by Illumina array content. We utilised blood DNAm data and eighteen phenotypes from 4450 unrelated volunteers in the population-based cohort Generation Scotland as our training sample [26,27]. We compared the performance of five primary sets of probes. The first set of probes, or the reference set, included all probes common to the 450K and EPIC arrays (n = 393,654 probes). We focussed on probes common to both arrays rather than focussing on the EPIC array alone in order to ensure generalisability to other and older cohort studies, which employ the 450K array. In the second set, we excluded hypo-and hypermethylated probes (e.g. mean Beta-value ≤ 10% or ≥ 90% across individuals). We also removed probes with mQTLs reported in the largest genome-wide association study on blood CpG methylation to date [16]. We employed these exclusion criteria in an effort to retain variable probes whose variability might largely reflect environmental contributions (n = 115,746 probes). The third, fourth and fifth sets included the 50,000, 20,000 and 10,000 most variable probes (i.e. highest standard deviations) without known mQTLs (Fig. 1).
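A minimal sketch of how such probe subsets could be assembled (hypothetical column names and inputs; the actual study used its own quality-controlled Generation Scotland data and the GoDMC mQTL list) is shown below:

```python
import pandas as pd

# Hypothetical inputs:
#   betas: DataFrame, rows = individuals, columns = CpG probes (Beta-values in %)
#   mqtl_probes: set of probe IDs with a reported mQTL (e.g. from GoDMC)
def build_probe_sets(betas: pd.DataFrame, mqtl_probes: set):
    mean_beta = betas.mean(axis=0)
    sd_beta = betas.std(axis=0)

    all_probes = betas.columns
    variable = mean_beta.between(10, 90).values     # exclude hypo/hyper-methylated
    non_mqtl = ~all_probes.isin(mqtl_probes)

    variable_non_mqtl = all_probes[variable & non_mqtl]
    # Rank the remaining probes by standard deviation and take the top-k subsets
    ranked = sd_beta[variable_non_mqtl].sort_values(ascending=False)
    return {
        "all_available": list(all_probes),
        "variable_non_mqtl": list(variable_non_mqtl),
        "top_50k": list(ranked.index[:50_000]),
        "top_20k": list(ranked.index[:20_000]),
        "top_10k": list(ranked.index[:10_000]),
    }
```

This mirrors the logic of Fig. 1: filter on mean Beta-value and mQTL status first, then rank the remainder by standard deviation.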
We used two methods to investigate how the number of probes considered in a probe set and how their distribution properties influenced the amount of phenotypic variance captured by DNAm. First, we estimated the amount of phenotypic variation captured by DNAm in the training sample (reflecting within-sample trait variance). For this, we used OmicS-data-based complex trait analysis (OSCA) software in which the correlation structure among all input probes is used to create an omic-databased relationship matrix (ORM). The ORM is then used to estimate variance components through the restricted maximum likelihood method (REML) [10]. In essence, these estimates represent an upper bound of trait variance captured by DNAm in a given sample. Second, we applied penalised regression models to build DNAmbased predictors of all eighteen traits in the training sample. In DNAm prediction analyses, the substantially higher number of probes on arrays (features) when compared to observations (individual phenotype values) can lead to overfitting. For example, a predictor may perform well in the training data set but not in an external, independent data set. DNAm-based predictors derived from LASSO or elastic net penalised regression models often only consider small numbers of probes (derived from all input probes) to avoid such overfitting. The variances explained by these predictors are also smaller than those from REML, reflecting disparate methodologies and analysis objectives. In summary, REML utilises all input probes and estimates within-sample phenotypic variance and penalised regression considers a small subset of these probes to estimate the amount of variance captured by DNAm in out-of-sample settings (Fig. 2).
We compared results from the five primary sets of probes to test our primary hypothesis, and these probe sets had decreasing numbers of probes and increasing mean variabilities. In further analyses, we also considered secondary subsets of probes with (i) an mQTL (with a mean Beta-value between 10 and 90%), (ii) hypo- or hypermethylated probes (with a mean Beta-value ≤ 10% or ≥ 90%) and (iii) genome-wide significant EWAS Catalog probes (at P < 3.6 × 10-8). By comparing results from the primary and secondary probe sets, we were able to test the influence of (i) the number of probes considered, (ii) mean probe variability and (iii) methylation QTL status on the variance captured in eighteen traits by blood DNA methylation. Further, we compared results from these probe sets against those from randomly sampled sets of probes of equal size in order to determine whether observed estimates were significantly different from those expected by chance.
Fig. 1 Overview of analysis strategy in the present study. We tested whether subsets of probes showed similar predictive capacities to total DNAm array content (1) ('all available probes', n = 393,654). We first identified subsets of interest. We restricted primary analyses to probes without known genetic influences (i.e. non-mQTL probes) and those with mean Beta-values (β) between 10 and 90% (2). These probes were termed 'variable non-mQTL probes' (n = 115,746). We then extracted the 50,000, 20,000 and 10,000 probes with the highest standard deviations from the pool of 115,746 non-mQTL probes (3). In our primary analyses, we compared the predictive performances of these four probe subsets against that of the full set of probes used in our analyses (4). In further analyses, we tested the relative performances of subsets based on (i) probes without known mQTLs and with mean Beta-value between 10 and 90% (shown in green in (2), highlighted in (3)), (ii) probes with known mQTLs and with mean Beta-value between 10 and 90% (shown in red in (2)) and hypo- or hypermethylated probes (mean Beta-value ≤ 10% or ≥ 90%, also shown in red in (2)). DNAm, DNA methylation; mQTL, methylation quantitative trait locus; SD, standard deviation. Image created using Biorender.com
Results
Demographics and summary data for all phenotypes are shown in Additional file 1: Table S1. The phenotypes were chronological age, seven biochemical traits (creatinine, glucose, high-density lipoprotein cholesterol, potassium, sodium, total cholesterol and urea) and ten complex traits (body fat percentage, body mass index, diastolic blood pressure, forced expiratory volume in one second (FEV), forced vital capacity (FVC), heart rate (average beats/minute), self-reported alcohol consumption, smoking pack years, systolic blood pressure and waist-to-hip ratio). The mean age in the training sample was 50.0 years (SD = 12.5), and the sample was 61.4% female. The test sample showed a similar mean age of 51.4 years (SD = 13.2) with a slightly lower proportion of females (56.3%). Values for all other phenotypes were comparable between the training and test samples.
Phenotypic variance captured by DNAm decreases with the number of probes considered
We compared variance component estimates from 'all available probes' (n probe = 393,654) and four subsets of probes with decreasing sizes and increasing mean variabilities (see Methods, Fig. 1). The subsets contained probes with mean Beta-values between 10 and 90% and without underlying mQTLs as reported by the GoDMC mQTL consortium (i.e. were non-mQTL probes) [16]. The first of these four subsets contained 115,746 probes, which represented all probes without reported mQTLs and with mean Beta-values between 10 and 90% (i.e. 'variable non-mQTL probes'). The remaining three probe subsets harboured the 50,000, 20,000 and 10,000 most variable of the non-mQTL probes, showing the highest standard deviations in the training sample (n = 4450).
Fig. 2 Distinction between two primary analysis methods in the present study. We employed both variance components and penalised regression models in order to examine the amount of phenotypic variance captured by each respective probe set (n = 18 in total, see Methods). Variance component estimates were obtained using the restricted maximum likelihood method in OSCA. Here, we were able to estimate the amount of phenotypic variance captured by all probes in a given probe set in the training sample (n ≤ 4450). We also employed penalised regression to build linear DNAm-based predictors of traits using probes in a given probe set in the training sample. We then applied the predictors to the test sample (n ≤ 2578) in order to estimate how much variance in a given trait the predictor could explain over basic covariates (such as age and sex). This coefficient reflected the incremental R2 estimate and pertained to an out-of-sample setting as the predictor was applied to a sample outside of that in which it was derived. LASSO, least absolute shrinkage and selection operator; OSCA, OmicS data-based complex trait analysis. Image created using Biorender.com
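A schematic of the penalised-regression step described in the Fig. 2 legend (a simplified scikit-learn stand-in rather than the actual pipeline, with illustrative variable names and covariate handling) could look like this:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import r2_score

def incremental_r2(X_train, y_train, X_test, y_test, covars_train, covars_test):
    """Train a LASSO DNAm predictor, then ask how much test-set variance it
    explains on top of basic covariates (null model: covariates only)."""
    lasso = LassoCV(cv=5, n_alphas=50).fit(X_train, y_train)

    null = LinearRegression().fit(covars_train, y_train)
    r2_null = r2_score(y_test, null.predict(covars_test))

    full_train = np.column_stack([covars_train, lasso.predict(X_train)])
    full_test = np.column_stack([covars_test, lasso.predict(X_test)])
    full = LinearRegression().fit(full_train, y_train)
    r2_full = r2_score(y_test, full.predict(full_test))

    return r2_full - r2_null  # incremental R^2 of the DNAm predictor
```

The key design point is that the incremental R2 is always evaluated in the held-out test sample, which is what makes it an out-of-sample metric rather than a within-sample variance component.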
The proportion of phenotypic variance captured by 'all available probes' (n probe = 393,654) ranged from 23.7% (standard error (se) = 6.0%) for blood potassium levels to 79.6% (se = 2.1%) for smoking pack years (Additional file 1: Table S2). The average proportion of variance captured across seventeen biochemical and complex traits was 54.0%. Mean estimates were 44.1% and 61.0% for biochemical and complex traits, respectively (Additional file 2: Fig. S1).
The four remaining probe sets containing 115,746, 50,000, 20,000 and 10,000 probes, on average, captured 47.9%, 40.6%, 30.4% and 21.9% of phenotypic variance across seventeen traits, excluding chronological age (Additional file 1: Table S2). Generally, the estimates were not significantly different from sub-sampled probe sets of equal size, which were sampled from 'all available probes' (Additional file 1: Table S3). An exception to this was smoking pack years (P < 0.05). Figure 3 shows the four traits with the highest proportion of phenotypic variance captured by probe values.
Performance of DNAm-based predictors decreases with the number of probes considered
DNAm-based predictors based on 'all available probes' (n probe = 393,654) captured between 0.74% (forced vital capacity) and 46.0% (smoking pack years) of trait variance in the test sample (Additional file 1: Table S4). DNAm-based predictors developed from 'all available probes' on average captured 9.1% of trait variance (Additional file 2: Fig. S2).
Fig. 3 Variance captured in complex traits by all available probes and four subsets of decreasing size. Restricted maximum likelihood was used to estimate variance components in the training sample (n ≤ 4450, OSCA software). The four traits (out of seventeen biochemical and complex traits) with the highest proportion of variance captured by DNAm are shown. Five different sets of probes were compared. 'All available probes' denotes probes that were common to the Illumina EPIC and 450K arrays and passed quality control procedures in the training sample within Generation Scotland (n = 393,654 probes). The 'variable non-mQTL probes' set consisted of probes without reported genetic influences (mQTLs) and with mean Beta-values between 10 and 90%. The remaining three probe subsets contained the 50,000, 20,000 and 10,000 most variable non-mQTL probes (ranked by their standard deviations). The five sets of probes therefore had decreasing numbers of probes but increasing mean variabilities. Vertical bars show 95% confidence intervals. DNAm, DNA methylation; mQTL, methylation quantitative trait locus; OSCA, OmicS-data-based complex trait analysis
DNAm-based predictors developed from the four subsets of non-mQTL probes (in order of decreasing size) captured 6.7%, 6.6%, 5.6% and 5.0% of phenotypic variation. The four traits with the highest incremental R 2 estimates are shown in Fig. 4.
The performances of the four subsets of non-mQTL probes were weaker for biochemical measures than complex traits. For biochemical measures, relative effect sizes were 19.1-38.7% of the magnitude of estimates from 'all available probes' . The corresponding estimates were 47.5-74.2% for complex traits (Additional file 1: Table S4). Incremental R 2 estimates were comparable to maximal R 2 estimates from the literature achieved with similar, linear DNAm-based predictors. These analyses are distinct from the earlier variance component analyses and reflect the performance of DNAm-based predictors in samples external to those in which they were developed (Additional file 1: Table S5). Incremental R 2 estimates from the four probe subsets were also not significantly different from sub-sampled sets of equal size (Additional file 1: Table S6).
Subsets of probes capture similar amounts of variation in chronological age as total array content
Using REML, 'all available probes' captured 100% of variability in chronological age (n probe = 393,654). Subsets that contained 115,746, 50,000 and 20,000 probes also captured 100% of the variance. The subset containing the 10,000 most variable non-mQTL probes captured only 92.1% (se = 0.9%, Additional file 1: Table S7).

Fig. 4 DNAm-based prediction of complex traits using all available probes and four subsets of decreasing size. LASSO regression was used to build blood DNAm-based predictors of seventeen biochemical and complex traits (n ≤ 4450 training sample and n ≤ 2578 test sample). The four traits with the highest proportion of variance captured by DNAm predictors in the test sample are displayed (incremental R² estimates above the null model, see main text). The first set of probes included those that passed quality control in the training sample, were common to both the EPIC and 450K arrays and included both probes with known methylation QTLs (mQTLs) and probes without known mQTLs reported in the GoDMC consortium. The next four sets of probes included non-mQTL probes only and had decreasing numbers of probes but increasing mean variabilities. DNAm, DNA methylation; HDL, high-density lipoprotein; LASSO, least absolute shrinkage and selection operator; mQTL, methylation quantitative trait locus
An epigenetic age predictor based on 'all available probes' explained 91.7% of the variance in chronological age in the test sample (n = 2578). The R 2 estimates from four subsets (in order of decreasing size) were 87.4%, 87.7%, 85.7% and 83.9%, respectively (Additional file 1: Table S8). The estimates were not significantly different from those in randomly sampled subsets with an equivalent number of loci.
Highly variable probes are enriched for intergenic and upstream features
We tested whether the most variable set of probes, i.e. the 10,000 most variable non-mQTL probes, were over or under-represented for certain genomic features. We compared genomic annotations from this subset to annotations from 1000 sub-sampled sets of 10,000 probes, which were drawn from all available non-mQTL probes (n = 115,746, see Methods). Highly variable probes were enriched in intergenic sites, 5'UTR regions and sites lying 200-1,500 bases upstream from a transcription start site (range of fold enrichment (FE) = [1.1, 1.2], FDR-adjusted P = 0.001). They were also significantly under-represented within 3'UTR regions and gene bodies (FE = 0.8, P = 0.001; Additional file 1: Table S9).
Methylation QTL status and mean probe variabilities do not influence variance component estimates
We performed further secondary analyses to determine the relative predictive capacities of four classes of probes. The first three classes were: (i) probes without a known mQTL and mean Beta-value between 10 and 90% (considered in primary analyses), (ii) probes with a known mQTL and mean Beta-value between 10 and 90% and (iii) probes with mean Beta-value ≤ 10% or ≥ 90%, that is, hypo-or hypermethylated probes (containing both mQTL and non-mQTL probes). The latter two classes are shown as the excluded probes in Fig. 1. We also considered a fourth class, which was EWAS Catalog probes (n = 38,853, see Methods). The EWAS Catalog probes contained all three of the other classes: > 65% were sites with an mQTL and < 5% were hypo-or hypermethylated (Additional file 1: Table S10).
Across all classes, variance estimates decreased with the number of probes under consideration (Table 1). All probe classes, when matched for the number of probes, showed comparable variance component estimates (Table 1; Additional file 1: Tables S11-S13). An exception to this involved subsets that included 115,746 probes. Probes with mean Beta-values between 10 and 90% on average captured 10% more trait variance than hypo- or hypermethylated probes (mean Beta-value ≤ 10% or ≥ 90%) at this threshold. The probe classes captured similar amounts of variance in age (Additional file 1: Table S14).
Probes with methylation QTLs and intermediate Beta-values are important for out-of-sample trait predictions
Epigenetic predictors based on EWAS Catalog probes (n = 38,853) captured as much variance as those based on 'all available probes' (n probe = 393,654). The 20,000 and 10,000 most variable EWAS Catalog probes showed estimates that were 91.5% and 85.3% of the magnitude of those from 'all available probes' (Additional file 1: Tables S15-S17).
Epigenetic predictors based on probes with an mQTL (n = 133,758), and the 115,746 most variable of these probes, also captured as much phenotypic variance as predictors based on 'all available probes' (Additional file 1: Table S15). Exceptions included predictors for creatinine and systolic blood pressure (60-70% of estimates from 'all available probes').
The relative effect sizes (i.e. relative incremental R 2 estimates) were on average 15% larger for probes with mQTLs versus those without GoDMC mQTLs. Relative effect sizes were also approximately 45% greater for probes with mean Beta-values between 10 and 90% when compared to hypo-or hypermethylated probes with mean Beta-values ≤ 10% or ≥ 90% ( Table 2, Additional file 1: Tables S15-S17).
The performances of age predictors were comparable for all classes except hypo-and hypermethylated probes, which showed R 2 estimates that were 5-10% lower than other probe classes (Additional file 1: Table S18).
Discussion
The amount of phenotypic variance captured by DNAm decreased in all traits as the number of probes under consideration decreased. Further, variance component estimates were similar for subsets with and without reported genetic influences and subsets with and without hypo- and hypermethylated probes. The estimates were also comparable to sub-sampled subsets of equal size. Therefore, the number of probes considered is an important determinant of the amount of within-sample trait variance that can be captured by DNAm. Methylation QTL status and mean probe variabilities did not appear to impact variance component estimates.

By contrast, epigenetic predictors based on probe subsets with mQTLs generally outperformed those that contained probes without GoDMC mQTLs. Similarly, probes that had mean Beta-values between 10 and 90% outperformed subsets that contained hypo- and hypermethylated probes in out-of-sample trait predictions. Therefore, methylation QTL status and mean Beta-values are important factors in the performance of epigenetic trait predictions. As with variance component analyses, decreasing the number of probes considered resulted in poorer performing epigenetic predictors.

Highly variable probes were enriched for intergenic sites, which is consistent with the existing literature [21,28,29]. However, the most variable probes that fall outside of CpG islands can be poorly captured by arrays [30]. The list of the most variable probes might show variation between epigenomic data sets given differences in normalisation methods and systematic differences in cohort profiles. We also did not correct for additional covariates, such as cell-type heterogeneity, which could lead to differences in estimates for probe variabilities. However, OSCA, or OmicS-data-based complex trait analysis, can account for unmeasured confounders and correlation structures between distal probes induced by confounders [10]. This is possible owing to the creation of an ORM, which describes the correlation structure between all input CpGs in a given data set. The ORM is then used to estimate the joint effects of all probes on the phenotype, providing an estimate of the proportion of phenotypic variance captured by DNAm through restricted maximum likelihood.

We selected standard deviations to measure variability in probe methylation levels. However, some probes may show non-normal distributions of Beta-values or multimodal distributions (such as probes with mQTLs). This complicates the general application of one measure of variability across all probes. Nevertheless, our results showed comprehensively that decreasing the number of available sites reduced variance estimates regardless of mQTL status or mean Beta-value.
As part of our primary and secondary analyses, we separated Illumina probes into those that have a genome-wide significant mQTL reported in the GoDMC mQTL database and those that do not have an mQTL reported in this list [16]. The GoDMC resource represents the largest blood mQTL data set for Illumina probes. However, it must be acknowledged that it does not represent an exhaustive list of all possible mQTLs, whether acting in cis or in trans. Most probes are likely to have a genetic variance component, but effect sizes for mQTLs vary substantially, with most probes explaining less than 5-10% of inter-individual variation in DNAm [16,31]. Future work is needed to filter probes by the proportion of variance explained by mQTLs in order to identify those probes with highly influential mQTLs. The impact of probes with strong genetic influences on epigenetic predictions should also be examined in cohorts of different ethnicities and clinical populations, which was not possible in the present study.

Importantly, our strategy of stratifying probes by mQTL status replicates that of existing studies that examine the technical and distribution properties of Illumina probes. For instance, Sugden et al. also stratified probes into those with known mQTLs and those without mQTLs [31] and showed that probes indexed by mQTLs are more reliably measured than their non-mQTL counterparts [32]. The superior performance of epigenetic predictors from mQTL subsets compared to non-mQTL subsets in our study could reflect the higher measurement reliability of mQTL probes, and the exclusion of loci with strong biological signals in the predictors based on non-mQTL probes. As our data and findings were derived from whole blood, methodological insights into the role of Illumina probe types on variance analyses should only be used to guide future studies with whole blood samples.

High R² estimates from subsets based on EWAS Catalog probes likely reflect contributions from all probe classes (i.e. probes with and without an mQTL and hypo- or hypermethylated sites) and the fact that many of the traits considered in this study feature in the EWAS Catalog. Furthermore, traits with strong epigenetic correlates were the most robust to changes in probe classification or the number of probes considered. For instance, REML suggested that 20,000 probes were sufficient to capture 100% of inter-individual variation in chronological age. However, only 90% of the variance in age could be explained by subsets containing 10,000 probes. Previously, it has been shown that 100% of the variance in chronological age is captured by DNAm in Generation Scotland and the Systems Genomic of Parkinson's Disease consortium [33]. Further, permutation testing suggested that these results did not reflect overestimation. The REML estimates are broadly analogous to chip-based heritability estimates in genetic analyses, reflecting how much variance in a trait can be explained by the omics measure in a given sample. By contrast, the aim of the penalised regression analyses was to generate linear combinations of probes that are informative for predicting age or other traits, which we applied to a separate but similar sample. Our incremental R² estimates (~90%) are in line with, albeit lower than, those from existing epigenetic age indices, which employ additional steps to ensure highly accurate age predictors [14,33,34].
Here, our aim was to assess the influence of the number of probes and probe distribution properties on epigenetic predictions of age and seventeen lifestyle and biochemical traits. With respect to age, we show that (i) small subsets of probes can capture age-related changes in DNAm, (ii) DNAm-based age predictors are not strongly affected by mQTL status and (iii) probes that are hypo-or hypermethylated are less informative for predicting age than probes with Beta-values between 10 and 90%.
Conclusions
Restricting DNAm array probes to the most variable sites could improve power in association studies whilst minimising array content. We show that this approach hampers variance component analyses and that phenotypes with strong epigenetic correlates are the most robust to changes in the number of available probes. Further, loci with an mQTL and with intermediate DNAm levels are central to epigenetic predictions of clinically relevant phenotypes. Our results provide methodological considerations towards the goal of selecting reduced array content from existing methylation microarrays, which can afford more cost-effective methylomic analyses in large-scale population biobanks. However, substituting or removing probes results in alterations to chip design and possibly the background physiochemical properties of the array; the transferability of the present findings to other, related platforms therefore cannot be assumed. Nevertheless, our data demonstrate that strategies aiming to minimise array content by using fewer probes must carefully select CpG or probe content in order to maximise epigenetic predictions of human traits.
Preparation of DNA methylation data
DNAm was measured using the Infinium MethylationEPIC BeadChip at the Wellcome Clinical Research Facility, Western General Hospital, Edinburgh. Methylation typing in the first set (n = 5200) and the second set (n = 4585) was performed using 31 batches each. Full details on the processing of DNAm data are available in Additional file 3. Poor-performing and sex chromosome probes were excluded, leaving 760,943 and 758,332 probes in the first and second sets, respectively. Participants with unreliable self-report questionnaire data (self-reported 'yes' for all diseases in the questionnaire), saliva samples and possible XXY genotypes were excluded, leaving 5087 and 4450 samples in the first and second set, respectively. In the first set, there were 2578 unrelated individuals (common SNP GRM-based relatedness < 0.05). In the second set, all 4450 individuals were unrelated to one another. Individuals in the first set were also unrelated to those in the second set. The second set (profiled in 2019) was chosen for OSCA models and as the training sample in DNAm-based prediction analyses given its larger sample size (n = 4450). Unrelated individuals from the first set (profiled in 2016) formed the test sample in DNAm-based prediction analyses (n = 2578). Linear regression models were used to correct probe Beta-values for age, sex and batch effects separately within the training (n = 4450) and test samples (n = 2578).
Identification of variable probes in blood
There were 758,332 sites in the training sample (n = 4450) following quality control. First, we restricted sites to those that are common to the 450K and EPIC arrays to allow for generalisability to other epigenetic studies (n = 398,624 probes). We excluded loci that were predicted to cross-hybridise and those with polymorphisms at the target site, which can alter probe binding (n probe = 4970 excluded, 393,654 remaining) [36,37]. These 393,654 probes represented the reference set in our analyses, which we defined as 'all available probes' .
Preparation of phenotypic data
Eighteen traits were considered in our analyses. Full details on phenotype preparation are provided in Additional file 3. The seventeen biochemical and complex traits (excluding chronological age) were trimmed for outliers (i.e. values more than 4 SDs away from the mean were removed). Fifteen phenotypes (excluding FEV and FVC) were regressed on age, age-squared and sex. FEV and FVC were regressed on age, age-squared, sex and height (in cm). Correlation structures for raw (i.e. unadjusted) and residualised phenotypes are shown in Additional file 2: Fig. S3 and S4, respectively. For age models, DNAm and chronological age (in years) were unadjusted. Residualised phenotypes were entered as dependent variables in OSCA or penalised regression models.
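As a rough illustration of this preparation step, the outlier trimming and residualisation could be sketched as follows. This is a minimal pandas/statsmodels example, not the authors' code; the data frame `pheno` and its column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def prepare_phenotype(pheno: pd.DataFrame, trait: str, extra_covs=()):
    """Trim values more than 4 SDs from the mean, then residualise the trait
    on age, age-squared and sex (plus any extra covariates, e.g. height for FEV/FVC)."""
    df = pheno.copy()
    z = (df[trait] - df[trait].mean()) / df[trait].std()
    df = df.loc[z.abs() <= 4]                               # outlier trimming
    covs = " + ".join(["age", "I(age**2)", "sex", *extra_covs])
    model = smf.ols(f"{trait} ~ {covs}", data=df).fit()
    return model.resid                                      # residualised phenotype

# e.g. fvc_resid = prepare_phenotype(pheno, "fvc", extra_covs=("height",))
```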
Variance component analyses
OSCA software was used to estimate the proportion of phenotypic variance in eighteen traits captured by DNAm in the training sample (n ≤ 4450) [10]. In this method, an omic-data-based relationship matrix (ORM) describes the co-variance matrix between standardised probe values across all individuals in a given data set. Here, the ORM was derived from age-, sex- and batch-adjusted Illumina probe data and is fitted as a random effect component in mixed linear models. Phenotypes were pre-corrected for covariates as described in the previous section. Restricted maximum likelihood (REML) was applied to estimate the variance components, i.e. the amount of phenotypic variance captured by all DNAm probes used to build an ORM. We developed 18 ORMs in total reflecting all probe sets described: (i) one for 'all available probes', (ii) four for the 'variable non-mQTL probe' sets, (iii) five for 'variable mQTL probe' sets, (iv) five for hypo- and hypermethylated probe sets and (v) three for EWAS Catalog probes. The probe sets are outlined in full in Additional file 1: Table S10.
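The core of this approach can be illustrated with a minimal numpy sketch: build the ORM from standardised probe values, then estimate the proportion of phenotypic variance the matrix captures under a single-random-effect model. This is a simplified maximum-likelihood grid search, not OSCA's actual REML implementation, and the input arrays (`beta`, `y`) are assumed to already be adjusted as described above.

```python
import numpy as np

def build_orm(beta):
    # beta: (n_samples x n_probes) matrix of age-, sex- and batch-adjusted Beta-values
    z = (beta - beta.mean(axis=0)) / beta.std(axis=0)   # standardise each probe
    return z @ z.T / z.shape[1]                         # ORM = Z Z' / n_probes

def variance_captured(y, orm, grid=np.linspace(0.01, 0.99, 99)):
    # Proportion of variance of a residualised phenotype y explained by the ORM,
    # under y ~ N(0, s2 * (v*ORM + (1-v)*I)); v is found by a 1-D likelihood grid search.
    y = (y - y.mean()) / y.std()
    s, u = np.linalg.eigh(orm)                  # ORM = U diag(s) U'
    yt = u.T @ y                                # rotate phenotype into the eigenbasis
    best_v, best_ll = None, -np.inf
    for v in grid:
        d = v * s + (1.0 - v)                   # covariance eigenvalues (up to the scale s2)
        s2 = np.mean(yt ** 2 / d)               # profile out the overall scale
        ll = -0.5 * (np.sum(np.log(d)) + len(y) * np.log(s2))
        if ll > best_ll:
            best_v, best_ll = v, ll
    return best_v
```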
The variance component estimates are analogous, but not equivalent, to SNP-based heritability estimates [38,39]. However, SNP-based heritability estimates carry an implied causal direction, from genotype to phenotype. The epigenetic variance component estimates could reflect both cause and consequence with respect to the phenotype and are not readily extended to other samples with different background characteristics. REML estimates served as important within-sample variance estimates in the present study, allowing us to assess the impact of the number of probes used to build an ORM, and their properties, on the amount of phenotypic variance captured by probe values. We then applied penalised regression models to build linear DNAm-based predictors of the phenotypes in the training sample. We carried out these analyses in order to assess the relative predictive performances of the probe sets when applied to a separate test sample (n ≤ 2578), described below.
LASSO regression and prediction analyses
Least absolute shrinkage and selection operator (LASSO) regression was used to build DNAm-based predictors of eighteen phenotypes. The R package biglasso [40] was implemented and the training sample included ≤ 4450 participants. The mixing parameter (alpha) was set to 1 and tenfold cross-validation was applied. The model with the lambda value that corresponded to the minimum mean cross-validated error was selected. Epigenetic scores for traits were derived by applying coefficients from this model to corresponding probes in the test sample (n = 2578). This method takes into account the correlation structure between probes, but only selects a weighted additive combination of probes that are informative for predicting a given trait. Therefore, epigenetic predictors or methylation risk scores are broadly analogous to polygenic risk scores, which often show R² estimates that fall far below SNP-based heritability estimates [41]. Here, our goal was to compare the relative predictive performances of probe sets in an out-of-sample context, distinct from the earlier approach of estimating variance components within the training sample alone.
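A rough equivalent of this training/scoring step is sketched below in Python with scikit-learn rather than the biglasso R package used in the study; the variable names `beta_train`, `y_train` and `beta_test` are placeholders for the adjusted probe matrices and the residualised phenotype.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Fit the LASSO with tenfold cross-validation; the penalty that minimises the
# mean cross-validated error is retained (analogous to biglasso's lambda.min).
model = LassoCV(cv=10, n_alphas=100, max_iter=50_000).fit(beta_train, y_train)

# Epigenetic score: apply the training-sample weights to the same probes in the test sample.
episcore_test = beta_test @ model.coef_ + model.intercept_
print(f"{np.sum(model.coef_ != 0)} probes selected")
```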
Linear regression models were used to test for associations between DNAm-based predictors (i.e. epigenetic scores) for the eighteen traits and their corresponding phenotypic values in the test sample. The incremental R² was calculated by subtracting the R² of the null model from that of the full model (shown below). For the FEV and FVC predictors, height was included as an additional covariate in both models. For the age predictors, the R² value pertained to that of the epigenetic score without further covariates.

Null model: Phenotype ~ chronological age + sex
Full model: Phenotype ~ chronological age + sex + epigenetic score
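For concreteness, the incremental R² computation might look like the following sketch (illustrative only; the array names are placeholders, not the study's variables):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def incremental_r2(pheno, age, sex, episcore, extra=None):
    # Incremental R2 = R2(full) - R2(null); `extra` holds further covariates
    # such as height for the FEV and FVC models.
    cols = [age, sex] + ([extra] if extra is not None else [])
    x_null = np.column_stack(cols)
    x_full = np.column_stack(cols + [episcore])
    r2_null = LinearRegression().fit(x_null, pheno).score(x_null, pheno)
    r2_full = LinearRegression().fit(x_full, pheno).score(x_full, pheno)
    return r2_full - r2_null
```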
Sub-sampling analyses
We tested whether variance components and incremental R² estimates from probe sets were significantly different from those expected by chance. For OSCA estimates, we generated 1,000 sub-samples of 115,746, 50,000, 20,000 and 10,000 probes (to match the primary subsets of non-mQTL probes tested in our analyses). The sub-sampled sets were drawn from 'all available probes' (n probe = 393,654). For LASSO regression, we generated 100 rather than 1,000 sub-samples of each size in order to lessen the computational burden.
We tested whether highly variable probes were significantly over-represented or under-represented for genomic and epigenomic annotations. Annotations were derived from the IlluminaHumanMethylationEPICanno.ilm10b4.hg19 package in R [42]. Annotations for the most variable primary subset (i.e. 10,000 non-mQTL probes) were compared against 1,000 sub-samples of non-mQTL CpGs with an equal number of probes. Here, probes were sub-sampled from the 'variable non-mQTL probes' set (n probe = 115,746) and not from 'all available probes' (n probe = 393,654), as the latter contains probes with and without mQTLs, which show different genetic architectures [16].
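One way to implement such a permutation-based enrichment test is sketched below. This is a hedged illustration rather than the authors' code: `annot` is assumed to be a boolean pandas Series indexed by probe ID flagging one annotation category, and `top_probes`/`pool` are arrays of probe IDs. Empirical P-values obtained in this way across annotation categories would then be FDR-adjusted.

```python
import numpy as np
import pandas as pd

def annotation_enrichment(top_probes, pool, annot, n_perm=1000, seed=1):
    # Fold enrichment of one annotation in `top_probes` relative to random draws
    # of equal size from `pool`, plus an empirical two-sided P-value.
    rng = np.random.default_rng(seed)
    observed = annot.loc[top_probes].mean()
    null = np.array([
        annot.loc[rng.choice(pool, size=len(top_probes), replace=False)].mean()
        for _ in range(n_perm)
    ])
    fold_enrichment = observed / null.mean()
    p_emp = (np.sum(np.abs(null - null.mean()) >= abs(observed - null.mean())) + 1) / (n_perm + 1)
    return fold_enrichment, p_emp
```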
Comparisons of methylation QTL status and mean Beta-values
In addition to non-mQTL subsets (with mean Beta-values between 10 and 90%), we tested two further classes of probes. First, we considered probes with a reported mQTL from GoDMC (P < 5 × 10⁻⁸) that had mean Beta-values between 10 and 90% (n probe = 133,758) [16]. Second, we considered all hypo- or hypermethylated probes (Beta-value ≤ 10% or ≥ 90%, n probe = 144,150). We tested the performances of the 115,746, 50,000, 20,000 and 10,000 most variable probes from each of these three classes.
We also repeated REML and LASSO regression using EWAS Catalog probes [43]. EWAS Catalog probes contained sites with an mQTL, sites without an mQTL and hypo- and hypermethylated sites. We restricted EWAS Catalog probes to those with P < 3.6 × 10⁻⁸ [44] and those reported in studies with sample sizes > 1000. We also excluded studies related to chronological age, due to the very large number of sites implicated, and also those in which Generation Scotland contributed to analyses. There were 100 studies that passed inclusion criteria, with 47,093 unique probes. Of these, 38,853 probes overlapped with 'all available probes' used in our analyses (n probe = 393,654). To allow for comparison to other subsets, the 20,000 and 10,000 most variable of these 38,853 EWAS Catalog probes were extracted.
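A filtering step of this kind could be sketched roughly as below. This is a hypothetical pandas example: the file name and column names (`p`, `n`, `trait`, `cohorts`, `cpg`) are placeholders rather than the actual EWAS Catalog export format, and `all_available_probes` and `probe_sd` (per-probe SD in the training sample) are assumed to exist.

```python
import pandas as pd

catalog = pd.read_csv("ewas_catalog.tsv", sep="\t")

keep = (
    (catalog["p"] < 3.6e-8)
    & (catalog["n"] > 1000)
    & (catalog["trait"].str.lower() != "chronological age")
    & (~catalog["cohorts"].str.contains("Generation Scotland", na=False))
)

# Overlap the surviving CpGs with the 'all available probes' set, then rank by
# training-sample SD and keep the most variable 20,000 / 10,000 probes.
ewas_probes = sorted(set(catalog.loc[keep, "cpg"]) & set(all_available_probes))
top_20k = probe_sd.loc[ewas_probes].sort_values(ascending=False).head(20_000).index
top_10k = top_20k[:10_000]
```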
Additional file 2: Figure S1. Phenotypic variance captured by five nested sets of probes with decreasing numbers of probes and increasing mean variabilities. Restricted maximum likelihood analyses were performed using blood DNAm and phenotypic data from 4450 volunteers in the training sample of Generation Scotland. Seventeen biochemical and complex traits are shown. The seventeen traits are arranged into six groups (A-F). Vertical bars indicate 95% confidence intervals. Alc, self-reported alcohol consumption; bmi, body mass index; cholest, total cholesterol; dBP, diastolic blood pressure; DNAm, DNA methylation; fat, body fat percentage; FEV, forced expiratory volume in one second; FVC, forced vital capacity; HDL, high-density lipoprotein cholesterol; HR, heart rate; mQTL, methylation quantitative trait locus; PckYrs, smoking pack years; sBP, systolic blood pressure; whr, waist-to-hip ratio.

Figure S2. Incremental R² estimates for DNAm-based predictors of seventeen traits using five nested sets of probes with decreasing numbers of probes and increasing mean variabilities. LASSO regression was used to build DNAm-based predictors of seventeen traits using data from 4450 volunteers in the training sample within Generation Scotland. An unrelated sample of 2578 individuals in Generation Scotland served as the test set. The seventeen traits are arranged into six groups of three traits (A-F). Alc, self-reported alcohol consumption; bmi, body mass index; cholest, total cholesterol; dBP, diastolic blood pressure; DNAm, DNA methylation; fat, body fat percentage; FEV, forced expiratory volume in one second; FVC, forced vital capacity

| 8,827 | 2022-08-10T00:00:00.000 | ["Biology"] |
Software for Wearable Devices: Challenges and Opportunities
Wearable devices are a new form of mobile computer system that provides exclusive and user personalized services. Wearable devices bring new issues and challenges to computer science and technology. This paper summarizes the development process, current situation and important software research issues of wearable devices.
A. The Definition of Wearable Devices
A wearable device is a computer that is subsumed into the personal space of a user, controlled by the user, and has both operational and interactional constancy, i.e., is always on and always accessible [1]. Wearable devices have the same computing abilities as mobile phones and tablet computers. In some cases, however, wearable devices are more competent than handheld devices for tasks such as calculation, navigation and remote photography, due to their portability and the characteristics detailed below.
B. The Development of Wearable Devices
A clear picture of the development of wearable devices can be obtained from [2] [3]. Wearable devices have undergone many years of development since the initial ideas and prototypes appeared in the 1960s. From the 1960s to the 1970s, wearable devices were in their embryonic period. People designed wearable devices for special purposes, interests or events. During this period, wearable devices remained a small-scale field and people rarely understood their roles. In 1966, Edward Thorp, a professor at the Massachusetts Institute of Technology (MIT), invented a pair of shoes that could be used to cheat at roulette, often regarded as the first wearable device. In 1975, the Hamilton Watch Company launched a "calculator" watch, the world's first wrist calculator. In 1977, C.C. Collins designed a wearable device for the blind, which converted images captured by a head-mounted camera into a tactile grid located on the wearer's vest.
From the 1980s to the 1990s, wearable devices entered the primary stage of development, and people began to pay attention to them. Although wearable technology improved greatly, wearable devices were still not practical for consumers and not user friendly. In 1981, Steve Mann designed a head-mounted camera that can, to some extent, be regarded as the pioneer of Google Glass. In the same year, Steve Mann designed a backpack-style computer with text, image and multimedia functions, displaying output through a helmet-mounted display.
In 1997, the Massachusetts Institute of Technology, Carnegie Mellon University and the Georgia Institute of Technology jointly organized the first International Symposium on Wearable Computers (ISWC). Since then, smart wearable computing and smart wearable devices have attracted wide attention in academia and industry.
Since the start of the 21st century, wearable devices have entered an advanced stage of development and have aroused widespread interest. They have become more complex and are designed according to the needs of users or the market. Many companies have launched independently designed wearable devices and released corresponding software and hardware development platforms. In 2007, James Park and Eric Friedman founded Fitbit, a company dedicated to wearable devices for step counting and sleep-quality detection. In 2013, Google launched Google Glass, which caused a sensation worldwide. Meanwhile, Apple, Samsung, Sony and other companies have been developing their own smart watches.
In the next few years, predictably, wearable devices will enter a period of prosperity. IMS data [4] suggested that wearable device shipments would reach 92.5 million units by 2016. According to Juniper's research [5], the number of wearable devices, including smart watches and glasses, will approach 130 million by 2018. Moreover, according to IDC's reports [6], wearable devices have been developing rapidly; shipments were expected to reach 19 million by the end of 2014, and predicted shipments including smart watches and related devices will grow at an annual rate of 78% to reach 112 million by 2018. We can therefore expect that wearable devices will gradually enter people's lives and bring convenience to them, and that the wearable market will attract more participants.
C. Classification Standards for Wearable Devices
At present, there are two standards for classifying wearable devices [7]. One standard is based on product forms, including head-mounted (such as glass and helmet), body-dressed (such as coat, underwear, and trousers), hand-worn (such as watch, bracelet, and gloves), and foot-worn (such as shoes and socks). Another standard is based on product functions, including healthy living (such as sport wristband and smart bracelet), information consulting (such as smart glass and smart watch), and somatosensory control (such as somatosensory controller). Figure 1 lists a variety of wearable devices.
II. WHAT MAKES WEARABLE DEVICES DIFFERENT?
A wearable device is more convenient for users to use and carry because it is miniaturized, lightweight and worn on the body. Its functions, forms and usage differ from those of tablet computers and mobile phones. At the wearable computing symposium held in 1997 [8], L. Bass summarized five characteristics that a wearable device should have [9]: 1) It may be used while the wearer is in motion; 2) It may be used while one or both hands are free or occupied with tasks; 3) It exists within the corporeal envelope of the user, i.e., it should not merely be attached to the body but become an integral part of the person's clothing; 4) It must allow the user to maintain control; 5) It must exhibit constancy, in the sense that it should be constantly available.
Steve Mann also provided a definition of wearable devices and described them in terms of three operational modes and six attributes at the International Conference on Wearable Computing (ICWC) [1] held in 1998. The three operational modes are constancy, augmentation and mediation. The six attributes are: unmonopolizing of the user's attention, unrestrictive to the user, observable by the user, controllable by the user, attentive to the environment, and communicative to others.
However, wearable devices show more features as they evolve, such as diversity and concealment [10]. Wearable devices not only change the human-computer relationship and the way people use computers; they also have a significant influence on people's life and work.
III. SOFTWARE FOR WEARABLE DEVICES
The technology of wearable devices is not yet mature. Their development is thus bound to encounter various problems, such as limited functionality, incompatibility between operating systems and software, inconvenient human-computer interaction, data transmission, confidentiality of information, and the energy consumption caused by continuous running. This section discusses some primary research issues in the development of wearable devices.
A. Operating System
The operating system is the interface between hardware and software. Its function is to manage hardware, software and data resources, control program execution, improve human-computer interaction, give users a good working environment, and provide services for users and support for other applications.
The operating system on wearable devices has gone through years of development [11]. As early as 2000, IBM collaborated with the famous Japanese watch manufacturer Citizen to launch a smart watch named WatchPad, with Linux as its operating system. In 2003, Fossil designed a wrist device called the Wrist PDA. It was equipped with the Palm OS [12] operating system and supported touch input, and was very popular at the time. In addition, Microsoft designed the SPOT system for smart watches in 2004. In 2013, Samsung released its first smart watch, the Galaxy Gear, using Android as its operating system. After that, Samsung launched a second-generation smart watch running Samsung's independently designed operating system, Tizen. In March 2014, Google launched Android Wear [13], an Android-based operating system dedicated to smart watches, operated through Google Now's voice commands. Android Wear is expected to provide a uniform, standard operating system platform, accelerating the development of wearable devices.
At present, a variety of operating systems exist in the wearable market, but they may not be convenient for users. Developers find it difficult to choose which operating system to target, and an application written for one operating system is not suitable for another.
Since the operating system is essential for wearable devices, we should design wearable operating systems by taking the features of wearable devices into account, so as to achieve the following objectives.
1) Convenience. The operating system should be designed to make wearable devices more convenient for users to use.
2) Effectiveness. The operating system should manage and exploit the hardware, software and data resources of wearable devices more effectively.
3) Scalability. The operating system should permit new system functions to be developed, tested and included.
4) Openness. The operating system should support integrated and collaborative networking across different manufacturers and devices, so as to achieve portability and interoperability of applications.
5) Multitasking. The operating system should be able to run multiple applications concurrently.
In general, developers have several options for developing a wearable operating system [14][15]: 1) Build further on a palmtop operating system used on mobile phones or other terminals. The advantage of this approach is that it shortens the development cycle and reuses existing applications. Its disadvantage is that porting the existing applications raises some problems.
2) Develop a proprietary operating system based on Linux. This approach is highly targeted; however, development is difficult and it is hard to establish a corresponding software ecosystem.
3) Develop a web-based operating system [14]. This approach takes full advantage of the resources and support of web servers, can adjust and restructure the software system dynamically according to actual requirements, and minimizes the resource requirements on wearable devices. The approach has great potential, but will take a long time and incur a large cost.
B. Database Management System
Wearable devices run continuously and are always ready to interact with users. They collect as much user information as possible and store it in a database. The user can access the database to query necessary information via the database management system. For example, the Aid smart cane designed by Egle Ugintaite [7] utilizes built-in sensors to record the user's pulse, blood pressure and temperature in real time, and displays health data through its LCD screen. The user can consult the database to obtain various data about his/her recent health status.
In addition to basic information such as gender, age, height and weight, wearable devices can also record the user's health status, consumption habits, personal temperament, and preferences for things such as color and food. This information may help users achieve their goals. For example, when it is time for lunch, wearable devices are able to search for nearby restaurants that match the user's taste according to the database information, and provide hints to the user; when shopping, the devices can recommend commodities that the user may need or be interested in based on personal preferences; when connected to smart appliances, wearable devices can send personal information stored in the database to the terminals and instruct the smart appliance system to control the air conditioning in real time according to the user's temperature, or automatically switch on the audio device and play music fitting the user's mood at that time.
At present, the database management systems on wearable devices deal only with simple data, and some devices do not include database management functionality at all. Their response speed and processing power may not meet the needs of the devices.
Because wearable devices have different functionalities, the recorded data are varied and may include basic personal information, physical health data, data on changes in the external environment, and so on. To store these data, not only local databases but also cloud databases are required, and a reliable database management system becomes necessary. Therefore, developers should design specialized database management systems that can manage and operate on such varied data while remaining lightweight and fast to respond, or should port existing mobile-phone database management systems to wearable devices.
C. Network Communication Protocol
Wearable devices may need to exchange data with other devices such as mobile phones, computers and other wearable devices. Therefore, network protocols for wearable devices become necessary. Network protocols define the network communication mode of wearable devices, and determine exchange data format and problems related to synchronization.
Bluetooth (IEEE 802.15.1), ZigBee (IEEE 802.15.4), WiFi (IEEE 802.11) and NFC are four currently popular short-range wireless communication protocol standards [16]. In 2013, Broadcom launched Wireless Internet Connectivity for Embedded Devices (WICED) [17]. WICED simplifies the implementation of WiFi in a range of consumer electronics. It can support WiFi networking applications or connect mobile phones, tablet computers and wearable devices for data sharing. Bluetooth Smart [18], the low-energy mode introduced with Bluetooth 4.0 [19], reduces power consumption by a factor of ten to twenty. Furthermore, even though the radio is off most of the time, devices can stay connected; when data are ready, devices can be awakened within a very short time. In this way, battery life can be extended, which makes Bluetooth Smart more suitable for wearable devices.
At present, network communication protocols for wearable devices are relatively simple and focus primarily on wireless connectivity. However, wearable devices will eventually implement more and more functions like those of mobile phones, such as WAP, GRS, GPRS, and large file or data transmission. Therefore, more reliable network communication protocol support is needed.
Developers should port network communication protocols running on tablet computers or mobile phones to wearable devices, or design special network communication protocols for wearable devices that are more energy-efficient and secure and offer higher throughput.
D. Application Development Platform
The rapid development of mobile phones owes much of its success to the large number of available applications. Similarly, the lack of applications is one of the obstacles to the rapid development of wearable devices. Software development requires development environments and a complete set of tools, including modeling tools, interface tools, project management and configuration management tools, and testing tools. A software development kit is often used to support a particular software engineering method, reduce the burden of manual management, and make software development more systematic.
At present, there are many hardware design platforms for wearable devices. In 2013, Broadcom launched a development platform called WICED [17]. Bluetooth, WiFi, NFC and positioning technology can be integrated into wearable devices, and wireless networking with low power consumption, high performance and interoperability can be embedded into devices. As a representative wearable system platform based on the ARM architecture, Freescale launched the open-source, scalable Wearable Reference Platform (WaRP) [20] with over fifteen manufacturer collaborators, including Kynetics, RevolutionRobotics and Circuitco. The platform supports open operating systems such as Android and Linux and has features such as scalability and flexibility.
The wearable market is currently in the stage of developing and promoting hardware, while application development lags behind. In order to accelerate software application development for the wearable market, various software reference platforms are springing up. In 2014, Google released a software development kit (SDK) including an emulator and other tools for Android Wear [21], which allows developers to integrate the Android Wear platform with their own devices and applications that can be made available for download. Developers in the Android Wear ecosystem can utilize all the tools they need to start making apps for the new devices. The SDK for Android Wear is expected to bring a richer app experience to future smart watches. Tizen is an open-source operating system based on Linux for mobile phones and other smart devices. It has a complete development platform, including a simulator, an IDE, etc. At present, Samsung's Galaxy Gear 2 smart watch runs on this operating system. If its promotion is widely successful, we believe that Tizen will contribute to the development of wearable devices.
However, mature software development platforms for wearable devices are still lacking. An important issue for developers is choosing which platform to develop application software on. In addition, developers cannot easily verify which applications have already been developed and are available. Therefore, developers have several options [22]: 1) Develop application software that supports a particular operating system on a single platform. This approach simplifies developers' work, but the resulting application may not run on wearable devices with other operating systems; 2) Develop native applications for each platform, but the technology and cost of application maintenance for each platform become big challenges; 3) Develop mobile web applications, so as to reduce the native code needed for each platform, but the applications may not be able to meet market demand.
E. Privacy and Security
Wearable devices can collect real-time user information so as to provide effective personalized services for users. These data contain various kinds of user information, for example geographical location, living habits, body temperature, heart rate, account passwords and conversations. If mishandled, they may greatly endanger the user's privacy and security, and harm the user's property or even personal safety. With the wide use of wearable devices and the rapid growth of applications, security and privacy issues will become more and more important. The communication of wearable devices is mainly based on wireless networks; thus, private information may be easily attacked or stolen.
The wearable industry is not mature, and there is still no complete and effective scheme to protect privacy and security. While the industry may neglect security problems due to cost issues, researchers working on wireless sensors and mobile applications have done a lot of fundamental work that will help to solve the privacy and security problems of wearable devices. For example, Liu et al. studied security problems in mobile applications [23]. Ameen et al. analyzed the problems of information security and privacy protection in wireless sensor networks in the health care field [24].
We can consider the following aspects to protect users' information and privacy: 1) Research reliable network communication protocols for wearable devices so as to ensure the security of data during transmission; 2) The system should have permission settings, so that wearable applications are only allowed to obtain necessary data, reducing data exposure; 3) Reasonable software patterns should be proposed to solve these problems; for example, loosening the binding between private data and real names, and mixing data from various sources, may help to protect the user's privacy and security.
F. Energy Consumption
Because wearable devices can only use batteries, rather than mains power, as their power supply, charging them is more tedious than charging mobile phones. Frequent charging or battery replacement inevitably reduces the practicality of devices and the preference and satisfaction of users. In addition, high energy consumption produces considerable heat; if the cooling problem is not handled properly, it will harm the user experience or even cause low-temperature burns. Therefore, the energy consumption of wearable devices is an issue worthy of attention.
At present, manufacturers control the energy consumption of wearable devices or mobile phones through the design of hardware or operating systems. Since wearable devices came into use, their battery life has not been very satisfactory. Taking smart watches as an example, the battery life of the Moto 360 is up to 60 hours [25] and that of the LG G Watch is only 36 hours [26]. Energy consumption management is an essential issue for wearable devices.
In addition to hardware and operating systems, we can also consider improving energy consumption control from the high-level application layer.
Particularly, the control of energy consumption can be considered from the following aspects at the application layer.
1) Reduce hardware electricity consumption through reasonable invocation of system APIs. For example, Hao et al. and Li et al. proposed code-level energy consumption analysis methods for mobile applications [27] [28]. Such methods can be extended to wearable devices to reduce energy consumption by invoking lower-energy APIs or arranging a reasonable invocation sequence.
2) Create adaptive energy-sensitive applications that adjust energy usage automatically. When energy is sufficient, high-quality services are provided; otherwise, unimportant applications are turned off in order to increase the usage time. Mizouni et al. used such an adaptive strategy in [29] to reduce the energy consumption of smart phones in mobile applications.
3) Adopt a load-balancing (computation offloading) method that transfers complex computation to the mobile terminal via the wireless communication network, thus reducing the wearable device's own energy consumption. Kwon et al. presented a method to solve this problem [30]. A similar approach can be introduced into wearable devices by replacing high computation energy consumption with lower communication energy consumption; a simplified version of this trade-off is sketched below.
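The offloading decision can be illustrated by comparing the estimated energy of local execution with the energy of transmitting the task's input data to the paired terminal. The sketch below is a deliberately simplified model (it ignores receive energy, idle power and latency constraints), and all parameter names and values are illustrative rather than taken from the cited work.

```python
def should_offload(cycles, data_bits, f_local_hz, p_compute_w, p_tx_w, rate_bps):
    """Return True if offloading is estimated to cost less energy than local execution.

    E_local = P_compute * (cycles / f_local): energy to run the task on the wearable.
    E_tx    = P_tx * (data_bits / rate):      energy to send the task input over the radio.
    """
    e_local = p_compute_w * (cycles / f_local_hz)
    e_tx = p_tx_w * (data_bits / rate_bps)
    return e_tx < e_local

# Example: a 2e9-cycle task with 1 MB of input on a 100 MHz, 0.3 W CPU,
# with a 0.1 W radio at 1 Mbit/s -> offloading wins (0.8 J vs 6 J locally).
print(should_offload(2e9, 8e6, 100e6, 0.3, 0.1, 1e6))
```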
G. Human-Computer Interaction
A major feature of wearable devices is to collect user information, perceive the user's physical condition and changes in the external environment, complete various commands, and assist or remind the user automatically [14]. Wearable devices pursue a people-oriented design, which requires them to be well suited to the user. Ideally, wearable devices should perceive, recognize and understand human emotion, and give intelligent, sensitive and friendly responses. Human-machine interaction is a key technology of wearable systems; it must address the interaction between users and wearable devices and improve environmental awareness. Therefore, advanced interaction techniques are a hotspot in wearable device research.
There are many ways for users to interact with wearable devices [14]: 1) Contextual Awareness: Wearable devices run continuously and collect data, but in most cases the user is not actively using them. Wearable devices should run independently, perceive the external environment, and pass useful information to the user. Starner et al. proposed a visual environment perception method for wearable devices in [31], and pointed out that a wearable device can observe its user to provide serendipitous information, manage interruptions and tasks, and predict future needs without being directly commanded by the user.
2) Augmented Reality: a technology that enhances users' awareness of the real world through computer-generated information such as sound, video, graphics or GPS data [32]. The goal is to apply virtual information to the real world and to superimpose computer-generated virtual objects, scenes or system messages onto the real scene. Zhou et al. presented a design approach and a series of practical proposals for wearable user interfaces in real augmented environments [33].
3) Non-keyboard input: The most familiar way for people to input information into computers or mobile phones is through a keyboard or mouse. However, it is impossible to connect such input devices to wearable devices because of their small size and light weight. Users can instead interact with wearable devices through non-keyboard means such as voice, handwriting, gestures or data gloves. For the disabled, the interaction modes of traditional smart devices cannot provide a normal experience, but they can wear a device that receives messages from other sensors, which are transmitted to their sensory system after analysis. For example, for a person whose eardrum is damaged, a hearing device attached directly to the skull enables him/her to perceive sound without it passing through the ears.
Therefore, compared to computers and mobile phones, wearable devices can provide many different modes of human-computer interaction to strengthen the user experience. However, these modes cannot yet meet the demands of all users, so developers should strengthen the study of human-computer interaction technologies: 1) Apply existing mature human-computer interaction methods to wearable devices, such as handwriting, voice and other non-keyboard input. These methods can be easily realized on wearable devices and are readily accepted by users.
2) Strengthen research on currently immature human-computer interaction methods, such as contextual awareness, augmented reality and mediated reality. These methods can enhance the user experience and increase users' interest in wearable devices.
3) Propose new human-computer interaction methods. Some methods may be suitable for particular groups or particular environments. However, this approach will increase the burden on developers, and it may take time to study and put these new methods into practice.
H. Software Engineering
With the massive popularity of wearable devices, software engineering for wearable devices is becoming more and more important. Although research on software engineering for wearable devices is still rare, we can predict some aspects/issues that may be of value and worth studying: 1) Requirements analysis: how to improve requirement documents is a problem worth studying. From users' comments on an application, discussions on forums or the analysis of related online articles, users' expectations for the functionality of wearable devices can be acquired and used to improve the requirement documents.
2) Code recommendation: many functions are likely to be reused in different applications. Through analyzing source code of existing applications, we can recommend function codes to developers to achieve fast development.
3) Application transplantation: it is not necessary to design different applications for each operating system. We can create API mapping among different platforms, and develop an ideal compiler that can complete the application development for multiple platforms.
Software engineering for wearable devices is an emerging field that provides many opportunities for researchers to investigate.
I. Big Data
With the development of science and technology, we have entered the era of "big data", characterized by the "4 Vs" [34]: volume, velocity, variety and veracity. Internet technology and mobile applications promote the development of big data. Laurila et al. discussed the opportunities and challenges brought by big data in mobile applications [35].
Big data technology will play a great role in promoting the development of the wearable industry. A large amount of data is generated when using wearable devices, including basic personal information, health status, and preferences for food, clothing, color and so on. We can gain a lot of useful information by using big data analysis technology, which will provide great help for scientific research, social development and users' lives. Big data technology can be used to analyze the huge amounts of data collected by wearable devices and to monitor factors related to the user's health; for example, analysis of heart rate and blood pressure can reveal the user's physical status and potential risks. It is an indisputable fact that big data technology can help wearable device users enjoy a more convenient life. Redmond et al. discussed the significance of big data technology for wearable devices in health care [36].
Conversely, the wearable industry will also promote the development of big data. Compared to mobile phones, wearable devices will produce larger amounts of data, so big data technology is needed even more to deal with the collected data. For example, various smart bracelets with advanced functions, designs and specifications have emerged in the wearable market; however, these wearable devices do not yet use big data technology to process the collected data and provide useful information to users.
Wearable devices and big data technology will complement each other and develop together. In the big data environment, the wearable market will develop greatly, create enormous wealth, and make people's lives more convenient.
IV. CONCLUSION
Wearable devices will become the mainstream of mobile smart device development and dramatically change the modern way of life. Currently, the development of wearable devices is still in an immature stage, and the major functions focus on calculation, navigation, remote photography and other related services. However, these services can also be achieved on smart phones. Meanwhile, research on hardware materials and battery life has not achieved a breakthrough; limited screen space makes product design very difficult; and application software development is still at an initial stage. Due to these problems, it will take a long time for wearable devices to become the mainstream of the market.

| 6,346.2 | 2015-04-02T00:00:00.000 | ["Computer Science"] |
Evaluation of Microleakage of RMGIC and Flowable Composite Immersed in Soft Drink and Fresh Fruit Juice: An in vitro Study
ABSTRACT Aim: The objective of the study was to evaluate and compare the effect of a soft drink and a fresh fruit juice on the microleakage of flowable composite and resin-modified glass ionomer cement (RMGIC). Methods and materials: 70 non-carious human premolars were collected and stored in saline until further use. Class-V cavities were prepared and restored with RMGIC on the buccal surface and flowable composite on the lingual surface for evaluating microleakage. The experimental groups (Groups I and II) comprised 60 teeth, while the remaining 10 formed the control group (Group III―Water). The experimental teeth were divided into 2 groups (Group I―Cola drink and Group II―Fresh orange fruit juice) of 30 teeth each. Each group was then further divided into 3 subgroups (Short, Medium and Long immersion) containing 10 teeth, as shown in the flow chart. The immersion regime followed Maupome G et al, and microleakage was evaluated using Rhodamine B dye and examined under a stereomicroscope. Results: The microleakage data obtained were statistically analyzed by the Chi-square test. The teeth showed statistically significant microleakage as the immersion regime increased. Interpretation and Conclusion: The low-pH soft drink caused highly significant microleakage at the tooth-restorative material interface in the medium and long immersion regimes, signifying that the leakage pattern was directly proportional to the number of immersions. Thus, the study conclusively shows that the 'sipping habit' associated with commonly available low-pH beverages is detrimental to the longevity of restorations.
INTRODUCTION
The average daily requirement of water in human beings is 2-3 liters, of which, in developed countries, more than half comes from soft drinks. Commonly consumed soft drinks damage the teeth because of their low pH and high titratable acidity, leading to non-carious cervical tooth loss (NCTL). The sugars in these drinks are metabolized by plaque microorganisms to generate organic acids that bring about demineralization, leading to dental caries. 1 The commercial sale of soft drinks has increased by 56% over the last 10 years and is estimated to keep rising at about 2-3% a year. 1 With these soft drinks substituting for water, their erosive effects on dental hard tissues and the subsequent NCTL pose a special challenge to any dentist undertaking their restoration.
Over the last decade, the prevalence of dental erosion seems to have increased, presumably due to an increase in the consumption of soft drinks and fruit juices. 2 It has been recognized as an important cause of tooth structure loss not only in adults, but also in children. 3 In NCTL, the coronal margins of cervical restorations are usually in enamel, while the cervical margins are in dentin and cementum. Dentin, unlike enamel, is a vital tissue and has a higher organic content. A restorative system that bonds adequately to both enamel and dentin is therefore desirable. 4 A wide range of adhesive restorative materials is available for use these days, such as resin composites with their respective dentinal adhesive systems, RMGIC and compomers. Therefore, the present study was conducted to evaluate the effects of a cola drink (Coca Cola ® ) and fresh fruit juice (orange) on the microleakage of Filtek™ Flow (flowable composite) and Vitremer™ Tri Cure (RMGIC).
METHODOLOGY
Seventy human premolars with no signs of caries or developmental defects, extracted for orthodontic treatment purposes, were used (Fig. 1). Two restorative materials, namely Vitremer™ Tri Cure and Filtek™ Flow (Fig. 2), together with a soft drink (Coca Cola ® ), fresh orange juice, and water as control (Fig. 3), were used for the study.
Standardized Class-V cavities (3 mm in length, 2 mm in width and 1.5 mm in depth) were prepared on the buccal and lingual surfaces of the teeth, 1 mm above the CEJ. The cavity preparation was standardized using a William's graduated periodontal probe. 5 The cavities on the buccal surface were restored with Vitremer™ Tri Cure restorative material. Vitremer™ Primer (3M Dental Products) was applied on the cavity walls for 20 seconds; gently air dried and light cured for 30 seconds. Vitremer™ Tri Cure (3M Dental Products) was mixed according to the manufacturer's instructions and placed in the cavity. Immediately after the restorative material was placed, a Mylar strip was adapted over the restoration and cured with a visible light source for 40 seconds. The matrix was removed and the restorations were finished with a fluted carbide bur and polished with a wet abrasive disk (Sof-Lex™, 3M Company). After finishing, the restorations were gently air-dried and a layer of unfilled resin (Vitremer™ gloss 3M Company) was applied and light cured. 6 The cavities on the lingual surface were restored with Filtek™ Flow material. After etching the enamel and dentin with 34.5% phosphoric acid for 15 seconds, the cavities were thoroughly rinsed with water for 15 seconds. They were then air dried gently for 5 seconds to avoid complete desiccation. Two consecutive coats of Single Bond (3M Dental Products) were applied to the whole cavity surface followed by gentle air-drying to remove excess solvent and light cured for 10 seconds. The cavities were filled with flowable resin composite; Filtek™ Flow (3M Dental Products) and cured for 20 seconds. 5 The restored teeth were stored in water at room temperature for 1 week. During this period, the teeth were subjected to 200 thermocycles between 5°C and 55°C water baths. Dwell time was 1 minute with 10 seconds transit between baths. Then the samples were subjected to the various immersion regimens. 7 Out of the 70 prepared tooth samples, 60 were equally divided into two groups of 30 each. Each group was further divided into 3 subgroups as mentioned below. The remaining 10 prepared samples were used as control and immersed in water and named as Group III (Fig. 4).
Groups
The low immersion regime was meticulously carried out for 8 days, with each immersion lasting five minutes. For the medium and high immersion regimes, the immersions were evenly distributed over a 12-hour period. Before and after each immersion, the restorations were copiously rinsed in 0.1 M phosphate buffered saline (PBS, pH 7.2). When not exposed to the immersion regime, they were stored in deionized water at room temperature.
At the end of the test period, the apices of the teeth were sealed with sticky wax, and all tooth surfaces except a 1 mm wide zone around the margins of the restorations (buccally and lingually) were painted with nail varnish (Fig. 5). To minimize dehydration of the restorations, the teeth were returned to deionized water as soon as the nail varnish dried. The teeth were then immersed in 1% Rhodamine B solution (pH 7.2) for 24 hours at 37°C, rinsed, dried, and finally invested in clear resin. 8 Each tooth was sectioned bucco-lingually through the center of the restoration with the help of a low-speed, water-cooled diamond disk. The specimens obtained were examined under a stereomicroscope to evaluate the microleakage. Dye penetration was graded based on the extent of penetration along the occlusal wall of the restoration using the criteria recommended by Michal Staininec and Mark Holtz (1988). 9
Scores (Figs 6 to 9):
Score 0 - No dye penetration
Score 1 - Dye penetration along the occlusal wall, but less than halfway to the axial wall
Score 2 - Dye penetration along the occlusal wall, more than halfway to the axial wall
Score 3 - Dye penetration along the occlusal wall, up to and along the axial wall
In order to avoid bias, scoring of the samples was done by a single blinded investigator on two different occasions, and the average scores obtained were tabulated and statistically analyzed (Table 1).
Group I: Cola Drink (Coca Cola ® ) (Table 1)
Intragroup Comparison
When specimens were compared across the three immersion regimes, microleakage scores increased as the number of immersions increased, indicating that the greater the immersion time, the greater the leakage. The results were statistically highly significant (p < 0.001).
Intragroup Comparison
There was no statistically significant difference in leakage between the low and medium immersion regimes, but leakage was statistically significant (p < 0.01) between the medium and high immersion regimes and statistically highly significant (p < 0.001) between the low and high immersion regimes.
Group III: Water (Control) (Table 1 and Graph I)
The specimens of this group were not subjected to any immersion regime. All 10 (100%) specimens scored 0.
Intergroup Comparison of Different Immersion Regimes
Amongst all groups, the specimens showed statistically significant leakage (p < 0.01) in low immersion regime with Group-I showing more leakage compared to Group-II and Group-III. The specimens scored statistically highly significant leakage (p < 0.001) in medium and high immersion regimes with Group-I showing more leakage compared with Group-II and Group-III. Comparing Group-II with Group-III in low and medium immersion regimes, the specimens showed statistically no significant (p > 0.05) leakage, but in high immersion regime, the Group-II specimens showed statistically highly significant leakage (p < 0.001).
Intragroup Comparison
The leakage pattern tended to increase in proportion to the number of immersions. The differences between the low, medium and high immersion regimes were statistically significant (p < 0.01).
Intragroup Comparison
There was no significant leakage pattern between low, medium and high immersion regimes. The results were not statistically significant (p > 0.05).
Group III: Water (Control) (Table 2 and Graph II)
The specimens of this group were not subjected to any immersion regime. All the 10 (100%) specimens scored 0.
Intergroup Comparison of Different Immersion Regimes
In the low immersion regime, neither Group I compared with Groups II and III, nor Group II compared with Group III, showed statistically significant leakage (p > 0.05). Group II compared with Group III also showed no statistically significant leakage (p > 0.05) in the medium and high immersion regimes. Group I, however, showed statistically significant leakage (p < 0.01) compared with Group II and Group III in both the medium and high immersion regimes.
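As an illustrative sketch only (not part of the original analysis), ordinal dye-penetration scores such as those above can be compared between immersion regimes with a chi-square test; the score counts below are hypothetical.

```python
# Minimal sketch: chi-square test on hypothetical dye-penetration score counts.
# Rows = immersion regimes, columns = numbers of teeth with scores 0-3 (10 teeth per regime).
from scipy.stats import chi2_contingency

counts = [
    [6, 3, 1, 0],  # low immersion (hypothetical)
    [2, 4, 3, 1],  # medium immersion (hypothetical)
    [0, 2, 4, 4],  # high immersion (hypothetical)
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```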
DISCUSSION
Dental erosion is defined as an irreversible loss of dental hard tissue by a chemical process without the involvement of microorganisms, and is due to either extrinsic or intrinsic sources. 10 Enamel, in spite of being the hardest tissue, has been reported to suffer from the devastating effects of soft drinks. 11 Dietary erosion may result from food or drinks containing a variety of acidic ingredients. Frequent consumption of these easily and widely available beverages has been shown to erode enamel in both in vitro and in vivo studies. [12][13][14][15][16][17] Most carbonated beverages and sports drinks have a pH below 3.5, and experiments have shown that enamel dissolution occurs below pH 4. 18 Since dental caries and dental erosion cause a one-way, irreversible destruction of the tooth, replacement of the lost tissue is the only available option.
Phosphoric acid is a common constituent of most soft drinks. 11,18,19 The acid content of the cola soft drink, which is added to give a peculiar tangy taste and has a preservative property, is known to play a well-established role in the erosive process. Substances in cola soft drinks clearly affect the integrity of the enamel surface. 1 In our study, we used two commonly consumed beverages, i.e., a carbonated drink and a fresh fruit juice. This was done because the pH of both the commonly consumed fresh fruit juice (orange, pH 3.98) 13 and the carbonated drink (Coca Cola, pH < 3.5) 18 lies below the critical value of 4. In the oral environment, both dissolution of elements and erosion of the nonsoluble components of restorative materials occur. Numerous factors such as low pH, acidic foods, ionic composition, ionic strength of saliva and enzymatic attack are important parameters that may influence the quality and quantity of the substances released from a restorative material, as well as its physical and mechanical characteristics. 20 Considering these concepts, the current study was carried out to evaluate the microleakage of Filtek™ Flow and Vitremer™ after immersion for varying periods of time in a cola drink (Coca Cola ® ) and fresh fruit juice (orange).
In this study, Class V cavities were prepared on the buccal and lingual surfaces of extracted human premolars 1 mm above the CEJ. 5 This was done to reduce microleakage, because cavities placed 1 mm below the CEJ have shown significantly more leakage due to inadequate bonding of the restorative material to the tooth structure. 21,22 The immersion regime followed in our study was about 5 minutes per immersion, in contrast to studies that employed extremely long immersion regimes ranging from 15 minutes to 72 hours. 11,15,16,23,24 This was done to replicate a more realistic consumption pattern under experimental conditions, which would be helpful in determining the actual impact of soft drinks by resembling real-time exposure. 8 Before and after each immersion in the cola drink (Coca Cola ® ) and fresh fruit juice (orange), the specimens and pellets were copiously rinsed in 0.1 M phosphate buffered saline (PBS, pH 7.2) 8,25 to buffer the effect of the cola drink and fresh fruit juice after the prescribed exposure time. This was done to return the pH to a neutral level once the exposure was over and to avoid prolonged insult to the materials while they were stored in the deionized water.
The microleakage pattern of Filtek™ Flow in the Cola drink group increased as the number of immersion regimes increased. Filtek™ flow in the fresh fruit juice (orange) group showed statistically significant leakage between medium and high immersion regimes.
On comparing the microleakage scores of Filtek™ Flow when exposed to the cola group, fresh fruit juice (orange) group and water (control) under the low, medium and high immersion regimes, it was evident that microleakage showed an increasing trend in the cola group compared with the fresh fruit juice and control groups. Cola beverages contain phosphoric acid as the main acid and have a pH of 2.57, 18,26,27 compared with the higher pH of fresh orange juice (pH 3.98) 28 and the neutral pH of water. Therefore, the acidic nature of the cola drink would have affected the integrity of the restoration/enamel interface and increased the microleakage.
Similarly, the microleakage pattern of Vitremer™ in the cola drink group increased as the number of immersion regimes increased, which could be attributed to the reasons cited above. There were no significant differences in the microleakage scores between the low, medium and high immersion regimes of Vitremer™ in the fresh fruit juice group. Studies evaluating fresh fruit juice have reported that although orange juice is rich in citric acid, it has a pH of 3.98, 28 which is only just below the critical pH of 4 needed to cause enamel erosion. 18 Vitremer is a light-cured RMGIC, which was developed to improve the handling and working characteristics of the original glass ionomer formulation. Although these materials set by a dual-cure mechanism, like glass ionomers they are believed to release fluoride. This could be the possible reason that there was no significant leakage seen around Vitremer in the fresh fruit juice (orange) group.
The Vitremer™ restoration specimens showed a trend of increasing microleakage as the number of immersions increased, especially in the cola group. The authors of reference 23 inferred from their study that as the period of exposure increased, so did the severity and depth of erosive lesions. 23 The pH of the acidic drinks also played a major role, as the higher microleakage scores may have resulted from much of the restorative material being removed along with the enamel. 11 However, it is inappropriate to extrapolate the findings of our study to the conditions existing in vivo in humans. In the oral cavity, any drink or foodstuff is instantaneously mixed with saliva, with a subsequent rise in its pH. After consuming a low pH drink, the pH on the tongue stays low only for a short duration. In addition, acidic drinks have also been shown to stimulate salivary secretion, which in turn facilitates the buffering systems. 18 However, we recommend further studies combining both qualitative and quantitative evaluations, which will indicate more precisely the effects of fruit beverages on the clinical integrity of restorative materials in the oral environment. | 3,819.2 | 2010-09-10T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
IbpB‐bound substrate release in living cells as revealed by unnatural amino acid‐mediated photo‐crosslinking
Small heat shock proteins (sHSPs) are known to bind partially folded proteins under stress conditions. In this study, by using photo‐crosslinking, we showed that the substrate binding of sHSP IbpB in living cells is reversible upon short‐time exposure at elevated temperature, providing in vivo evidence on the existence of substrate protein release at nonstress conditions.
Small heat shock proteins (sHSPs) are known to bind non-native substrates and prevent irreversible aggregation in an ATP-independent manner. However, the dynamic interaction between sHSPs and their substrates in vivo is less studied. Here, by utilizing a genetically incorporated crosslinker, we characterized the interaction between sHSP IbpB and its endogenous substrates in living cells. Through photo-crosslinking analysis of five Bpa variants of IbpB, we found that the substrate binding of IbpB in living cells is reversible upon short-time exposure at 50°C. Our data provide in vivo evidence that IbpB engages in dynamic substrate release under nonstress conditions and suggest that photo-crosslinking may be a suitable method for investigating dynamic interaction between molecular chaperones and their substrates in living cells.
Molecular chaperone proteins play key roles in maintaining proteostasis in vivo by assisting the folding of substrate proteins, preventing or reversing protein misfolding, and promoting the degradation of misfolded forms [1,2]. Small heat shock proteins (sHSPs), a conserved family of molecular chaperones with low molecular masses of 12-43 kDa, are present in all forms of life [3]. The ability of sHSPs to confer resistance on cells under various stress conditions has been widely reported [4][5][6][7]. Within the proteostasis network they are referred to as first-line stress defenders, binding non-native substrate proteins and holding them in a folding-competent state as 'holdases'; the substrates may subsequently be refolded with the assistance of other ATP-dependent molecular chaperones such as Hsp60, Hsp70, and Hsp100 [3,[8][9][10][11][12][13]. It has been shown that sHSPs associate with protein aggregates, altering their biochemical properties and subsequently facilitating efficient disaggregation and refolding [9,[14][15][16][17]. However, the basic understanding of the interaction between sHSPs and their substrate proteins has been obtained mostly from in vitro studies [3], raising the question of their unique in vivo characteristics. Such unresolved questions regarding the mode of operation of sHSPs in vivo include, among others: what kind of dynamic changes take place in the interaction between sHSPs and their endogenous substrate proteins, and is such interaction reversible or irreversible? The answers to these questions may further our understanding of the molecular mechanisms by which sHSPs function in vivo.
To address these questions, we chose the inclusion body binding protein IbpB, an sHSP from Escherichia coli (E. coli), as a model to investigate the interaction characteristics of sHSPs and their substrate proteins in living cells. IbpB, which was initially identified as a component of inclusion bodies [18,19] and later reported to be present in heat shock-formed aggregates [20], confers on E. coli cells resistance against stresses [21,22]. In our previous studies, substrate-binding residues of IbpB were characterized by using in vivo site-specific photo-crosslinking mediated by the genetically incorporated unnatural amino acid p-benzoyl-L-phenylalanine (Bpa) [23], and the substrate proteins captured by Bpa showed a remarkable preference for translation-related proteins and metabolic enzymes [24]. In the present study, we chose a total of five IbpB Bpa variants to investigate the features of the IbpB-substrate interaction in E. coli cells subjected to both stress and nonstress conditions by adopting in vivo site-specific photo-crosslinking, as it is capable of covalently capturing transiently or weakly interacting proteins [25]. Our results revealed that the IbpB-substrate interaction in living cells is reversible upon short-time exposure to 50°C, providing in vivo evidence for substrate protein release under nonstress conditions.
Materials and methods
Bacterial strains, plasmid construction, and protein expression
The Escherichia coli BW25113-ΔibpB strain was obtained from the Nara Institute of Science and Technology in Japan. E. coli DH5α cells were used for gene manipulation. The recombinant plasmids, expressing wild-type or Bpa variants of IbpB with a tag of six histidine residues added at the C terminus, were constructed as described previously [23]. The pSup-BpaRS-6TRN plasmid, expressing the orthogonal aminoacyl-tRNA synthetase/tRNA pair for the incorporation of Bpa into IbpB, was cotransformed with the recombinant plasmid into E. coli ΔibpB cells. Cells were cultured at 30°C in the presence of appropriate antibiotics (final concentrations of 100 μg·mL⁻¹ ampicillin, 50 μg·mL⁻¹ kanamycin, and 50 μg·mL⁻¹ chloramphenicol; Sigma, St Louis, MO, USA), 1 mM Bpa (Bachem AG, Bubendorf, Switzerland), and 0.02% arabinose to induce protein expression.
Chaperone-like activity assay for Bpa variants of IbpB
The chaperone-like activity of each Bpa variant was measured to determine its capacity to suppress the heat-induced aggregation of the whole cell extract of E. coli ΔibpB cells. Briefly, the E. coli ΔibpB cells overexpressing each Bpa variant of IbpB-His 6 were cultured overnight at 30°C. Cells were harvested by centrifugation, washed twice, resuspended in 20 mM Tris/HCl buffer (pH 8.0), lysed by sonication, and centrifuged at 13 000 g for 30 min at 4°C to remove the cell debris. The resultant whole cell extract was incubated at 50°C for 1 h. After heat shock treatment, 400 μL of cell extract was divided into two parts, and 20 μL was taken from one part as the whole protein fraction. The other part was centrifuged at 13 000 g for 30 min at 4°C; then 20 μL was taken from the supernatant as the soluble protein fraction. After discarding all the supernatant, the pellet was washed twice and resuspended in 200 μL Tris/HCl; then 20 μL was taken as the insoluble protein fraction. The whole cell extract, soluble proteins, and insoluble protein aggregates were subjected to 10% Tricine/SDS/PAGE and Coomassie blue staining analysis. The relative chaperone-like activity was defined as the percentage of the whole cell extract protein that remained soluble, semiquantified from the corresponding Coomassie blue staining results. The mean gel density was measured using IMAGEJ software [26].
Bpa-mediated in vivo photo-crosslinking
The E. coli ΔibpB cells transformed with pBAD carrying the gene of the IbpB Bpa variant and the pSup-BpaRS-6TRN plasmid were initially grown at 30°C to A600 = 0.4; induced with arabinose for 2 h to express the IbpB variant protein; washed twice with fresh LB to remove arabinose; incubated at 50°C for 10 min; and transferred back to 30°C for prolonged incubation. Cultures were taken out at the indicated time points and immediately transferred to a 24-well plate before being subjected to UV irradiation at 365 nm for 10 min using a Hoefer UVC 500 crosslinker. The cells were lysed, analyzed by 10% Tricine/SDS/PAGE, and then immunoblotted with an anti-His tag monoclonal antibody.
Semiquantification of relative substrate binding
The relative levels of photo-crosslinked substrates of each Bpa variant were calculated as the percentage of IbpB crosslinked to substrates, based on the immunoblotting results. It should be mentioned that the portions corresponding to the monomeric, dimeric, and trimeric forms of the IbpB Bpa variant were subtracted from the total crosslinked protein products during image processing using IMAGEJ software.
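A minimal sketch (not the authors' script) of how such densitometry readouts might be converted into a relative substrate-binding percentage; the band intensities below are hypothetical.

```python
# Minimal sketch: relative substrate binding from hypothetical densitometry values.
# total_lane: integrated density of all IbpB-containing bands in the lane;
# monomer:    density of un-crosslinked IbpB;
# oligomers:  densities of crosslinked IbpB homo-oligomer bands (dimer, trimer).
def relative_substrate_binding(total_lane, monomer, oligomers):
    """Percent of the IbpB signal found in substrate-crosslinked products."""
    crosslinked_to_substrates = total_lane - monomer - sum(oligomers)
    return 100.0 * crosslinked_to_substrates / total_lane

# Hypothetical ImageJ readouts for one Bpa variant before and after heat shock
print(relative_substrate_binding(1000.0, 700.0, [120.0, 60.0]))  # -> 12.0 (before)
print(relative_substrate_binding(1000.0, 400.0, [150.0, 80.0]))  # -> 37.0 (after)
```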
Results
Incorporation of the unnatural amino acid Bpa into the N-terminal arm of IbpB at five selected individual positions
The interaction between IbpB and its substrate proteins in E. coli cells at 30°C and 50°C has been investigated by utilizing the genetically incorporated photo-crosslinker Bpa [23]. Given that the substrate protein binding of IbpB was shown to be enhanced upon temperature elevation, it is of interest to determine whether a decrease in growth temperature will conversely weaken such interactions. To address this point, five individual residue positions (Phe-4, Leu-6, Trp-13, Ala-20, and Phe-32, as shown in Fig. 1A) from the N-terminal arm of IbpB were selected for Bpa incorporation, because these IbpB Bpa variants have been shown to participate primarily in homo-oligomerization at 30°C and to switch to substrate binding at 50°C [23], making it easy to compare the changes in substrate binding of IbpB when the temperature rises from 30°C to 50°C and then falls back to 30°C. To avoid the interference of endogenous IbpB, the E. coli BW25113-ΔibpB strain was used to express the Bpa variants. In addition, these Bpa variants were all expressed with a tag of six histidine residues added at the C terminus for the convenience of western immunoblot analysis. As displayed in Fig. 1B, the expression of the five IbpB variants cannot be detected unless Bpa is added, while the expression of wild-type IbpB is independent of Bpa, confirming the successful incorporation of the unnatural amino acid Bpa at the selected positions.
The chaperone-like activity of IbpB is barely affected by incorporation of Bpa at the five selected individual residue positions
To confirm that the insertion of Bpa at the selected residue positions does not affect the known chaperone function of IbpB, we first determined the chaperone-like activity of the five Bpa variants of IbpB by measuring their abilities to suppress heat-induced aggregation of the whole cell extract, prior to performing in vivo photo-crosslinking. Wild-type IbpB or its Bpa variants were overexpressed in the E. coli BW25113-ΔibpB strain cultured at 30°C, and then the whole cell extract was isolated and incubated at 50°C for 1 h. After centrifugation, the soluble proteins and insoluble proteins were analyzed by Coomassie blue staining. For the ΔibpB cells, only about half of the cell extract proteins remained soluble upon heat treatment (Fig. 2A, lane 3), while almost all the cell extract proteins were kept in the soluble state when wild-type IbpB was overexpressed (Fig. 2A, lane 6). This could be due to the high expression level of the wild-type IbpB protein, which protected the cell extract proteins from aggregation, so that insoluble proteins could not be detected in the pellet fraction. The protein level of the Bpa variants in the cell extract varied with the Bpa incorporation site and is much lower than that of the wild-type IbpB. However, most of the cell extract proteins were still maintained in the supernatant fraction (Fig. 2A, lanes 8, 11, 14, 17, 20). We further compared the relative chaperone-like activity of the Bpa variants by using semiquantitative analysis. Despite the relatively low level of protein expression, all the Bpa variants exhibited over 75% chaperone-like activity (Fig. 2B), indicating that the incorporation of Bpa has a negligible effect on the molecular chaperone function of IbpB.
Substrate protein binding of IbpB in living cells is reversible upon short-time exposure to 50°C
Given that the chaperone-like activity of IbpB has been shown to be maximal at 50°C but almost undetectable at 30°C [22], we selected 50°C as the heat shock temperature and 30°C as the recovery temperature. The E. coli cells expressing IbpB Bpa variants precultured at 30°C were subjected to heat shock at 50°C for 10 min and then transferred back to 30°C for prolonged incubation. The cell cultures were sampled at different time points (i.e., before heat shock, immediately after heat shock, and 1, 2, and 4 h after returning to 30°C) and subsequently subjected to photo-crosslinking analysis. We observed that, after the heat shock treatment, the interaction of Bpa at the tested residues with cellular proteins, as reflected by the photo-crosslinked products (excluding crosslinked homo-oligomers), was substantially decreased upon returning to incubation at 30°C for varying lengths of time, and this decrease in the in vivo photo-crosslinked bound proteins showed different patterns depending on the location of the residue.
Fig. 3. Characterization of the IbpB-substrate interaction in living cells during recovery from heat shock. Escherichia coli ΔibpB cells were grown at 30°C in the presence of 0.02% arabinose to mid-exponential phase, centrifuged at 2504 g for 1 min, and resuspended and washed in fresh LB medium. The obtained cell cultures were exposed to heat shock at 50°C for 10 min, cooled down on ice for 30 s, and returned to incubation at 30°C. The cell cultures were sampled at different time points, i.e., before heat shock, right after heat shock, and 1, 2, and 4 h after returning to 30°C. All the samples were exposed to UV light irradiation for 10 min, resolved by SDS/PAGE and immunoblotted with an anti-His tag antibody. (A-E) Western blotting analysis of the crosslinking of F4Bpa, L6Bpa, W13Bpa, A20Bpa, and F32Bpa with cellular proteins, and semiquantification of the relative substrate-binding levels for each of the five Bpa variants of IbpB, respectively. Each experiment was repeated three times. Semiquantitative analysis data are presented as mean ± SEM.
Specifically, the photo-crosslinked products of F4Bpa did not decrease during the first 1 and 2 h of recovery and decreased only after 4 h (Fig. 3A). In contrast, the photo-crosslinked products of the other four Bpa variants gradually decreased at the indicated times upon recovery from the heat shock treatment (Fig. 3B-E). A 4-h incubation at 30°C enabled all five IbpB Bpa variants to display almost the same crosslinking profile as that of the cells without heat shock. It should also be mentioned that we removed the arabinose after the induction of the Bpa variants, avoiding interference from newly synthesized variants in the crosslinking results during the prolonged incubation at 30°C. Together, these data suggest that the substrate protein binding of IbpB in living cells is reversible upon short-time exposure to 50°C, implying that the substrate proteins bound to IbpB upon heat shock stress are released under nonstress conditions.
Discussion
It has been found that IbpB is able to bind a wide spectrum of natural substrate proteins at 50°C in living cells [23,24]. Although it is believed that the substrate proteins bound to sHSPs are released after the heat shock condition is removed, this interaction between sHSPs and their substrates during the recovery stage has rarely been examined in vivo. Here, by using Bpa-mediated photo-crosslinking, we characterized the interaction between IbpB and its endogenous substrates in living cells. Our data suggest that the substrate protein binding of IbpB in vivo is reversible upon short-time exposure to 50°C, providing evidence that decreasing temperature leads IbpB to gradually hand off its bound aggregation-prone and partially unfolded proteins. In vitro studies demonstrated that IbpB-bound proteins are stabilized in a conformation from which they can subsequently be released and specifically refolded by the DnaK-DnaJ-GrpE chaperones [15,16]. However, it remains unknown how the release of substrates takes place in vivo. Further research is needed to reveal how the ATP-dependent chaperones participate in this process.
IbpB, and sHSPs in general, function as robust molecular chaperones acting upon a large diversity of substrate proteins in living cells growing under fluctuating conditions [23,27], which makes sHSP-substrate interactions complex and involves multiple sites on the sHSP. It has been demonstrated that the substrate-binding residues of IbpB are located predominantly in the N-terminal arm [23]. Here, we chose a total of five IbpB Bpa variants, with Bpa incorporated in the N-terminal arm, to investigate the IbpB-substrate interaction. We found a decrease in crosslinked cellular proteins for all tested IbpB Bpa variants when they were returned to further incubation at 30°C after 50°C heat stress. Fu et al. proposed, according to their in vivo photo-crosslinking data, that there are three types of substrate-binding residues in IbpB: type I and type II residues activated at low and normal temperatures, respectively, and type III residues that mediate oligomerization at low temperature but switch to substrate binding at heat shock temperature [23]. All three types of substrate-binding residues are distributed in the N-terminal arm of IbpB. Here, we did not choose the type I residues, as they are capable of mediating substrate binding at 30°C, which would make it inconvenient to characterize the interaction dynamics of IbpB-substrate from 30°C to 50°C and back to 30°C. In contrast, four type II residues (Phe-4, Leu-6, Trp-13, and Ala-20) and one type III residue (Phe-32), which primarily mediate self-oligomerization at 30°C and switch to substrate binding at 50°C, were subjected to in vivo photo-crosslinking analysis. Our results showed that, unlike F4Bpa, the four other Bpa variants (L6Bpa, W13Bpa, A20Bpa, and F32Bpa) displayed a similar pattern in the decrease of photo-crosslinked bound proteins during the recovery stage. This means that the rate of substrate protein release of these IbpB Bpa variants does not depend on the type of residue at which Bpa is incorporated, suggesting that the interaction dynamics between IbpB and its substrates is complicated. We speculate that the positions of Bpa incorporation and the properties of the bound substrates may account for the differences in the substrate release rate. It seems that some substrates remain bound longer at certain residues of the IbpB N-terminal arm. Since all the Bpa variants have been checked for function (as shown in Fig. 2), the complexity of the interaction between IbpB and its substrates is not due to the effect of Bpa incorporation.
Furthermore, this is the first study to investigate the substrate release of sHSPs in living cells by using photo-crosslinking, which is an effective approach for capturing highly dynamic and even short-lived interactions in vivo. Our study is therefore also of methodological interest for investigating the interaction dynamics between molecular chaperones and their substrates in living cells.
E. coli strains and Prof Peter Schultz for the aminoacyl-tRNA synthetase and tRNA expression vectors. | 4,006.8 | 2020-08-19T00:00:00.000 | [
"Biology"
] |
On the Drinfeld-Sokolov Hierarchies of D type
We extend the notion of pseudo-differential operators that are used to represent the Gelfand-Dickey hierarchies, and obtain a similar representation for the full Drinfeld-Sokolov hierarchies of $D_n$ type. By using such pseudo-differential operators we introduce the tau functions of these bi-Hamiltonian hierarchies, and prove that these hierarchies are equivalent to the integrable hierarchies defined by Date-Jimbo-Kashiwara-Miwa and Kac-Wakimoto from the basic representation of the Kac-Moody algebra $D_n^{(1)}$.
Introduction
For every affine Lie algebra g and a choice of a vertex c m of the extended Dynkin diagram, Drinfeld and Sokolov constructed in [6] a hierarchy of integrable systems which generalizes the prototypical soliton equation-the Korteweg-de Vries equation. This construction provides a big class of integrable hierarchies that are important in different areas of mathematical physics. In particular, the integrable hierarchies that are associated to the affine Lie algebras of A-D-E type are shown to be closely related to 2d topological field theory and Gromov-Witten invariants, see [7,10,12,13,20,21,27,32] and references therein. In establishing such relationships the tau functions of the integrable hierarchies play a crucial role, they correspond to the partition functions of topological field theory models. The unknown functions of the hierarchy are related to some special two point correlation functions.
The definition of the tau functions for the Drinfeld-Sokolov hierarchies and their generalizations [22] was given in [23,15] by using the dressing operators of the hierarchies. In terms of the tau functions, such integrable hierarchies and their generalizations are represented as systems of Hirota bilinear equations; they can also be constructed by using the representation-theoretic approach to soliton equations developed by Date, Jimbo, Kashiwara, Miwa [4,2] and by Kac, Wakimoto [26,25]. In this approach the systems of Hirota bilinear equations are constructed from an integrable highest weight representation of g and its vertex operator realization, and the tau functions that satisfy these equations are elements of the orbit of the highest weight vector of the representation under the action of the affine Lie group. Note that tau functions of the Drinfeld-Sokolov hierarchies are also defined in [11,31] via a certain symmetry (called tau-symmetry in [10]) of the Hamiltonian densities of the hierarchies represented in forms of modified KdV type. Here the unknown functions of the Drinfeld-Sokolov hierarchies in forms of modified KdV type and in those of KdV type are related by Miura-type transformations.
For general Drinfeld-Sokolov hierarchies there are no canonical choices for their unknown functions, and the definition of the tau functions given in [11,15,23] in terms of the dressing operators is in a certain sense implicit. However, in the particular case when the affine Lie algebra is A^(1)_n, the Drinfeld-Sokolov hierarchy coincides with the Gelfand-Dickey hierarchy [17], and the unknown functions can be taken as the coefficients of a differential operator L = D^{n+1} + u_n D^{n-1} + ⋯ .
The advantage of such a choice of the Hamiltonian densities lies in the fact that they satisfy the tau-symmetry condition (k/(n+1)) ∂h_k/∂t_l = (l/(n+1)) ∂h_l/∂t_k.
Due to this property of the densities, the tau function of the Gelfand-Dickey hierarchy can be introduced, as was done in [4,10,14,31], by the equations ∂²log τ/(∂x ∂t_k) = (k/(n+1)) h_k, k ∈ Z_+ \ (n+1)Z_+. (1.2) Note that the Hamiltonians for the general Drinfeld-Sokolov hierarchies are also given in [6]; however, the densities given there do not satisfy the tau-symmetry condition. In order to fulfill such a condition these densities should be modified by adding certain terms which are total x-derivatives of differential polynomials of the unknown functions.
In the above formalism of the Drinfeld-Sokolov hierarchy associated to the affine Lie algebra A^(1)_n, the integrable hierarchy and the relation of its unknown functions with the tau function are given relatively explicitly. The purpose of the present paper is to give a similar representation for the Drinfeld-Sokolov hierarchy associated to the affine Lie algebra D^(1)_n and the vertex c_0 of the Dynkin diagram. Such a formalism is helpful for obtaining a clear picture of the relation of integrable systems with Gromov-Witten invariants and topological field theory models associated to A-D-E singularities [12,13,16,19,20,21,33]. In fact, Drinfeld and Sokolov already represented in [6] part of the integrable hierarchy in terms of a pseudo-differential operator of the form (1.3), where the functions u_1, ..., u_{n-1}, u_n = ρ serve as the unknown functions of the hierarchy. The integrable systems of the hierarchy can be labeled by the elements of a chosen basis {Λ_j ∈ g^j, Γ_j ∈ g^{j(n-1)} | j ∈ 2Z+1} (1.4) of the principal Heisenberg subalgebra of D^(1)_n (see Sec. 4 for the definition of these symbols). Denote by P the fractional power L^{1/(2n-2)} of L, which is a pseudo-differential operator in the usual sense; then the part of the integrable hierarchy that corresponds to the elements Λ_j can be represented [6] by Lax equations in terms of L and P (cf. (3.3) below). The other part, corresponding to the elements Γ_j, cannot be represented in this way by using only the pseudo-differential operators L, P.
Inspired by the Lax pair representations of the dispersionless integrable hierarchy that appear in 2D topological field theory [7,30], we attempt to represent the flows corresponding to the elements Γ_j by a square root Q of L. However, such an operator is not a pseudo-differential operator in the usual sense, because it contains infinitely many terms with positive powers of D, so within the usual framework one cannot even compute the square of Q. We note that in the dispersionless case, with D replaced by its symbol p, one can define the square of Q, and define the dispersionless hierarchy by using L, P and Q.
We show in this paper that there exists a new kind of pseudo-differential operators which are allowed to contain infinitely many terms with positive powers of D, such as Q, so that we can define the square root of the pseudo-differential operator L in the space of such operators. Then, by using the pseudo-differential operators L and Q, we obtain the Lax pair representation of the remaining part of the integrable hierarchy and define its tau function in the way one does for the Gelfand-Dickey hierarchy; see Theorem 4.11. By using this new kind of pseudo-differential operators, we also find a Lax pair representation of the two-component BKP hierarchy (see [3], cf. [29]). We show that the Drinfeld-Sokolov hierarchy of D_n type is the (2n−2, 2)-reduction of the two-component BKP hierarchy [2]. In this way we also prove that the square root of the tau function satisfies the Hirota bilinear equations that are constructed in [2,26] from the principal vertex operator realization of the basic representation of the affine Lie algebra D^(1)_n; see (5.21), (5.22) and Theorem 5.2. In order to obtain the above mentioned results, we first extend, in Section 2, the usual definition of the ring of pseudo-differential operators. Then in Section 3 we define a hierarchy of integrable systems and its tau function by using the pseudo-differential operator L of the form (1.3) and its fractional powers P, Q. In Section 4 we show that the constructed hierarchy coincides with the Drinfeld-Sokolov hierarchy associated to the affine Lie algebra D^(1)_n and the vertex c_0 of its Dynkin diagram. In Section 5 we give a Lax pair representation of the two-component BKP hierarchy, its tau function, and its (2n−2, 2)-reductions. In the final section we give some concluding remarks.
Pseudo-differential operators
In this section we generalize the concept of pseudo-differential operators and list some useful properties of them.
Definitions
Let A be a commutative ring with unity, and let D : A → A be a derivation. The algebra of pseudo-differential operators over A is defined in the usual way, as the set of formal series in D with coefficients in A that contain only finitely many terms with positive powers of D. This is a complete topological ring, whose topological basis is given by the filtration by the order of the operators. The product of two pseudo-differential operators is defined by the generalized Leibniz rule (2.1). It is easy to see that for every s ∈ Z, the coefficient of D^s in (2.1) is a finite sum of elements of A, so the above product is well defined.
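As an illustrative sketch (not part of the paper), the product rule (2.1) amounts to the generalized Leibniz rule D^m ∘ a = Σ_{k≥0} C(m,k) D^k(a) D^{m−k}; it can be implemented symbolically as below, where the dict-of-powers representation and the truncation order are choices made only for this example.

```python
# Minimal sketch: product of pseudo-differential operators of the first type,
# truncated at a fixed minimal power of D.  An operator is a dict {power: coefficient},
# and the generalized Leibniz rule  D^m a = sum_k C(m,k) a^(k) D^(m-k)  is used.
from math import factorial
import sympy as sp

x = sp.symbols('x')

def binom(m, k):
    """Generalized binomial coefficient C(m, k) for integer (possibly negative) m."""
    num = 1
    for i in range(k):
        num *= (m - i)
    return sp.Rational(num, factorial(k))

def pdo_mul(A, B, min_order=-5):
    """Product A*B, keeping only powers of D that are >= min_order."""
    C = {}
    for m, a in A.items():
        for n, b in B.items():
            k = 0
            while m + n - k >= min_order:
                term = binom(m, k) * a * sp.diff(b, x, k)
                C[m + n - k] = sp.simplify(C.get(m + n - k, 0) + term)
                k += 1
    return {p: c for p, c in C.items() if c != 0}

# Example: D^{-1} * u = u D^{-1} - u' D^{-2} + u'' D^{-3} - ...
u = sp.Function('u')(x)
print(pdo_mul({-1: sp.Integer(1)}, {0: u}))
```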
In our formalism of the Drinfeld-Sokolov hierarchy of D_n type below, one needs not only operators in D_-, but also operators of a larger abelian group D of formal series that may contain infinitely many terms with positive powers of D. However, it is impossible to extend the product (2.1) to all of D, because when expanding the product of two elements of D one meets sums of infinitely many elements of A, which are not well defined unless A possesses a certain topology. Now we assume that A carries a gradation such that A is topologically complete w.r.t. the induced decreasing filtration, the integer k being called the degree of a homogeneous element of A. We denote by D_k the subgroup consisting of all homogeneous pseudo-differential operators of degree k; then the abelian group D decomposes accordingly into its homogeneous components. We introduce the subgroups D_- and D_+ of D. It is easy to see that D_+ is topologically complete w.r.t. the induced filtration. For any A ∈ D_k and B ∈ D_l, it is easy to see that their product defined by (2.1) belongs to D_{k+l}, so we can extend this product to D_+ in such a way that D_+ becomes a ring.
Definition 2.1 Elements of D_- (resp. D_+) are called pseudo-differential operators of the first type (resp. the second type) over A. The intersection of D_- and D_+ in D is denoted by D_b, and its elements are called bounded pseudo-differential operators.
Sometimes to indicate the algebra A and the derivation D, we will use the notations D ± (A, D) instead of D ± .
The general form of an element A ∈ D is given in (2.2). The following lemma is obvious.
This lemma has a graphic interpretation, from which it is easy to see the following alternative expressions for the elements A ∈ D_±.
i) If A ∈ D_+, then there exist m ∈ Z and a_{i,j} ∈ A_j such that A can be written in two equivalent forms; ii) if A ∈ D_-, then there exist n ∈ Z and a_{i,j} ∈ A_j such that A can be written analogously. Properties of pseudo-differential operators of the first type are well known. Similarly to the operators in D_-, we can define the adjoint operator, the residue, the positive part and the negative part of a pseudo-differential operator of the second type. Let A ∈ D_+ be given by (2.2); it is easy to see that A^*, A_+, A_- ∈ D_+ and res A ∈ A. In particular, if A ∈ D_±, then A_∓ ∈ D_b.
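Continuing the illustrative dict-of-powers sketch from above (again not part of the paper), the adjoint, residue and positive part of an operator of the first type can be computed as follows; the truncation order and the toy operator are hypothetical.

```python
# Minimal sketch: adjoint A* = sum_i (-1)^i D^i a_i, residue, and positive part,
# for an operator represented as a dict {power of D: sympy coefficient}.
from math import factorial
import sympy as sp

x = sp.symbols('x')

def binom(m, k):
    num = 1
    for i in range(k):
        num *= (m - i)
    return sp.Rational(num, factorial(k))

def pdo_adjoint(A, min_order=-5):
    """Adjoint, expanded by the generalized Leibniz rule, keeping powers >= min_order."""
    out = {}
    for m, a in A.items():
        k = 0
        while m - k >= min_order:
            term = sp.Integer(-1) ** m * binom(m, k) * sp.diff(a, x, k)
            out[m - k] = sp.simplify(out.get(m - k, 0) + term)
            k += 1
    return {p: c for p, c in out.items() if c != 0}

def pdo_res(A):
    """Residue: the coefficient of D^{-1}."""
    return A.get(-1, sp.Integer(0))

def pdo_plus(A):
    """Positive (differential operator) part: keep powers >= 0."""
    return {p: c for p, c in A.items() if p >= 0}

u = sp.Function('u')(x)
A = {1: u, -1: u}                 # toy operator u D + u D^{-1}
print(pdo_adjoint(A))             # {1: -u, 0: -u', -1: -u, -2: u', ...}
print(pdo_res(A), pdo_plus(A))    # u, {1: u}
```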
An operator A ∈ D_± is called a differential operator if its negative part A_- vanishes. Note that every differential operator in D_- is of finite order, while the ones in D_+ may not be. The differential operators in D_± form subrings of D_± respectively, and they can act on A in the obvious way. Given a differential operator A ∈ D_±, we denote by A(f) the action of A on f ∈ A.
Let us introduce some other notation to be used later. Elements of the quotient space F = A/D(A) are called local functionals, and they are represented in the form ∫ f dx. We then define the pairing ⟨ , ⟩ on each of four spaces of pseudo-differential operators; it is easy to see that this pairing is symmetric and is nondegenerate on each of these spaces.
Properties of pseudo-differential operators
Now we present some useful properties of pseudo-differential operators.
Proof The case of D_- is well known, so we only prove the D_+ case. Suppose C = [A, B] ≠ 0. We take the expansions of A and C such that neither A_{k_a,a} nor C_{l_c,c} vanishes; then the relevant coefficient reads m A_{k_a,a}^{m−1} C_{l_c,c} + ⋯, where ⋯ denotes terms of higher degree in A. This contradicts [A^m, B] = 0. The lemma is proved.
Let ρ ∈ A be an invertible element, and consider the operator Q given in (2.8), in which Q_+ is a differential operator in D_+. Such an operator Q is invertible, and its inverse Q^{-1} is again a differential operator in D_+.
Lemma 2.4
Let Q ∈ D_+ be given by (2.8); then D can be uniquely expressed in the form (2.10). Moreover, m h_m − res Q^m ∈ D(A) for every m ≥ 1.
Proof The first assertion follows from a simple induction. To prove the second one, we use the first assertion and write (Q^m)_+ = Σ_{i≥0} a_{m,i} Q^{-i} with a_{m,i} ∈ A, where a'_{m,i} = D(a_{m,i}). By combining the resulting formulae, and noting that (Q^m)_+ = Q^m when m ≤ 0, the claim follows by comparing coefficients. The lemma is proved.
Lemma 2.5 Let A be a pseudo-differential operator in D_+, and let ρ ∈ A be an invertible element. Then there exists a unique pseudo-differential operator B ∈ D_+.
Proof Without loss of generality, we can assume A to be homogeneous, from which we derive the first part of the lemma; hence B^* ± B = 0, due to the uniqueness in the first part. The lemma is proved.
3 An integrable hierarchy represented by pseudo-differential operators
In this section we construct a hierarchy of evolutionary partial differential equations starting from a pseudo-differential operator L. This hierarchy possesses a bihamiltonian structure which coincides with that of the Drinfeld-Sokolov hierarchy of D_n type; moreover, it possesses a tau function.
Construction of the hierarchy
Let M be an open ball of dimension n with coordinates (u_1, u_2, ..., u_n). We define the algebra A of differential polynomials on M in the standard way. There is a gradation on A with respect to which A is topologically complete. We introduce a derivation D of degree one over A, with u_{i,0} = u_i. Now let us construct the algebras D_± starting from A and D as we did in the last section.
Let L be the pseudo-differential operator given in (1.3). Obviously L belongs to D_b = D_- ∩ D_+ and satisfies L^* = D L D^{-1}. Here we re-denote the coordinate u_n by ρ, and will use this notation frequently in what follows.
Firstly, we regard L as an element of D − , then by using properties of the usual pseudo-differential operators we have the following lemma.
Lemma 3.1 There exists a unique pseudo-differential operator P ∈ D_- of the form (3.1) such that P^{2n−2} = L. Moreover, the operator P satisfies [P, L] = 0 and the constraint (3.2).
In [4], Date, Jimbo, Kashiwara and Miwa proved the following lemma.
Lemma 3.2 ([4])
The constraint (3.2) on an operator P of the form (3.1) is equivalent to the condition that for every k ∈ Z^odd_+ the free term of (P^k)_+ vanishes, i.e. (P^k)_+(1) = 0.
The above two lemmas imply that the equations (3.3) are well defined, and they give evolutionary partial differential equations for u_1, ..., u_n. In particular, D = d/dx with x = t_1. The flows in (3.3) first appeared in [6] as part of the Drinfeld-Sokolov hierarchy of D_n type.
Note that the Drinfeld-Sokolov hierarchy of D_n type contains n series of commuting flows, but only n − 1 series of flows are given in (3.3), so in this sense the equations (3.3) do not form a complete integrable hierarchy. One main result of the present paper is that the n-th series of flows of the Drinfeld-Sokolov hierarchy of D_n type can be represented by the square root of L regarded as an element of D_+.
Lemma 3.3 There exists a unique pseudo-differential operator Q ∈ D_+ of the form (3.5). Here the Q_m are homogeneous differential operators in D_b of degree 2 m, and satisfy Q^*_m = Q_m. Moreover, the operator Q satisfies (3.6) and (3.7).
Proof By substituting (1.3) and (3.5) into DQ^2 = DL and comparing the homogeneous terms, we obtain (3.8). Here the A_m are differential operators depending on L, Q_0, Q_1, ..., Q_{m−1} and satisfy A_m + A^*_m = 0. Then, according to Lemma 2.5, the Q_m can be determined by induction, and they satisfy Q^*_m = Q_m. The symmetry property (3.6) is trivial. To show (3.7), we consider the free terms on both sides of (3.8). The lemma is proved.
According to Lemmas 2.3 and 3.3, the evolutionary equations (3.9) are well defined. When k = 1 we obtain ∂ρ/∂t_1 = (1/2) D L_+(1); this flow is linearly independent of the flows ∂ρ/∂t_{2i−1} (1 ≤ i ≤ n − 1), so from the bihamiltonian recursion relation (see below) we see that the equations given in (3.3) are linearly independent of those defined in (3.9).
Proof The commutativity of these flows follows from the equivalent representations of (3.3) and (3.9), which can be verified as in Lemma 2.3. The theorem is proved.
The dispersionless limit of the flows (3.9) was first given by Takasaki in [30], but the dispersionful version was not given there. Following [30], we call the flows (3.3) and (3.9) the positive and the negative flows, respectively. The above theorem shows that the negative and the positive flows together form an integrable hierarchy. We will show that it is equivalent to the Drinfeld-Sokolov hierarchy of D_n type.
Bihamiltonian structure and tau structure
In this subsection we show that the hierarchy (3.3), (3.9) carries a bihamiltonian structure, and the densities of the Hamiltonians can be chosen to satisfy the tau symmetry condition. We then define the tau function of the hierarchy by using this tau symmetry following the approach of [10].
Given a local functional F = ∫ f dx ∈ A/D(A), we define its variational derivative w.r.t. L to be an element X = δF/δL ∈ D such that δF = ⟨X, δL⟩, X = X^*. (3.14) The existence of such an element can be verified by an explicit construction in which v_0 = ρ^2 and v_1, ..., v_{n−1} are determined by representing the operator L in an appropriate form. Note that the new coordinates v_1, ..., v_{n−1} are related to u_1, ..., u_{n−1} by a Miura-type transformation, and the functions ṽ_i determined by the condition L + L^* = 0 are linear functions of the derivatives of v_1, ..., v_{n−1}.
On the other hand, the variational derivative X defined in (3.14) is determined only up to the addition of a kernel part Z. The following compatible Poisson brackets are given in Proposition 8.3 of [6] (see also [9]) for the bihamiltonian structure of the Drinfeld-Sokolov hierarchy of D_n type, where F and G are two arbitrary local functionals. Note that in these formulae for the Poisson brackets the second component in the pairing ⟨ , ⟩ belongs to D_b for any Y ∈ D, so from the definition of ⟨ , ⟩ given in (2.7) we see that the first component X is not restricted to the space D_+ or D_-. One can show by a direct computation that the definition of these Poisson brackets is independent of the choice of the kernel parts of X and Y, so they are well defined.
Theorem 3.5 The hierarchy (3.3), (3.9) has the bihamiltonian representation (3.18), (3.19). Here F ∈ F is any local functional, and H_k denote the Hamiltonians of the hierarchy.
Proof Let us start with the computation of the variational derivatives of the Hamiltonians H_k. By using the identity P^{2n−2} = L (see Lemma 3.1) and the symmetry of the pairing ⟨ , ⟩, we can compute δH_k/δL. To show (3.18), we first note, due to Lemma 3.2, that the free terms of the relevant operators vanish. On the other hand, by using the commutativity between L and P (see Lemma 3.1) we can also represent ∂L/∂t_k in another form. Now the equivalence of the flows (3.3) with (3.18) follows from the above identities. By using the property (3.6) of the operator Q we know that for any k ∈ Z^odd_+ the free term of Q^k vanishes; then a similar argument as above leads to the equivalence of the flows (3.9) with (3.19). The theorem is proved.
Hence the chosen h_{α,p} give a tau structure, in the sense of [10], of the bihamiltonian structure of the integrable hierarchy (3.3), (3.9). This tau structure defines the tau function τ of the integrable hierarchy by (3.24).
Drinfeld-Sokolov hierarchies and pseudo-differential operators
In this section we first recall some facts about the Drinfeld-Sokolov hierarchies associated to untwisted affine Lie algebras; see [6] for details. Then we consider the Drinfeld-Sokolov hierarchy of D_n type and identify it with the hierarchy (3.3), (3.9) constructed in the last section. Two gradations of g will be used: i) the principal gradation, and ii) the homogeneous/standard gradation.
Definition of the Drinfeld-Sokolov hierarchies
We will use notations such as g_{<0} = ⊕_{i<0} g_i below.
In [6] Drinfeld and Sokolov assigned a standard gradation to each chosen vertex c_i of the Dynkin diagram of g and used this gradation to construct an integrable hierarchy. As mentioned at the beginning of the present paper, we only consider the case in which the vertex is chosen to be c_0, the vertex added to the Dynkin diagram of the corresponding simple Lie algebra. Integrable hierarchies associated to different choices of vertex are related by Miura-type transformations.
Denote by E (resp. E_+) the set of exponents (resp. positive exponents) of g. Let s be the Heisenberg subalgebra associated to the principal gradation, which is defined to be the centralizer of Λ = Σ_{i=0}^{n} e_i. One can fix a basis λ_j ∈ g^j (j ∈ E) of s.
Let C^∞(R, W) be the set of smooth functions from R to a linear space W. We consider operators L of the form (4.1), where D = d/dx and x is the coordinate on R.
Proposition 4.1 ([6])
There exists an element U ∈ C^∞(R, g_{<0}) such that the operator L_0 = e^{−ad U} L has the form (4.2), and for different choices of U, the map H differs by the addition of the total derivative of a differential polynomial of q.
We fix a U as given in the above proposition, and introduce a map ϕ (see (4.3)). The Drinfeld-Sokolov hierarchy is a hierarchy of partial differential equations on gauge equivalence classes of L defined by (4.4). Here ϕ(λ_j)_+ stands for the projection of ϕ(λ_j) onto C^∞(R, g_{>0}), and the gauge transformations of L are given by (4.5). For the classical untwisted affine Lie algebras, Drinfeld and Sokolov proposed a way to represent their hierarchies via certain scalar pseudo-differential operators over A, the algebra of gauge-invariant differential polynomials of q in (4.1). They gave such representations for the full hierarchies of A^(1)_n type by using pseudo-differential operators of the first type. However, for the D^(1)_n case, as pointed out by Drinfeld and Sokolov, the pseudo-differential operators in D_- are not enough to represent the full hierarchy. Our purpose in introducing the space D_+ in the present paper is to represent the full Drinfeld-Sokolov hierarchy of D_n type in terms of scalar pseudo-differential operators.
The following lemma tells how to construct scalar pseudo-differential operators from the operator L. ii) For any upper triangular matrix Ñ ∈ R^{m×m} with unity on the main diagonal, one has ∆(Ñ R Ñ^{−1}) = ∆(R).
Positive flows of the Drinfeld-Sokolov hierarchy of D n type
In this subsection, we recall the approach given in [6] that represents part of the Drinfeld-Sokolov hierarchy of D_n type as the positive flows (3.3) by using pseudo-differential operators.
We first recall the matrix realization of the affine Lie algebra g of D^(1)_n type [25,6]. Denote by e_{i,j} the 2n×2n matrix that takes value 1 at the (i, j) entry and zero elsewhere; then one can realize g by choosing the Weyl generators as follows:
e_0 = (λ/2)(e_{1,2n−1} + e_{2,2n}), e_n = (1/2)(e_{n+1,n−1} + e_{n+2,n}), (4.6)
f_0 = (2/λ)(e_{2n−1,1} + e_{2n,2}), f_n = 2(e_{n−1,n+1} + e_{n,n+2}), (4.8)
together with the remaining generators given in (4.7), (4.9) and (4.10). In particular, the associated simple Lie algebra g_0 of D_n type is realized in terms of a matrix S, where A^T = (a_{l+1−j,k+1−i}) for any k × l matrix A = (a_{ij}). Note that in this realization the algebra g is just the corresponding matrix loop algebra. The set of exponents of g is given by the positive integers congruent to 1, 3, ..., 2n−3 or (n−1)′ modulo 2n−2, where (n−1)′ indicates that when n is even the multiplicity of each exponent congruent to n − 1 modulo 2n − 2 is 2. A basis of the principal Heisenberg subalgebra s can be chosen as in (1.4), where Λ = Σ_{i=0}^{n} e_i, and
Γ = κ e_{n,1} − (1/2) e_{n+1,1} − (λ/2) e_{n,2n} + (λ/4) e_{n+1,2n} + (−1)^n e_{2n,n+1} − (1/2) e_{2n,n} − (λ/2) e_{1,n+1} + (λ/4) e_{1,n} (4.12)
with κ = 1 when n is even and κ = √−1 when n is odd. Here Λ_j and Γ_j are defined to be the j-th powers of Λ and Γ, respectively, for j > 0, while for j < 0 they are defined by (4.13). We now rewrite the Drinfeld-Sokolov hierarchy of D_n type (4.4) in the form (4.14), and we call the two families of flows in (4.14) the positive and the negative flows of the Drinfeld-Sokolov hierarchy of D_n type, respectively. We will show that these flows coincide with the positive and negative flows (3.3) and (3.9) defined by the pseudo-differential operator L.
The coefficients q 1 , . . . , q n−1 and ρ are gauge invariant differential polynomials of q that appears in (4.1). They serve as coordinates of the orbit space of gauge transformations, and we will use them as unknown functions of the Drinfeld-Sokolov hierarchy.
In the notation of Lemma 4.3, we let R = D − and denote by R + the subalgebra of R consisting of differential operators. We define an R + -module structure on V by D · α = L can α, α ∈ V. Denote L = −∆(R), where ∆ is the operation defined in Lemma 4.3; then L * = −L by using (4.11) and the third part of Lemma 4.3. It is easy to see that L has the form (3.13). This observation gives a Miura-type transformation between u 1 , . . . , u n and q 1 , . . . , q n−1 , ρ, so the algebra A defined above coincides with the one given in the last section. Moreover, the second part of Lemma 4.3 implies that L is invariant w.r.t. the gauge transformations (4.5), thus the Drinfeld-Sokolov hierarchy can be represented by the operator L, or equivalently by L = D −1 L.
Note that the operator L does not belong to R + ; since V is only an R + -module, L cannot act on V , and the first part of Lemma 4.3 cannot be applied directly. To resolve this problem, Drinfeld and Sokolov decomposed V into two subspaces such that D − can act on one of them, so that the first part of Lemma 4.3 can be applied. In this way, the positive flows of the Drinfeld-Sokolov hierarchy (4.14) are represented in the form (3.3) as the positive flows given by the pseudo-differential operator L of the form (1.3).
In the matrix realization of g, the elements Λ and Γ are 2n × 2n matrices with entries in C[λ], so they can act on the space V . One can verify that the following decomposition holds true. Denote T = e U , where U is the matrix appearing in Proposition 4.1 with L = L can ; then we also have the analogous decomposition. Since the operator λ −1 Λ 2n−2 is the identity operator when restricted to V 1 , let P = ϕ(λ −1 Λ 2n−2 ) with ϕ being defined in (4.3); then P is the projection from V to V ′ 1 . We denote the projection of α ∈ V in V ′ 1 by α ′ = Pα, and define the action accordingly. Here the operator L 0 defined in (4.2) now reads with f k , g k ∈ A and the negative powers of Λ, Γ defined in (4.13). It follows from [L 0 , Λ] = 0 that [P, L can ] = 0; then by acting with P on both sides of (4.17) one obtains the corresponding equalities. By using the second equality, one can represent the positive flows ∂ ∂t k of the Drinfeld-Sokolov hierarchy (4.14) in the form (3.3). We will explain in the next subsection that the negative flows of (4.14) can be represented as (3.9).
The first equality of the above lemma gives the following result.
Negative flows of the Drinfeld-Sokolov hierarchy of D n type
In the last subsection, the pseudo-differential operator representation for the positive flows of the Drinfeld-Sokolov hierarchy of D n type is obtained by introducing a D − -module structure on the space V ′ 1 and using Lemma 4.3, as was done in [6]. In order to obtain a similar representation for the negative flows, we try to assign a D + -module structure to V ′ 2 . However, it seems that there is no such structure on V ′ 2 , so we first extend the space V ′ 2 to a larger space V ′′ 2 that admits a D + -module structure; then we employ Lemma 4.3 and obtain the pseudo-differential operator representation for the negative flows of the Drinfeld-Sokolov hierarchy of D n type.
Recall that V 2 as an A − -module is spanned by the following two vectors: The action of Γ restricted to V 2 satisfies Γ 2 = λ, so we introduce Γ −1 = λ −1 Γ, see (4.13). It is easy to see that every vector α ∈ V 2 can be uniquely expressed in the form This observation shows that the space V 2 is in fact a rank-one free module of the following algebra This is the algebra of "pseudo-differential operators of the first type" (see Sec. 2.1) over the algebra A with the derivation "D" being the following trivial map Γ : A → A, f → 0, which surely gives a derivation of degree one over A.
By regarding another trivial map as a derivation of degree one, one can also define the algebra of "pseudo-differential operators of the second type" with respect to the algebra A and the derivation Γ −1 , which we denote by D + (A, Γ −1 ). We denote by V̂ 2 the rank-one free module of the algebra D + (A, Γ −1 ) with generator ψ̂ 1 , which has a linear topology induced from that of D + (A, Γ −1 ). It is easy to see that the algebra D − (A, Γ) is a subalgebra of D + (A, Γ −1 ) (see Lemma 2.2), hence V 2 is a subspace of V̂ 2 .
To define the space V ′′ 2 , we need to extend the space V to certain spacê V that involvesV 2 as a subspace. Since the space V is defined to be (A − ) 2n , in which the algebra A − = A((λ −1 )) can also be defined as D − (A, λ) with λ being the trivial derivation, we similarly extend the space V tô The spaceV has a linear topology induced from that ofÂ. It is easy to see that the linear transformations Λ, Γ, T = e U : V → V can be extended naturally toV . Then the expression is also convergent inV according to its topology, hence the spaceV 2 is indeed a subspace ofV . Now let us introduce another subspace ofV : then V ′ 2 is a subspace of V ′′ 2 . As in the previous subsection we define a map with ϕ defined in (4.3). Then we have the following commutative diagram We also denote the composition of Q and the inclusion V ′ 2 ֒→ V ′′ 2 by Q, and write α ′′ = Qα for any vector α ∈ V . Proof To see that T −1 ψ ′′ 1 is another generator besidesψ 1 , we only need to show that these two vectors are related by the action of a unit of the algebra D + (A, Γ −1 ).
Recall T = e U , in which according to the present matrix realization the element U given in Proposition 4.1 has the form U 0 + O(λ −1 ) with U 0 being a strictly upper triangular matrix, and that the vectorψ 1 defined in (4.22) can be represented asψ By using the general form (4.23) of elements of V 2 and the identity Γ 2j+1 | V 2 = λ j Γ, one can represent T −1 ψ ′′ 1 ∈ V 2 in the following form: (4.25) Obviously the element 1 Aiming at a D + -module structure on the space V ′′ 2 such that the action of D coincides with (4.16) when restricted to the subspace V ′ 2 , we need to define the action of (L can ) i (i ∈ Z) on the space V ′′ 2 . Note that the operator L 0 : V → V given in (4.19) can be extended toV , we denote its restriction on the spaceV 2 byL 0 , which readŝ Here g 1 ∈ A is invertible as indicated in [6], so the operatorL 0 is invertible onV 2 , and its inverse is given bŷ One can expand the right hand side and obtainL in which A 00 = c 000 = g −1 10 with g 10 being the projection of g 1 onto A 0 . Note that g 10 /ρ is a positive constant, where ρ appears in the definition (4.15) of L can , and we have normalized Γ such that this constant is 1. Since A rs are differential operators of degree s, i.e., A rs (A d ) ⊂ A d+s , then by using the expressions (4.26) and (4.24) one can verify that the action ofL −1 0 onV 2 is well defined. Also note that the imageL −1 To go forward, we need to present another expression for vectors inV 2 .
Lemma 4.7 Every vector α ∈V 2 can be uniquely expressed in the form Proof According to Lemma 4.6, we suppose α ∈V 2 has the form where · · · stands for the terms of the form (4.27). Let us proceed to prove the lemma by induction on the lower bound k of the index j. (4.28) From the expansion (4.26) it follows that where A (l) 00 = g −l 10 , hence by using (4.25) we havê where c r,s ,c r,s ∈ A s . The above computation represents the action of the operatorL −l 0 (4.29) on certain vector inV 2 by an element in D + (A, Γ −1 ). By using equation (4.30), we can eliminate the term a m+k,k Γ m+k T −1 ψ ′′ 1 in (4.28) and arrive at Then by induction on the upper bound of the index i appearing in the first summation we have which shows that the lower bound of the index j has increased by one. The lemma is proved. Now we are ready to introduce a D + -module structure on the space V ′′ 2 by defining the action which extends the action (4.16) on V ′ 2 to an action on V ′′ 2 . Then Lemma 4.7 is equivalent to the following theorem. Let us apply Lemma 4.3 to the algebra R = D + and the module V ′′ 2 . By acting the projection operator Q to both sides of (4.17), we have hence L · ψ ′′ 1 = λ ψ ′′ 1 , where L = −D −1 ∆(R) as given before. According to Lemma 3.3 we introduce a pseudo-differential operator Q ∈ D + such that L = Q 2 , and consider the action of Q i on V ′′ 2 for any integer i.
Lemma 4.9
For any integer i the following equality holds true: Proof We only need to prove the case i = 1. Since V ′′ 2 is a free D + -module, there exists an element A ∈ D + such that ϕ(Γ)ψ ′′ 1 = A · ψ ′′ 1 . Note that [ϕ(Γ), L can ] = 0, so the action of ϕ(Γ) on V ′′ 2 commutes with D ∈ D + , hence By using the freeness of V ′′ 2 , we have A 2 = L = Q 2 . It follows that A = ±Q. To show A = Q, we only need to compare their leading terms. Equation (4.30) leads to which implies that the leading term of res A is g 10 . On the other hand g 10 takes the same sign with ρ = res Q, thus A = Q. The lemma is proved.
By using Lemmas 2.4 and 4.9, one can prove the following proposition. The argument is almost the same with the one for Proposition 4.5 in [6], so we omit the details here. Proof It is shown in [6] that the Drinfeld-Sokolov hierarchy of D n type has a bihamiltonian structure given by the two Poisson brackets (3.16), (3.17). For the flow (4.4) corresponding to the element λ j , the Hamiltonian with respect to the second Poisson bracket is given by where H is given in (4.2) and (· | ·) is the trace form defined by We choose a basis (1.4) of the Heisenberg subalgebra s. as Note that where k, l run over all odd integers, hence by using (4.19) we have They are the Hamiltonians for the positive and negative flows of the Drinfeld-Sokolov hierarchy (4.14) w.r.t. the second Poisson bracket (3.17).
According to Propositions 4.5, 4.10 and Theorem 3.5, these Hamiltonians satisfy where H k ,Ĥ k are the Hamiltonians of the integrable hierarchy (3.3), (3.9) with respect to the second Poisson bracket (3.17). So the Drinfeld-Sokolov hierarchy of D n type (4.14) and the integrable hierarchy (3.3),(3.9) coincide. The theorem is proved.
The two-component BKP hierarchy and its reductions
In this section we represent the two-component BKP hierarchy that is introduced in [3] via pseudo-differential operators, and show that the hierarchy (3.3), (3.9) is just a reduction, which was considered in [2], of the twocomponent BKP hierarchy.
The two-component BKP hierarchy
LetM be an infinite-dimensional manifold with local coordinates andà be the algebra of differential polynomials onM : As in Section 3, we assign a gradation onà such thatà is topologically complete. Define a derivation D by then the algebrasD ± = D ± (Ã, D) of pseudo-differential operators can be constructed as we did in Section 2.1.
4)
and that for any k ∈ Z odd + (P k ) + (1) = 0, (Q k ) + (1) = 0. (5.5) Proof The expression of P is obvious. To show that of Q, we consider its negative part: The symmetry property (5.4) is obvious, which implies (5.5). The lemma is proved.
We define the following evolutionary equations: where k ∈ Z odd + . According to (5.3) and (5.5), it is easy to see that these flows are well defined, and they yield the Lax equations of the form (3.11), (3.12). By a straightforward calculation one can verify the commutativity of these flows, hence they form an integrable hierarchy indeed. We will show that this hierarchy possesses tau functions, and that these tau functions satisfy the same bilinear equations of the two-component BKP hierarchy defined in [3].
First, let us introduce two wave functions w = w(t,t; z) = Φe ξ(t;z) ,ŵ =ŵ(t,t; z) = Ψe xz+ξ(t;−z −1 ) , (5.8) where x = t 1 , the function ξ is defined by and for any i ∈ Z the action of D i on e xz is set to be D i e xz = z i e xz .
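For concreteness, a minimal reconstruction of the phase function entering (5.8)–(5.9) is recorded below; the summation over odd times is the standard convention consistent with the flows indexed by k ∈ Z_+^odd used above, and should be read as an assumption rather than a verbatim copy of (5.9).

```latex
\xi(t;z) \;=\; \sum_{k \in \mathbb{Z}^{\mathrm{odd}}_{+}} t_k z^k,
\qquad t = (t_1, t_3, t_5, \dots).
```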
It is easy to see that P w = zw, Qŵ = z −1ŵ , and that the flows (5.6), (5.7) are equivalent to the following equations Here (Q k ) − w is understood as (Q k ) − Φ e ξ(t;z) , and (Q k ) −ŵ is defined similarly. The following theorem can be proved as it was done for the KP hierarchy given in [4,5].
Theorem 5.2 The hierarchy (5.6), (5.7) is equivalent to the following bilinear equation res z z −1 w(t,t; z)w(t ′ ,t ′ ; −z) = res z z −1ŵ (t,t; z)ŵ(t ′ ,t ′ ; −z). (5.12) Here and below the residue of a Laurent series is defined as Let ω be the following 1-form By using the equations (5.6) and (5.7), one can show that ω is closed, so given any solution of the hierarchy (5.6), (5.7) there exists a function τ (t,t) such that ω = d (2 ∂ x log τ ) . (5.14) Moreover, one can fix a tau function such that the wave functions can be written as Introduce a vertex operator X as then the bilinear equation (5.12) reads which is equivalent to =res z z −1 X(t; z)τ (t,t)X(t ′ ; −z)τ (t ′ ,t ′ ). (5.17) Recall that in [3,24], Date, Jimbo, Kashiwara and Miwa defined the two-component BKP hierarchy from a two-component neutral free fermions realization of the basic representation of an infinite-dimensional Lie algebra g ∞ , which corresponds to the Dynkin diagram of D ∞ type [25]. The tau function of their hierarchy satisfies the bilinear equations (5.17) and defines two wave functions as (5.15), (5.16), so the equations (5.6), (5.7) give a representation of the two-component BKP hierarchy in terms of pseudodifferential operators.
Remark 5.3
In [29], Shiota gave a Lax pair representation of the twocomponent BKP hierarchy as follows. Let φ (ν) (ν = 0, 1) be the following pseudo-differential operators of the first type then the two-component BKP hierarchy can be defined as (5.18) Here on the right hand side of the second equation it means the action of the differential operator P (1−ν) k + on the coefficients of φ (ν) . It is easy to Introduce the wave functions with ξ given in (5.9). The hierarchy (5.18) was shown [29] equivalent to the following bilinear equation By comparing the bilinear equations (5.19) and (5.12), it is easy to see that Shiota's wave functions are related to ours by from which one can obtain the relations between a
Reductions of the two-component BKP hierarchy
Given an integer n ≥ 3, the condition P 2n−2 = Q 2 defines a differential ideal ofÃ, which is denoted by I. It is easy to see that this ideal is preserved by the flows (5.6), (5.7), so we obtain a reduction of the two-component BKP hierarchy.
Let L = P 2n−2 = Q 2 , then according to Lemma 5.1 the operator L has the form (1.3). Hence the algebra A defined in Section 3.1 is isomorphic toÃ/I, and the reduced hierarchy is an integrable hierarchy over A. It is easy to see that the derivatives of L with respect to t k ,t k are exactly given by (3.3), (3.9). Namely the hierarchy (3.3),(3.9) is the reduction of the two-component BKP hierarchy under the condition P 2n−2 = Q 2 .
From the definition (3.24) and (5.14) of the tau functionsτ and τ it follows that they are related by τ 2 =τ . (5.22)
Conclusion
We have represented the full Drinfeld-Sokolov hierarchy of D n type as Lax equations of pseudo-differential operators, in analogy with the Gelfand-Dickey hierarchies. We have also given a Lax pair representation for the two-component BKP hierarchy, and shown that the Drinfeld-Sokolov hierarchy of D n type is the (2n − 2, 2)-reduction of the two-component BKP hierarchy. The key step in our approach is to introduce the concept of pseudo-differential operators of the second type, which are defined over a topologically complete differential algebra, so that they may contain infinitely many terms with positive powers of the derivation D.
Our Lax pair representations of the Drinfeld-Sokolov hierarchy of D n type and the two-component BKP hierarchy are convenient for further studies. In a subsequent publication [34], we will show that the two-component BKP hierarchy carries a bihamiltonian structure, which is expected to correspond to an infinite-dimensional Frobenius manifold (c.f. [1]).
Note that the bilinear equation (5.17) corresponds to the basic representation of the affine Lie algebra D ′ ∞ in the notation of [24]. It is shown in [28] that the (2n − 2, 2)-reduction (5.21) corresponds to the basic representation of the affine Lie algebra D (1) n . Then according to [25,26], the bilinear equation (5.21) is equivalent to the Kac-Wakimoto hierarchy constructed from the principal vertex operator realization of the basic representation of the affine Lie algebra D (1) n [26]. By comparing the boson-fermion correspondences, one can obtain the relation between the time variables t, t̂ of the Drinfeld-Sokolov hierarchy of D n type (or the Date-Jimbo-Kashiwara-Miwa hierarchy) and the time variables s j (j ∈ E + ) of the Kac-Wakimoto hierarchy: t k = √2 s k , t̂ k = √(2n − 2) s k(n−1) ′ .
In [21], Givental and Milanov proved that the total descendant potential for semisimple Frobenius manifolds associated to a simple singularity satisfies a certain hierarchy of Hirota bilinear/quadratic equations, see also [18,19,20]. Such a hierarchy of bilinear equation is shown to be equivalent to the corresponding Kac-Wakimoto hierarchy constructed from the principal vertex operator realization of the basic representation of the untwisted affine Lie algebra [21,33,16]. So we arrive at the following result. v) the Givental-Milanov hierarchy for the simple singularity of D n type.
Remark 6.2
The equivalence between the hierarchies ii) and iv) was also contained in a general result obtained by Hollowood and Miramontes in [23].
Note that the bihamiltonian structure (3.16), (3.17) is of topological type [8,10,9], its leading term comes from the Frobenius manifold associated to the Coxeter group of D n type. In [10] a hierarchy of dispersionless bihamiltonian integrable systems is associated to any semisimple Frobenius manifold, such an integrable hierarchy is called the Principal Hierarchy. It is also shown that there is a so called topological deformation of the Principal Hierarchy which satisfies the condition that its Virasoro symmetries can be represented by the action of some linear operators, called the Virasoro operators, on the tau function of the hierarchy. We expect that the Drinfeld-Sokolov hierarchy associated to D (1) n and the c 0 vertex of its Dynkin diagram coincides, after a rescaling of the time variables, with the topological deformation of the Principal Hierarchy of the Frobenius manifold that is associated to the Coxeter group of type D n . We will investigate this aspect of the hierarchy in a subsequent publication. | 11,377.8 | 2009-12-30T00:00:00.000 | [
"Mathematics"
] |
Improved Finite-Control-Set Model Predictive Control for Cascaded H-Bridge Inverters
In multilevel cascaded H-bridge (CHB) inverters, the number of voltage vectors generated by the inverter quickly increases with increasing voltage level. However, because the sampling period is short, it is difficult to consider all the vectors as the voltage level increases. This paper proposes a model predictive control algorithm with reduced computational complexity and fast dynamic response for CHB inverters. The proposed method presents a robust approach for interpreting the next step as a steady or transient state by comparing the optimal voltage vector at the present step with the reference voltage vector at the next step. During steady state, only the optimal vector at the present step and its adjacent vectors are considered as the candidate-vector subset. For the transient state, this paper defines a new candidate-vector subset, which contains more vectors than the steady-state subset to achieve a fast dynamic response, yet fewer than all the possible vectors generated by the CHB inverter, to keep the calculation simple. In conclusion, the proposed method can reduce the computational complexity without significantly deteriorating the dynamic responses.
Introduction
Multilevel converters generally consist of power switch elements and DC voltage sources such as independent sources or capacitors, which enables the synthesization of output voltage waveforms with several steps.These multilevel converters have been widely used in medium-voltage high-power industry because of their superior performance that includes a higher quality of output waveforms and a lower switching frequency compared to the two-level converters [1][2][3][4][5][6].The multilevel converters are commonly classified into Neutral Point Clamped (NPC), Flying Capacitor (FC), and Cascaded H-bridge (CHB) converters [7][8][9].Among them, the CHB types, which are based on a modular structure with isolated dc sources, do not require an increased number of clamping diodes and capacitors, each of which needs voltage balance control, as the voltage level is increased.In comparison with the other multilevel converters, because of the advantage of the CHB converters' modularity, they are relatively simple to construct with high level of voltages [10][11][12].Regarding the control issues of the CHB converters, linear proportional and integral controllers combined with multicarrier-based pulse-width modulation (PWM) schemes, such as the level-shift and phase-shift methods, have been extensively studied [13,14].Besides the traditional linear control algorithms along with the PWM methods, studies on the finite-control-set model predictive control (FCS-MPC) methods, that can be simply implemented by removing the PWM block, have been properly conducted, given that the computational capability of the controllers has been improved as a result of the recent development of microprocessors [15][16][17][18].The FCS-MPC algorithm predicts all possible next step trajectories of the control targets dependent on all possible switching states which the CHB converters can produce.Those predicted future values are compared with the reference values to select an optimum switching state.This straightforward FCS-MPC approach, which takes advantage of the inherently discrete nature of the converter switching actions, has several advantages, such as fast transient response, easy addition of constraints to the controller, and simple implementation through the removal of the PWM blocks.Owing to these advantages, the FCS-MPC methods have been widely applied for controlling the NPC, FC, and CHB multilevel converters [19].The FCS-MPC method for the CHB converter [20,21] and inverter [22] calculates all the resulting voltage vectors from all the possible switching states to regulate the load currents of the converters.This basic principle of the FCS-MPC method, which predicts the next step behaviors using all possible voltage vectors, results in a problem of computational complexity for multilevel converters with a high number of voltage levels.This computational burden is a drawback for the CHB converters that are more accustomed to high voltage levels because the number of voltage vectors which the converter must generate increases quickly in proportion to the increased voltage level [20][21][22].Because of a short sampling period, it is difficult to consider every voltage vector while attempting to determine an optimal switching state in the CHB converters with a high voltage level.In addition to the load current control block, external control algorithms such as speed controls and torque controls are added in the drive systems with the CHB converters which occupies a considerable calculation amount in general [23].As a 
result, it can be necessary to reduce the calculation amount without significantly deteriorating other performances.In [22], a FCS-MPC method for CHB inverters with a reduced computational load is proposed.This method considers only seven vectors nearest to a present optimal vector to determine an optimal vector at the next step.In comparison with the conventional method, while considering all possible vectors, this approach can considerably reduce the level of computational complexity because it takes into account only seven vectors, similar to the two-level converters, regardless of the voltage level.However, when transient states occur, this method requires more steps to track the reference values by considering only the neighboring vectors, thus resulting in a slower dynamic response than the conventional method, while using all possible vectors.This paper proposes an FCS-MPC algorithm with reduced computational complexity and fast dynamic response for CHB inverters, in which different candidate vector subsets to search for an optimal voltage vector are developed for the respective steady and transient states.The proposed method presents a robust approach for describing a next step as a steady or transient state by comparing an optimal voltage vector at a present step and a reference voltage vector at the next step.Because the proposed determinant algorithm is based on voltage vectors determined in the αβ plane, the distinction between steady state and transient state is free from switching ripple components and noise.During steady state, only an optimal vector at a present step and its adjacent vectors are considered as a candidate-vector subset.On the other hand, this paper defines a new candidate vector subset for the transient state, which consists of more vectors than those in the subset used for the steady state for fast dynamic speed, but of less vectors than all the possible vectors generated by the CHB inverter for calculation simplicity.The proposed method determines an optimal vector during the transient state by utilizing the new subset, which results in an excellent transient response performance.The proposed method, compared to the conventional technique using all possible vectors, can reduce the computational complexity without significantly deteriorating the dynamic responses.The effectiveness of the proposed method is verified via simulations and experimental results with a five-level CHB inverter, and is subsequently compared with that of the conventional method to demonstrate the merits of the proposed method.This paper is structured as follows: the principle of the FCS-MPC method for CHB inverters along with previous studies is described in Section 2. In Section 3, the proposed FCS-MPC method for reduced computational load and fast dynamics is presented.The simulations and experimental results of the conventional and proposed methods using a five-level CHB inverter are shown and compared in Section 4. The conclusions of this paper are then given in Section 5.
Conventional Finite-Control-Set Model Predictive Control Methods for Multilevel CHB Inverters
The multilevel CHB inverter is based on a series-connected structure, as illustrated in Figure 1a. A single module used in the CHB inverter in Figure 1b consists of a single-phase two-level full-bridge inverter with four switches. Each module is supplied with an individual DC source of equal magnitude. Since each module consists of a two-level full-bridge inverter, the output voltage of the module, v acx (x = 1, 2, ..., N), can be one of {V dc , 0, −V dc }. Consequently, the output voltage of each phase, v iN (i = a, b, c), can be expressed as the sum of the output voltages of its modules.
where N is the number of series-connected H-bridge inverter cells per phase. The total number of voltage vectors including the redundant ones, L v , of the multilevel CHB inverter is related to the number of modules as given in Equation (2). The total number of switching states, S N , expressed in terms of L v , increases dramatically with the voltage level, as given in Equation (3). Using the Kirchhoff voltage law in Figure 1, the three-phase inverter voltage (v aN , v bN , v cN ) can be calculated as in Equation (4), where R L , L L , and i io (i = a, b, c) denote the load resistance, load inductance, and load current of phases a, b, and c, respectively. Assuming three-phase balanced sinusoidal load currents, the common-mode voltage (v nN ) in Equation (4) can be calculated as in Equation (5). The relationship of the inverter output, load, and common-mode voltages can then be written as in Equation (6), and the three-phase load voltage can be simplified with vector notation as in Equation (7). In this paper, the simple Euler method is used to develop the discrete-time model, and di o /dt can be expressed as in Equation (8), using a constant sampling period T s .
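As a small numeric illustration of Equations (1) and (4)–(6), the sketch below sums the cell voltages of one phase and removes the common-mode component to obtain the load voltages. The averaging form of the common-mode voltage assumed here is the standard one for a balanced three-phase R-L load; this is a sketch under that assumption, not a verbatim transcription of the paper's equations, and the variable names are illustrative.

```python
def load_voltages(v_aN, v_bN, v_cN):
    """Common-mode voltage (assumed v_nN = (v_aN + v_bN + v_cN)/3, cf. Eq. (5))
    and the resulting phase load voltages of Eq. (6)."""
    v_nN = (v_aN + v_bN + v_cN) / 3.0
    return v_aN - v_nN, v_bN - v_nN, v_cN - v_nN, v_nN

# Eq. (1): each phase output is the sum of its H-bridge cell voltages.
Vdc = 40.0                     # per-cell DC-link voltage used later in the experiments
cells_a = [Vdc, Vdc]           # both cells of phase a switched to +Vdc
v_aN = sum(cells_a)            # 80 V for this switching state of a five-level CHB
```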
By using the αβ transformation and substituting Equation (8), Equation (7) in the continuous abc frame can be represented in the discrete αβ frame as Equation (9), where A = (L − RT s )/L and B = T s /L. In addition, i oα [(k + 1)T s ], i oβ [(k + 1)T s ] and i oα [kT s ], i oβ [kT s ] represent the α and β components of the one-step-future and present current vectors, respectively. Furthermore, v oα [kT s ] and v oβ [kT s ] represent the α and β components of the voltage that can be applied at the present step.
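A minimal sketch of the one-step prediction of Equation (9), using the coefficients A = (L − RT s )/L and B = T s /L stated above; the parameter values follow the simulation settings reported later (R = 20 Ω, L = 15 mH, T s = 200 µs) and are only illustrative.

```python
R, L, Ts = 20.0, 15e-3, 200e-6   # load resistance [ohm], inductance [H], sampling period [s]
A = (L - R * Ts) / L             # = 1 - R*Ts/L, discrete-model coefficient of Eq. (9)
B = Ts / L

def predict_current(i_alpha, i_beta, v_alpha, v_beta):
    """Forward-Euler prediction of the alpha-beta load current at step k+1
    from the measured current and the inverter voltage applied at step k."""
    return A * i_alpha + B * v_alpha, A * i_beta + B * v_beta
```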
One-step future currents can be predicted from the possible voltage vectors of the CHB multilevel inverter, as shown in Equation (9). Thus, as the output voltage level of the CHB inverter increases, the total number of one-step predicted current values greatly increases. The conventional FCS-MPC method (termed MPC-conv1 in this paper) considers all predicted currents generated by all possible voltage vectors to determine an optimal voltage vector at the next step [20][21][22]. Therefore, as the level increases, the calculation complexity of the MPC-conv1 method increases. The cost function h, defined in Equation (10), is used to determine the optimal vector, i.e., the one leading to the smallest error between the reference and the predicted current values.
where, i * oα [(k + 1)T s ] and i * oβ [(k + 1)T s ] represent the α and β components of the reference current vector at the next step, respectively.
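The exhaustive MPC-conv1 search can then be sketched as follows: every candidate voltage vector is run through the prediction model and the vector with the smallest cost of Equation (10) is kept. The absolute-error form of the cost and the second-order Lagrange extrapolation used for the reference are the forms commonly adopted in FCS-MPC work and are assumed here; function and variable names are illustrative rather than the paper's notation.

```python
def extrapolate_reference(r_k, r_km1, r_km2):
    """Assumed second-order Lagrange extrapolation of the reference current
    to step k+1: r[k+1] = 3*r[k] - 3*r[k-1] + r[k-2] (cf. Eq. (11))."""
    return 3.0 * r_k - 3.0 * r_km1 + r_km2

def mpc_conv1_select(vectors, i_meas, i_ref_next, R=20.0, L=15e-3, Ts=200e-6):
    """MPC-conv1: evaluate the cost of Eq. (10) for every candidate voltage
    vector (v_alpha, v_beta) and return the minimizer."""
    A, B = (L - R * Ts) / L, Ts / L            # discrete model of Eq. (9)
    best_vec, best_cost = None, float("inf")
    for v_alpha, v_beta in vectors:            # all L_v (or N_nr) vectors
        ia_pred = A * i_meas[0] + B * v_alpha  # predicted i_alpha[k+1]
        ib_pred = A * i_meas[1] + B * v_beta   # predicted i_beta[k+1]
        cost = abs(i_ref_next[0] - ia_pred) + abs(i_ref_next[1] - ib_pred)
        if cost < best_cost:
            best_cost, best_vec = cost, (v_alpha, v_beta)
    return best_vec
```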
Using Lagrange extrapolation, the one-step-future reference current for the cost function in Equation (10) can be calculated as given in Equation (11). Figure 2 shows the voltage vector diagram of the multilevel CHB inverter with M levels, in which the number of redundant voltage vectors is indicated. It can be seen that the outer voltage vectors possess fewer redundant voltage vectors than their inner counterparts. When the voltage level of the multilevel CHB inverter increases, the number of voltage vectors and the number of redundant voltage vectors quickly increase as well. In order to reduce the redundant vector states in the CHB inverters, the voltage vector that minimizes the common-mode voltage among the numerous redundant vectors is selected [22]. The number of non-redundant voltage vectors, N nr , according to the voltage level, can be calculated as given in Equation (12). Table 1 lists the numbers of voltage vectors for the different voltage levels. Although the number of voltage vectors is reduced by eliminating the redundant vectors, as indicated in Table 1, the MPC-conv1 method considering all non-redundant voltage vectors generated by the multilevel CHB inverter still suffers from a rapidly increasing computational load as the voltage level increases. Therefore, the incorporation of a computational-complexity reduction algorithm into the MPC-conv1 method is needed for the multilevel CHB inverter. In steady state, the reference sinusoidal currents change slowly and, correspondingly, the optimal voltage vector forcing the actual load currents to track the reference currents moves smoothly without sudden transitions. This implies that the optimal voltage vectors as well as the reference current vectors rotate steadily in the αβ plane. Thus, an optimal voltage vector at the next step is located near the optimal vector at the present step. On the basis of this fact, an FCS-MPC method was proposed that selects a next-step optimal vector only among the neighboring vectors of the present optimal vector [22]. In this paper, this is referred to as the MPC-conv2 method, which uses a subset consisting of seven vectors near the present optimal voltage vector in the αβ plane in the calculation of the cost function. Compared to the MPC-conv1 method, the MPC-conv2 variant can greatly reduce the computational complexity by considering only the optimal vector of the previous step and its adjacent vectors. As a result, the total number of voltage vectors evaluated in the cost function of Equation (10) to determine an optimal vector at every step is only seven, regardless of the voltage level, which is exactly the same as in the two-level inverter. Figure 3 shows, in the αβ plane, the candidate voltage vectors used in the MPC-conv2 method in the case that the optimal voltage vector at the present step is V 21 . In addition, the MPC-conv2 method can produce the same harmonic spectrum quality of the load current waveforms as the MPC-conv1 method in steady state. However, despite the dramatically decreased calculation load, the MPC-conv2 method suffers from a slower transient response than the MPC-conv1 method. The MPC-conv2 method requires more steps than the MPC-conv1 method to follow a step-change of the reference currents when a transient state occurs. This is due to the limited candidate voltage vectors in the search process for an optimal vector.
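The following sketch enumerates the αβ voltage vectors of an M-level inverter and extracts the seven-vector neighborhood that the MPC-conv2 method searches. The per-phase level set and the amplitude-invariant Clarke transform used here are the usual conventions; under those assumptions the counts printed at the end (125 phase-level combinations collapsing to 61 distinct vectors for M = 5, with a nearest-vector spacing of 2V dc /3 ≈ 0.67V dc ) are consistent with Table 1 and Equation (12), but the helper names are illustrative rather than the paper's notation.

```python
import numpy as np

def vector_set(M, Vdc=40.0):
    """Distinct alpha-beta voltage vectors of an M-level three-phase inverter.
    Phase-level combinations differing only by a common-mode offset map onto
    the same alpha-beta point, i.e. they are redundant."""
    N = (M - 1) // 2
    keys = set()
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            for c in range(-N, N + 1):
                keys.add((2 * a - b - c, b - c))          # exact integer key
    # amplitude-invariant Clarke transform applied to the integer keys
    return np.array([[k0 * Vdc / 3.0, k1 * Vdc / np.sqrt(3.0)] for k0, k1 in keys])

def adjacent_subset(vectors, v_opt, Vdc=40.0):
    """MPC-conv2 candidate subset: the present optimal vector plus the vectors
    inside the smallest hexagon around it (vector spacing 2*Vdc/3 ~ 0.67*Vdc)."""
    d = np.hypot(vectors[:, 0] - v_opt[0], vectors[:, 1] - v_opt[1])
    return vectors[d <= 0.67 * Vdc + 1e-9]

vecs = vector_set(5)
print(len(vecs))                                  # 61 non-redundant vectors for M = 5
print(len(adjacent_subset(vecs, (0.0, 0.0))))     # 7 candidates around an interior vector
```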
Proposed Predictive Control Method for Multilevel CHB Inverters
This paper proposes an FCS-MPC algorithm with reduced computational complexity and fast dynamic response for multilevel CHB inverters to overcome the heavy computational load of the MPC-conv1 method and the slow dynamic response of the MPC-conv2 method. Figure 4 represents a block diagram of the proposed FCS-MPC method for a multilevel CHB inverter. The proposed algorithm employs different candidate-vector subsets to determine an optimal voltage vector for the steady and transient states, respectively. As a result, a distinction algorithm, which categorizes the next step as either a steady or a transient state, is developed in the proposed method. In order to distinguish more clearly between steady and transient states, the proposed method utilizes predicted voltage vectors instead of the current vectors used in conventional methods. The load dynamic equation (9) can be rewritten as Equation (13), the relationship between the voltages and the present and one-step-future load currents, where C = R − L/T s and D = L/T s . In addition, by assuming that the actual currents at the next step become equal to the reference current values when the reference voltage vector is applied at the present step, Equation (13) can be expressed as Equation (14), where v * oα [kT s ] and v * oβ [kT s ] represent the α and β components of the reference voltage vector, respectively. Equation (14) is shifted one step into the future to apply the delay compensation method, which is needed for the inevitable time delay of the controllers [24].
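A sketch of the reference-voltage computation of Equations (14) and (15), assuming C = R − L/T s and D = L/T s as stated above; the delay-compensated form applies C and D to the predicted current at step k+1 and the extrapolated reference at step k+2. The function name and default parameter values are illustrative.

```python
def reference_voltage(i_pred_next, i_ref_2ahead, R=20.0, L=15e-3, Ts=200e-6):
    """Delay-compensated reference voltage vector at step k+1 (cf. Eq. (15)):
    v*[k+1] = C*i[k+1] + D*i*[k+2], with C = R - L/Ts and D = L/Ts.
    i_pred_next  : predicted load current (alpha, beta) at step k+1
    i_ref_2ahead : reference current (alpha, beta) at step k+2
    """
    C, D = R - L / Ts, L / Ts
    return (C * i_pred_next[0] + D * i_ref_2ahead[0],
            C * i_pred_next[1] + D * i_ref_2ahead[1])
```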
The reference voltage vector obtained using Equation (15) is compared, through a cost function, with the candidate voltage vectors to determine a future optimal voltage vector that enables the future load current vector to track the reference current vector. During steady state, only the optimal vector at the present step and its adjacent vectors, as in the MPC-conv2 method, are considered as the candidate-vector subset in the proposed method. The cost function used to determine an optimal voltage vector in steady state, h steady , is defined over the seven adjacent voltage vectors as Equation (16), where v adj oα [(k + 1)T s ] and v adj oβ [(k + 1)T s ] are the α and β components of the seven adjacent one-step-future voltage vectors in the hexagon closest to the present optimal voltage vector. The proposed method uses the cost function with only the neighboring vectors of the optimal vector at the present step, as in the MPC-conv2 method, but the error terms use the voltages in Equation (16) instead of the currents in Equation (10). Thus, the proposed method yields a performance similar to that of the MPC-conv2 method in terms of the harmonic spectrum of the load currents and voltage waveforms during steady states.
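A minimal sketch of the steady-state search of Equation (16); the candidate list passed in would be the seven-vector neighborhood of the present optimal vector (for instance the adjacent_subset helper sketched earlier), and the absolute-error form of the cost is an assumption made for illustration.

```python
def h_steady(v_ref_next, candidate):
    """Voltage-error cost of Eq. (16) for one candidate vector (alpha, beta)."""
    return abs(v_ref_next[0] - candidate[0]) + abs(v_ref_next[1] - candidate[1])

def select_steady(v_ref_next, adjacent_vectors):
    """Steady-state selection: evaluate the cost only over the present optimal
    vector and its neighbors, and return the minimizer."""
    return min(adjacent_vectors, key=lambda v: h_steady(v_ref_next, v))
```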
Contrary to the steady state, the proposed method defines a new candidate vector subset for the transient state, which consists of more vectors than those in the subset used for the steady state, in order to overcome the slow transition speed of the MPC-conv2 method. This paper compares the optimal voltage vector at the present step with the reference voltage vector at the next step to identify the next step as a transient state. By using voltage vectors instead of current vectors, larger identification values can be achieved to interpret the next step as either a transient or a steady state. As shown in Figure 5, if the one-step-future reference voltage vector calculated in Equation (15) is located inside the smallest hexagon centered at the present optimal voltage vector in the αβ plane, the developed algorithm interprets the next step as a steady-state condition. On the other hand, if the reference voltage vector at the next step is positioned outside the hexagon, the algorithm instead declares a transient state, as shown in Figure 5. Therefore, the distance between two nearest voltage vectors in the αβ plane is used to identify the next step as a transient or a steady state. A determinant factor, D tran , indicating that a transient condition occurs at the next step, is defined in Equation (17), where v opt oα [kT s ] and v opt oβ [kT s ] represent the α and β components of the optimal voltage vector at the present step, respectively. In addition, the value 0.67V dc indicates the distance between the two closest voltage vectors in the αβ plane. It should be noted that, because the proposed determinant algorithm is based on voltage vectors determined in the αβ plane, the distinction between steady state and transient state is free from switching ripple components and noise, which highlights the robustness of the algorithm. In the case that the proposed algorithm recognizes a transient state based on the determinant factor, it uses a new candidate vector subset, which consists of more voltage vectors than those in the subset used for the steady state, to achieve a much faster response than the MPC-conv2 method, and almost the same fast dynamic response as the MPC-conv1 method using all the voltage vectors. Furthermore, in comparison with the MPC-conv1 method, the proposed method employs fewer vectors than all possible vectors generated by the multilevel CHB inverter, for calculation simplicity. Thus, the proposed method can offer almost the same fast transient response as the MPC-conv1 method using all the possible vectors, but with reduced calculation complexity in transient states as well as in steady states. The proposed method determines an optimal vector during transient states by utilizing the new subset, which results in an excellent transient response performance. Figure 6 shows the candidate voltage vectors considered by the proposed method during the transient states. The candidate voltage vectors in the proposed method are chosen in such a way that the difference between a reference voltage vector and its closest candidate vector is less than 0.67V dc , regardless of where the reference vector is located in the αβ plane. The shape of the candidate set in Figure 6 was obtained by keeping all the voltage vectors in alternating rows and rejecting the vectors in the rows in between. This is because we wished to select the candidate vectors in such a way that an optimal vector selected by the proposed method would be as close as possible to an optimal vector selected by the MPC-conv1 method using all
the vectors, so that a reduced set of voltage vectors can be used without significantly deteriorating the dynamic speed. This subset of candidate vectors for transient conditions can produce almost the same fast dynamic response as the MPC-conv1 method using all the voltage vectors. In the transient state, a cost function h tran is used, which considers the new candidate voltage vectors shown in Figure 6 instead of the adjacent vectors of the present optimal voltage vector used in the MPC-conv2 method or all the possible voltage vectors used in the MPC-conv1 method. The total number of candidate voltage vectors used for the transient state in the proposed method is nearly half of all the possible voltage vectors, as evident in Figure 6. The cost function for the transient state, h tran , is defined in Equation (18), where v tran oα [(k + 1)T s ] and v tran oβ [(k + 1)T s ] represent the α and β components of the one-step-future voltage vectors shown in Figure 6, respectively. As a result, the proposed method offers much faster dynamic responses than the MPC-conv2 method under transient conditions. Furthermore, in comparison with the MPC-conv1 method, the proposed method under transient conditions produces almost the same transient speed with lower calculation complexity (reduced by nearly half).
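The steady/transient decision and the resulting choice of candidate subset can be sketched as follows. The exact expression of Equation (17) is not repeated here; the sketch uses the Euclidean distance between the next-step reference voltage vector and the present optimal vector, compared against the 0.67V dc vector spacing, which reproduces the behaviour described above. The transient_set argument stands for the Figure 6 subset (roughly half of all vectors) and is assumed to be precomputed.

```python
def d_tran(v_ref_next, v_opt_now):
    """Determinant factor (cf. Eq. (17)): distance in the alpha-beta plane between
    the next-step reference voltage and the presently applied optimal vector."""
    return ((v_ref_next[0] - v_opt_now[0]) ** 2 +
            (v_ref_next[1] - v_opt_now[1]) ** 2) ** 0.5

def pick_candidates(v_ref_next, v_opt_now, steady_set, transient_set, Vdc=40.0):
    """Return the candidate subset for the next step: the seven-vector
    neighborhood in steady state, the enlarged Figure-6 subset in transient."""
    if d_tran(v_ref_next, v_opt_now) > 0.67 * Vdc:
        return transient_set          # transient detected
    return steady_set                 # steady state
```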
Figure 7 shows how the proposed method and the MPC-conv1 method select optimal voltage vectors under transient conditions. Assume that both methods select V 28 as the optimal vector at the kth step because it is the closest one to the reference vector at the present step, v * oαβ [kT s ]. In the case that the reference vector v * oαβ [(k + 1)T s ] moves at the (k + 1)th step because of a transient state, as shown in Figure 7a, both the proposed and the MPC-conv1 methods select the vector V 18 as the optimal vector, which makes the two methods produce the same dynamic response. If the reference vector v * oαβ [(k + 1)T s ] moves at the (k + 1)th step as shown in Figure 7b, the MPC-conv1 method, considering all the voltage vectors, chooses V 50 as the optimal vector at the (k + 1)th step, which is the nearest one to the reference voltage. On the other hand, the proposed method selects the vector V 49 as the optimal vector at the (k + 1)th step, because the reduced set of the proposed method does not include the voltage vector V 50 . Although the proposed method yields a slightly slower dynamic response than the MPC-conv1 method because of the reduced set of candidate voltage vectors, it should be noted that the two vectors V 49 and V 50 , selected by the proposed and the MPC-conv1 methods, respectively, are adjacent vectors with a difference of 0.67V dc . Thus, the selection processes of the optimal voltage vectors by the two methods are similar after the transitions, and, therefore, the proposed method can reduce the computational complexity of the algorithm without significantly reducing the dynamic speed.
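Putting the pieces together, one sampling period of the proposed controller (the flow of Figure 8) might be organized as below. This is a hypothetical consolidation written for illustration: the candidate subsets are passed in precomputed, the Lagrange extrapolation is reused a second time to obtain the reference at step k+2, and the absolute-error voltage costs stand in for Equations (16) and (18).

```python
def control_step(i_meas, i_ref_hist, v_opt_now, steady_set, transient_set,
                 R=20.0, L=15e-3, Ts=200e-6, Vdc=40.0):
    """One sampling period of the proposed method (outline of Figure 8).
    i_meas        : measured load current (alpha, beta) at step k
    i_ref_hist    : reference currents at steps k, k-1, k-2, each (alpha, beta)
    v_opt_now     : optimal voltage vector (alpha, beta) being applied at step k
    steady_set    : seven-vector neighborhood of v_opt_now
    transient_set : enlarged candidate subset of Figure 6
    Returns the optimal voltage vector to apply at step k+1."""
    A, B = (L - R * Ts) / L, Ts / L          # discrete model of Eq. (9)
    C, D = R - L / Ts, L / Ts                # coefficients of Eqs. (13)-(15)

    # delay compensation: predict i[k+1] under the vector already being applied
    i_next = (A * i_meas[0] + B * v_opt_now[0],
              A * i_meas[1] + B * v_opt_now[1])

    # Lagrange extrapolation of the reference current to steps k+1 and k+2
    def extrap(r0, r1, r2):
        return tuple(3 * a - 3 * b + c for a, b, c in zip(r0, r1, r2))
    i_ref_k1 = extrap(i_ref_hist[0], i_ref_hist[1], i_ref_hist[2])
    i_ref_k2 = extrap(i_ref_k1, i_ref_hist[0], i_ref_hist[1])

    # reference voltage vector at step k+1 (Eq. (15))
    v_ref = (C * i_next[0] + D * i_ref_k2[0],
             C * i_next[1] + D * i_ref_k2[1])

    # steady/transient decision via the determinant factor (Eq. (17))
    dist = ((v_ref[0] - v_opt_now[0]) ** 2 + (v_ref[1] - v_opt_now[1]) ** 2) ** 0.5
    candidates = transient_set if dist > 0.67 * Vdc else steady_set

    # voltage-error cost (Eqs. (16)/(18)) over the chosen subset
    return min(candidates, key=lambda v: abs(v_ref[0] - v[0]) + abs(v_ref[1] - v[1]))
```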
Simulation and Experimental Results
The developed MPC method was tested via computer simulations using a five-level CHB inverter, which consists of two cells with separate DC power sources (V dc = 40 V).A sampling period, T s , of 200 µs and an R-L load of (R = 20 Ω and L = 15 mH) were used in the simulation.Simulations with the two conventional methods, the MPC-conv1 and MPC-conv2 methods, were carried out for comparison purpose with the proposed method.Figures 9 and 10 show the three-phase load currents and the Fast Fourier Transform (FFT) analysis of the a-phase load current during steady states, obtained using the two conventional methods and the proposed method.The three methods show the same performance under steady-state conditions.Moreover, the total harmonic distortion (THD) values obtained by all three methods are the same, as shown in Figure 10.It can be seen that the proposed and the MPC-conv2 methods, both of which use only neighboring voltage vectors of a present optimal vector, produce the same quality in terms of load current waveforms and frequency spectrum as the MPC-conv1 method that utilizes all the possible voltage vectors.Because the sampling frequency is much faster than the frequency of the reference voltage, the trajectory of the optimal voltage vector changes very slightly during one sampling period in steady state.Therefore, considering only the adjacent voltage vectors of a present optimal vector is enough to determine a next-step optimal vector.Figure 11 shows the simulated waveforms of the a-phase reference current, a-phase actual current, and a determinant value D tran under a transient-state condition, such as a magnitude change of the reference currents from −3 A to 1.5 A, a frequency change of the reference currents from 50 Hz to 75 Hz, and a load change from 20 Ω to 10 Ω in the proposed method.During the step-changes in the reference current magnitude and the load resistance in Figure 11a,c, it can be observed that the determinant value D tran is kept below the threshold value to interpret a next step as a steady state before the transient conditions occur.However, the value quickly increases in both plots under transient conditions.Once the transient phase has lapsed in both plots, the determinant value D tran decreases to a value lower than the threshold value.On the other hand, it can be seen from Figure 10b that the value D tran remains lower than the threshold value, irrespective of the step-change in frequency of the reference currents.This is because the frequency change produces only rotational speeds of the reference current and voltage vectors without changes in magnitude.Therefore, similar to the steady states, the step-change in the frequency of the reference currents can be quickly followed by considering only the adjacent vectors of an optimal voltage vector.The determinant value D tran can then be utilized to accurately identify a transient state with recognizable values between steady states and transient states.12. Obviously, the MPC-conv2 method, which considers only seven vectors inside the smallest hexagon near the present optimal vector for the sake of reduced computational load, produces a slow transient response when a transient state occurs.
In contrast, the actual load current generated by the MPC-conv1 method, at the expense of the high computational load associated with considering all vectors, tracks its reference value faster than with the MPC-conv2 method. It is evident from Figure 12 that the proposed method, which reduces the number of candidate vectors in comparison with the MPC-conv1 method, can force the actual load current to follow its reference value as fast as the MPC-conv1 method. In addition, the inverter-phase voltage waveform obtained using the proposed method is the same as that of the MPC-conv1 method, despite the reduced number of candidate voltage vectors.
Figure 13 shows the simulation results of the a-phase load current (i oa ), a-phase reference current (i * oa ), and a-phase voltage (v an ) obtained using the three methods during the transient state of a frequency change in the reference currents.Unlike the magnitude step-change of the reference currents in Figure 12, all three methods produced the same results in the frequency step-change of the reference currents as shown in Figure 13.In Figure 11b, only the frequency change of the reference currents does not result in an abrupt increase of the current and voltage vectors before and after the step-change in the αβ plane.Thus, fast dynamics can be achieved by considering only the adjacent vectors of an optimal vector at a present step.Figure 14 shows the simulation results of the a-phase load current (i oa ), a-phase reference current (i * oa ), and a-phase voltage (v an ) obtained using the three methods during the transient state of a load change.Like the magnitude step-change of the reference currents in Figure 12, different transient responses are exhibited in Figure 14.It is clearly seen in Figure 14 that the proposed and MPC-conv1 methods produced the same fast transient responses as well as the same inverter voltage waveforms, whereas the MPC-conv2 method, with only the neighboring voltage vectors, produces a slow transient response.Therefore, it can be inferred that the proposed method can achieve a fast dynamic speed, similar to the MPC-conv1 method, despite a reduced number, by approximately a half, of candidate voltage vectors.The dynamic speeds of the actual load currents obtained from all three methods under transient conditions are compared in Figure 15, during step-change in the reference current magnitude from −3 A to −1.5 A, reference current frequency from 50 Hz to 75 Hz, and load change from 20 Ω to 10 Ω.In the case of the magnitude change of the reference current from −3 A to −1.5 A, as shown in Figure 15a, the MPC-conv2 method experiences the transient period for ~0.8 ms, which corresponds to four sampling periods.On the other hand, the transient period of the proposed method, which is almost the same as that of the MPC-conv1 method, is 0.2 ms, corresponding to one sampling period.Figure 15b shows the step-change of the reference current magnitude from −3 A to 1.5 A, which is a larger current change than in Figure 15a.The MPC-conv2 method undergoes the transient phase for 2.4 ms, whereas the proposed and the MPC-conv1 methods complete the transient period after approximately 0.6 ms.Therefore, the proposed method presents load current dynamics that is four times faster than that of the MPC-conv2 method in both cases.Furthermore, the proposed method exhibits a transient response as fast as the MPC-conv1 method, even when considering that it requires approximately half less voltage vectors than the MPC-conv1 method.In the case of the frequency change of the reference current from 50 Hz to 75 Hz, there is no observable difference in each method, as shown in Figure 15c.As depicted in Figure 15d, in the case of the load change, the transient response time of the MPC-conv2 takes 1.7 ms, while it takes 0.6 ms in the case of the proposed and the MPC-conv1 methods.As a result, the proposed method results in a dynamic speed that is as fast as that of the MPC-conv1 method in the step-changes, even with a reduced calculation complexity during the transient states.However, the reduced set of voltage vectors used in the proposed method can lead to a deteriorated performance, which 
corresponds to a slower dynamic speed, depending on the transition conditions.The proposed method can have exactly the same performance as the MPC-conv1 in transient or can show a slightly slower dynamic speed than the MPC-conv1, depending on the transient conditions.A variety of transient conditions were applied to thoroughly compare the responses of the proposed and the MPC-conv1 methods.Figures 16 and 17 show the simulation results of the a-phase load current (i oa ), a-phase reference current (i * oa ), and a-phase inverter voltage (v an ) obtained using the MPC-conv1 and the proposed methods during the transient state of reference current changes from −3 A to −2.5 A and 3 A, respectively.It is seen that the proposed method, under these transient conditions, generates exactly the same waveforms of the load currents and the inverter-phase voltages as the MPC-conv1 method, despite a reduced number of candidate voltage vectors.
In addition, Figures 18 and 19 show the simulation results of the a-phase load current (i oa ), a-phase reference current (i * oa ), and a-phase inverter voltage (v an ) obtained using the MPC-conv1 and the proposed methods during the transient state of load changes from 20 Ω to 19 Ω and 5 Ω, respectively.It is seen that the proposed method, under these transient conditions, generates exactly the same waveforms of the load currents and the inverter phase voltages as the MPC-conv1 method, despite a reduced number of candidate voltage vectors.Figures 20 and 21 illustrate the simulation results of the a -phase load current (i oa ), a-phase reference current (i * oa ), and a-phase inverter voltage (v an ), and the numbers of the optimal voltage vectors obtained using the MPC-conv1 and the proposed methods during the transient state of changes in the phase angles as well as in the magnitudes of the reference currents.Under these transient conditions, it is shown that the proposed method yields different inverter-phase voltage waveforms compared to the MPC-conv1 method during a couple of the sampling periods after the transition instants, because of the different selection of the optimal voltage vectors derived from the reduced set of candidate voltage vectors.However, it is clearly seen that the inverter-phase voltage waveforms as well as the selected optimal voltage vectors obtained by the two methods become exactly the same a couple of the sampling periods after the transition instants.In addition, the dynamic speeds of the load current waveforms of the proposed methods are almost the same as those of the MPC-conv1 method.Thus, the deterioration of the transient dynamics of the proposed method at the expense of the reduced set of voltage vectors is negligible, as shown in Figures 20 and 21, although the proposed method might lead to a very slightly slower dynamic speed than the MPC-conv1 method under some transient conditions.To verify the proposed method, a prototype of the three-phase five-level CHB inverter was fabricated with single-phase full bridge inverter modules (Infineon F4-30R06W1E3).The two conventional MPC methods (MPC-conv1 and MPC-conv2 methods) and the proposed method were implemented using digital signal processor (DSP) boards (TMS320F28335).The experiments were carried out to create sinusoidal load currents with a fundamental frequency of 60 Hz using the same sampling period (T sp = 200 µs) and R-L load (R = 20 Ω, L = 15 mH) as in the simulation tests.Each module was supplied with a separate DC voltage (V dc = 40 V) using diode rectifiers connected to a multiwinding transformer.A block diagram and a prototype photograph of the five-level CHB inverter are shown in Figure 22.
Figure 23 shows the experimental results of the three-phase load current, a-phase reference current, a-phase inverter-phase voltage, and ab line-line voltage under the steady-state condition.Similar to the simulation results, the three methods produce the same sinusoidal current and voltage waveforms with high quality and the a-phase load current tracking its reference current in the steady state.Figure 24 shows the experimental waveforms of the a-phase load current, a-phase reference current, and a-phase inverter output voltage obtained using the three methods, during a transient state where the magnitude of the reference current is suddenly changed from −3 A to 1.5 A. The actual currents from both the conventional and proposed methods accurately follow their reference current, however, with different dynamic speeds depending on each algorithm.It is obvious from Figure 24 that the proposed method has the same dynamic speed as the MPC-conv1 method, which is faster that the MPC-conv2 method.The magnified experimental waveforms during the transient periods are also shown in Figure 24, to clearly illustrate the dynamic performance of the three methods.It can be seen that the proposed method results in load current dynamics that is as fast as the MPC-conv1 method and even much faster than the MPC-conv2 method.In addition, it is shown that the inverter-phase voltage waveform obtained using the proposed method is the same as that of the MPC-conv1 method, although the proposed method utilizes approximately half less candidate voltage vectors compared to the MPC-conv1 method.
Figure 25 shows the experimental results of the a-phase load current, the a-phase reference current, and the a-phase voltage obtained with the three methods during the transient state of a frequency change in the reference currents. As with the simulation results for the frequency step-change in Figure 13, the three methods yield the same experimental results because of the smooth rotation of the current and voltage vectors, with no sudden movement before and after the step-change in the αβ plane. In the experimental setup, the number of clocks in the digital signal processor (DSP) board was measured to calculate the time required to perform the entire algorithm of the conventional (MPC-conv1 and MPC-conv2) and proposed methods. The DSP execution times calculated from the number of DSP clocks, the THD values, and the current errors are summarized in Table 2 to compare the experimental results obtained using the three methods. In comparison with the MPC-conv1 method, the proposed method requires an execution time of approximately 25% and 50% in steady and transient states, respectively. The current quality performances, such as the THD values and the current errors, are the same for all three methods. The dynamic response speed of the proposed MPC method is almost the same as that of the MPC-conv1 method, but much faster than that of the MPC-conv2 method.
Conclusions
This paper has presented an MPC algorithm with reduced computational complexity and fast dynamic response for multilevel CHB inverters, in which different candidate vector subsets to determine an optimal voltage vector are respectively developed for steady and transient states.
The proposed method presents a robust technique for interpreting the next step as either a steady or a transient state by comparing the optimal voltage vector at the present step with the reference voltage vector at the next step. Because the proposed determinant algorithm is based on voltage vectors determined in the αβ plane, the distinction between steady state and transient state is free from switching ripple components and noise. During steady state, only the seven voltage vectors inside the smallest hexagon around the present optimal voltage vector are considered as the candidate-vector subset. On the other hand, a new candidate-vector subset for the transient state is defined, which consists of more vectors than those in the subset used for the steady state, to achieve a fast dynamic response; however, the vectors are fewer than all the possible vectors generated by the CHB inverter, for calculation simplicity. The proposed method determines an optimal vector during the transient state by utilizing the new subset, which results in an excellent transient response performance. As a result, the proposed method, compared to the conventional methods, can reduce the computational complexity without significantly deteriorating the transient response.
Figure 2. Voltage diagram of the M-level CHB inverter with the number of redundant voltage vectors in the αβ plane.
Figure 3. Candidate voltage vectors in the model predictive control (MPC)-conv2 method in the case that the present optimal vector is V 21 .
Figure 4. Block diagram of the proposed finite-control-set model predictive control (FCS-MPC) method for a multilevel CHB inverter.
Figure 5. Positions of the reference voltage vector at the next step and of the optimal voltage vector at the present step in the cases of steady and transient states in the proposed method.
Figure 6. Candidate voltage vectors considered during transient states using the proposed method for five-level CHB inverters.
Figure 7. Reference voltage vectors and optimal voltage vectors of the proposed and MPC-conv1 methods under transient states: (a) the case in which the proposed and MPC-conv1 methods select the same optimal vectors; (b) the case in which they select different optimal vectors.
Figure 8. Flow chart of the proposed method, which distinguishes between a steady and a transient state for the next step by calculating the determinant factor D tran .
Figure 9. Simulation waveforms of the three-phase load currents (i oa , i ob , i oc ) and the reference output current (i * oa ) in steady state from the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Figure 10. Simulation waveforms of the fast Fourier transform (FFT) analysis of the a-phase load current (i oa ) in steady state from the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Figure 11. Waveforms of the a-phase reference current, a-phase actual current, and the determinant value in the proposed method under transient-state conditions: (a) magnitude change of the reference currents from −3 A to 1.5 A, (b) frequency change of the reference currents from 50 Hz to 75 Hz, (c) load change from 20 Ω to 10 Ω.
Figure 12. Simulation waveforms of the a-phase actual current (i oa ), a-phase reference current (i * oa ), and a-phase inverter-phase voltage (v an ) during the transient state of the magnitude change of the reference currents from −3 A to 1.5 A obtained using the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Figure 13. Simulation waveforms of the a-phase actual current (i oa ), a-phase reference current (i * oa ), and a-phase inverter-phase voltage (v an ) during the transient state of a frequency change of the reference currents from 50 Hz to 75 Hz obtained using the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Figure 14. Simulation waveforms of the a-phase actual current (i oa ), a-phase reference current (i * oa ), and a-phase inverter-phase voltage (v an ) during the transient state of the load change from 20 Ω to 10 Ω obtained using the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Figure 15. Comparison of the dynamic speed of the actual load current waveforms obtained using the MPC-conv1, MPC-conv2, and proposed methods during a step-change in (a) reference current magnitude from −3 A to −1.5 A, (b) reference current magnitude from −3 A to 1.5 A, (c) reference current frequency from 50 Hz to 75 Hz, (d) load from 20 Ω to 10 Ω.
Figure 16. Simulation waveforms of the a-phase actual current, a-phase reference current, and a-phase inverter-phase voltage during the transient state of the reference current change from −3 A to −2.5 A obtained using (a) the MPC-conv1 method, (b) the proposed method.
Figure 17. Simulation waveforms of the a-phase actual current, a-phase reference current, and a-phase inverter-phase voltage during the transient state of the reference current change from −3 A to 3 A obtained using (a) the MPC-conv1 method, (b) the proposed method.
Figure 18. Simulation waveforms of the a-phase actual current, a-phase reference current, and a-phase inverter-phase voltage during the transient state of the load change from 20 Ω to 19 Ω obtained using (a) the MPC-conv1 method, (b) the proposed method.
Figure 19. Simulation waveforms of the a-phase actual current, a-phase reference current, and a-phase inverter-phase voltage during the transient state of the load change from 20 Ω to 5 Ω obtained using (a) the MPC-conv1 method, (b) the proposed method.
Figure 20. Simulation waveforms of the a-phase actual current, a-phase reference current, and a-phase inverter-phase voltage during the transient state of the changes of the reference current in magnitude from −3 A to −1 A and a 40° phase advance obtained using (a) the MPC-conv1 method, (b) the proposed method.
Figure 21. Simulation waveforms of the a-phase actual current, a-phase reference current, and a-phase inverter-phase voltage during the transient state of the changes of the reference current in magnitude from 3 A to 1 A and a 40° phase delay obtained using (a) the MPC-conv1 method, (b) the proposed method.
Figure 22. A five-level CHB inverter: (a) block diagram of the experimental setup and (b) photograph of the prototype setup.
Figure 23. Experimental waveforms of the three-phase load currents (i oa , i ob , i oc ), a-phase reference current (i * oa ), inverter-phase voltage (v an ), and line-line voltage (v ab ) during steady state obtained using the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Figure 24. Experimental waveforms of the a-phase load current (i oa ), a-phase reference current (i * oa ), and a-phase inverter-phase voltage (v an ) during a transient state of the reference current magnitude change from −3 A to 1.5 A obtained using (a) the MPC-conv1 method, (b) the MPC-conv2 method, (c) the proposed method.
Figure 25. Experimental waveforms of the a-phase load current (i oa ), a-phase reference current (i * oa ), and a-phase inverter-phase voltage (v an ) during a transient state of a frequency step-change of the reference currents from 50 Hz to 75 Hz obtained using the (a) MPC-conv1 method, (b) MPC-conv2 method, (c) proposed method.
Table 1. Relationship of the number of cells, voltage levels, and voltage vectors of the multilevel CHB inverter. | 10,334.8 | 2018-02-02T00:00:00.000 | [
"Engineering"
] |
Digital Twin Concepts with Uncertainty for Nuclear Power Applications
Abstract: Digital Twins (DTs) are receiving considerable attention from multiple disciplines. Much of the literature at this time is dedicated to the conceptualization of digital twins, and associated enabling technologies and challenges. In this paper, we consider these propositions for the specific application of nuclear power. Our review finds that the current DT concepts are amenable to nuclear power systems, but benefit from some modifications and enhancements. Further, some areas of the existing modeling and simulation infrastructure around nuclear power systems are adaptable to DT development, while more recent efforts in advanced modeling and simulation are less suitable at this time. For nuclear power applications, DT development should rely first on mechanistic model-based methods to leverage the extensive experience and understanding of these systems. Model-free techniques can then be adopted to selectively, and correctively, augment limitations in the model-based approaches. Challenges to the realization of a DT are also discussed, some of which are unique to nuclear engineering, although most are broader. A challenging aspect we discuss in detail for DTs is the incorporation of uncertainty quantification (UQ). Forward UQ enables the propagation of uncertainty from the digital representations to predict behavior of the physical asset. Similarly, inverse UQ allows for the incorporation of data from new measurements obtained from the physical asset back into the DT. Optimization under uncertainty facilitates decision support through the formal methods of optimal experimental design and design optimization that maximize information gain, or performance, of the physical asset in an uncertain environment.
Introduction
The concept of the digital twin (DT) is permeating across nearly all engineering disciplines, as well as other fields. The increased interest in DTs relates to their role in the Industry 4.0 revolution [1], and is driven by technology advances that bridge physical assets with digital models, e.g., the Internet of Things (IoT). Industries that do not successfully transition through this revolution will likely be diminished. In this regard, the nuclear power industry is no exception. The anticipated value of DTs in nuclear power can generally be taken as gains in efficiency and safety-both perpetual goals in nuclear power applications. The impact of such DTs could be far-reaching, including real-time monitoring that enables automation and predictive maintenance, accelerated development time, enhanced risk assessment, and optimization in various aspects of the system (e.g., operation, component design, fuel utilization, shielding, etc.). Therefore, DTs for nuclear power systems should be considered a key technology for expanding nuclear power applications. With the impetus of the world's current climate crisis, carbon-free, clean energy technologies, like nuclear power, are critical to ensure the viability of our future.
According to the authors of [2,3], the DT terminology dates to the early 2000s, and it is usually credited to Grieves, who later documented it in a white paper [4] from 2014. The U.S. Air Force in 2011 [5] and NASA in 2012 [6] defined the DT more completely for aerospace applications and identified it as a key technology. Several other definitions of DTs were also proposed prior to more contemporary efforts [7-10]. Many of the recent papers focus on generalizing the idea of the DT [2,3,11-13], as the earlier works were concerned specifically with aircraft, high-fidelity simulation, and structural mechanics.
There are many contemporary review articles on DTs focused on the definitions, challenges, enabling technologies, and opportunities [2,3,14,15]. (For readers interested in a comprehensive review of DTs, we recommend the work by Rasheed et al. [3], which has an extensive bibliography.) One recent review article on DTs identifies three distinct application areas: manufacturing, healthcare, and smart city environments [2]. In this review, aircraft, power, and energy systems were all grouped under manufacturing. However, this review included only one paper each for power systems, aircraft, and energy systems. Therefore, the discussions in the aforementioned reviews tend to focus on these broader views, with industrial processes generally lumped in with manufacturing, and ignore the specific considerations of any subdiscipline.
Works that are more specific to the nuclear engineering discipline include those in [16][17][18][19]. In [16,17], Garcia et al. mention DTs, but they do not review or advance DT concepts, nor provide clear or concise definitions of them. Rather, these works focus on the broader concepts of secure embedded intelligence and integrated state awareness, where DTs have a specific role. The DT concept is closely tied to Industry 4.0, and Lu et al. [18] provide a review of nuclear power plants in the context of AI and Industry 4.0. This work also mentions DTs, and advances the concept of a nuclear power plant as a Human-Cyber-Physical System, very much in the spirit of DTs, but limits the focus to AI-based approaches. Additionally, there is a review article in preparation by Lin et al. [19] that focuses on uncertainty quantification (UQ) of machine learning (ML) generated DTs for prognostics and diagnostics supporting autonomous management and control of nuclear power systems. The main contributions of this work are the review of ML methods to support DT creation and assessment, and how the software underlying the ML-based DTs may be evaluated in a risk-aware and regulatory setting. Overall, however, we make the observation that there is a notable dearth of literature on DTs for nuclear systems.
We do not seek to replicate the reviews for general DTs in this paper, nor do we seek to describe the technical details of a particular DT solution or software architecture. Instead, our aim is to convey the salient elements of some of these works and offer perspectives about the relationship to ongoing activities in the nuclear engineering community. With this paper we seek to add to the literature for DT concepts related to nuclear applications, and in keeping with the spirit of other contemporary works, we focus on understanding and defining the DT concept in the context of nuclear power. Specifically, the objective of this paper is to review and relate these broader works to the nuclear engineering community. We aim to accomplish this by identifying what aspects of DT development and enabling technologies are most appropriate for study and advancement by the nuclear engineering community, and where the community should be looking to adopt existing technologies. Much of the paper focuses on physics-based simulation technologies for DT development, rather than ML, and the importance of UQ in connecting the physical and digital assets. Further, we provide commentary on the recent directions and activities of research and development in nuclear power applications, and modeling and simulation (M&S). We also offer suggestions on how we might refocus some of the current efforts to support DTs in nuclear power applications. In general, we will not focus on the aspects of risk-informed analysis or AI/ML methods, as these are suitably covered by Lin et al. [19].
For the remainder of this paper, we first provide some background on nuclear power technology. Then, a review and discussion about the important details of defining DTs is given in Section 3 with a newly proposed definition for nuclear power applications. Section 4 reviews existing nuclear engineering M&S capabilities in the context of DTs. Next, in Section 5, we offer perspectives on the needs and challenges of developing and using DTs for nuclear power applications. This section focuses primarily on the computational and simulation aspects of DTs. Following this, in Section 6, we transition the discussion to UQ from a more technical angle. Here, we identify and examine challenges and promising methodologies for linking the physical and digital halves of the DT. Finally, we summarize the paper with some concluding remarks in Section 7 on the relevance of DTs as an enabling technology for nuclear power in a clean energy future.
Brief Overview of Nuclear Power Systems
Nuclear power systems produce heat energy through the fission process, which is driven by the interaction of neutrons with the nucleus of an atom. In the fission process, heavy nuclides break apart and deposit their kinetic energy locally as heat. The primary element for fission in commercial energy production is uranium. The principal advantages of nuclear fission as a thermal energy source are its overall power density, long operation times without needing to refuel, and lack of carbon emissions.
In currently operating nuclear power systems, the heat produced by fission is transported away and typically used in a Rankine cycle to rotate a turbine to produce electricity. This is essentially the same power conversion cycle as combined-cycle natural gas plants and other fossil fueled power plants. Some example diagrams of existing reactor types are illustrated in Figure 1 to show some of the primary components in the system. Advanced nuclear systems (also called Gen-IV reactors), which have yet to operate commercially (although there are a handful of historical examples), may use different power conversion cycles (such as a Brayton cycle) or different working fluids (such as molten salts). Simplified reactor system schematics of Generation IV reactor designs are given in Figure 2 to illustrate some of the main components in these nuclear power systems. To create a DT of a nuclear power system, these components, at least, would need to be digitized and modeled; however, in actual nuclear power systems, there can be thousands of components. Because the power conversion systems for nuclear energy are similar to power conversion systems used in the fossil industry, there is considerable overlap in the components that are required for digitization to construct a digital twin. The defining aspect of nuclear power systems is the nuclear core. Although some other systems and components utilize highly specialized materials and are subject to unique degradation modes as a result of operating in intense radiation environments, the particular aspects of a nuclear power DT that differentiate it from other DT efforts derive from the presence of radiation and the physics resulting from the interaction of radiation with matter in the physical system. Consequently, much of our discussion throughout this manuscript focuses on these defining features.
For the modeling of nuclear power systems, the usual equations of structural mechanics and dynamics, fluid dynamics, and heat transfer are no different than other fields of engineering. In materials performance at the engineering scale, the physics are so complex that first principles equations are not as well defined, so empirical formulations, Arrhenius-like equations, and statistical mechanics are used to develop models that may be quite different from those found in other engineering disciplines. The equations that are most unique to the nuclear system are the Boltzmann neutron transport equation (NTE), Equation (1), and the nuclide transmutation and decay equation (often called the Bateman equation), Equation (2). There are also modified forms of the NTE that describe the transport of gamma rays, and charged particles, after introducing terms for electromagnetic forces. These equations are relevant to the transport of other particles in reactors, and are important to modeling the mechanisms for the detection of radiation.
\frac{1}{v}\frac{\partial \psi(\vec{r},\hat{\Omega},E,t)}{\partial t} + \hat{\Omega}\cdot\nabla\psi(\vec{r},\hat{\Omega},E,t) + \Sigma_t(\vec{r},E,t)\,\psi(\vec{r},\hat{\Omega},E,t) = \int_{4\pi}\int_0^{\infty}\Sigma_s(\vec{r},\hat{\Omega}'\cdot\hat{\Omega},E'\rightarrow E,t)\,\psi(\vec{r},\hat{\Omega}',E',t)\,dE'\,d\Omega' + \frac{(1-\beta)\,\chi_p(E)}{4\pi}\int_{4\pi}\int_0^{\infty}\nu(E')\,\Sigma_f(\vec{r},E',t)\,\psi(\vec{r},\hat{\Omega}',E',t)\,dE'\,d\Omega' + \frac{\chi_d(E)}{4\pi}\sum_k \lambda_k C_k(\vec{r},t) + Q(\vec{r},\hat{\Omega},E,t) \qquad (1)

In the time-dependent NTE shown by Equation (1), the fundamental unknown is the angular neutron flux, ψ, which depends on its position in 3D space, r; its direction of flight, Ω (described by two independent angles); its energy, E (or velocity, v); and time, t. Physically, it represents the number of neutron tracks generated per unit volume for a given speed and direction at a moment in time. The NTE is a conservation equation for the time rate of change of the angular flux, where neutrons are lost by leaking out of the system or colliding with a nucleus. Neutrons of different energies and directions of flight may scatter into a particular unit of phase space, be generated through fission, or arise from some other external source, Q. The probability that a neutron collides with a nucleus is described by the total macroscopic cross section, Σ_t, where some of those interactions may result in a scattering event or a fission event. The corresponding probability for scattering from another direction, Ω′, and energy, E′, into E and Ω is described by the scattering kernel, Σ_s(r, Ω′·Ω, E′→E, t). The probability that a neutron-nucleus interaction results in a fission is given by the macroscopic fission cross section, Σ_f. On average, a fission event produces ν(E) neutrons, with the probability of the neutrons being emitted at E given by χ_p(E). The remaining fraction of neutrons emitted, β, are delayed. These neutrons are emitted through the radioactive decay of precursors, C_k, at a rate of λ_k, and have a different emission spectrum, χ_d(E), than the prompt neutrons. The precursor concentrations have their own set of differential equations that follow as simplifications to Equation (2), and we give these in Equation (5). Reactors operate by sustaining a chain reaction, so during normal operation, the derivative of the angular flux with respect to time is essentially zero.
The Bateman equation describes the evolution of the materials in the reactor subject to neutron bombardment and radioactive decay. It is also a conservation equation. The time constants in this equation can vary from microseconds to millions of years. For practical engineering applications, important effects described by Equation (2) occur on time scales from days to months. The solution of this equation is essential to knowing the state of the materials in the reactor, its resulting criticality (whether or not it can sustain a chain reaction), and how much fuel is left. In this equation, N_i represents the number density of isotope i. The system of differential equations can include up to every known isotope in the chart of nuclides, which is more than 2000. λ is the decay constant for radioactive decay of the nuclide; the branching fraction for j→i is the fraction of decays of nuclide j resulting in formation of nuclide i; σ_a is the microscopic absorption cross section; f_{j→i} is the fraction of neutron absorptions in nuclide j resulting in the formation of nuclide i; and φ is the scalar neutron flux, which is the angular flux integrated over Ω. For non-solid fuels, the Bateman equation must be modified to include an advection term to account for the motion of nuclides in space.
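A standard form of the Bateman equation consistent with this description is sketched below; the decay branching fraction is written here as ℓ_{j→i}, which is our notation rather than necessarily the authors' symbol:

\frac{dN_i}{dt} \;=\; \sum_{j \neq i}\left(\ell_{j\to i}\,\lambda_j + f_{j\to i}\,\sigma_{a,j}\,\phi\right) N_j \;-\; \left(\lambda_i + \sigma_{a,i}\,\phi\right) N_i \qquad (2)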
Common approximations of the NTE include the neutron diffusion equation, Equation (3), which neglects the dependence on the direction of flight of the neutrons and assumes that they are distributed isotropically in angle, and the point kinetics equations, Equation (4), which ignore the spatial and energy dependence of the neutrons in a reactor. Although simpler, these equations are still widely used in reactor analysis, and in a broader sense may be understood as reduced order models of Equation (1).
In the diffusion equation, most of the terms have the same meaning as given in Equation (1), and φ(r, E, t) = \int_{4\pi} ψ(r, Ω, E, t)\,dΩ is the scalar neutron flux. The main approximation introduced in the diffusion equation is that the leakage (advection) operator is replaced by a diffusion operator with a diffusion coefficient, D(r, E, t). Essentially this assumes that how neutrons move through a system can be sufficiently described by Fick's Law of Diffusion.
The point kinetics equations describe the time dependence of the power, P, of a point reactor, where ρ is the reactivity that results from perturbations to the system by passive feedback or operator intervention, and Λ is the prompt neutron lifetime. We also introduce the other set of important differential equations for the production of delayed neutrons, which are emitted milliseconds to tens of seconds after a fission event (rather than simultaneously with the fission event). The delayed neutron precursors, C_k, correspond to daughter nuclides produced from fission; however, typically these data are obtained from regression models such that there are six or eight delayed neutron precursor groups instead of the actual dozens of nuclides emitting neutrons. β_k is the fraction of delayed neutrons emitted per fission by precursor group k, where β = ∑_k β_k is the total delayed neutron fraction.
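The standard point kinetics form implied by this description is sketched below in our notation; it is consistent with the text but is not necessarily typeset exactly as the authors' Equations (4) and (5):

\frac{dP(t)}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,P(t) + \sum_k \lambda_k C_k(t) \qquad (4)

\frac{dC_k(t)}{dt} = \frac{\beta_k}{\Lambda}\,P(t) - \lambda_k C_k(t), \qquad k = 1,\ldots,6 \text{ or } 8 \qquad (5)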
In summary, these are some examples of the types of nuclear power systems for which we seek a digital twin. Further, the first-principles equations unique to nuclear power systems, and common approximations to them used in regular engineering analysis, were presented and discussed to provide context to the underlying problems that must be simulated that are unique to a nuclear power system DT.
Defining Digital Twins for Nuclear Power Systems
The concept of twins is not new in nuclear power. Since the beginning of the commercial nuclear power industry, it has been standard practice to commission a duplicate control room for training purposes. In some of the early U.S. Government research programs for developing reactor systems, physical twin assets were also used. Since that time, however, there has not been much innovation or advancement in the concepts of twinning. To be sure, there have been significant advances in the technology underlying the various components, their design, and the associated means of making measurements, but there has been little fundamental change in how the physical systems and their twins are conceived or used. This is contrasted with the recent interest in the notion of the DT, particularly the high-fidelity kind, which is essentially made possible by the incredible advances in computers and simulation. M&S has now matured to a point that researchers in this area think it is possible, and imminent, to be able to create a DT. Exactly how that may be done is the focus of Section 5. Prior to that, consistent, precise definitions of the DT for nuclear applications are needed.
Review of Digital Twin Definitions
Presently, the definition of a DT is quite broad and varies. This is due to the wide range of disciplines interested in DTs. Several definitions for DTs often contain nuanced but significant differences. What we are interested in elucidating in this section are the key characteristics that lead to these subtle differences and how to best organize this information into a conceptual model amenable to understanding the application of DTs to nuclear reactor systems.
Like the work by Fuller et al. [2], we start with a short review of some of the definitions given previously:
• Tuegel et al. [5] define the DT as being "ultrarealistic in geometric detail, including manufacturing anomalies, and in material detail, including the statistical microstructure level, specific to this aircraft tail number". Here, the DT concept is focused on high-fidelity simulation by the finite element method (FEM) and computational fluid dynamics (CFD) for the prediction and management of the structural life of aircraft. The authors note that another key feature of their concept is the ability to "translate uncertainties in inputs into probabilities of obtaining various structural outcomes".
• Glaessgen and Stargel [6] have a similar definition centering on ultra-high-fidelity simulation being integrated with a vehicle's health management system. In this paper, the authors focus more on certification of the vehicles and a reliance on the assumed similitude of data used for certification. They identify this as a shortcoming to be addressed by DTs. Their definition for a DT is "an integrated multiphysics, multiscale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin".
• Boschert and Rosen [7] provide a very general definition: "the Digital Twin itself refers to a comprehensive physical and functional description of a component, product or system, which includes more or less all information which could be useful in all the current and subsequent life cycle phases". Here, the authors acknowledge that the DT concept is variable in terms of where it is applied in the product life cycle and the overall fidelity of data and models encompassed.
• Chen [8] similarly broadens the usage of DT and defines it as "a computerized model of a physical device or system that represents all functional features and links with the working elements".
• Schluse et al. [10] take a slightly different perspective on DTs, focusing on their value as an asset for experimentation. Nevertheless, many of the same fundamental requirements arise from their definition of experimental DTs as "a one-to-one replica of a real system incorporating all components and aspects relevant to use simulations for engineering purposes but also inside the real system during real-world operations".
• Tao et al. [21] describe the DT in a few ways. First, it is a concept "associated with cyber-physical integration". Further, DTs create "high-fidelity virtual models of physical objects in virtual space in order to simulate their behaviors in the real world and provide feedback", and the DT "reflects a bi-directional dynamic mapping process".
• Rasheed et al. [3] state that a "Digital twin can be defined as a virtual representation of a physical asset enabled through data and simulators for real-time prediction, optimization, monitoring, controlling, and improved decision making".
• Lin et al. [19] also offer a good definition: "A DT is a digital representation of a physical asset or system that relies on real-time and history data for inferring complete reactor states, finding available control actions, predicting future transients, and identifying the most preferred actions".
Analysis of Key Characteristics
From the selected definitions given above, a few commonalities stand out: high-fidelity simulations, integration of calculated and measured quantities, detailed equivalence with a unique physical system, and application over a product's/asset's life cycle. Some definitions differ on the degree of equivalence between the digital and physical twins, noting that every detail is important or that there may be only a subset of information that is relevant. Some definitions denote real-time simulation capability; others acknowledge the need for probabilistic methods. Several of the definitions describe the DT in the context of its application (e.g., prediction and management of the structural life or optimal control), although this application may vary.
Proposed Definition
In proposing our definition of the DT and related concepts, we draw from the concepts described by Fuller et al. [2]. However, we find these definitions by themselves lacking the context of a life cycle, so our definition expands these to exist within the life cycle laid out by Boschert et al. [7].
Fuller et al. [2] develop a taxonomy of three definitions that differentiate concepts based on information flow between the physical and digital assets, and categorize two of these as misconceptions. Their taxonomy defines digital models, digital shadows, and digital twins. We consider each of these valid, rather than some being misconceptions, as they each serve a distinct function in the life cycle to be described later. This differs from the definition put forth by Lin et al. [19], where a DT is defined in terms of its function (intended use), its model (how it is developed), and its interface (how information is communicated to the operator).
We propose the use of Fuller's taxonomy as these definitions are relational with clear features, and do not necessarily preclude aspects of Lin's conceptualization. The conceptual relation of the digital model, digital shadow, and digital twin is illustrated in Figure 3. As an example, in this figure, photographs of the Ford Nuclear Reactor represent the physical asset. Below this are a collection of images comprising digital representations of the Ford Nuclear Reactor. The digital representations include component models, simulation results, and virtual environments. The blue ovals represent the digital objects that comprise the digital model, digital shadow, and digital twin; the arrows illustrate the relationships and information exchange among these digital objects. Our interpretation of Fuller's taxonomy is as follows:
• The digital model may or may not be associated with a physical asset. Thus, it need not integrate with physically measured or sensed quantities. This is how we might think of most of the existing M&S efforts in nuclear engineering. Full simulation models of planned or existing reactors, and of their systems and components, capable of simulating the system physics and dynamics comprise the digital model. What distinguishes the digital model from the digital twin is that information generated by the digital model is not automatically integrated with the physical asset.
• The digital shadow extends the digital model by incorporating information from an existing physical asset to update the digital model, but does not utilize any information generated by the digital representation in the physical asset. We note that digital representations of historic facilities that no longer exist can qualify as digital shadows. The inverse of the digital shadow, where information only flows from a digital model to a physical asset, is not a coherent paradigm for useful engineering analysis, as there is then a physical system operating with essentially no connection to reality. Therefore, this situation is not explicitly defined or discussed further. (For the curious reader, this paradigm essentially aligns with Plato's Allegory of the Cave [22] or Putnam's more contemporary "Brain in a vat" [23].)
• The digital twin is therefore the "closed loop" model of the physical asset and the digital representation(s). The digital twin exchanges information in real-time with the physical system to update its state and perform predictive calculations that are then used to inform decisions and control actions on the physical asset.
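To make the information-flow distinction concrete, the following minimal Python sketch (our illustration, not part of Fuller's taxonomy) classifies a digital object purely by which directions of automated data exchange it supports.

from dataclasses import dataclass

@dataclass
class DigitalObject:
    """Illustrative classification of a digital object by its automated
    information exchange with the physical asset."""
    physical_to_digital: bool   # sensor data automatically updates the digital state
    digital_to_physical: bool   # digital predictions automatically drive control actions

    @property
    def kind(self) -> str:
        if self.physical_to_digital and self.digital_to_physical:
            return "digital twin"      # closed-loop, real-time exchange
        if self.physical_to_digital:
            return "digital shadow"    # one-way: asset -> digital representation
        return "digital model"         # no automated exchange with an asset

# Example: a plant simulator with no live plant data is only a digital model.
print(DigitalObject(False, False).kind)  # digital model
print(DigitalObject(True, False).kind)   # digital shadow
print(DigitalObject(True, True).kind)    # digital twin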
The definitions above do not necessarily clarify use or differentiate meaning within the context of a life cycle. Thus, we expand these definitions to describe how they should be used in the various phases of the physical asset's life cycle. For the life cycle phases, we adapt the definitions from Boschert et al. [7]. The sequence of phases is shown in Figure 4 and includes
• the initial conceptual design,
• the engineering design,
• procurement and construction,
• the operational phase, which undergoes intermediate service, and
• finally, the decommissioning phase.
The lifetimes of the various digital objects are illustrated below with the life cycle phases in Figure 4. This figure shows that the digital model is the initial point of creation and may persist indefinitely. The lifetime of the digital twin is fully coincident with the lifetime of the physical asset. If there is no asset, there is no digital twin. Beyond the lifetime of the twin there is the digital shadow. The shadow can also exist concurrently with the physical asset. Moreover, the shadow can exist indefinitely, as integration of data from the physical asset may be manual. This recognizes the value of destructive testing and post-irradiation examinations of various components of the plant, where valuable information can still be learned and integrated into the digital object to refine the underlying models for the next generation of products. The last aspect of our proposed definition is not to presume a single model, instance, fidelity, or physics in the digital representation. Rather, the digital object may be able to provide multiple models at the appropriate fidelity. This detail was not a part of the initial definitions of digital twins [5], but as these definitions evolved, it was incorporated [7]. The desire to have varying fidelity is one of the unique challenges that we discuss further in Section 5.
To summarize, the definition we propose for the DT includes the following components:
• Prior to the existence of any information exchange between the digital and physical assets, the digital object is described as a digital model. This often encompasses the conceptual and engineering design phases.
• The digital model may exist alongside the physical twin, and indefinitely, if there is no information exchange with the physical asset.
• Following the creation of a physical asset, a digital shadow may be created that incorporates information from the physical asset in either an automated or manual sense, but it does not provide information back to the physical asset.
• The digital shadow may also persist indefinitely.
• The digital twin exists only as long as there is a physical asset.
• The digital twin has real-time, automated, two-way information exchange between the digital representation and physical asset.
• The digital twin may involve a set of models of varying fidelity and complexity.
• A digital twin has a corresponding digital model and digital shadow. The digital model and digital shadow are specific aspects of the twin.
In the remainder of this paper, we refer to the collection of these digital objects simply as the digital twin.
DTs should include a physical and functional description of all systems, structures, and components that captures as much detail as possible that is useful for any analysis, real-time prediction, monitoring, or control in any phase of the life cycle. The requirements and enabling technologies for the digital representations are discussed in Section 5. Next, we consider historical and contemporary digital representations of nuclear power systems to compare their capabilities to our definition of the digital twin.
Historical and Contemporary Digital Representations of Nuclear Power Systems
Several simulation capabilities for nuclear systems have been developed over the past 60 years. Advancing M&S capabilities of nuclear power systems has also been a key area of research in the past decade. Furthermore, there are numerous commercial tools not specific to nuclear power systems that have features to support DTs.
Common Nuclear Engineering Simulation Tools
The first tool we focus on is the plant simulator. Plant simulators have been around for several decades. They are generally full-scope and real-time, two attributes often desired of DTs. Their use is historically for training, and oftentimes includes a (physical or digital) duplicate of the control room. These replica control rooms are even known to be kept up to date with the minutiae of the real control room, with details such as scratched surfaces or replaced instruments. One commercial vendor of plant simulators is Western Services Corporation, whose 3KEYMASTER™ platform relies on RELAP5-3D [24] for system thermal-hydraulics, MARS [25] for core thermal-hydraulics, and NESTLE [26] for core neutronics, among other components. The other commercial vendor is GSE Solutions, which offers the Generic PWR product (GPWR) and is also developing new-build simulators for NuScale, AP1000, mPower, and the PBMR. We mention these as important technologies because they are full-scope and they are real-time. They are not necessarily high-fidelity, and it is not clear how they might be adapted to integrate real-time information from a physical asset or provide predictive capabilities. Nevertheless, they have clearly defined the necessary computational resource and modeling requirements to support the full-scope, real-time simulation of a plant. These contemporary products essentially run on single-core desktops with 2-3 GHz processors at real-time, or faster than real-time. This is in stark contrast to the high-fidelity tools to be discussed momentarily.
To accomplish this in the GPWR, the SimExec engine manages the dynamic execution of the various models for each component at the associated time intervals and maintains a master state of a few thousand state variables. To run in real time, the component models are essentially lumped parameter dynamics models with strict execution time requirements (e.g., the model for component x must execute at 7 Hz). This engine can also integrate with a highly realistic interactive digitization of an actual control room if needed.
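As a rough illustration of this kind of fixed-rate execution management, consider the minimal Python sketch below. It is our sketch, not the SimExec implementation; the component models, rates, and state variables are hypothetical.

import time

class Component:
    """A lumped-parameter component model that must run at a fixed rate."""
    def __init__(self, name, rate_hz, step_fn):
        self.name, self.period, self.step_fn = name, 1.0 / rate_hz, step_fn
        self.next_due = time.monotonic()

    def maybe_step(self, state, now):
        if now >= self.next_due:
            self.step_fn(state, self.period)   # advance this model by one period
            self.next_due += self.period

# Hypothetical component models updating a shared master state dictionary.
def pump_model(state, dt): state["flow"] = 0.95 * state["flow"] + 0.05 * state["demand"]
def core_model(state, dt): state["power"] = state["power"] + dt * (state["flow"] - state["power"])

state = {"flow": 1.0, "demand": 1.2, "power": 1.0}
components = [Component("pump", 7.0, pump_model), Component("core", 20.0, core_model)]

t_end = time.monotonic() + 1.0           # run the scheduler for one second of wall time
while time.monotonic() < t_end:
    now = time.monotonic()
    for c in components:
        c.maybe_step(state, now)
    time.sleep(0.001)                     # yield; a real-time executive would pace more strictly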
The requirements for plant simulators were detailed some time ago by Wiltshire in 1986 [27]. In that paper, Wiltshire describes a state-of-the-art system with the simulation requirements for an advanced gas reactor simulator. For a full-scope system, there are not only differential equations, but also algebraic systems of equations (arising from property correlations and definitions of coefficients) and Boolean systems of equations (arising from diagnostics). Table 1 summarizes the orders of magnitude in unknowns for various components based on the information in [27] and more recent estimates. We note that Wiltshire describes a hardware system that uses 52 parallel computers, while today much of this is calculated on a single processor. Table 1. Number of unknowns by type of equation for a real-time, full-scope plant simulator. The O-notation indicates order of magnitude. For example, "O(1000)" in the second row, second column means the "Reactor (x,y)" equations need to solve thousands of unknowns from the discretized differential equations.
Beyond the full-scope plant simulator, the nuclear engineering community has focused considerable efforts on the system dynamics models with codes such as TRACE [28] and RELAP, which we note are already a part of these simulators in some cases. However, it is likely the versions in the simulator are not the most advanced form of these tools or do not necessarily use the highest fidelity representation of the plant. In addition to the system dynamics, the core modeling is the other area that receives considerable attention in terms of the tool development because the physics involved in the reactor are specific to the nuclear engineering discipline. The industry standard tool here is SIMULATE [29], which solves for the core power distribution and dynamic response, at least for light water reactors. For advanced reactors, DIF3D [30] is still the standard for non-pebble bed designs. In pebble bed designs, the core modeling tools are VSOP [31] and AGREE [32]. Reactor-vendor-specific tools also exist.
The high-fidelity simulation tools for plants thus far have mainly focused on individual components. The recent programs supporting development of these next generation tools were CASL [33], and now NEAMS [34]. Many of the tools developed under NEAMS are finite element-based, while under CASL, the VERA software [35] was an amalgam of models and codes, with the most novel contribution being the core simulator, VERA-CS [36]. The NEAMS finite element tools are generally based on the MOOSE framework [37]. From the MOOSE framework, there is also a suite (or zoo) of applications implementing various physics for modeling various components. For advanced reactor systems analysis, the SAM [38] code is also in development based on the MOOSE framework.
The tools described thus far are all mechanistic in nature-they arise from physically based governing partial differential equations. Beyond these mechanistic tools, the RAVEN framework [39] provides capabilities to support users with reduced order model (ROM) construction, statistical analysis, UQ, probabilistic risk assessment, and others. In a sense, RAVEN is designed to integrate with the mechanistic tools to exercise its capabilities.
Collectively, these tools do not rise to the level of supporting a DT on their own. They do, however, meet the criteria for a digital model, but this should not be surprising. In the nuclear engineering community, the last decade of research focus has been primarily on advancing the M&S capabilities that support digital models. As a result, we have several new codes and a program for advancing the fidelity of the mechanistic models through the adoption of more computationally expensive models. Nearly all of the NEAMS tools are designed to run in parallel on high-performance computing (HPC) systems, and are far from being able to provide value for real-time DTs. However, they can still serve a role in support of DTs as a basis for ROM construction, and other ways we will discuss in Section 5.1.3. Beyond this role however, the investments in high-fidelity simulation may not yield much return in the DT space without further advances.
Relevant Non-Nuclear Commercial Simulation Tools
Outside of nuclear engineering, there has been considerable progress in providing capabilities to support DTs. From the modeling side, any commercial software suite for dynamic systems modeling is capable of developing a full-scope, real-time digital model like the nuclear plant simulator. Some of these commercial tools at present are: Simulink, SimulationX, Dymola, MapleSim, 20sim, ANSYS Twin Builder, MSC Apex, etc. These products are generally far ahead of the corresponding U.S. Department of Energy (DOE) counterparts in terms of their overall feature set and capabilities for usability. However, beyond thermal-fluids and structural mechanics modeling, nearly none of them contain representations of the requisite physics for nuclear power applications described in Section 2.
This observation presents a conundrum about where to put forth effort. Should the M&S community work to integrate their capabilities with commercial tools or should they expend the effort to bring the quality and capabilities of their existing tools to the level of the commercial tools? This is a complicated question that the authors will forgo proposing an answer to. However, in the next section we discuss some tangible opportunities that relate to this consideration.
Emerging Tools and Capabilities
The emerging capabilities for DTs in the nuclear arena have both digital and physical components. On the digital side, there has been a notable investment in Modelica-based [40] models through the TRANSFORM library [41] and by the Integrated Energy Systems group at the Idaho National Laboratory (INL), where they have recently built system dynamics models of the NuScale reactor [42]. These models are notable for demonstrating the capabilities to model the dynamics of nuclear systems using Modelica, a modeling language that is arguably the most appropriate for DT applications. The emerging physical assets that present good opportunities are those under development in the microreactor program, and in particular the MAGNET [43] test bed. The Compact Integral Effects Test (CIET) facility has also been modeled with TRANSFORM [44] and SAM [45]. There is also an effort that has just started for developing and demonstrating a DT in the SAFARI project [46], which is one of many recent ARPA-E-funded projects under the GEMINA program with a DT component [47-50]. Last, the recent KRUSTY experiments in the Kilopower project [51] demonstrated good prediction by the digital models and produced high-quality measured data.
Thus, these recent and emerging tools and facilities represent an existing foundation on which one can build towards a functional DT. However, there is not yet an automated connection between the physically measured quantities and the digital models. Consequently, additional work is needed to realize nuclear system DTs. This is what we discuss next.
Enabling Technologies and Challenges for Digital Twins of Nuclear Power Systems
In this section, we describe the enabling technologies to realize nuclear DTs adhering to the definitions developed in Section 3. We focus on where contributions should be made by the nuclear engineering community. We also identify technologies that are sufficiently mature, though not yet familiar to the nuclear engineering community, for which new contributions are not necessarily needed. Finally, some potential approaches to realizing a nuclear DT and their challenges are described. Since a significant aspect of the challenges and enabling technologies is in the area of UQ, Section 6 is devoted entirely to that topic.
System Dynamics Modeling
System dynamics models are well established in several engineering disciplines. The evidence for this is in the plethora of multidisciplinary commercial simulation tools listed in Section 4.2. There is also an extensive history of their application in nuclear power systems. This is an enabling technology for DTs that will likely not require any revolutionary developments or contributions. Codes like TRACE, RELAP, and SAM are all system dynamics codes developed specifically for nuclear reactor applications. We can think of these models as lumped parameter systems of varying fidelity, although the aforementioned examples are capable of capturing considerably more complex physics. Lumped parameter models can be sufficiently accurate and sufficiently inexpensive to evaluate. This has been proven repeatedly, as demonstrated by the examples of the simulators described in Section 4.1. Where system dynamics models typically fall short is in the applicability of their coefficients. In these models, the ROMs are known and can be rigorously derived with some assumptions (e.g., the point kinetics equations). The physics and applicability of these models rely almost entirely on the coefficients. The coefficients are the physics. Therefore, the challenge with system dynamics models is in having a way to adapt or recalibrate the coefficients to better match a physical asset's behavior.
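As a minimal sketch of this recalibration idea (our illustration; the lumped-parameter model, synthetic data, and parameter names are hypothetical), the coefficients of a simple heat-up model can be refit to measured data by nonlinear least squares:

import numpy as np
from scipy.optimize import least_squares

# Hypothetical lumped-parameter model: dT/dt = (q - UA*(T - T_inf)) / C
def simulate(params, t, T0=300.0, q=5.0e4, T_inf=300.0):
    UA, C = params
    T = np.empty_like(t); T[0] = T0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        T[k] = T[k - 1] + dt * (q - UA * (T[k - 1] - T_inf)) / C
    return T

# Synthetic "measured" data standing in for sensor readings from the physical asset.
t = np.linspace(0.0, 600.0, 61)
T_meas = simulate([250.0, 2.0e6], t) + np.random.normal(0.0, 0.2, t.size)

# Recalibrate the coefficients so the digital model better matches the asset.
fit = least_squares(lambda p: simulate(p, t) - T_meas, x0=[200.0, 1.5e6])
UA_hat, C_hat = fit.x

The same pattern applies to any lumped-parameter model: the equations stay fixed while the coefficients are periodically re-estimated from data collected on the physical asset.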
Model Based Controllers
One promising role for DTs in nuclear engineering is that they will support autonomous control and operation. Numerous methods exist in controls engineering, and we suggest that model-based controllers are the specific enabling technology for DTs. Model-based control systems rely on some underlying model that is reasonably predictive of the system dynamics. Several examples of model-based control exist for nuclear power systems [52-55]; these are generally applied to the core power, but may be used for other components or quantities of interest [56,57]. These models are typically lightweight so as to meet requirements for real-time execution and may be constructed rigorously through physics-based methods, statistical methods, purely data-driven approaches, or ML. Model predictive control (MPC) [58] is one example of model-based control, with several variants that extend the underlying method to be robust in the presence of noise [59], applicable to nonlinear systems [60], or able to incorporate some ML methods [61]. We propose that model-based control is superior to model-free controllers (e.g., PID) because it is easier to make guarantees about the limits of the controller's behavior and explainability is more easily achieved. Furthermore, in most cases we have a good sense of what the mechanistic model is, and purely model-free methods ignore this knowledge.
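A minimal receding-horizon sketch conveys the basic idea of using an internal model to choose the next control action. This is our illustration, not any of the cited controllers; the scalar plant model, horizon, and weights are hypothetical.

import numpy as np
from scipy.optimize import minimize

# Hypothetical discrete-time plant model x[k+1] = a*x[k] + b*u[k] used internally by the controller.
a, b, horizon = 0.95, 0.1, 10

def mpc_action(x0, x_ref):
    """Choose the first input of the sequence minimizing tracking error plus control effort."""
    def cost(u_seq):
        x, J = x0, 0.0
        for u in u_seq:
            x = a * x + b * u
            J += (x - x_ref) ** 2 + 0.01 * u ** 2
        return J
    res = minimize(cost, np.zeros(horizon))
    return res.x[0]          # apply only the first move, then re-solve at the next step

# Closed-loop usage: drive the state toward the setpoint.
x, x_ref = 0.0, 1.0
for _ in range(20):
    u = mpc_action(x, x_ref)
    x = a * x + b * u        # in a DT, this step would be replaced by the physical asset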
Automated ROM Construction
Another key technology for DT development will be automated ROM construction. There are more than 60 years' worth of digital models of nuclear systems, and many of these are not real-time. The simulation components of the DT will need to be primarily real-time. In any situation where an existing digital model is not real-time, it is amenable to representation as a ROM. As the digital models span a range of codes, models, tools, inputs, etc., having an automated way to produce a ROM will be a key enabling technology to seamlessly generate the necessary pieces to build a DT. There are numerous techniques that exist to produce ROMs. Some of these are given in Figure 5, illustrating how they might relate to other models, and to one another, in terms of their knowledge of the physics and complexity.
Any of these approaches, and others, are suitable for ROM construction. We suggest, however, that knowledge of the physics being modeled by the ROM should be used to prioritize which technique is used. Some capabilities exist in this regard through RAVEN [39], and they have been successfully demonstrated for risk-informed safety margin characterization [62]. However, the full spectrum of model order reduction methods is not available, so some contributions could be made here. An additional, practical, and critical consideration for automated ROM construction tools is ease of use by the community. To have tangible gains in productivity, modelers should expect to put forth as little effort as possible to create a ROM. Therefore, usability and flexibility should be the focus of this effort, not necessarily novel contributions to model order reduction techniques. Opportunities for novel contributions do still exist, but these should focus on developing a priori and a posteriori error estimates for the ROM construction techniques to facilitate confidence in the usability and flexibility.
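As a minimal example of one common data-driven reduction technique, proper orthogonal decomposition via the SVD, the sketch below compresses snapshot data into a low-rank basis. It is our illustration; the synthetic snapshot matrix stands in for outputs of a slower digital model.

import numpy as np

# Synthetic snapshot matrix: each column is a full-order state (e.g., a flux or
# temperature field with 2000 unknowns) at one time/parameter sample.
rng = np.random.default_rng(0)
modes_true = rng.standard_normal((2000, 3))
snapshots = modes_true @ rng.standard_normal((3, 200))        # rank-3 data, 200 snapshots

# Proper orthogonal decomposition: keep the leading left singular vectors as a basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1                    # rank needed for 99.9% energy
basis = U[:, :r]

# The ROM evolves only r coefficients; full states are recovered by projection.
x_full = snapshots[:, 0]
x_reduced = basis.T @ x_full          # compress
x_approx = basis @ x_reduced          # reconstruct

An automated tool would wrap steps like these (snapshot generation, basis selection, error estimation) behind a simple interface so that modelers need not implement them by hand for every code.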
Functional Mockup Interfaces
The functional mockup interface (FMI) [63] is an open standard for a software interface to facilitate model exchange and co-simulation. This is an enabling technology because of the need to integrate different models of different fidelities involving different physics. Effectively, FMI standardizes an interface for coupling computational models independent of the tools. It uses a combination of XML files for describing the model contents and interface, and C-interfaces, shared libraries, or source code to provide a means to execute the code. There have been numerous attempts in the nuclear engineering community to develop such a capability [64-68]. However, none of these have seen broad adoption because they are largely tool- or framework-specific. Our recommendation is that this standard be adopted to facilitate interaction with commercial tools and the various software tools developed within the nuclear engineering community. Furthermore, the standard can support integration with physically sensed data and extended reality (either augmented or virtual). Presently, the standard is supported by several international experts and industry entities. We present Figure 6 from Touran et al. [68] as an example of the various interfaces that can exist for a nuclear energy DT; here, each of the interfaces could be implemented with FMI, enabling broad interoperability with other commercial or open-source tools.
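The flavor of FMI co-simulation can be conveyed with a small illustrative sketch. This is a toy stand-in only: it mimics the shape of the FMI 2.0 calls (fmi2SetReal, fmi2DoStep, fmi2GetReal) in plain Python rather than using the actual standard's C API or any particular FMU library, and the coupled "units" are invented.

class CoSimSlave:
    """Illustrative stand-in for an FMI co-simulation unit: set inputs, step, read outputs.
    This toy unit exposes a single scalar state as its only output."""
    def __init__(self, state=0.0):
        self.state, self.inputs = state, {}

    def set_real(self, name, value):          # analogous to fmi2SetReal
        self.inputs[name] = value

    def do_step(self, t, dt):                 # analogous to fmi2DoStep
        # Hypothetical first-order response of the internal model to its input.
        self.state += dt * (self.inputs.get("u", 0.0) - self.state)

    def get_real(self, name):                 # analogous to fmi2GetReal
        return self.state

# Master algorithm coupling two units at a fixed communication step.
core, balance_of_plant = CoSimSlave(1.0), CoSimSlave(0.5)
t, dt = 0.0, 0.1
for _ in range(100):
    balance_of_plant.set_real("u", core.get_real("power"))
    core.set_real("u", balance_of_plant.get_real("feedwater"))
    core.do_step(t, dt); balance_of_plant.do_step(t, dt)
    t += dt

The value of the standard is that the master algorithm above does not need to know which tool produced each unit, only the common interface.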
A Digital Twin Paradigm
As we noted earlier, much of the necessary theory for the enabling technologies of DTs, and in some cases the technology itself, exists in some form. Therefore, we offer the opinion that the development and realization of a DT is primarily an applied research area, as opposed to a basic research activity. Our paradigm for the DT-physical asset interaction is illustrated in Figure 7. In this figure, the DT is a real-time dynamics model based on mechanistic models. This model calculates predicted responses and control actions for the DT, and receives inputs from the physical asset. It is assumed to contain various levels of controllers and decision-making frameworks. All of this executes in real time and uses FMI (or a similar standard) as the basis of information exchange. Below the FMI-based communication layer is an offline, but on-demand, capability that uses resources that are slower than real-time. This also includes a data repository component for recording the history of the DT operation. Use of this on-demand resource would be managed by the Optimal Experimental Design methods described in Section 6.3. The on-demand capabilities would likely consist of data-driven approaches to make sense of the sensed data and the existing high-fidelity M&S capabilities in nuclear reactor analysis. The purpose of the on-demand capabilities is to evaluate the inconsistencies in the simplified real-time dynamics of the DT and, through ML processes (whether they are statistics-based, involve regression, or use neural networks), provide corrections to the DT. These corrections may account for changes in model coefficients due to the natural evolution of the physical asset (e.g., burnup, lower power states, high power states, etc.) or known limitations of the model (e.g., due to assumptions like linearity). We see this as one way to make use of the existing advanced M&S capabilities in NEAMS.
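A highly simplified sketch of this paradigm is given below. It is our illustration only: the residual threshold, the fast model, the synthetic sensor signal, and the correction mechanism are all hypothetical stand-ins for the real-time ROM, the sensed data, and the slower on-demand high-fidelity/ML pipeline.

import numpy as np

def realtime_rom_step(x, u, coeffs, dt=0.1):
    """Fast lumped-parameter model evaluated online by the digital twin."""
    return x + dt * (coeffs["gain"] * u - coeffs["loss"] * x)

def on_demand_correction(history, coeffs):
    """Stand-in for the slower, on-demand high-fidelity/ML pipeline: rescale the model
    gain so recent predictions track recent sensor readings on average."""
    ratio = np.mean(history["meas"][-50:]) / max(np.mean(history["pred"][-50:]), 1e-9)
    return {**coeffs, "gain": coeffs["gain"] * ratio}

coeffs = {"gain": 1.0, "loss": 0.5}
x, history = 1.0, {"pred": [], "meas": []}
for k in range(500):
    u = 1.0                                          # placeholder control action
    x = realtime_rom_step(x, u, coeffs)
    x_sensed = 2.1 - 1.1 * np.exp(-0.04 * k)         # synthetic sensor reading from the asset
    history["pred"].append(x); history["meas"].append(x_sensed)
    if abs(x - x_sensed) > 0.2:                      # residual exceeds tolerance:
        coeffs = on_demand_correction(history, coeffs)   # trigger the offline, on-demand update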
To achieve this paradigm using the aforementioned technologies there are several challenges that we now describe.
Security
As nuclear power systems represent sensitive and critical infrastructure, security of the information around the DTs should be considered a necessary requirement. Information security for DTs, though, should not be something developed by nuclear engineers; instead, the community should consider itself a user of this technology. As a challenge, what is needed in this area is the proper engagement of experts in the field of cyber-physical system security to ensure the requirements of nuclear power applications are captured. Trying to devise our own solutions to the security problem is likely misguided. As an example, standards for IoT systems security are already in development [69]. Further, the notion of secure embedded intelligence [16] could be leveraged to devise solutions to this challenge.
Integration with Prognostics and Health Management
A recognized area for value extraction from DTs is in prognostics and health management (PHM). PHM appears repeatedly across multiple potential DT applications. The exact manner in which PHM-related data integrates with DTs remains unclear, specifically how the data is used as it is collected from the physical asset. Some proof-of-concepts for integration with PHM have been demonstrated [70] through using the enabling technologies mentioned in Section 5. A similar approach could be leveraged for nuclear power applications; however, this area represents one of the key challenges where the nuclear engineering community will need to engage the broader PHM community to develop novel solutions.
Data Collection, Curation, Transmission, and Integration
The tasks of data collection from sensors, curation of this data, and transmission and integration of the sensor data into the DT will likely present substantial challenges. However, all of these challenges are expected to be practical in nature, which is not to dismiss their importance. A DT must be able to perform these tasks. There is nothing theoretically complex about them, but they should be expected to be tedious, error-prone, and initially unreliable. For nuclear power system DTs, these capabilities should ideally be "turn-key". Furthermore, these capabilities fall more generally under the capabilities of IoT infrastructure, where notable advancements are happening rapidly. For example, the problem of localizing either people or radiological material inside a nuclear facility is paramount, but devising solutions to this problem, like Pascale et al. [71], is not necessarily within the domain of expertise of nuclear engineers. Consequently, our suggestion is that these issues be addressed by IoT experts. A possible exception would be for nuclear engineers to engage with IoT experts to ensure the reliability of IoT in harsh environments (e.g., high temperature and high radiation).
Integration with Risk Assessments
DTs must be integrated with risk assessments for regulatory acceptance. Moreover, integration with risk will likely be necessary to achieve the benefits from increased reliability of the physical system and improved automation. One way that the integration of the DT and risk has been envisioned is through dynamic probabilistic risk assessment. Traditional Probabilistic Risk Assessment (PRA)-which is not dynamic-is one of the cornerstone safety analysis methodologies for reactor licensing. Through traditional PRA, fault-trees and event-trees are developed with appropriate probabilities of failures and events, then analyzed to produce core damage frequencies. The core damage frequency is defined as the probability of core damage per reactor year of operation. The core damage frequency is also usually categorized by the severity of the event. The probabilities that go into the PRA are typically based on best estimate design information and relevant experiment or operational data, and include some conservatism. The conventional PRA is performed throughout the life of the physical asset for licensing purposes, but generally it is not done in real-time.
The traditional PRA is limited in this sense as the probabilities that should go into it can very well be a function of the current reactor state. The condition and health of various components as they age is a first-order effect to determining these probabilities, as is the operational state (e.g., the turbine is rotating at a resonant frequency of the blades and begins vibrating). Dynamic PRA can utilize the latest state information of the DT-and even near-term forecasts by the DT-to provide more accurate and real-time risk assessments. These risk assessments can include the time-to-failure of a component in the physical asset, the resulting event due to this failure, and the potential for radiological release.
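To make the distinction concrete, the sketch below walks through the kind of fault-tree/event-tree arithmetic aggregated in a PRA, with made-up numbers rather than licensing-basis data; a dynamic PRA would simply replace the static probabilities with values conditioned on the current plant state supplied by the DT.

```python
# Toy fault-tree / event-tree arithmetic of the kind aggregated in a PRA.
# All numbers are illustrative placeholders, not licensing-basis data.

# Initiating event frequency (per reactor-year), e.g., loss of offsite power.
f_initiator = 2.0e-2

# Fault tree for "emergency cooling fails": two redundant trains treated as
# independent, plus a common-cause contribution.
p_train = 1.0e-2
p_common_cause = 5.0e-4
p_cooling_fails = p_train * p_train + p_common_cause

# Event-tree branch: operators fail to recover within the mission time.
p_no_recovery = 0.1

core_damage_frequency = f_initiator * p_cooling_fails * p_no_recovery
print(f"CDF ~ {core_damage_frequency:.2e} per reactor-year")
# A dynamic PRA would replace the static probabilities above with values
# conditioned on the current plant state (and near-term forecasts) from the DT.
```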
We consider the area of integrating DTs with risk analyses to be one of the challenges for the nuclear engineering community given the unique risks of nuclear power. Fortunately, activities in support of addressing this challenge are underway by both the DOE and NRC, and build on a strong foundation. Since the 1980s the U.S. NRC has invested in software capabilities for PRA-the latest iteration of which is the SAPHIRE code [72]. Further, the NRC also recently updated their standard review plan for digital instrumentation and control systems that would necessarily be a part of DTs. The DOE has also recently launched the Risk-Informed Systems Analysis project that developed the Risk Assessment process for Digital I&C (RADIC) [73] which discusses digital-based systems, structures, and components that would exist within a DT. Lin et al. [19] discuss these and other approaches to the integration of DTs with risk assessments in detail. We refer the reader to this work for a more in-depth discussion of these approaches and challenges.
Computing Infrastructure and Reliability
A hidden challenge for realizing DTs relates to the reliability of the computing infrastructure and the DT implementation. The DT's reliability is intimately woven into the security, data transmission, and risk assessment challenges. Therefore, unless it is considered separately, it tends to remain a hidden challenge. This challenge must address the overall question of how the implementation and reliability of the hardware, computing, and digital infrastructures for the digital asset affect the physical asset. As an example, consider what level of reliability is required for the DT to use sensor data and make predictions. Moreover, what is the consequence of a failure of this system?
If the real-time DT suddenly goes offline (partially or fully), would it result in increased risk of failure of the physical asset? This is a fundamental question that should go into the design of the DT, where we as a community should require that there is only a minimal and acceptable increase in risk to the physical asset in the event of a DT malfunction. It should not be the case that the risk to the physical asset is decreased when the DT goes offline; otherwise, the value proposition of the DT becomes questionable. With the adoption of a DT, the fault-tree and event-tree analyses of the PRA that nuclear engineers are accustomed to suddenly become much more complex. All of the systems, structures, and components supporting the DT-and its models and software-now contribute to the overall risk assessment of the physical system. Some of these questions are being investigated by the nuclear community, as in Lin et al. [19], and members of the broader IoT community have also identified this challenge.
In [74], Nguyen et al. focus on exactly this problem for IoT infrastructure used in healthcare monitoring. Part of the challenge addressed by this work is identifying and comparing relevant metrics and applying them in the right way to assess IoT reliability; the metrics proposed in this reference include the mean time to failure, mean time to recovery, and steady-state availability. Another challenge is the design of the IoT infrastructure to maximize reliability, even in the presence of cyber-security threats. These metrics are standard in the risk and safety analysis of nuclear systems [75]; however, nuclear system designers and regulators are less familiar with the reliability data of IoT infrastructures and the underlying computing infrastructure. Advancing the state of PRA for nuclear systems to incorporate the reliability of the DT, which depends on the reliability of the software and hardware underlying the DT, and its corresponding effects back into the physical-digital coupled system will be a challenge.
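As a simple illustration of how these metrics enter the bookkeeping, the snippet below computes steady-state availability from assumed (illustrative) MTTF and MTTR values for the DT's computing layer; in an extended PRA, the resulting unavailability could enter the fault trees as a basic event.

```python
# Illustrative reliability bookkeeping for a DT's computing/IoT layer, using
# the standard metrics named above (values are made up for the example).
mttf_hours = 4_000.0   # mean time to failure of the DT edge/compute stack
mttr_hours = 8.0       # mean time to recovery after a failure

steady_state_availability = mttf_hours / (mttf_hours + mttr_hours)
annual_downtime_hours = (1.0 - steady_state_availability) * 8_760.0

print(f"availability = {steady_state_availability:.4f}")
print(f"expected downtime = {annual_downtime_hours:.1f} h/year")
# In a PRA extended to cover the DT, "DT unavailable" would enter the fault
# trees as a basic event with probability (1 - steady_state_availability).
```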
Standardization
A common attribute of successfully deployed technologies is a reliance on standardization. The nuclear industry has a history of standards with high pedigree, and this has resulted in one of the safest industries, with high reliability and capacity factors for existing installations. Presently, the standardization of DTs is a recognized challenge, with no standards having been finalized yet. However, there are ongoing efforts by the International Organization for Standardization under the joint technical committee ISO/IEC JTC 1/SC 41 for the Internet of Things and Digital Twin. Some standards relevant to DTs exist [76] without reference to twins, but nearly all of those devoted to DTs are in development [77][78][79][80][81][82]. Moreover, none of these are necessarily tailored for nuclear applications. Therefore, beyond the general standardization of DTs, the nuclear engineering community will need to address the challenge of standardizing DTs for nuclear power applications. This activity should be performed by professional societies, national and international standards organizations, and regulators with an interest in, or oversight of, nuclear energy.
Leverage the Progress in High-Fidelity Advanced Modeling Simulation
One of the major challenges for the nuclear engineering community will be to consider how to best leverage the existing activities and recent progress of the advanced M&S efforts within the U.S. DOE. Over the last 10 years there has been a roughly $500 million investment by DOE's Office of Nuclear Energy in M&S efforts. The presumption in these programs was that the computing platforms would be leadership-class, or other large HPC clusters, having tens of thousands of processors and terabytes of memory. Consequently, many of the tools developed under these programs are designed to run on these platforms, and this is precisely the opposite of what is needed for real-time DTs. Exactly how a program focused on developing DTs will leverage advanced M&S is an open question, although we propose the strategy represented in Figures 7 and 8 as one possibility.
Another possibility relates to developing variable fidelity models. Here, any of the existing high-fidelity models can expose an FMI standard interface and be incorporated into a system model. This would likely not yield a real-time digital representation, but could be leveraged as an on-demand capability.
Uncertainty Quantification
The last challenge we identify is in UQ. This is a necessary capability for reliable DTs as it is needed for integrating information into the physical asset, and for integrating data from the physical asset into the DT. The challenges in this area are sufficiently broad and complex that we dedicate the next section to their discussion.
Uncertainty Quantification for Digital Twins
Establishing reliability and trust in DTs is crucial for their adoption in practice, especially for safety/mission-critical settings with potentially catastrophic consequences such as those in the nuclear domain. UQ is an enabling technology for initiating the assessment of these traits. With access to information about where the DTs are confident or uncertain, designers, operators, and other stakeholders can become aware of the different possible responses and outcomes. Consequently, informed decisions can be made on control, design, policy, or further experimentation. UQ therefore promotes transparency of the DT, and is a crucial component of decision support systems. Further, combined with code verification and model validation, the VVUQ (verification, validation, and uncertainty quantification) system [83] has grown to become the standard in many fields of computational science and engineering.
Among a broader selection of UQ paradigms, we follow a framework that rigorously characterizes uncertainty using the mathematical formalism of Bayesian probability [84][85][86][87]. While a frequentist perspective views probability as a frequency within an ensemble, a Bayesian perspective regards (and derives) probability as an extension of logic [88]. In this view, a probability distribution represents the state of uncertainty, and is updated through Bayes' rule as new evidence (e.g., sensor measurements) becomes available. This update rule naturally handles observations that materialize sequentially over time and offers a coherent representation of evidence aggregation. The updated distribution, called the Bayesian posterior, consistently concentrates towards the true parameter values as more measurement data are obtained (see, e.g., in [89]). Furthermore, a Bayesian approach is advantageous for accommodating sparse, noisy, and indirect measurements; consolidating datasets from different sources and of varying quality; and, rather importantly, injecting domain knowledge and expert opinion. Its use with digital models and in the context of scientific research has been widely demonstrated [90][91][92].
In the remainder of this section, we do not attempt to make a comprehensive review of UQ work in nuclear engineering, or even a review of UQ algorithms. Instead, we will present a technical overview on a number of key UQ tasks that fall under the interaction cycle between the physical and digital twins in Figure 3, which is further accentuated in Figure 8 below. These tasks are forward UQ, inverse UQ, and optimization under uncertainty (OUU). Forward UQ is concerned with "How well do we know our simulated predictions?"; inverse UQ is concerned with "How well do we know the state of our physical asset using our sensor data?"; and OUU is concerned with "What actions should we take to improve our predictions about the Physical Asset, or to improve its performance?" We offer a discussion highlighting UQ concepts perhaps less often encountered in the current nuclear engineering literature, and our view of their challenges pertaining to DTs.
Forward UQ
Forward UQ entails characterizing the uncertainty of the DT prediction about the response of the physical asset resulting from the uncertainty in the DT input parameters-that is, a propagation of uncertainty in the forward direction of the DT simulation. In Figure 8, forward UQ resides on the side of the Digital Asset, without needing direct interaction with the Physical Asset. We use the term "parameters" here to encapsulate all sources of uncertainty that can be applied to the DT. For example, this may include physical and material constants, nuclear data, manufacturing tolerances and defects, boundary and initial conditions, control actuations and forcings, geometric features, and any additional tunable model parameters or latent variables. A simplified abstraction may take the form

y = G(θ) + ε, (6)

where θ denotes the uncertain parameters, G(θ) is the DT prediction of quantities of interest (QoIs) made at θ, ε is measurement noise, and y is the (noisy) observation perceived by sensors in the physical asset. Then, given the current uncertainty state of θ expressed as a probability density function (PDF) p(θ), forward UQ seeks to characterize the corresponding pushforward distribution p(G(θ)), or predictive distribution p(y) (the pushforward is the distribution of the noiseless signal predicted by the DT, and the predictive is the distribution of the noisy measurement that may be observed by the sensors). Computationally, forward UQ is typically tackled by Monte Carlo (MC) sampling [93], from which useful statistics such as the mean, covariance, probabilities of rare/failure events, and expectations of performance and health metrics may be obtained. For example, one major area of forward UQ in nuclear science involves propagating uncertainty from the cross section data of nuclear isotopes through reactor analysis calculations, with some recent examples found in [94,95]. However, MC sampling converges slowly and can be prohibitively expensive when each sample requires a full DT simulation. We further discuss this challenge in Section 6.4 together with those from the other UQ tasks.
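A minimal Monte Carlo forward-UQ sketch is shown below; the two-parameter model G is a stand-in for a (reduced-order) DT simulator and all distributions are illustrative assumptions, not reactor data.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(theta):
    """Stand-in for the DT simulator G(theta): here a cheap nonlinear map.
    In practice this would be the (reduced-order) reactor model."""
    return np.exp(-theta[..., 0]) + 0.5 * theta[..., 1] ** 2

# Current uncertainty p(theta): independent Gaussians (illustrative).
n_samples = 10_000
theta = rng.normal(loc=[1.0, 0.2], scale=[0.1, 0.05], size=(n_samples, 2))

# Pushforward p(G(theta)) by plain Monte Carlo, and predictive p(y) with noise.
qoi = G(theta)
sigma_noise = 0.01
y_pred = qoi + rng.normal(0.0, sigma_noise, size=n_samples)

print("QoI mean/std:", qoi.mean(), qoi.std())
print("P(QoI > 0.45):", (qoi > 0.45).mean())   # e.g., a rare/failure-type event
```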
Inverse UQ
Inverse UQ deals with incorporating measurements from the physical asset (e.g., via sensors) into the DT. That is, we want to find out what plausible values of θ could have led to the observations y. This is an inverse problem, with a flow in the inverse direction of a DT simulation. In Figure 8, this corresponds to the flow of information from the Physical Asset to the Digital Asset. In contrast to single-point estimation, Bayesian inference provides a probabilistic inverse solution:

p(θ|y) = p(y|θ) p(θ) / p(y). (7)

Here, p(θ) depicts the prior uncertainty on θ before having measurements, and p(θ|y) represents the updated posterior uncertainty after having these measurements. p(y|θ) is the likelihood function describing the discrepancy between the DT prediction and actual measurements, e.g., from Equation (6): p(y|θ) = p_ε(y − G(θ)), where p_ε is the PDF of the measurement noise. Thus, each likelihood evaluation translates to one DT evaluation G(θ), typically the most expensive component of all UQ computations. Last, p(y) is the model evidence (marginal likelihood) that serves as a PDF normalization; it is generally intractable to compute and is avoided whenever possible. Solving the Bayesian inference problem (the inverse UQ task here) thus entails characterizing the posterior p(θ|y).
Computationally, attempting to directly approximate p(θ|y) using functional approximation techniques would inevitably involve estimating p(y) (a difficult integration problem), and is only feasible for low-dimensional θ (i.e., dimension less than about 3). A more scalable approach involves sampling from p(θ|y) via Markov chain Monte Carlo (MCMC) algorithms [93,96,97] that completely avoid the need for computing p(y). However, even the more advanced MCMC variants, such as Hamiltonian MC [98,99], are only effective for O(100)-dimensional θ in practice. As an example in nuclear systems, the work in [100] provides a recent review of inverse UQ methods-Bayesian and non-Bayesian-for application to thermal-hydraulic models.
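The sketch below shows why MCMC sidesteps p(y): a random-walk Metropolis sampler only ever uses posterior ratios, so the evidence cancels. The toy model, prior, and noise levels are assumptions carried over from the forward-UQ sketch above, not a thermal-hydraulic application.

```python
import numpy as np

rng = np.random.default_rng(1)

def G(theta):
    return np.exp(-theta[0]) + 0.5 * theta[1] ** 2   # same toy "DT" as above

def log_post(theta, y, sigma=0.01):
    # log p(theta|y) up to the constant log p(y): Gaussian prior + Gaussian likelihood.
    log_prior = -0.5 * np.sum(((theta - [1.0, 0.2]) / [0.1, 0.05]) ** 2)
    log_like = -0.5 * ((y - G(theta)) / sigma) ** 2
    return log_prior + log_like

y_obs = 0.40                       # a single noisy sensor reading
theta = np.array([1.0, 0.2])       # start from the prior mean
chain = []
for _ in range(20_000):            # random-walk Metropolis
    proposal = theta + rng.normal(0.0, [0.02, 0.01])
    if np.log(rng.uniform()) < log_post(proposal, y_obs) - log_post(theta, y_obs):
        theta = proposal
    chain.append(theta)

samples = np.array(chain[5_000:])  # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```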
Another important area of inverse UQ arises in time-dependent settings, for example, where the parameters θ evolve over time according to a dynamical system in the DT, and where the data y are also streamed in the form of a time series (e.g., sensors continuously operating at a fixed sampling frequency). Such cases are commonly encountered in state estimation problems (e.g., θ(t) being an uncertain state evolving over time) and can be effectively approached with methods of data assimilation [101,102] that encompass the well-known Kalman filter (KF), ensemble KF, particle filter, etc. The broader problem classes of filtering, smoothing, and forecasting can all be derived from a sequential Bayesian inference framework.
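For the linear-Gaussian special case, the sequential Bayesian update has the closed form of the Kalman filter; a one-dimensional sketch with made-up noise levels is given below.

```python
import numpy as np

# One-dimensional linear-Gaussian state estimation: theta_{k+1} = a*theta_k + w,
# y_k = theta_k + v. The KF is the closed-form sequential Bayesian update.
a, q, r = 0.98, 1e-4, 1e-2        # dynamics, process noise var., measurement noise var.
m, p = 0.0, 1.0                   # prior mean and variance of the state

rng = np.random.default_rng(2)
truth = 0.5
for k in range(50):
    truth = a * truth + rng.normal(0, np.sqrt(q))
    y = truth + rng.normal(0, np.sqrt(r))

    # Predict (propagate the prior through the dynamics).
    m, p = a * m, a * a * p + q
    # Update (condition on the new measurement).
    k_gain = p / (p + r)
    m, p = m + k_gain * (y - m), (1 - k_gain) * p

print(f"estimate {m:.3f} +/- {np.sqrt(p):.3f}, truth {truth:.3f}")
```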
Overall, these different inverse UQ methods all seek to combine the predictive power of our digital asset together with sensor observations in order to describe the uncertainty about our knowledge of our physical asset's current state.
Optimization under Uncertainty
Optimization under uncertainty (OUU) is associated with decision-making (i.e., taking actions) in the DT context. We divide OUU into two types: optimal experimental design (OED) and design (performance) optimization. In the former, OED focuses on selecting new experiments (e.g., expensive high-fidelity simulations), if any, in order to improve our digital model predictions. "Experiments" can be interpreted broadly, and may entail computational or physical experiments. (The term "OED" stems from the statistics community, and refers to the statistical design of experiments, i.e., designs optimized for certain desirable statistical properties. In setting up an experiment in practice, however, many more considerations need to be incorporated, requiring the expertise, experience, and instinct of a seasoned experimentalist.) These experiments do not need to achieve the real-time requirements of the DT, and can be carried out in the background and incorporated into the DT when complete. Therefore, we view OED here as a means of acquiring new information (though not from the physical twin) in order to improve our digital asset's predictions. This task is found within the Digital Asset box in Figure 8. In the latter, design (performance) optimization is concerned with taking actions on the physical asset that can improve its performance. This task resides between the Predictions and Actions ovals on the Digital Asset side in Figure 8.
We begin by introducing OUU in a general form, with d denoting the decision (action) variable:

d* = arg max_{d ∈ D} J(d), (8)

where D is the set of admissible decisions. The OUU problem thus seeks the d* that maximizes an objective function J reflecting the anticipated value from the proposed decision, subject to any decision constraints. What makes OUU different from classical optimization problems is the presence of uncertainty.
For OED, we present simulation-based OED [103][104][105][106] that leverages the predictive capabilities in a DT. This is in contrast to exploration-based design of experiments that does not make use of a digital model, such as space-filling, Latin hypercube, and factorial design sampling procedures (see, e.g., in Cox & Reid [107] and Chapters 1-6 in Santner et al. [108]). For example, a common choice for J in OED is the expected information gain (EIG) on θ:

J(d) = E_{ŷ|d} [ D_KL( p(θ|ŷ, d) || p(θ) ) ], (9)

where we use ŷ to differentiate these observations as coming from the experiments rather than from the physical asset. D_KL is the Kullback-Leibler (KL) divergence that measures the degree of dissimilarity between the posterior and prior distributions, and the expectation E_{ŷ|d} accounts for different possible observations under the proposed experimental design. Therefore, in this case, we want to find an experiment that, averaged over all possible experiment outcomes, provides the greatest change from the prior to the posterior (i.e., the new measurement is most informative in reducing our uncertainty about θ). For design (performance) optimization, we consider problems that target engineering performance metrics in J and in the constraints, under the current state of uncertainty. In nuclear power systems, some examples of such quantities include the expectation of power production, variance of market demand, probability of power outage, etc. Under this direction, mathematical frameworks such as reliability-based design optimization (RBDO) and robust design optimization (RDO) are commonly used to incorporate chance constraints and variability in the performance objective (see, e.g., Wang et al. [109]). These frameworks are integral to addressing the challenges of incorporating risk and reliability discussed in Sections 5.3.4 and 5.4.
Computation for OUU is typically highly demanding, especially for OED which in itself involves solving many inverse UQ subproblems. Conceptually, in OED, each evaluation of J(d) in Equation (9) at a given d requires MC sampling of many different experimental outcomes and performing Bayesian inference using MCMC followed by KL divergence estimation for each scenario. This J(d) estimation is further wrapped under a numerical optimization routine that needs to be able to handle noisy objectives (due to MC sampling). Overall, this triply-looped procedure-optimization over MC sampling over MCMC and KL estimation-must be accompanied by other numerical advances in order to be feasible. For example, the work by Ryan et al. [110] presents a nested MC estimation for J(d) that sidesteps the need for MCMC, and the work in Ryan et al. [111] provides an overview of some recent approaches, and we will point to a few more in Section 6.4.
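A stripped-down version of such a nested MC estimator of Equation (9) is sketched below on a one-parameter toy model; it is meant only to make the outer/inner sampling structure explicit, not to reproduce the estimators of [110,111].

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, d, sigma=0.05):
    """Toy experiment model: the design d controls where we 'measure' the DT."""
    return np.exp(-d * theta) + rng.normal(0.0, sigma, size=np.shape(theta))

def expected_information_gain(d, n_outer=200, n_inner=500, sigma=0.05):
    # Nested Monte Carlo estimator of E_y|d[ KL(posterior || prior) ],
    # in the equivalent form E[ log p(y|theta,d) - log p(y|d) ].
    theta_outer = rng.normal(1.0, 0.2, size=n_outer)
    y = simulate(theta_outer, d, sigma)
    log_like = -0.5 * ((y - np.exp(-d * theta_outer)) / sigma) ** 2
    # Inner loop: marginal likelihood p(y|d) approximated with fresh prior samples.
    theta_inner = rng.normal(1.0, 0.2, size=n_inner)
    evid = np.array([
        np.mean(np.exp(-0.5 * ((yi - np.exp(-d * theta_inner)) / sigma) ** 2))
        for yi in y
    ])
    return np.mean(log_like - np.log(evid))

designs = np.linspace(0.1, 3.0, 15)
eig = [expected_information_gain(d) for d in designs]
print("best design:", designs[int(np.argmax(eig))])
```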
Challenges in UQ for Digital Twins
A prominent challenge of applying UQ to DTs is the need for speed. This arises from the common application of DTs to real-time monitoring and control of physical assets, as well as the aggregated complexity of a large physical system. Performing simulations of the DT that potentially involve multiscale, multiphysics, and multidisciplinary interactions on a supercomputer would rarely be viable for online usage. Correspondingly, substantial computational acceleration is needed for the various UQ tasks. The availability of many UQ software packages such as DAKOTA [112], UQTk [113], and QUESO [114] also greatly facilitates the democratization of UQ adoption and further development.
One major strategy for addressing this computational challenge is to trade model fidelity for speed, by building ROMs or surrogate models as discussed in Section 5.1.3 and Figure 5. Another strategy is to focus on advancing solution techniques for the UQ tasks. For example, MC sampling efficiency can be improved through importance sampling and quasi-Monte Carlo methods [115], while MCMC mixing can be enhanced with adaptive and gradient-informed strategies for proposing new locations of Markov chain progression (see, e.g., in [116,117]). The parameter space that needs to be explored can also be decreased through various dimension-reduction techniques. Alternatively, improved scaling to higher dimensions can be achieved by approximating the uncertainty distributions with simpler families, such as the use of Gaussian distributions via variational inference [118,119] and the Laplace approximation. Combinations of these techniques have also been leveraged in OED, such as the use of surrogate modeling and gradients [120,121], Gaussian approximations [122,123], and low-rank structures [124].
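As a small illustration of one of these accelerations, the snippet below compares plain MC with importance sampling for a rare-event probability under a toy standard-normal parameter; the shifted proposal is an assumption chosen for this example.

```python
from math import erfc, sqrt
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# Rare-event probability P[theta > 4] for theta ~ N(0, 1).
# Plain MC needs on the order of 1/p samples; importance sampling shifts the proposal.
plain = (rng.normal(0.0, 1.0, n) > 4.0).mean()

shift = 4.0
z = rng.normal(shift, 1.0, n)                          # proposal q = N(4, 1)
weights = np.exp(-0.5 * z**2 + 0.5 * (z - shift)**2)   # p(z) / q(z)
is_estimate = np.mean((z > 4.0) * weights)

print(f"plain MC: {plain:.2e}, importance sampling: {is_estimate:.2e}")
print(f"exact   : {0.5 * erfc(4.0 / sqrt(2.0)):.2e}")
```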
Acceleration can also be achieved through reducing the need for processing large amounts of data. Indeed, a high-resolution spatial-temporal sensor network attached to the physical asset may create a huge quantity of measurement data that overwhelms the available computational capabilities, necessitating a strategic selection/prioritization of data processing. In this regard, many methods for data reduction would be highly valuable, such as techniques aimed at data dimension reduction (e.g., principal component analysis, tensor decomposition, and autoencoders; see in [125,126] for an overview) and subsampling (e.g., randomized algorithms [127] and coresets [128,129]).
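A minimal example of such data reduction is sketched below: synthetic multi-channel sensor snapshots are compressed with principal component analysis via the SVD; the channel count and latent dimension are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sensor snapshots: 5000 time samples of a 300-channel network, generated here
# from 3 latent modes so that the data are genuinely low-dimensional.
modes = rng.normal(size=(3, 300))
snapshots = rng.normal(size=(5000, 3)) @ modes + 0.01 * rng.normal(size=(5000, 300))

# Principal component analysis via the SVD of the centered snapshot matrix.
centered = snapshots - snapshots.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1     # components capturing 99% of variance

reduced = centered @ vt[:k].T                  # 300 channels -> k coordinates per sample
print(f"kept {k} components, compression factor ~x{300 // k}")
```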
While our discussion so far has revolved around computation, there are also questions regarding the UQ problem formulation. For instance, in addition to parametric uncertainty, there are also contributions from model discrepancy [90] resulting from modeling assumptions, unknown physics, or other inadequate portrayals of the physical asset. Yet another crucial challenge is to achieve an integration of the different UQ tasks instead of approaching them in isolation; indeed, one can imagine that additional benefits may be realized if we have a better and longer forecast of what might unfold in the future. A general problem of sequential decision-making under uncertainty can be mathematically characterized via a partially observable Markov decision process (POMDP), which also connects to the work of reinforcement learning and dynamic programming. Some initial investigations have taken place within the context of UQ and DT, such as in [130][131][132].
We end this section by returning to one of the key desirable properties for DT and AI technologies in general: trust. We view UQ to be an important enabler to achieve trustworthy DT tools, as it promotes greater transparency on the competency of the computational models. However, we note that there are many other important factors for establishing trust, such as beneficence, explainability, operation reliability, ability of human control, and even the psychology and culture of human users that defines their behavior. These fields are very much beyond the scope of our discussions, and we refer interested readers to the articles in [133,134] as a starting point.
Summary and Conclusions
This paper provides an overview of recent papers defining DTs in general terms. We review and discuss these definitions to provide an appropriate concept for nuclear power applications. Our proposed definition for the DT includes several components-the digital model, the digital shadow, and the digital twin-that each serve a unique purpose during a physical asset's life cycle. The differentiating factor in these definitions is how information is exchanged between the physical asset and its digital representations. The defining feature of a digital twin is a closed-loop of automated information exchange between the digital representation and physical asset in real-time.
With this definition, we survey the history of tools and capabilities for digitally modeling a nuclear power system. This discussion identifies that for some time nuclear plant simulators have been close to meeting the criteria of a digital twin, but lack the integration of sensed data from a physical asset. Another item identified in Section 4 is that recent modeling and simulation activities have been focused on capabilities that are less amenable to DT development, although there are some exceptions.
After our survey on existing nuclear capabilities, we discussed the enabling technologies and the challenges of their adoption by the nuclear engineering community to close the gap between the existing modeling and simulation capabilities and advanced sensing capabilities to achieve the realization of a DT. Among the enabling technologies, the Modelica modeling language and the Functional Mock-up Interface are identified as appropriate platforms for DT development. Model-based control and the recent advances in this field are also identified as key to realizing the DT. We strongly recommend model-based approaches over model-free ones because, for most aspects of modeling a nuclear power system, there is a deep understanding of the underlying physics. Model-free methods typically ignore the knowledge and experience the larger community has with nuclear power systems; relying solely on these methods therefore seems misguided. Instead, we promote their use to augment model-based approaches, correcting for known assumptions or unknown behaviors.
Several challenges to DT realization are then discussed. Some of these, like security and IoT capabilities, are better solved by experts from other fields, and the community interested in nuclear power systems should consider itself a stakeholder or customer of these technologies. Others, like integration with PHM, could be addressed collaboratively with the nuclear community. The challenges unique to the nuclear field include the development of additional standards, determining how to best utilize the existing modeling and simulation infrastructure, and finding ways to integrate our simulation technologies with risk assessments.
Last, we presented a technical overview of a number of key UQ tasks that fall under the interaction cycle between the physical and digital twins: forward UQ to propagate uncertainty from digital representations to predict the behavior of the physical asset, inverse UQ to incorporate new measurements obtained from the physical asset back into the DT, and optimization under uncertainty to guide decisions about experiments that maximize information gain, or actions that maximize the physical asset's performance under an uncertain environment. We offered discussions of their challenges pertaining to UQ for DTs, residing primarily within the areas of computational speed, integration among the different UQ tasks, and the role of UQ within the more expansive goal of establishing DT trust.
"Engineering",
"Environmental Science",
"Physics"
] |
Quantum Deep Learning Functional Similarities on Remdesivir, Drug Synergies to Treat COVID-19 in Practice.
Novel SARS coronavirus 2 (SARS-CoV-2), of the family Coronaviridae, starting in China and spreading around the world, is an enveloped, positive-sense, single-stranded RNA virus of the genus betacoronavirus (2019-nCoV) causing Coronavirus Disease 2019. The Remdesivir drug, or GS-5734 lead compound, was first described in 2016 as a potential anti-viral agent for Ebola disease and has also been researched as a potential therapeutic agent against the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the coronavirus that causes coronavirus disease 2019 (COVID-19). Computer-aided drug design (CADD) and structure- and ligand-based drug repositioning strategies based on parallel docking methodologies have been widely used both for modern drug development and for drug repurposing to find effective treatments against this disease. Quantum mechanics, molecular mechanics, molecular dynamics (MD), and their combinations have shown superior performance to other drug design approaches, providing an unprecedented opportunity in the rational drug development field and for the development of innovative drug repositioning methods. We tested 18 phytochemical small molecule libraries and predicted their synergies in COVID-19 (2019-nCoV) to devise therapeutic strategies, repurpose existing drugs in order to counteract highly pathogenic SARS-CoV-2 infection, and address the BRD4 conserved-residue-associated COVID-19 pathology. We anticipate that our quantum deep learning similarity approaches can be used for the development of anti-coronaviral drug combinations in large-scale HTS screenings, and to maximize the safety and efficacy of the Remdesivir, Colchicine and Ursolic acid drugs already known to induce synergy, with potential therapeutic value or drug repositioning to COVID-19 patients. The .pdbqt and .mol2 format files (26,27-42,46-48) present in the e-Drug3D dataset. (28,30-48) The ensemble of 3D molecule conformations (31,32-45,48) of the larger drugs from e-Drug3D was provided in a separate drug library dataset.
Drugs under clinical trials (COVID-19 repurposing dataset)
Our approach was focused on identifying a cluster of (1-48) similar chemotypes, followed by parallel docking grid generation using the MM-PBSA-WSAS and KNIME-HTS-HVS filter steps and the chemical structure preparation wizard provided by the BiogenetoligandorolTM cluster of algorithms, which had the potential to target the (2-48) FURIN-ADAMTS1-ROR-GAMMA-SARS-COV-2 conserved domains and fit the geometric constraints, without any restraints or constraints, when filling in the open valence of the SARS-COV-2-ACE2-RORγ-BRD4-FURIN (1-30,41-48) binding pocket residues. These were then (6-48) converted into substructure searches of one copy of the SARS-CoV-2 main protease (7-45), which were used to mine commercially available compounds and covalently bonded SARS-COV-2 inhibitors using the (9-32) eMolecules database. The dataset of the selected hit drugs followed the force field parameters as applied to the partial atomic charges of the selected ligands, which were derived using RESP and collected from published articles (3,5,7,13,48) and approved drugs listed on the (2-39) DrugBank database in the "Clinical Trial Summary by Drug" section, to fit the HF/6-31G* electrostatic potentials generated using the Gaussian 16 software package. (4,(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18) We intended to generate these from two different branches to model the viral protein using the isotropic position scaling algorithm. (7,48) One branch of the selected hit elements was referred to as the "Remdesivir Literature Substructures" branch (8), which was based on the Remdesivir, Colchicine and Ursolic acid substructures, using the Antechamber module, as extracted (15,46) from published bromodomain inhibitors.
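The pipeline above relies on proprietary tools, so as a generic, hedged illustration of the chemotype-similarity step only, the sketch below groups molecules by Morgan-fingerprint Tanimoto similarity with RDKit; the SMILES strings are simple placeholder molecules, not the actual drugs or hit compounds discussed here, and this is not the BiogenetoligandorolTM workflow.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder molecules standing in for a small screening library.
smiles = {
    "aspirin":  "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",
    "phenol":   "Oc1ccccc1",
    "toluene":  "Cc1ccccc1",
}

# Morgan (ECFP4-like) fingerprints for each molecule.
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
       for name, s in smiles.items()}

# Pairwise Tanimoto similarities; pairs above a threshold form a "chemotype cluster".
names = list(fps)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sim = DataStructs.TanimotoSimilarity(fps[a], fps[b])
        tag = "cluster" if sim >= 0.3 else ""
        print(f"{a:8s} vs {b:8s}: Tanimoto = {sim:.2f} {tag}")
```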
Discussion
In this research report, we found that the Recombovir (drug combination) cluster, identified during screening of a compound diversity set performed by the BiogenetoligandorolTM cluster of algorithms on the intersection track (Lys711 and Arg355/SARS-CoV2 PLpro), i.e., the chemical structures of Remdesivir, Colchicine and Ursolic acid, targeted the Lys711 and Arg355 residues as well as the residues Phe19, Trp23, and Leu26, which are located in an alpha-helical region of the SARS-CoV2 PLpro N terminus that binds to the N-terminal Lys711 and Arg355 hydrophobic pocket (17). The druggable scaffold of this drug combination of the Remdesivir, Colchicine and Ursolic acid small molecules targets the binding domains of these three critical SARS-CoV2 PLpro residues; the combination of the hit compounds therefore competes with endogenous SARS-CoV2 PLpro for binding to Lys711 and Arg355. We created a new track to display the contacts of this drug combination with each of these: Lys711, Arg355, and SARS-CoV2 PLpro. Interestingly, this drug combination, consisting of the Remdesivir, Colchicine and Ursolic acid chemical structures, targets the Lys711 and Arg355 homo-dimerization site and intersects within the Lys711 and Arg355-Recombovir (drug combination) binding sites, suggesting that it may also interfere within the binding pockets of the Lys711 and Arg355 homodimerizations. The key residues of the seven hot-spot residues that contribute to S spike glycoprotein trimerization can potentially be down-regulated in silico by the docking combination of the therapeutic agents Remdesivir, Colchicine and Ursolic acid in order to significantly block the quaternary structure assembly of the SARS-CoV-2 replication protein. Furthermore, the combination of the Remdesivir, Colchicine and Ursolic acid drugs targets BRD4; JQ-1, ISG15, IFN-β, IL-1β, I-BET 151 and OTX-015 exhibited viral inhibition with a short hairpin RNA (shRNA; shBRD4), which is involved in the recognition of molecular patterns and mediates severe inflammatory responses, and may also interact within the binding domains of the S protein via 10 active residues located in the S1 subunit. We also suggest that the Recombovir (drug combination) might interact with this ganglioside-binding domain within the S protein (61). Drug repurposing, or the chemical optimization of existing drugs, represents an effective drug discovery approach, and a drug combination therapeutic approach has the potential to reduce the time and costs associated with de novo drug discovery and development in this anti-COVID-19 clinical trial process (62-72,80-94). This in silico project demonstrated that the combination of the Remdesivir, Colchicine and Ursolic acid drugs can potentially inhibit SARS-CoV-2 replication (63,64). Additionally, the BiogenetoligandorolTM algorithm cannot determine binding free energies or binding orientations of small molecules. For that aim, other docking tools or molecular dynamics studies should be applied, as explained above. Therefore, the BiogenetoligandorolTM approach, which takes a multiple-sequence-conserved alignment (coMSA) in .pdbqt format file as an input, is mainly aimed at binding residue recognition in cases of QM/MM homology modeling techniques where the binding partner is a small chemical compound or small peptide.
Its druggability and docking fitness scoring effectiveness allowed us to generate this drug repurposing screening approach by combining its cluster free energy ranking output with other cheminformatics and repositioning in silico tools. Therefore, it can be used as an AI strategy in complex inverse docking and quantum simulation pipelines. We envision the BiogenetoligandorolTM quantum thinking procedure as the first step in a ligand parallel and inverse docking and free energy simulation pipeline (-400.794, 329.678, -337.184, -907.342, -52.667, -894.194, -194). Therefore, the solutions provided by the BiogenetoligandorolTM cluster of AI algorithms in this project indicated to us that the Colchicine, Remdesivir and Ursolic acid drugs are considered to be "co-administered" (Figures 3e, 4a, 4b, 4c, 4d, 4e, 4f, 4g), which is highly important and has to be considered as a first approximation that may require subsequent parallel refinement and docking analysis using more accurate free energy ranking models. In conclusion, BiogenetoligandorolTM-LigandorolTM is very efficient and is not just proposed as an alternative drug repurposing and computational method, but rather as a combined complementary deep learning similarity and quantum mechanics predictive tool to be used in tandem with other in silico drug retargeting and computational platforms, which could lead us to the rational design of novel drug combinations of small molecules and more effective repositioning experimental methods.
Declarations
Availability of data and materials
The author confirms that the data supporting the findings of this study are available within the article [and/or] its supplementary materials.
Competing interests
No potential competing interest was reported by the author.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Authors' contributions
Author's diverse contributions to the published work are accurate and agreed.
Author has contributed in the below multiple roles:
Ø Conceptualization: ideas; formulation or evolution of overarching research goals and aims
Ø Methodology: development and design of methodology; creation of models
Ø Software: programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components
Ø Validation: verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs
Ø Formal analysis: application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data
Ø Investigation: conducting a research and investigation process, specifically performing the experiments, or data/evidence collection
Ø Resources: provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools
Ø Data curation: management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse
Ø Writing - original draft: preparation, creation and presentation of the published work, specifically writing the initial draft (including substantive translation)
Ø Writing - review & editing: preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision, including pre- or post-publication stages
Ø Visualization: preparation, creation and presentation of the published work, specifically visualization/data presentation
Ø Supervision: oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team
Ø Project administration: management and coordination responsibility for the research activity planning and execution.
Acknowledgments
I would like to deeply express my special thanks of gratitude to my teacher (George Grigoriadis, Pharmacist) as well as our CEO and principal (Nikolaos Grigoriadis, PhD, Pharmacist), who gave me the golden opportunity to do this wonderful project on the Quantum Deep Learning Chemistry topic, which also helped me in doing a lot of original drug repurposing and drug combination research and through which I came to know about so many new things. I am really thankful to them.
Significance Statement
Drug repurposing/repositioning/rescue is a computational approach to identify potential drug indications by integrating various applications of an existing drug towards a new disease indication. In this paper we filtered out residues with relatively small solvent-accessible surface areas, and/or with charge and hydrophobic properties incompatible with the ligands of the Remdesivir, Colchicine and Ursolic acid small molecules, which could improve the prediction of binding free energies or binding orientations of different drug combinations of Remdesivir, Colchicine and Ursolic acid to treat COVID-19. Finally, a comprehensive web platform applying AI deep learning models was designed based on our BiogenetoligandorolTM protocol for drug repurposing to significantly reduce user time for data gathering and multi-step analysis without human intervention. In conclusion, BiogenetoligandorolTM-LigandorolTM is not proposed as an alternative drug repurposing method, but rather as a complementary deep learning quantum mechanics tool to be used in tandem with other drug retargeting computational and small molecule repositioning experimental methods.
"Medicine",
"Computer Science"
] |
Nonlinear representation of the confidence region of orbits determined on short arcs
The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. If all resident space objects larger than 1 cm are considered, this number increases to an estimate of 700,000 objects. The next generation of sensors will be able to detect small-size objects, producing millions of observations per day. However, due to observability constraints, long gaps between observations will be likely to occur, especially for small objects. As a consequence, when acquiring observations on a single arc, an accurate determination of the space object orbit and the associated uncertainty is required. This work aims to revisit the classical least squares method by studying the effect of nonlinearities in the mapping between observations and state. For this purpose, high-order Taylor expansions enabled by differential algebra are exploited. In particular, an arbitrary-order least squares solver is implemented using the high-order expansion of the residuals with respect to the state. Typical approximations of differential correction methods are then avoided. Finally, the confidence region of the solution is accurately characterized with a nonlinear approach, taking advantage of these expansions. The properties and performance of the proposed methods are assessed using optical observations of objects in LEO, HEO, and GEO.
Introduction
Since the era of space exploration started, the number of Earth-orbiting objects has on average grown (Liou et al. 2013). A more crowded space environment raises the possibility of satellite collisions, thus seriously threatening the viability of space activities. Tracking and monitoring Earth-orbiting objects is therefore essential. For this purpose, catalogs of as many resident space objects (RSOs) as possible have been built and are continuously maintained. Such catalogs are used to enable safe space operations, e.g., to predict orbital conjunctions (Hobson et al. 2015). The utility and reliability of these catalogs depend on the accuracy and timeliness of the information used to maintain them. Regular and direct observation of RSOs is therefore a crucial source of information to perform orbit determination (OD) and maintain the abovementioned catalogs. In the OD process, we can differentiate initial orbit determination (IOD) and accurate orbit estimation (AOE). The former is typically employed to estimate the value of all six orbital parameters (the unknowns) from six independent scalar observations (e.g., three pairs of right ascension and declination), when a priori information on the orbit is not available. It is worth noting that some techniques have been developed, which provide an IOD solution even when the processed observations are less than six, as it happens with the admissible region (Milani et al. 2004). However, the orbit is not fully resolved unless some additional constraints are available. The IOD gives a first estimate, and this solution is then used to obtain follow-up observations and refine the OD process. In contrast, the AOE is used to obtain a better estimate of a priori orbital parameters from a large set of tracking data (Montenbruck and Gill 2000). The AOE requires that RSOs be observed on a regular basis and observations belonging to the same object identified. The latter task is known as data association.
Currently, more than 20,000 man-made objects larger than 10 cm in size are tracked by the US space surveillance network (SSN) (Fedrizzi et al. 2012). However, since the size of launched spacecraft is continuously decreasing [e.g., constellations of CubeSats such as Flock 1 or future mega-constellations like OneWeb (Radtke et al. 2017)], RSOs of small dimension will need to be tracked as they can yield catastrophic collisions. Due to observability constraints, observations of such small objects may be characterized by long observational gaps. Thus, a future challenge will be to accurately determine the orbit of the object with a single passage of the object above an observing station, when a short arc is observed. The estimated number of RSOs larger than 1 cm is around 700,000 (Pechkis et al. 2016;Wilden et al. 2016). This large number of objects will turn the data association problem into an even more challenging task. Realistic description of the uncertainties of IOD solutions is required to perform reliable data association, as well as to initialize Bayesian estimators for orbit refinement (Schumacher et al. 2015). However, when the initial orbit solution is based on observations spread over a short arc, only partial information about the curvature of the orbit can be inferred and, thus, the estimated orbit will be affected by a large uncertainty.
One common approach to handle very short arcs is based on attributables and admissible regions (Milani et al. 2004). An attributable is defined as a four-dimensional vector with data acquired from a short arc. In the case of optical observations, an attributable contains two angles (e.g., right ascension and declination) and their angular rates, A = (α, δ, α̇, δ̇). Regardless of how many measurements are acquired for a newly observed object, only four quantities are kept in the attributable. The resulting orbit is then undetermined in the range ρ, range-rate ρ̇ space. The two degrees of freedom of the attributable thus generate the 2D plane in which the admissible region lies. The region is bounded by some physical constraints such as semiaxis and eccentricity (DeMars and Jah 2013). For each point in the admissible region, we can define a virtual debris (VD), made of a (ρ, ρ̇) pair and the attributable that defines the admissible region. Because all six components are defined, the VD has a known orbit.
In this work, we focus on observation scenarios in which observations span arc lengths that are long enough to allow us to solve a least squares (LS) problem, but too short to accurately determine the orbits. We refer to this situation as short arc observations (in contrast with too short arc observations, in which an orbit cannot be determined). In this framework, we propose to (1) solve a LS problem using all observations belonging to the same tracklet with an arbitrary-order solver, and (2) nonlinearly characterize the confidence region of the LS solution. Our main objective is thus to shed some light on the effects of nonlinearities resulting from observations on short arcs. Unless differently specified, we will assume Gaussian, uncorrelated and zero mean measurement noise throughout the paper (which is a common assumption and not directly affected by the separation between observations) such that the non-Gaussianity of the determined state will be only due to the effect of nonlinearities, which is the main focus of our work. The LS solver is implemented to make the most of differential algebra (DA) techniques (Berz 1986, 1987, 1999) and high-order terms are exploited to provide an accurate description of the confidence region. In particular, by using DA we can approximate the LS target function as an arbitrary-order polynomial, thus enabling a high-order representation of the confidence region. This accurate representation of the confidence region directly in IOD is of crucial importance for observation correlation and initialization of Bayesian estimators (Schumacher et al. 2015).
After finding the OD solution and its uncertainty region, in most practical applications it is necessary to draw samples according to the OD statistics. These applications include the initialization of particle filters (Simon 2006) and the computation of collision probability (Jones et al. 2015). In this work, we propose four methods for the nonlinear representation of the LS confidence region. The first method is based on the concept of gradient extremal (GE) (Hoffman et al. 1986), which has already been introduced in astrodynamics under the name of line of variation (LOV). Due to the effect of nonlinearities, the numerical procedure to determine samples on the LOV is quite complex for short arcs (Milani and Gronchi 2010). DA techniques are introduced here to simplify this numerical procedure by taking advantage of the polynomial representation of the involved quantities. The developed technique can then be applied along any eigenvector of the solution covariance matrix. In the second method, the concept of LOV is extended to cases in which the confidence region is shown to be two-dimensional, by introducing the gradient extremal surface (GES). The third approach combines a mono-dimensional LOV with a high-order DA polynomial to obtain a two-dimensional sampling. Finally, a method to enclose the confidence region in a six-dimensional box is introduced. This approach could be particularly useful for applications in which high accuracy is required, e.g., the computation of low collision probabilities. The proposed algorithms for solving the LS problem and the nonlinear representation of the confidence region are accompanied by the definition of indices to estimate the relevance of high-order terms and to determine the dimensionality of the confidence region.
The work presented in this paper is based on preliminary results shown in Principe et al. (2016, 2017). In representing the confidence region, new algorithms are presented taking advantage of, and extending, the concept of GE. The effectiveness of this approach is tested with even shorter arc lengths than in Principe et al. (2016). Furthermore, indices are introduced to establish the most suitable description of the confidence region.
The paper is organized as follows. First, a description of the LS method and the confidence region of the LS solution is given. The DA implementation of the LS solver is presented next, followed by some algorithms used to nonlinearly characterize and sample the confidence region. An introduction to the indices and our strategy for dealing with IOD problems on short arcs conclude this section. The properties and performance of the proposed approaches are assessed using a realistic observational scenario of four objects in different orbital regimes. Some final remarks conclude the paper.
Classical least squares
We need to find the solution of the OD problem in order to track RSOs. Thus, given some observations, the aim is to compute the orbit of an object. The orbit is expressed in terms of an n-dimensional state vector at a reference epoch x(t 0 ). Different ways of expressing the state vector can be used, e.g., in the modified equinoctial elements (MEE) (Walker et al. 1985), or as a position-velocity vector (r, v) in the Earth-centered inertial (ECI) coordinates.
The OD problem is generally addressed by using the LS method, devised by Gauss (1809). Input of the algorithm is a tentative value x = x(t_0). Then, the predicted observations are computed at each observation epoch. Let y be an m-dimensional vector containing the predicted observations, that is y = h(x), where m is the number of measurements. Note that h is a nonlinear function that composes the propagation from the reference epoch to observation epochs with the observation space projection. The differences between the actual observations y_obs and the predicted ones y are referred to as residuals. The residuals are collected in the m-dimensional vector ξ = y_obs − y. The LS solution is the state vector x* that minimizes the target function

J(x) = (1/2) ξ(x)^T ξ(x). (1)

We can find the minimum of J(x) by computing its stationary points, i.e.,

∂J/∂x (x*) = 0. (2)

It is worth noting that x* can be a minimum, maximum, as well as a saddle. Thus, to ensure that x* is a minimum, it is required that the Hessian of the target function at the stationary point, H* = ∂²J/∂x² (x*), is positive definite. To solve the system of nonlinear equations given by Eq.
(2), we can use an iterative method, e.g., Newton's method. Convergence of this method is ensured if a suitable initial estimate is available. This estimate is usually obtained by solving the IOD problem, in which the number of observations is minimum, m = n. The solution of the iterative method is (Milani and Gronchi 2010)

x_{i+1} = x_i − C^{-1}(x_i) F^T(x_i) ξ(x_i), (3)

where x_i is available from previous iterations or is the IOD solution when i = 1. F is an m × n matrix with the partial derivatives of the residuals with respect to the state vector components, that is

F_{pq} = ∂ξ_p/∂x_q, (4)

C is the n × n normal matrix,

C = F^T F + ξ^T S, (5)

and S is an m × n × n array with elements

S_{pqr} = ∂²ξ_p/(∂x_q ∂x_r), (6)

for p = 1, . . . , m and q, r = 1, . . . , n.
The full Newton's method is generally not used for OD problems, because of practical problems in the computation of the second derivatives in matrix S (Milani 1999). For this reason, the term ξ T S in Eq. (5) is often neglected. This quantity is negligible when the residuals are small. The resulting method is called the differential correction technique (Milani and Gronchi 2010) or Gauss-Newton method (Hansen et al. 2013).
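A minimal sketch of this differential correction (Gauss-Newton) loop is given below on a toy one-dimensional fitting problem rather than an orbit; the model, noise level, and first guess are made up, but the structure (residuals ξ, design matrix F, normal matrix C = F^T F, update x_{i+1} = x_i − C^{-1} F^T ξ) follows the text.

```python
import numpy as np

# Toy "orbit determination": fit x = (x1, x2) of a model h(x, t) = x1*exp(x2*t)
# to noisy observations, using the differential correction update
# x_{i+1} = x_i - C^{-1} F^T xi with C = F^T F (the xi^T S term neglected).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
x_true = np.array([2.0, -1.5])
y_obs = x_true[0] * np.exp(x_true[1] * t) + rng.normal(0.0, 0.01, t.size)

def h(x):
    return x[0] * np.exp(x[1] * t)

def design_matrix(x):
    # F_pq = d(xi_p)/d(x_q) with xi = y_obs - h(x), hence F = -dh/dx.
    return -np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

x = np.array([1.0, -1.0])                    # crude "IOD-like" first guess
for _ in range(25):
    xi = y_obs - h(x)                        # residuals
    F = design_matrix(x)
    C = F.T @ F                              # normal matrix
    dx = -np.linalg.solve(C, F.T @ xi)       # differential correction step
    x = x + dx
    if np.linalg.norm(dx) < 1e-10:
        break

P = np.linalg.inv(C)                         # linearized covariance of the solution
print("estimate:", x, "truth:", x_true)
```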
Confidence region and statistical properties of the LS solution
The solution of the LS x*, although minimizing the cost function, does not generally correspond to the true orbit, which lies within an uncertainty set around the LS solution, the so-called confidence region. From an optimization perspective, following Milani and Gronchi (2010), the confidence region includes orbits with acceptable target functions. To determine the confidence region, the target function J(x) is expressed as

J(x) = J* + δJ(x), (7)

where J* = J(x*) and δJ(x) is called the penalty. Then, the confidence region Z is defined as the region in which δJ is smaller than or equal to the control value K² (a method to determine K² is provided in Sect. 4). Thus,

Z(K) = {x ∈ A : δJ(x) ≤ K²}, (8)

where the subset A depends on the chosen orbital elements. The target function J(x) is usually expanded around x* at second order, i.e., linearizing the mapping between the state and observation, resulting in

J(x) ≈ J* + ∇J^T (x − x*) + (1/2) (x − x*)^T H (x − x*), (9)

where ∇J = ∂J/∂x (x*) and

H = ∂²J/∂x² (x*). (10)

The Hessian matrix H can be expressed as

H = C. (11)

Reminding that ∂J/∂x (x*) = 0, the confidence region definition becomes

Z(K) = {x ∈ A : (1/2) (x − x*)^T C (x − x*) ≤ K²}. (12)

This expression can then be manipulated by taking advantage of the eigendecomposition theorem:

C = V C_d V^T, (13)

where V is a square matrix whose columns are the eigenvectors of C, while C_d is a diagonal matrix containing the eigenvalues of C (Franklin 1968). Hence, the confidence region expression becomes

Z(K) = {x ∈ A : (1/2) z^T C_d z ≤ K²}, (14)

where

z = V^T (x − x*). (15)

Because C is positive definite, all of its eigenvalues are positive and C_d can be expressed as

C_d = diag(γ_1², . . . , γ_n²), (16)

where γ_1², . . . , γ_n² > 0. Then, Eq. (14) becomes

γ_1² z_1² + · · · + γ_n² z_n² ≤ 2K². (17)

In conclusion, due to the quadratic form of the penalty, the confidence region is represented by an ellipsoid with axes aligned with the columns of V and semi-axes of length

√2 K / γ_i, i = 1, . . . , n. (18)

The LS method can also be endowed with a probabilistic interpretation, in which its solution is a random vector due to the random nature of the measurements. If the mapping between state and residuals is linearized and the measurement noise is assumed to be randomly distributed, uncorrelated, and with zero mean value, then the first two statistical moments of the solution can be straightforwardly derived from those of the measurements. Specifically, x* is the solution mean and the covariance matrix is given by the inverse of the normal matrix, P = C^{-1} (Montenbruck and Gill 2000). Moreover, with the additional assumption of Gaussian noise, the LS solution is distributed according to a multivariate Gaussian probability density function (PDF) p(x) (Gauss 1809), and the first two statistical moments fully statistically describe the solution. Then,

p(x) = (2π)^{-n/2} |P|^{-1/2} exp( −(1/2) (x − x*)^T P^{-1} (x − x*) ), (19)

where |P| is the determinant of P. In this case, contour levels of δJ(x) are also contour levels of p(x); thus ellipsoids with equal residual values (boundaries of the confidence region) are surfaces of equal probability.
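The following snippet illustrates this linearized construction on an assumed 2-D normal matrix (the 6-D orbital case is identical in structure): the eigendecomposition gives the ellipsoid axes, and Gaussian samples with covariance P = C^{-1} can be checked against the K-level set. All numerical values are placeholders.

```python
import numpy as np

# Toy illustration of the linearized confidence ellipsoid (2-D state for
# readability; the same construction applies to the 6-D orbital state).
C = np.array([[4.0, 1.2],
              [1.2, 0.8]])          # normal matrix (illustrative values)
x_star = np.array([0.0, 0.0])       # LS solution
K = 1.0                             # control value

gamma2, V = np.linalg.eigh(C)       # eigenvalues gamma_i^2 and eigenvectors
semi_axes = np.sqrt(2.0) * K / np.sqrt(gamma2)

# Points on the boundary delta_J = K^2: map the unit circle through the
# scaled eigenvector basis.
phi = np.linspace(0.0, 2.0 * np.pi, 200)
unit = np.stack([np.cos(phi), np.sin(phi)])
boundary = x_star[:, None] + V @ (semi_axes[:, None] * unit)
delta = boundary - x_star[:, None]
penalty = 0.5 * np.sum(delta * (C @ delta), axis=0)
print("boundary lies on the K^2 level set:", np.allclose(penalty, K**2))

# Equivalently, draw Gaussian samples with covariance P = C^{-1}.
P = np.linalg.inv(C)
samples = np.random.default_rng(0).multivariate_normal(x_star, P, size=1000)
inside = np.einsum('ij,jk,ik->i', samples - x_star, C, samples - x_star) <= 2 * K**2
print(f"fraction of samples inside the K-ellipsoid: {inside.mean():.2f}")
```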
3 Least squares solution with differential algebra
DA techniques enable the efficient computation of derivatives of functions within a computer environment. The unfamiliar reader can refer to Berz (1999) for theoretical aspects and to Armellin et al. (2010) for a self-contained introduction to applications in astrodynamics.
Here we take advantage of DA techniques to develop a high-order iterative algorithm to solve LS problems. We first describe a general procedure that finds the solution of a system of nonlinear equations, g(x) = 0, in the DA framework. This algorithm was presented in Principe et al. (2016) and is recalled here for the sake of clarity. The algorithm is as follows:
1. Given the solution x_i (from the previous iteration, or from the initial guess when i = 1), initialize the state vector x_i as a kth-order DA vector and evaluate the function g in the DA framework, thus obtaining the kth-order Taylor expansion of g,
g = T^k_g(x).   (20)
2. Invert the map (20) to obtain
x = T^k_{x_i}(g).   (21)
3. Evaluate T^k_{x_i}(g) at g = 0 to compute the updated solution as
x_{i+1} = T^k_{x_i}(g = 0).   (22)
4. Repeat (1)-(3) until a convergence criterion is met or the maximum number of iterations is reached.
After convergence, the algorithm supplies x * , the solution of the system g(x) = 0, as well as the high-order Taylor expansion of the function g around x * , T k g (x * ). When solving the LS problem, we need to find the stationary point of the target function J (x). Thus, we need to solve the system of nonlinear equations ∂ J ∂ x (x) = 0. We can hence set g(x) = ∂ J ∂ x (x) in the algorithm and obtain an arbitrary-order solver of the LS problem. It will be referred to as the differential algebra least squares (DALS) solver.
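The one-dimensional sketch below mimics the idea of this update: build a truncated Taylor model of g around the current iterate and take the correction at which the model vanishes. It uses analytic derivatives instead of a differential algebra library, so it only illustrates the mechanism, not the DALS implementation.

```python
# A one-dimensional illustration of the map-inversion update (not the DALS itself).
import numpy as np

def g(x):                 # example nonlinear function; g(x) = 0 at x = sqrt(2)
    return x**2 - 2.0

def g_derivatives(x):     # first and second derivatives (order k = 2 model)
    return 2.0 * x, 2.0

def map_inversion_step(x_i):
    """Build the 2nd-order Taylor model of g around x_i and 'evaluate the
    inverse map at g = 0', i.e. pick the small root dx of T_g(dx) = 0."""
    g0 = g(x_i)
    g1, g2 = g_derivatives(x_i)
    roots = np.roots([0.5 * g2, g1, g0])     # T_g(dx) = g0 + g1*dx + 0.5*g2*dx^2
    real = roots[np.isreal(roots)].real
    dx = real[np.argmin(np.abs(real))]       # keep the smallest correction
    return x_i + dx

x = 1.0                                      # IOD-like initial guess
for _ in range(5):
    x = map_inversion_step(x)
print(x, np.sqrt(2.0))                       # converges to sqrt(2)
```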
The DALS solver's main advantage is its polynomial approximation of the objective function J (x). Thus, we can take advantage of the analytical expression of J (x) in the neighborhood of a minimum and analyze the nonlinear description of the confidence region. In addition, as the objective function J (x) is expanded up to an arbitrary order, the correct (full) expression of the Hessian matrix H is available. We can then check whether H (x * ) is positive definite, i.e., x * is actually a minimum. This feature is not a natural part of the differential correction algorithm, as the full expression of H is not available. However, the algorithm can be extended and the Hessian computed in order to categorize x * .
We implemented two convergence criteria: one based on the correction size, one based on the target function variation. Thus, the iterative process is halted when at least one of the two following requirements is met:

||x_{i+1} - x_i|| ≤ ε_x  or  |J(x_{i+1}) - J(x_i)| ≤ ε_J,

where ε_x and ε_J are established tolerances.
In this section, we presented an algorithm that can work at arbitrary order. However, including terms above the second order did not improve the convergence rate of the algorithm while significantly increasing the execution time. Thus, a second-order DALS solver is used in this work. Note that, in order to exploit high-order terms, it would be necessary to use step-size control mechanisms, which have not been implemented yet.
After convergence of the second-order DALS solver, a kth-order Taylor expansion of J (x) around the optimal solution can be computed.
Confidence region representation
When we deal with short observational arcs, nonlinearities in the mapping between observations and state are relevant. Thus, we need to take into account terms above 2nd-order in the expression of J (x). Due to the non-negligible high-order terms, even when the measurement noise is assumed to be Gaussian, the solution statistics are no longer guaranteed to be Gaussian and surfaces of equal probability are no longer guaranteed to be ellipsoids. In this section we show some algorithms to accurately describe the confidence region of the LS solution. For this purpose, we take advantage of the high-order representation of the target function J (x) supplied by the DALS. Such algorithms are essential in many applications, e.g., to draw samples when correlating observations or to initialize a particle filter (Simon 2006).
The algorithm exploits the kth-order Taylor expansion of J(x) around the optimal solution provided by DALS after convergence,

J(x) ≈ J* + T^k_{δJ}(δx),   (24)

where x = x* + δx. In Eq. (24), terms up to order k are retained.
The F-test method (Seber and Wild 2003) can be used, with the assumption of Gaussian measurement noise, to determine the value of the control parameter K², introduced in Eq. (8), corresponding to the desired confidence level even when high-order terms are retained in the representation of the penalty. For a confidence level of 100(1 - α)%, we have

K² = J* (n / (m - n)) F^α_{n,m-n},   (25)

in which F^α_{n,m-n} is the upper α percentage point of the F-distribution.
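This control value is straightforward to evaluate with scipy; the numbers below use n = 6 state components, m = 16 scalar measurements, and the value of J* reported for the Molniya test case in Sect. 6.2, purely to illustrate the calculation.

```python
# Control value K for a 95% confidence region via the F-test (illustrative).
import numpy as np
from scipy.stats import f

n, m = 6, 16            # state size and number of scalar measurements
J_star = 5.008          # minimum of the target function (example value)
alpha = 0.05            # 100*(1-alpha)% confidence level

F_crit = f.ppf(1.0 - alpha, n, m - n)       # upper alpha point of F(n, m-n)
K2 = J_star * n / (m - n) * F_crit          # control value K^2 for the penalty
K = np.sqrt(K2)                             # ~3.1 for these example numbers
```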
Line of variation
The LS confidence region is in general described as an n-dimensional region. However, this region is sometimes stretched along one direction, which is called the weak direction and defined as the predominant direction of uncertainty in an orbit determination problem. In other words, the weak direction is the direction along which the penalty δJ is less sensitive to state vector variations. The confidence region is an ellipsoid if only the purely quadratic terms are retained in the expression of δJ. Thus, sampling along the weak direction consists of sampling along the semi-major axis of the ellipsoid. However, the second-order approximation may not be accurate enough to properly represent the target function. When high-order terms are retained in the expression of J, the weak direction is point dependent (i.e., we can define a local weak direction) and the resulting curve may not be a straight line. Even a very small deviation from the above-mentioned curve causes the target function to quickly increase. Thus, due to the steepness of J, the sampling process along the weak direction is not straightforward.
The graph of the function J can be thought of as a very steep valley with an almost flat river at the bottom (Milani and Gronchi 2010). Thus, when the confidence region is stretched along one direction, samples can be obtained by looking for points on the valley floor. The valley floor of a function is the line that connects points on different contour subspaces for which the gradient's absolute value is minimum. This locus of points is called a function's gradient extremal (GE) (Hoffman et al. 1986). A GE intersects every contour line where the gradient is smallest in absolute value compared to other gradient values on the same contour (Hoffman et al. 1986). The concept of GE was already used in astrodynamics to perform a mono-dimensional sampling of the LS confidence region, and it is known as the LOV.
Let C_S be the contour subspace, that is, the nonlinear subspace defined by contour lines of J(x), where J(x) = const. Note that at every point x_GE on C_S the gradient is perpendicular to C_S. For a point x_GE to belong to the valley floor, the norm of the target function's gradient |∇J| needs to be extremal along C_S, and therefore

t^T ∇(∇J)² = 0 for every direction t tangent to C_S.   (26)

Let R(x) be the projecting matrix onto the space tangent to C_S at x and R_0(x) be the projecting matrix onto the direction of ∇J. Equation (26) can be written as

R ∇(∇J)² = 0,   (27)

in which we omitted the dependency of J on x to simplify the notation. The quantity ∇(∇J)² can be expressed as

∇(∇J)² = 2 H ∇J,   (28)

that is, the product of the Hessian and the gradient of J. This quantity can then be decomposed into a projection parallel to ∇J,

R_0 (2 H ∇J),   (29)

and a projection perpendicular to ∇J,

R (2 H ∇J).   (30)

Thus, the condition in Eq. (27) becomes

R H ∇J = 0.   (31)

Equation (31) must hold for every point on a GE. Thus, a GE is a locus of points where the gradient of J(x) is an eigenvector of the Hessian of J(x), a one-dimensional curve in an n-dimensional space (Hoffman et al. 1986). Let v_i be an eigenvector of H and g = ∇J; then the necessary and sufficient condition for a point to belong to the GE associated with v_j can be rewritten as

v_i^T g = 0, for all i ≠ j.   (32)

Equation (32) is a system of (n - 1) conditions that define a one-dimensional curve, and it is apparent that there are n different curves, corresponding to the n different eigenvectors of the Hessian matrix. In the literature, the LOV is typically the GE corresponding to v_1, the eigenvector associated with the minimum eigenvalue of the Hessian matrix. This direction identifies the weak direction of the OD problem. It is worth remarking that n LOVs can be computed, one for each of the n eigenvectors of the Hessian matrix. However, as the length of the LOV is shorter for the eigenvectors associated with higher eigenvalues, nonlinearities are likely to be significant only in the computation of the first LOVs.
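As a numerical check of this condition, the snippet below evaluates, for a toy two-dimensional quartic target function, how far the gradient at a point is from being an eigenvector of the Hessian; the function and test points are arbitrary illustrations.

```python
# Check the gradient-extremal condition v_i^T grad = 0 (i != 1) numerically.
import numpy as np

def grad_J(x):           # gradient of a toy quartic J = x1^2 + 4*x2^2 + x1^2*x2^2
    x1, x2 = x
    return np.array([2*x1 + 2*x1*x2**2, 8*x2 + 2*x1**2*x2])

def hess_J(x):           # Hessian of the same toy function
    x1, x2 = x
    return np.array([[2 + 2*x2**2, 4*x1*x2],
                     [4*x1*x2,     8 + 2*x1**2]])

def ge_defect(x):
    """Size of the projections of grad J onto the eigenvectors other than the
    one with the smallest eigenvalue; zero on the GE associated with v_1."""
    g = grad_J(x)
    w, V = np.linalg.eigh(hess_J(x))       # eigenvalues in ascending order
    return np.abs(V[:, 1:].T @ g)          # conditions v_i^T g = 0 for i >= 2

print(ge_defect(np.array([0.5, 0.0])))     # point on the GE: defect ~ 0
print(ge_defect(np.array([0.5, 0.3])))     # generic point: defect > 0
```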
The LOV definition can be generalized to m ≤ n dimensions by allowing i in Eq. (32) to take m values. These conditions define an m-dimensional surface where gradient g at each point lies in the linear subspace spanned by m eigenvectors of the Hessian H . It is apparent that each LOV i lies totally on a higher-dimensional surface. As already explained in Milani et al. (2004), the uncertainty region with short arcs tends to a bi-dimensional set. Thus, in these cases we can extend the LOV concept to the GE surface identified by v 1 and v 2 , the two eigenvectors associated with the two smallest eigenvalues of H . We will refer to this surface as the GES.
Line of variation algorithm
We propose an algorithm to compute the LOV (along an arbitrary eigenvector), taking advantage of DA tools. The algorithm assumes that the DALS solver has been used to obtain the reference solution x_1 = x* of the LS problem and the kth-order Taylor approximation of δJ, T^k_{δJ}(x). Thus, the algorithm proceeds as follows:
1. Let K/γ_1 be the length of the second-order ellipsoid semi-axis along the eigenvector v_1(x*), as shown in Eq. (18), and Δx = (K/γ_1) h, with h depending on the desired sampling rate.
2. Extract from T^k_{δJ}(x_i) the Taylor approximation of the Hessian of J, T^k_H(x_i), and calculate its eigenvectors and eigenvalues at x_i. Compute the predicted point x̃_{i+1} = x_i + Δx v_1(x_i), where v_1(x_i) is the eigenvector corresponding to the minimum eigenvalue.
3. Let L(x̃_{i+1}) be the hyperplane spanned by the eigenvectors v_j(x̃_{i+1}) with j = 2, . . . , 6 and passing through x̃_{i+1}. Compute the point x_{i+1}, belonging to L(x̃_{i+1}) and such that ∇J(x_{i+1}) is orthogonal to L(x̃_{i+1}). This is equivalent to finding the solution of the system
v_j(x̃_{i+1})^T ∇J(x_{i+1}) = 0, j = 2, . . . , 6, with x_{i+1} ∈ L(x̃_{i+1}).
4. Repeat steps (2)-(3) until δJ(x_{i+1}) > K², i.e., the boundary of the confidence region is reached.
The output is a set of points x_i^{LOV}, with i = 1, . . . , l, that describe the LOV. It is worth mentioning that the approximation T^k_{δJ}, initially provided by the DALS solver, is recomputed whenever a point of the LOV falls outside the region where the truncation error of the polynomial approximation is acceptable. The estimated truncation error is computed using the approach described in Wittig et al. (2015). Although the algorithm presented here describes the computation of the LOV_1, i.e., the GE along v_1, it can be run along any eigenvector direction, thus providing up to six LOVs.
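The predictor-corrector structure of this sampling can be sketched as below for the toy two-dimensional target function used earlier; exact derivatives replace the Taylor models of the DALS and scipy's root finder stands in for the polynomial solver, so this is only a schematic of the procedure.

```python
# Schematic LOV sampling: predictor along v_1, corrector on the orthogonal line.
import numpy as np
from scipy.optimize import brentq

def J(x):
    x1, x2 = x
    return x1**2 + 4.0*x2**2 + x1**2 * x2**2

def grad_J(x):
    x1, x2 = x
    return np.array([2*x1 + 2*x1*x2**2, 8*x2 + 2*x1**2*x2])

def hess_J(x):
    x1, x2 = x
    return np.array([[2 + 2*x2**2, 4*x1*x2],
                     [4*x1*x2, 8 + 2*x1**2]])

def lov_points(x_star, K=3.0, h=0.1):
    pts = [np.array(x_star, dtype=float)]
    x = pts[0].copy()
    J_star = J(x_star)
    while True:
        w, V = np.linalg.eigh(hess_J(x))        # eigenvalues in ascending order
        step = h * K / np.sqrt(w[0])            # predictor length ~ h * K / gamma_1
        x_pred = x + step * V[:, 0]             # predictor along the weak direction
        v2 = V[:, 1]                            # corrector direction (n = 2 here)
        # corrector: slide along v2 until v2^T grad J = 0 (gradient aligned with v1)
        f = lambda s: float(v2 @ grad_J(x_pred + s * v2))
        x = x_pred + brentq(f, -5.0 * step, 5.0 * step) * v2
        if J(x) - J_star > K**2:                # stop at the confidence boundary
            break
        pts.append(x.copy())
    return np.array(pts)

lov = lov_points([0.0, 0.0], K=3.0, h=0.1)      # samples along the LOV
```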
Gradient extremal surface algorithm
When the confidence region is not accurately described by one LOV, it is often sufficient to adopt a 2-D description of the region. This region can be represented by the GES defined by v_1 and v_2 (the two eigenvectors associated with the smallest eigenvalues of H). This surface has the property that, at each of its points, the gradient of J lies on the plane spanned by v_1 and v_2. The following algorithm is proposed to compute the points belonging to this surface:
1. Run the algorithm described in Sect. 4.1.1. The resulting set of l points is referred to as x^{LOV_1}.
2. Take a point x_{i,1} of x^{LOV_1} as the initial point.
3. Compute the predicted point x̃_{i,k+1} = x_{i,k} + Δx v_2(x_{i,k}), where Δx is a chosen length as in Sect. 4.1.1.
4. Let L(x̃_{i,k+1}) be the hyperplane spanned by the eigenvectors v_j(x̃_{i,k+1}) with j = 3, . . . , 6 and passing through x̃_{i,k+1}. Compute the point x_{i,k+1}, which belongs to L(x̃_{i,k+1}) and such that ∇J(x_{i,k+1}) is orthogonal to it. This is equivalent to finding the solution of the system
v_j(x̃_{i,k+1})^T ∇J(x_{i,k+1}) = 0, j = 3, . . . , 6, with x_{i,k+1} ∈ L(x̃_{i,k+1}).   (35)
5. Repeat steps (3)-(4) until δJ(x_{i,k+1}) > K², i.e., the boundary of the confidence region is reached.
6. Repeat steps (2)-(5) for all points of x^{LOV_1}.
This algorithm allows us to obtain a two-dimensional description of the confidence region, even when LOV_2 is not a straight line. However, this procedure is computationally intensive, as the set of nonlinear equations in Eq. (35) needs to be solved for every point on the surface. It is worth noting, though, that Eq. (35) is a set of polynomial equations (as we are working with Taylor approximations), which keeps the algorithm viable from a computational time standpoint. Moreover, when LOV_2 approximates a straight line, the more efficient algorithm in Sect. 4.3 can be used to sample the uncertainty region.
Arbitrary direction sampling
When nonlinearities are significant only along the weak direction but the confidence region cannot be represented as a one-dimensional curve, a simplified version of the algorithm described in Sect. 4.2 can be adopted. This algorithm will be referred to as the arbitrary direction (AD) algorithm and is summarized as follows:
1. Run the algorithm described in Sect. 4.1.1. The resulting set of l points is referred to as x^{LOV}.
2. Take one point x_i^{LOV} from x^{LOV} as the initial point.
3. Select a direction v in the state vector space along which we want to sample the confidence region. This direction can be v_2 (i.e., the eigenvector corresponding to the second smallest eigenvalue of H) or any other direction of interest, including a random one.
4. Generate a set of samples x_{i,k} = x_i^{LOV} + k Δx v along v, until the boundary of the confidence region is reached.
This algorithm avoids solving the system of nonlinear equations (35). However, it can be applied to accurately sample the confidence region only when the curvature along the selected directions is negligible. In addition, by generating a set of random directions, the algorithm can be used to produce samples at the confidence region boundaries for different confidence levels.
Full enclosure of the confidence region with ADS
The LOV is a one-dimensional representation of the LS confidence region. When this is not a good approximation, the GES approach enables a bi-dimensional representation. In some cases (e.g., for the computation of low collision probabilities or the initialization of particle filters in state estimation), it may be necessary to consider a full n-dimensional representation of the confidence region. We can apply automatic domain splitting (ADS) techniques (Wittig et al. 2015) to enclose this region with a set of boxes on which the penalty function is accurately represented by multiple Taylor polynomials. These boxes can be obtained using the following steps:
1. Let H(x*) be the Hessian of the target function evaluated at x*. Compute the eigenvectors of H(x*) and store them column-wise in the matrix V.
2. Compute an n-dimensional box enclosure of the LS confidence region. This is achieved by determining the box D that encloses both the second-order confidence region expressed by Eq. (18) and (when necessary, see Sects. 4-5 for details) the LOV_j expressed in the eigenvector space. This last set of points is obtained by multiplying the LOV_j points by V^T:
x̃_i^{LOV_j} = V^T x_i^{LOV_j}, for i = 1, . . . , l and j = 1, . . . , n.
3. Compute the high-order expansion of the penalty in the eigenvector space, T^k_{δJ_V}(x̃), with x̃ = V^T x.
4. Apply ADS to the Taylor expansion T^k_{δJ_V}(x̃) over the domain D to ensure that the truncation error is below a given threshold on D. As a result, D is split into a set of subdomains and a corresponding set of Taylor approximations of δJ_V(δx̃) is computed.
5. Find the minimum of T^k_{δJ_V}(δx̃) over each subdomain and retain only the subdomains in which the minimum is smaller than J(x*) + K². This step is obtained by running an optimizer (e.g., the MATLAB fmincon function) on each local Taylor polynomial.
The result of the algorithm is a set of subdomains that cover the nonlinear LS confidence region. Note that, as D encloses both the LOV_j and the second-order confidence region, it will likely enclose the full nonlinear confidence region. In addition, it is worth mentioning that D is defined in the eigenvector space to reduce the wrapping effect. Once we have enclosed the uncertainty domain in the box D and computed the accurate polynomial representation of the penalty in this domain, we can introduce the high-order extension of the solution pdf, p_k(x). By analogy with the Gaussian representation introduced at the end of Sect. 2.1, we make the assumption that p_k(x) can be expressed as

p_k(x) = e^{-(1/2) T^k_{δJ}(x)} / ∫_D e^{-(1/2) T^k_{δJ}(x)} dx, for x ∈ D.   (37)

Although not rigorously the pdf of x, our assumption is motivated by the fact that, in this way, surfaces of equal residuals (nonlinear confidence region boundaries) remain surfaces of equal probability and that Eq. (37) returns the normal distribution p(x) of Eq. (19) when the high-order terms in J(x) are negligible. The integral, introduced to normalize the pdf to 1 over D, is evaluated by means of Monte Carlo integration:

∫_D e^{-(1/2) T^k_{δJ}(x)} dx ≈ (1/N) Σ_{i=1}^{N} e^{-(1/2) T^k_{δJ}(x_i)} / q(x_i), with x_i ~ q(x),   (38)

where N is the number of samples generated according to the importance sampling distribution q(x). The normal multivariate distribution p(x) defined in Eq. (19) is selected as the importance sampling distribution to speed up the Monte Carlo integration convergence rate. Note that the integral in Eq. (38) could be approximated using DA integration tools. However, this would require the Taylor expansion of e^{-(1/2) T^k_{δJ}(x)}, which would generate a large number of subdomains for an accurate representation.
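A plain importance-sampling estimate of the normalization integral in Eq. (38) can be written as follows; the penalty model used here is an arbitrary quartic stand-in for the Taylor expansion produced by the DALS, so the numbers are illustrative only.

```python
# Importance-sampling estimate of the pdf normalization integral (illustrative).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 2
C = np.array([[2.0, 0.3], [0.3, 1.0]])       # stand-in normal matrix
P = np.linalg.inv(C)                          # second-order covariance

def penalty_k(dx):
    """Stand-in high-order penalty: quadratic term plus a small quartic term."""
    quad = np.einsum('ij,jk,ik->i', dx, C, dx)
    return quad + 0.05 * np.sum(dx, axis=1)**4

q = multivariate_normal(mean=np.zeros(n), cov=P)       # importance distribution
samples = q.rvs(size=50_000, random_state=rng)
weights = np.exp(-0.5 * penalty_k(samples)) / q.pdf(samples)
integral = weights.mean()                              # Eq. (38) estimate

second_order = np.sqrt((2*np.pi)**n * np.linalg.det(P))  # closed-form 2nd-order value
rel_diff = abs(integral - second_order) / second_order
```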
Strategy for confidence region representation
OD problems do not always need high-order methods to describe the confidence region. Similarly, the description of the confidence region does not always require an n-dimensional representation. In this section, we first introduce some indices to capture the main features of the uncertainty region, and then we outline our strategy for representing it, balancing computational effort and accuracy.
High-order index
After computing the DALS solution and the polynomial approximation of J, we want to define an index to assess the relevance of high-order terms. Recalling Eqs. (19) and (37), we define the index

Γ_H = | ∫_D e^{-(1/2) T^k_{δJ}(x)} dx / ∫ e^{-(1/2) T^2_{δJ}(x)} dx - 1 |.   (39)

This index quantifies the effect of nonlinearities by measuring how much the statistics of the LS solution deviate from Gaussianity when high-order terms are retained in the penalty expression. The integral in the denominator in Eq. (39) is √((2π)^n |P|), whereas the integral in the numerator is computed via a Monte Carlo method by generating a cloud of N samples distributed according to the second-order representation of J, i.e., according to p(x) = N(x*, P). After some manipulation, the index can be approximated as

Γ_H ≈ | (1/N) Σ_{i=1}^{N} e^{-(1/2) [T^k_{δJ}(x_i) - T^2_{δJ}(x_i)]} - 1 |.   (40)

Γ_H indicates whether high-order terms in J provide a significant contribution over the entire uncertainty domain. This check is relevant, for instance, when we sample the solution pdf in the initialization of a particle filter. When the index shows that high-order terms are not relevant, we rely on a second-order representation of the LS confidence region. Otherwise, high-order analyses are performed, starting with the computation of the LOV along the v_1 direction. Thus, to avoid wasting computational time on cases for which high-order terms are not relevant, the index is computed for k = 3.
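The Monte Carlo form of Eq. (40) is straightforward to evaluate once the two penalty expansions are available; the sketch below reuses the illustrative quartic penalty from the previous snippet as the "high-order" model.

```python
# Monte Carlo estimate of the high-order index Gamma_H, Eq. (40) (illustrative).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
C = np.array([[2.0, 0.3], [0.3, 1.0]])        # stand-in normal matrix
P = np.linalg.inv(C)

def penalty_2(dx):                             # second-order penalty dx^T C dx
    return np.einsum('ij,jk,ik->i', dx, C, dx)

def penalty_k(dx):                             # stand-in high-order penalty
    return penalty_2(dx) + 0.05 * np.sum(dx, axis=1)**4

samples = multivariate_normal(np.zeros(2), P).rvs(size=50_000, random_state=rng)
gamma_H = abs(np.mean(np.exp(-0.5 * (penalty_k(samples) - penalty_2(samples)))) - 1.0)
# gamma_H close to 0 -> the quadratic (Gaussian) description is adequate.
```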
LOV index
When high-order terms turn out to be relevant, we might need to compute the LOVs associated with different eigenvectors to correctly represent the confidence region's structure. However, high-order terms might only be relevant along certain directions. In particular, after sorting the covariance eigenvalues in decreasing order, if the high-order terms are negligible for a specific eigenvector, then they can be neglected for all subsequent ones. Due to high-order terms, the LOVs may depart significantly from the second-order confidence ellipsoid axes. In particular, the LOVs could be stretched and/or curved. An index to assess the effect of nonlinearities on the LOV computation, Γ_LOV, can be defined as the relative error between the second-order and the kth-order representation of the pdf evaluated at the LOV points:

Γ_LOV = (1/l) Σ_{i=1}^{l} |e^{-(1/2) T^k_{δJ}(x_i^{LOV})} - e^{-(1/2) T^2_{δJ}(x_i^{LOV})}| / e^{-(1/2) T^k_{δJ}(x_i^{LOV})}.   (41)

Γ_LOV can be used to assess how much the LOV departs from the confidence ellipsoid axis: the larger Γ_LOV, the more relevant the curvature and/or stretching is. As mentioned earlier, this index is first computed for the LOV_1, i.e., along v_1. The LOV_2 and its index are only computed when the result for LOV_1 shows significant stretching or curvature. The procedure is halted when the index computed for a given LOV shows negligible effects from higher-order terms.
In the LOV algorithm, the polynomial expression of the target function J is recomputed when necessary. The need for recomputing can be quantified by substituting both the second-order and the kth-order approximations of J in Eq. (41) without recomputing the polynomial.
Dimensionality index
The full representation of the confidence region requires generating samples that accurately describe the n-th dimensional confidence region. It is apparent that a huge number of samples may be required for a six-dimensional confidence region. To alleviate this problem, it is important to understand when a lower-dimensional sampling is sufficient (Milani et al. 2004). However, determining the dimensionality is not a trivial task as it strongly depends on the problem at hand, the coordinate representation, and the units and scaling factors adopted.
Here, an index is introduced based on the fact that a variation of an orbit's semi-major axis causes its uncertainty region to quickly stretch along the orbit, making follow-up observations challenging. For this reason, we look at the impact the uncertainty along the different eigenvectors or LOVs has on the orbit semi-major axis. In particular, the index is defined as the variation of the mean anomaly M after one orbital period T due to the variation of the orbit's semi-major axis associated with the ith direction, Δa_i:

Γ_D^i = |∂n/∂a|_* Δa_i T* = 3π Δa_i / a*,   (42)

in which n is the mean motion, and the starred quantities indicate properties of the LS solution.
For example, an index value above one corresponds to a stretching of the uncertainty region of more than one degree after one orbital revolution along the LS solution orbit.
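For concreteness, the dimensionality index reduces to a one-line computation once the semi-major-axis spread Δa_i along each direction is known; the numerical values below are made-up placeholders.

```python
# Dimensionality index: along-track drift (deg) after one period due to delta_a_i.
import numpy as np

a_star = 26_554.0                                    # km, LS semi-major axis (example)
delta_a = np.array([4000., 400., 2., 1., 1., 1.])    # km, spread per direction (example)

gamma_D = np.degrees(3.0 * np.pi * delta_a / a_star) # Eq. (42), converted to degrees
# values well above 1 deg flag directions that matter for follow-up observations
```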
Summary of the algorithm
In Fig. 1, a summary of the proposed algorithm is shown:
1. Collect the observations (spread over a short arc).
2. Run the IOD and DALS solvers using all the observations acquired. As a result, the LS solution and the polynomial approximation of the target function are obtained.
3. Compute the index Γ_H. If Γ_H is smaller than a given threshold, the second-order description of the confidence region is adopted. Otherwise, a high-order analysis is carried out.
4. Start from i = 1 and compute the LOV_i until Γ_{LOV_i} is smaller than an established threshold.
5. Sample the region using one of the proposed algorithms.
Simulation results
For all the following test cases, optical observations (i.e., right ascension and declination) were simulated from Teide observatory, Tenerife, Canary Islands, Spain (observation code 954). Four different orbits were used as test cases: a low Earth orbit (LEO) (NORAD Catalog number 04784), a geostationary Earth orbit (GEO) (NORAD Catalog number 26824), a geostationary transfer orbit (GTO) (NORAD Catalog number 25542), and a Molniya orbit (NORAD Catalog number 40296). In Principe et al. (2016), the same objects in LEO, GEO and Molniya were considered. However, in this work all algorithms were tested with shorter observational arcs. This section is divided in two parts: in the first one we analyze the convergence properties of the DALS algorithm, whereas in the second one we apply the strategy described in Sects. 4-5 to characterize the uncertainty region of the LS solution.
DALS convergence properties
The observation strategy adopted for GEO, GTO and Molniya objects involves re-observing the same portion of sky every 40 s, which is compatible with Siminski et al. (2014) and Fujimoto et al. (2014). The measurement noise is Gaussian with zero mean and standard deviation σ = 0.5 arcsec. The object in LEO is assumed to be observed with a wide field-of-view camera, which takes observations every 5 s and has an exposure time of 3 s. In this case, σ = 5 arcsec. In both cases, two scenarios with 8 or 15 observations are reproduced.
In the 8-observation scenario, the arc length of the observation ranged from 1.09° for the Molniya orbit to 3.95° for the GTO; in the 15-observation scenario, the arc length ranged from 2.14° for the Molniya orbit to 7.44° for the GTO. In Table 1, the observation conditions are summarized. The results discussed in this section assume the availability of an initial orbit, obtained by solving an IOD problem. In the computation of this preliminary solution, a high-order algorithm that solves two Lambert's problems between the central epoch and the two ends of the observed arc is used. For more details, the reader can refer to Armellin et al. (2016). It is finally worth noting that Kepler's dynamics are considered throughout this section, even though the proposed approach does not rely on any Keplerian assumption.
For each test case shown in Table 1, synthetic observations were generated by adding Gaussian noise to ideal observations, and 100 simulations were run. The DALS solver estimated the orbit at the center of the observation window (at observation #4 for the 8-observation scenario and #7 for the 15-observation one). This approach proved to optimize both the algorithm's performance and robustness. The tolerances ε_x and ε_J were set so that convergence was reached when at least one of two conditions, expressed in terms of the number of measurements m and the sensor noise standard deviation σ, was met. The DALS solver always converged for the LEO, GTO and Molniya orbits, while for GEO the convergence rate was 92%. The solver took on average 6 iterations. Thus, the observation arcs were long enough to guarantee a good convergence rate for the DALS solver. Note that the convergence of the algorithm does not provide any information on the quality of the solution. In Table 2, the median absolute error with respect to the reference orbit in position (km) and velocity (m/s) is reported for all test cases and scenarios. The estimation errors of the DALS solution were generally lower than those of the IOD solution, proving that including all the observations can improve the orbit estimation even for short arcs. In addition, the enhancement in accuracy granted by the LS was greater for longer observational arcs. For shorter observation arcs (8 observations), the median error was up to thousands of kilometers, which makes follow-up observations more difficult. As expected, orbit estimation is more accurate for longer observation arcs, as the median error decreased with the number of observations. As the true solution is supposed to be unknown in a real-world scenario, the solution accuracy was assessed by analyzing the absolute values of the residuals scaled by the measurement σ. The maximum median of the absolute values was found for each test case among the 100 simulations and reported in Table 3. These values are compatible with the measurement statistics. Figure 2 reports the results of the simulations for the 8-observation Molniya orbit scenario. The statistics of the absolute values of the normalized residuals are plotted and compared against the IOD solutions. The residuals of the IOD solutions vanished at observations #1, 4, and 8, i.e., the observations used by the IOD solver. This is because IOD solutions are deterministic and exactly reproduce the observations adopted for IOD. However, the residuals increased considerably at the other observation epochs. In contrast, the LS residuals were on average smaller and more uniformly distributed. Thus, the LS solution was a better estimate of the orbit compared to the IOD solution, even when only a few measurements distributed over a short arc were available.
Confidence region representation
This section is devoted to analyzing the representation of the confidence region. The object in Molniya orbit is used as a test case, as it is characterized by the shortest observational arc. The DALS solver was run with 8 observations 40 s apart, and σ = 0.5 arcsec. The DALS solution led to J* = 5.008. In the confidence region definition, a value K = 3.1 was chosen to ensure a confidence level of 95 percent (see Eqs. (8) and (25)). The state vector was represented in Cartesian coordinates, while results are shown in the ρ-ρ̇ plane, where the largest uncertainty was expected (Milani et al. 2004; Worthy and Holzinger 2015).
First, the relevance of high-order terms was evaluated. Using a third-order polynomial approximation of J and a cloud of 50,000 samples led to Γ_H = 0.563, which means that the relative impact of the third-order terms was around 56%. This suggested that high-order terms were relevant for an accurate analysis, and a high-order description of the confidence region should be adopted. In contrast, the same test case performed with observations 420 s apart (i.e., spread over a longer arc) led to Γ_H = 0.003. Next, the algorithm described in Sect. 4.1.1 was run. The polynomial approximation of J was recomputed five times to ensure an estimated truncation error of 10^{-2}. The resulting LOV_1 is plotted in Fig. 3 and compared against the semi-major axis of the second-order ellipsoid. The curvature and stretching of LOV_1 led to Γ_{LOV_1} = 0.119. If we replace the second-order approximation of J with its 6th-order counterpart in the calculation of this index without recomputing the polynomial, we obtain a value of 0.169. This result further proved the need for recalculating the Taylor expansions to achieve accurate results. The same algorithm was run along v_2, the second main direction of uncertainty. The resulting set of points is plotted in Fig. 4. These points mostly lay on the axis of the ellipsoid, giving Γ_{LOV_2} = 0.0021. Consequently, it was not necessary to run the algorithm along v_3. In the case of observations spread over a long arc, the first LOV also lay on the semi-major axis (see Fig. 5), leading to Γ_{LOV_1} = 0.0015. This confirmed that the second-order approximation was accurate enough in the case of a long observation arc.
The third step was evaluating the uncertainty set's dimensionality by computing Γ_D^i. The confidence region was very large along v_1, with Γ_D^1 = 481°. Γ_D^2 was 49°, while Γ_D^i ≈ 0.1°-0.2° for i = 3, . . . , 6. Thus, a two-dimensional description of the confidence region seemed to be appropriate. The confidence region turned out to be much smaller for the long observation arc. Along v_1, Γ_D^1 = 3.73°, whereas along the other directions Γ_D^i ≤ 0.5° for i = 2, . . . , 6. A mono-dimensional approximation of the confidence region may thus be sufficiently accurate in the case of a long arc.
As the confidence region was shown to be two-dimensional in the short-arc case, the methods introduced in Sect. 4 can be applied to fully characterize the uncertainty set. In Fig. 6, samples generated with a second-order approximation are plotted in the ρ-ρ̇ plane, while the performances of the algorithms described in Sect. 4 are compared in Fig. 7. The second-order approximation did not allow us to sample the whole uncertainty set, as suggested by the value of Γ_H. In contrast, both the GES and AD algorithms provided a more reliable description of the region. The two high-order methods performed equally well because Γ_{LOV_2} was very small, meaning that nonlinearities could be neglected along v_2. The higher computational cost of the GES could thus be avoided. Table 4 shows the computational time of the algorithms, obtained on a Windows desktop with a 3.20 GHz Intel i5-6500 processor and 16 GB of RAM. The next analysis considers the representation of the full n-dimensional uncertainty region by ADS. The first step was to enclose the uncertainty domain with a box defined in the eigenvector space. This was achieved by considering the enclosure of the LOVs and the second-order ellipsoid. The ADS was then run to obtain an accurate polynomial representation of J on the entire domain, using 10^{-2} as the accuracy threshold for the estimated truncation error. In Fig. 8, the resulting subdomains are shown both in the ρ-ρ̇ and v_1-v_2 planes. Subdomains in white were discarded (minimum larger than the confidence region threshold), while colored subdomains were retained. The colormap refers to the ratio J*/J_i, with J_i being the minimum value of J within each subdomain. The domain was only split along v_1, the main direction of uncertainty. The LOV crossed 5 subdomains, the same number of times the polynomial expansion of J was recomputed when running the LOV algorithm.
Within the domain, the accurate representation of the target function allowed us to obtain the solution pdf using Eq. (37) once the integral of Eq. (38) is computed. In this case, we used 5 × 10^4 samples and computed ∫_D e^{-(1/2) T^k_{δJ}(x)} dx = 4.6 × 10^{-7}. The relative difference with respect to the second-order value provided by √((2π)^n / |C|) was 0.084. For the long-arc case, the relative difference reduced to 4 × 10^{-12}.
Effect of state representation
The choice of state representation significantly affected the confidence region description. In Sect. 6.2, the polynomial approximation of J was expressed in Cartesian coordinates, in which the effect of high-order terms was found to be less relevant. However, the MEE representation is a more suitable choice when it comes to propagating the confidence region. MEE absorb part of the nonlinearity of orbital dynamics and, thus, bring benefits when propagating the region (Vittaldev et al. 2016). In this section, the same object as in Sect. 6.2 is analyzed. However, the state vector was expressed in MEE.
The third-order polynomial approximation of J led to Γ_H ≈ ∞, meaning that the size of the third-order terms was large and the accuracy of the Taylor expansion low (the approximation of J also took negative values within the sampled domain, which explains the very large number obtained for Γ_H). In Figs. 9 and 10, the resulting LOVs along v_1 and v_2 are plotted and compared against the axes of the second-order ellipsoid in the ρ-ρ̇ plane. The LOV_1 strongly differed from the semi-major axis of the ellipsoid, more than with Cartesian coordinates (compare Fig. 9 with Fig. 3). The corresponding value of Γ_{LOV_1} was 0.983. As shown in Fig. 10, the LOV_2 also diverged from the ellipsoid axis, with Γ_{LOV_2} = 0.686, while the effect of high-order terms along v_3 was negligible (with Γ_{LOV_3} = 1 × 10^{-4}). Note that the ellipsoid axes computed in MEE coordinates were not straight lines when projected onto the ρ-ρ̇ plane due to the nonlinearities in the coordinate transformation. This is also the reason why MEE are less appropriate for the description of the confidence region.
As for the uncertainty set's dimensionality, the size of the confidence region along v_1 was such that Γ_D^1 = 562°, thus comparable to the confidence region with Cartesian coordinates. In contrast, along v_2 the confidence region in MEE was smaller, with Γ_D^2 = 2.4°. Along the other directions, Γ_D^i ≤ 0.3° for i = 3, . . . , 6. Thus, also with MEE, a two-dimensional approximation of the confidence region seemed to be reasonable for the short arc.
The GES algorithm was more accurate than the AD algorithm in sampling the uncertainty set, due to the relevance of high-order terms along v_2. Figure 11 compares the two resulting samplings. The GES algorithm succeeded in generating samples in the whole uncertainty region, thus justifying its computational cost. It is worth noting that Fig. 11 refers to a scenario in which observations are 60 s apart rather than 40 s. This choice was adopted because, with the 40 s separation and the MEE representation, the solution of the nonlinear system in Eq. (35) failed to converge for some points of the uncertainty set.
In summary, the Cartesian coordinates proved to be a more suitable choice when it came to describe the confidence region of an OD solution, with nonlinearities playing a less relevant role. However, also with Cartesian coordinates, a high-order approach is recommended when accurate results are required.
Effect of observation separation
In Sect. 6.2, the analysis of the confidence region with a short arc was compared to results obtained when observations were spread over a larger arc, suggesting that the observation separation may significantly affect the uncertainty set. In this section, the effect of the observation separation on the indices described in Sect. 5 is analyzed. The DALS solver was run with 8 observations of the object in Molniya orbit (NORAD Catalog number 40296) and σ = 0.5 arcsec. Different angular separations were simulated. In Fig. 12, the trends of Γ_H and Γ_{LOV_1} for different observation separations are plotted. The effect of high-order terms significantly decreased for observations spread over a larger arc and, consequently, the LOV's departure from the semi-major axis of the second-order confidence ellipsoid was less evident. The values of both indices were smaller when a Cartesian representation was adopted, meaning that Cartesian coordinates allow high-order terms to be neglected for shorter arcs. In Fig. 12(a), Γ_H tended toward infinity when MEE were used. This happened because the third-order approximation of J can take negative values within the domain of interest. This is a hint that we need to recompute the polynomial expression of J when running the LOV algorithm. In contrast, in Fig. 12(b), Γ_{LOV_1} tended toward 1 because the term e^{-(1/2) T^2_{δJ}(x_i^{LOV})} became negligible with short arcs. In Fig. 13, Γ_D^1, Γ_D^2 and Γ_D^3 are plotted. The values of these indices decreased for longer observation separations with both Cartesian coordinates and MEE, meaning that the uncertainty set shrank when observations were spread over a longer arc. Γ_D^3 was also relatively small with short observational arcs, which justified the two-dimensional description of the confidence region suggested in this work. Finally, it is worth noting that, with longer arcs, Γ_D^2 became small enough that the second dimension also became negligible. This behavior was more evident when MEE were used, showing that a mono-dimensional representation may be appropriate in this case, due to the alignment of one coordinate with the semi-major axis.
Fig. 11 Sampling of the confidence region by means of the GES algorithm and the AD algorithm. The colormap refers to e^{-(J_i - J*)}. The object was in Molniya orbit (NORAD Catalog number 40296), and the DALS was run with 8 observations 60 s apart and σ = 0.5 arcsec, using MEE
Conclusions
In this work, we focused our investigation on the OD problem when optical observations are taken on observation arcs that are long enough to solve a least squares problem, but too short to accurately determine the orbits.
We formulated a classical LS problem and implemented an arbitrary-order solver (referred to as DALS solver). In doing so, we avoided the approximation of classical differential correction methods. The formulation of a LS problem and its solution via the DALS improved on average the available IOD solution. Thus, including all acquired observations in the OD process turned out to be useful even on short arcs.
We have introduced nonlinear methods in the representation of the LS solution's confidence region. DA techniques allowed us to retain high-order terms in the polynomial approximation of the target function. These terms are typically neglected by linearized theories, but they can be relevant for the accurate description of the confidence region of orbits determined with short arcs.
Fig. 12 Trends of Γ_H and Γ_{LOV_1} as functions of the observation separation for both the Cartesian and MEE representations. Values larger than 10^6 were omitted
To this aim, we have introduced four algorithms based on DA techniques to nonlinearly describe the confidence region. The first one is a DA-based implementation of the LOV. We used DA to effectively solve the set of nonlinear equations required to capture the departure of the LOV from the axis of the second-order ellipsoid. In this algorithm, the polynomial approximation of the target function is recomputed only when necessary, based on accuracy requirements. The concept of LOV was then extended to two dimensions, introducing the GES. Another approach combined the LOV with a high-order polynomial to obtain a two-dimensional sampling without the computational cost of GES. Finally, a method was proposed to fully enclose the n-dimensional uncertainty set and accurately represent the target function over it by using ADS. As high-order computations require extra computational cost, we have introduced an index to guide the choice between second-order and high-order representation of the uncertainty set. Through this index, it was shown that the effect of nonlinearities decreases significantly for longer observational arc and that Cartesian coordinates are a better choice than MEE. An additional index was introduced to determine the uncertainty set's dimensionality based on the along-track dispersion associated with uncertainties in the determination of the orbit semi-major axis. This choice is strongly connected with the possibility of acquiring follow-up observations. Analysis of the dimensionality index demonstrated that a two-dimensional representation of the uncertainty region can be sufficiently accurate, depending on the telescope properties and the adopted observation strategy. With longer arcs, the uncertainty region could even be approximated as a mono-dimensional set, in particular when MEE are used.
The methods we have introduced come at the cost of intensive computations and the loss of a closed-form representation of the state statistics. However, the accuracy gained by the retention of nonlinear terms may play a key role in the development of reliable tools for observation correlation and for the initialization of nonlinear state estimation techniques, such as a particle filter. Future effort will be dedicated to the development of a full nonlinear mapping between sensor noise and object state, which will allow us to consider measurement noise with arbitrary statistics. | 13,865.6 | 2019-08-29T00:00:00.000 | [ "Mathematics" ] |
Epithelial–Mesenchymal Transition Gene Signature Related to Prognostic in Colon Adenocarcinoma
Colon adenocarcinoma (COAD) remains an important cause of cancer-related mortality worldwide. Epithelial–mesenchymal transition (EMT) is a key mechanism promoting not only the invasive or metastatic phenotype but also resistance to therapy. Using bioinformatics approaches, we studied the alteration of EMT-related genes and its implication for COAD prognosis based on public datasets. For the EMT mechanisms, two overexpressed genes were identified (NOX4 and IGF2BP3), as well as five downregulated genes (BMP5, DACT3, EEF1A2, GCNT2 and SFRP1), that were related to prognosis in COAD. A qRT-PCR validation step was conducted in a COAD patient cohort comprising 29 tumor tissues and 29 normal adjacent tissues, confirming the expression level of BMP5, as well as of two of the miRNAs targeting key EMT-related genes, revealing upregulation of miR-27a-5p and miR-146a-5p. The EMT signature can be used to develop a panel of biomarkers for recurrence prediction in COAD patients, which may contribute to improved risk stratification for these patients.
Introduction
Colon adenocarcinoma (COAD) is one of the most frequent adult cancers, ranking third among the major causes of cancer-related death globally [1], and is the most frequent form of colorectal cancer (approximately 95%) [2][3][4]. Tumor, lymph node and metastasis (TNM) staging is the standard for COAD prognosis. The prognosis for these patients is related to the TNM stage and curative surgical intervention, which is pertinent only for patients with a primary tumor and loco-regional lymph nodes. Unfortunately, most of the cases are discovered in the advanced stages of the disease, when the therapeutic options are limited, being associated with a high metastatic rate and recurrence [5][6][7][8].
Epithelial to mesenchymal transition (EMT) is a mechanism characterized by the loss of epithelial features, such as cell polarity or cell-cell contact, and the acquisition of mesenchymal properties, promoting increased motility [9][10][11]. Cell-cell contact is established by tight junctions, adherens junctions, desmosomes and gap junctions, which are interconnected with key signaling networks [12]. Additionally, during EMT, the epithelial actin architecture is reorganized, with the acquisition of cell motility and invasive features and the expression of matrix metalloproteinases (MMPs) that can degrade extracellular matrix (ECM) proteins [12][13][14]; this process can also be affected by hypoxic conditions [15]. EMT is also orchestrated by several intrinsic factors, including transcription factors and miRNAs [16,17].
In COAD, as in other cancers, the EMT mechanism is related to an invasive or metastatic phenotype [18]. EMT is regulated by components of the tumor microenvironment, in particular as an effect of hypoxic conditions [18,19]. Therefore, investigation of the EMT mechanisms related to COAD progression promotes the discovery of new coding and non-coding genes as diagnostic biomarkers and the development of potentially powerful therapeutic targets [20][21][22]. EMT is also related to chemoresistance in COAD [23,24].
Understanding the key EMT factors is vital for the development of powerful therapeutic interventions [10]. The present study evaluates the prognostic value of the altered transcriptomic EMT signature, mRNA and microRNA (miRNA), using publicly available data from patients with COAD, followed by correlations among the EMT markers and with the related miRNAs that target these genes.
Materials and Methods
Differential gene expression analysis in COAD. We used expression data from The Cancer Omics Atlas (TCOA) repository database, which is an integrative resource for cancer omics data, allowing the user to run different types of analyses [25]. TCOA provides queries of gene expression, somatic mutation, miRNA expression and protein expression data based on a single molecule or cancer type [25]. In the "Cancer" module, the user can select a certain cancer type, and TCOA will then output the top 20 most frequently mutated genes, as well as the upregulated and downregulated ones, all in association with the selected pathology and compared with the normal controls. We used Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia.cancer-pku.cn/index.htm, accessed on 22 April 2021) for the representation of the expression levels at different stages.
Survival analysis. For the correlation of the survival rate for the EMT genes in COAD, GEPIA online tool was used (http://gepia.cancer-pku.cn/, accessed on 22 April 2021). Our data show only those genes able to predict the overall survival outcomes (p-value ≤ 0.05) in COAD. Additional survival analysis for the most relevant miRNAs targeting key EMT genes in COAD was done using StarBase [30].
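Outside of the GEPIA web interface, an equivalent median-split survival comparison for a single gene could be sketched with the lifelines package; the expression values, survival times and event flags below are hypothetical stand-ins, not the study's data.

```python
# Median-split overall-survival comparison for one gene (hypothetical data).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
expr = rng.normal(size=100)                  # normalized expression of one gene
time = rng.exponential(40, size=100)         # follow-up time (months)
event = rng.integers(0, 2, size=100)         # 1 = death observed, 0 = censored

high = expr > np.median(expr)
res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(res.p_value)                           # genes with p <= 0.05 would be retained

kmf = KaplanMeierFitter()
kmf.fit(time[high], event[high], label="high expression")
```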
Mutational pattern evaluation. The cBioPortal (http://cbioportal.org, accessed on 18 April 2021) [31] is an open-access platform that can be used for analysis of cancer genomics datasets. The EMT gene mutation pattern in COAD was obtained according to the cBioPortal's online instructions. A mutation analysis was performed in 169 cancer studies, including mutation, amplification and deletion, based on three datasets.
Correlation among the EMT gene signature. CANCERTOOL is a user-friendly web-based interface that allows gene-to-gene correlations to be carried out in multiple datasets at the same time for a specific cancer subtype, including COAD [32]. Additionally, it permits correlations among the altered genes and gene enrichment analysis. The correlation heatmap was generated based on five Affymetrix datasets (GSE44076, GSE14333, GSE33113, GSE37892, GSE39582), one Agilent dataset (GSE42284) and one RNAseq dataset. An additional correlation among the expression level of BMP5 and its directly and indirectly interconnected miRNAs was done using the miRNA-Target CoExpression tool from StarBase (http://starbase.sysu.edu.cn/index.php, accessed on 22 April 2021) [30].
Gene and miRNA validation in COAD samples. A total of 29 histologically confirmed COAD patients were included in the study, after they signed the informed consent according to the Ethical Committee (approval number 6346/02.07.2014). The study included 17 males with an average age of 70.05 ± 11.69 years and 12 females with an average age of 67.83 ± 11.18 years; these patients did not receive chemotherapy. Immediately following surgical excision, all tissue samples were snap-frozen in liquid nitrogen for RNA isolation and stored at −80 °C until further analysis. Patients' clinical data are presented in Table 1. Total RNA from normal and tumoral tissue was extracted and isolated according to the Trireagent (Ambion, Austin, TX, USA) protocol. The RNA concentration was measured with a NanoDrop-1000 spectrophotometer (Thermo Scientific, Waltham, MA, USA). For gene expression evaluation we used 500 ng of RNA, while for miRNA we used 50 ng. The gene expression protocol is based on a reverse transcription into cDNA using a High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA), followed by an amplification step using SYBR Select Master Mix on the ViiA 7 System, with specific primers for the target genes (BMP5: left primer: TTGTTGCCCAGGCTGGAGTG, right primer: CCCAGCACTTTGGAAGGCCA; B2M: left primer: CACCCCCACTGAAAAAGATGAG, right primer: CCTCCATGATGCTGCTTACATG).
The evaluation of the miRNA expression level was done using a TaqMan-based protocol and the TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems). For the amplification, we used TaqMan Fast Advanced Master Mix (Applied Biosystems) and TaqMan assays (U48: 001006; U6: 001973; miR-27a-5p: 002445; miR-146a-5p: 000468) on the same instrument. The gene and miRNA expression levels were calculated using the 2^−ΔΔCT method.
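For reference, the relative-quantification calculation can be expressed as below; the Ct values are hypothetical placeholders and do not correspond to the study's measurements.

```python
# Relative expression by the 2^-ddCt method (illustrative Ct values only).
def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """2^-ddCt: normalize the target gene to the reference gene in each sample,
    then compare the tumor against the matched normal tissue."""
    d_ct_tumor = ct_target_tumor - ct_ref_tumor      # dCt in tumor
    d_ct_normal = ct_target_normal - ct_ref_normal   # dCt in normal tissue
    dd_ct = d_ct_tumor - d_ct_normal
    return 2.0 ** (-dd_ct)

# Hypothetical example: BMP5 vs B2M in one tumor/normal pair.
fc = fold_change(ct_target_tumor=30.5, ct_ref_tumor=19.0,
                 ct_target_normal=28.0, ct_ref_normal=19.2)
print(fc)   # a value < 1 indicates downregulation in the tumor sample
```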
Results
COAD reveals a specific gene expression pattern. Gene expression analysis was done using TCOA public database, revealing 1628 altered genes (363 overexpressed and 1265 downregulated genes), selecting a cut-off value |Fold change| >2 and FDR q-value ≤ 0.05 (Table 1). Additionally, the top 20 most frequent mutations are displayed in Figure 1.
Figure 1. Mutational landscape, including the top 20 most frequently mutated genes in COAD; mutation rates are expressed as a percentage of the total number of samples. Generated using the online TCOA portal.
EMT-specific mechanisms in COAD. EMT is the key mechanism involved in many solid tumors including COAD. EMT activation is connected with an increased metastatic rate and resistance to therapy, contributing to a poor prognosis. The NCBI list for specific transcripts related to EMT mechanisms was downloaded and was then overlapped with the altered genes, emphasizing common EMT-altered transcripts in a Venn diagram.
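A minimal sketch of this overlap step is shown below; the file names and column labels are hypothetical, since the actual lists come from NCBI and TCOA as described above.

```python
# Overlap of an EMT gene list with differentially expressed genes (illustrative).
import pandas as pd

# Hypothetical inputs: one-column EMT gene list and a TCOA-style results table.
emt_genes = set(pd.read_csv("emt_gene_list.txt", header=None)[0].str.upper())
deg = pd.read_csv("coad_differential_expression.csv")   # columns assumed below

up = set(deg.loc[deg["log2_fold_change"] > 1, "gene"].str.upper())
down = set(deg.loc[deg["log2_fold_change"] < -1, "gene"].str.upper())

emt_up = sorted(emt_genes & up)      # candidates such as NOX4 and IGF2BP3
emt_down = sorted(emt_genes & down)  # candidates such as BMP5, SFRP1 and DACT3
print(len(emt_up), len(emt_down))
```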
Regarding the downregulated genes, 34 of them display a common signature with EMT (Figure 2A), presented as a network (Figure 2B). A high degree of interconnection was observed among a subset of them; BMP5, DACT3, EEF1A2, GCNT2 and SFRP1 were statistically significantly correlated with overall survival (Figure 2C), with BMP5 and SFRP1 being part of the gene network. In the case of the upregulated genes, 19 were found to be common, but only two of them (NOX4 and IGF2BP3) were correlated with the overall survival rate (Figure 2D).
Additionally, for the key EMT genes that predict the overall survival rate, the two overexpressed genes (NOX4 and IGF2BP3) and the five downregulated genes (DACT3, EEF1A2, BMP5, GCNT2 and SFRP1) were represented with their expression levels displayed according to the stage of the disease, using the GEPIA online dataset analysis, as shown in Figure 3; the expression was associated with the pathological stage in the case of NOX4, IGF2BP3, BMP5, DACT3 and EEF1A2.
To study the gene expression and the correlation among EMT genes, Cancertool [32] was used, as can be observed in Figure 4A for NOX4 and the EMT overexpressed genes and in Figure 4B for IGF2BP3 and the EMT overexpressed genes. The plotted gene-to-gene correlations were calculated in the COAD datasets, presenting the type of correlation. This allowed the selection of the key EMT-correlated genes that uncover functional implications in cancer; this is the case for the direct correlation of NOX4 with DACT3 and SFRP1 and the inverse correlation of NOX4 with BMP5.
The miRNA-EMT gene interaction is shown in Figure 5, emphasizing a direct connection of DACT3 and EEF1A2 with TP53, one of the most frequently mutated genes in COAD, and also with DNMT1 (a methyltransferase), a gene frequently methylated in cancer, and two important transcription factors (EZH2 and E2F1). The miRNET target gene analysis shows the EMT genes targeted by key miRNAs, but none of these miRNAs predicted the overall survival rate (Figure 5B). The additional mutational pattern displayed in Figure 5C was generated using the cBioPortal online tool.
Additional expression levels for the interconnected miRNA transcripts were obtained using the StarBase online tool, revealing downregulation of let-7a-5p, let-7b-5p and miR-129-2-3p, and overexpression of miR-27a-3p, miR-146a-5p and miR-335-3p (Figure 6A); none of these transcripts were able to predict the overall survival rate (Figure 6B).
Figure 2. (A) Venn diagram used to show the common signature among the EMT gene list (downloaded from NCBI) and the downregulated genes in COAD, red letters: genes that predict overall survival; (B) interaction network using String software for the 34 downregulated genes common with EMT mechanisms, red circles: genes that predict overall survival outcomes; (C) COAD downregulated genes involved in EMT (DACT3, EEF1A2, BMP5, GCNT2 and SFRP1) predicting overall survival; (D) Venn diagram used to show the common signature among the EMT gene list (downloaded from NCBI) and the upregulated genes in COAD, red letters: genes that predict overall survival; (E) interaction network using String software for the 19 overexpressed genes common with EMT mechanisms, red circles: genes that predict overall survival outcomes; (F) COAD overexpressed genes involved in EMT (NOX4 and IGF2BP3) predicting overall survival.
Using StarBase, we observed a negative correlation between BMP5 and let-7a-5p, which directly targets BMP5, and a positive correlation between BMP5 and miR-129-2-3p and miR-335-3p, these two transcripts being indirectly connected to BMP5 in the interaction network. The BMP5 correlation with the selected miRNAs is presented in Figure 7.
• Validation of BMP5 Gene Expression by qRT-PCR
In order to further validate the gene expression alteration in COAD revealed using the GEPIA online tool, we performed qRT-PCR for BMP5; the B2M gene was used as a housekeeping gene. Gene expression analysis found downregulation of BMP5 in tumor tissues versus normal adjacent tissues (Figure 8); the data are in agreement with those from the GEPIA database. An additional receiver-operating characteristic (ROC) curve was generated to evaluate the sensitivity and specificity of this gene; the AUC for BMP5 was 0.7154.
• Validation of Key EMT miRNAs by qRT-PCR
As an additional validation step, miR-27a-3p and miR-146a-5p were selected from the mRNA-miRNA network; U6 and RNU48 were used as housekeeping transcripts, and the data were analyzed using the ∆∆Ct method, revealing overexpression of miR-27a-5p and miR-146a-5p (Figure 9A). Both evaluated transcripts were shown to be overexpressed in COAD, the higher AUC value being 0.6947 for miR-27a-5p (Figure 9B).
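For readers unfamiliar with the relative-quantification step, the following minimal sketch illustrates the 2^(-ΔΔCt) calculation against a housekeeping transcript and an ROC/AUC check of the kind reported above; the Ct values, group sizes and transcript names are placeholders rather than the study's data.

```python
# Minimal sketch of 2^(-ΔΔCt) relative quantification and an ROC/AUC check.
# All Ct values below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

ct_target_tumor  = np.array([24.1, 23.8, 24.5, 23.2, 24.0])   # target miRNA, tumor samples
ct_house_tumor   = np.array([20.0, 19.8, 20.3, 19.9, 20.1])   # housekeeping (e.g., U6), tumor
ct_target_normal = np.array([25.6, 25.9, 25.2, 26.0, 25.5])   # target miRNA, normal samples
ct_house_normal  = np.array([20.1, 20.0, 19.9, 20.2, 20.0])   # housekeeping, normal

dct_tumor  = ct_target_tumor - ct_house_tumor       # ΔCt per tumor sample
dct_normal = ct_target_normal - ct_house_normal     # ΔCt per normal sample
ddct = dct_tumor - dct_normal.mean()                # ΔΔCt relative to the normal-group mean
fold_change = 2.0 ** (-ddct)                        # lower ΔΔCt => higher expression
print("fold changes (tumor vs normal):", np.round(fold_change, 2))

# ROC/AUC on ΔCt: overexpressed transcripts have lower ΔCt in tumors (label 1)
labels = np.r_[np.ones_like(dct_tumor), np.zeros_like(dct_normal)]
scores = -np.r_[dct_tumor, dct_normal]              # negate so higher score = more tumor-like
print("AUC:", roc_auc_score(labels, scores))
```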
(Figure caption: Interconnection between the EMT genes, key transcription factors and miRNA, generated using miRNET; (B) TP53 predicts overall survival rate in COAD; (C) survival map generated using the GEPIA online tool; (D) prediction of hsa-let-7a-5p and let-7b-5p binding to BMP5 3′UTRs, generated using TargetScan 3.0 (http://www.targetscan.org/mamm_30/docs/help.html, accessed on 22 April 2021); (E) analysis of genetic alterations in COAD using cBioPortal data; the online plot presents the mutation frequency for the selected genes found in RNAseq data. * Not applicable.)
(Figure 7 caption: Correlation among the expression levels of BMP5 and let-7a-5p, let-7b-5p, miR-27a-3p, miR-129-2-3p and miR-146a-5p in COAD, generated using StarBase.)
Discussion
Alterations in EMT-associated gene expression profiles were observed to be cell-type specific and correlated with the degree of progression towards mesenchymal differentiation [13,33]. Therefore, the EMT gene signature may act as a prognostic biomarker in COAD, as revealed by multiomics data [33]. EMT renders tumors resistant not only to chemotherapy, but also to immunotherapy [34]. EMT directly regulates the expression of PD-L1 and is associated with several other checkpoint ligands, thereby promoting checkpoint-dependent resistance to anti-tumor immunity [34].
In the present study, a robust EMT gene signature, clinically relevant to patients with COAD, was identified that predicts survival rate. The EMT signature showed prognostic value, as observed in the overall survival analysis of the EMT genes using GEPIA.
EMT is a key biological mechanism connected to decreased cell adhesion and increased invasiveness; therefore, it is a key component of metastasis and drug resistance in many cancers, including COAD [19,33]. EMT is a complex regulatory mechanism that affects not only the expression of epithelial proteins but is also related to important alterations in cytoskeleton architecture, as can be observed in the gene enrichment analysis presented in Figures 2 and 3.
NOX4 is presented in the literature as a therapeutic target in digestive cancers [35], including colorectal cancer [36], being correlated with VEGF, MAPK and PI3K/AKT signaling, among others [35]. NOX4 inhibition promotes the immunotherapy response by overcoming cancer-associated fibroblast-mediated CD8 T-cell exclusion [37]. Additionally, NOX4 is a key element in TGFβ- and SMAD3-driven activation of EMT and migration of epithelial cells [38]. The NOX family is overexpressed in colorectal cancer and correlated with patient prognosis [36]. NOX4 is related not only to cell proliferation and apoptosis, but also to migration and metastasis [36,39].
Other studies present NOX4 and ITGA3 as relapse-risk markers of important clinical interest for understanding how the mechanism of tumor progression to metastasis is activated [40]. An increased expression level of IGF2BP3 was correlated with aggressive phenotypes of colorectal cells [41][42][43]. IGF2BP3 is overexpressed in COAD samples [44] and is related to adverse clinical outcomes [43]; it has been presented as a therapeutic target, especially for immunotherapy [41,43,44].
DACT3 is underexpressed in COAD and is an epigenetic regulator of Wnt/β-catenin; this gene was proven to be a prognostic factor for colorectal cancer. These findings emphasize the important role of epigenetic events in modulating the response to therapy [45]. DACT3 was proven to be one of six key hub genes related to prognosis and was validated to be connected with the pathological stage in COAD [46].
The elongation factors, including EEF1A2, have been studied in different cancer types and are downregulated in COAD, the literature presenting them as biomarkers and therapeutic drug targets. BMP5 has an essential function in COAD initiation and development, as it is a tumor suppressor gene mutated in around 7% of cases [47].
BMP5 is not among the top 20 frequently mutated genes, but its expression in tumor tissue is altered and correlated with overall survival rate, as our data showed. Moreover, a previous study reveals that BMP5 genetic alteration in COAD is distinctive, and loss of BMP5 expression may be a COAD-specific event [47]. The BMP5 expression level was significantly decreased in tumors compared to adjacent normal tissues in the TCGA cohort; the mRNA expression level for this gene was also validated in our patient cohort. BMP5 is considered an early event in colorectal cancer, with prognostic value, showing a co-expression pattern with E-cadherin, and could be considered tissue-specific [47]. BMP5 belongs to the TGF-β/Smad signaling pathway; accordingly, BMP5 expression was positively associated with epithelial markers and negatively associated with mesenchymal markers [47]. Additionally, BMP5 was proved to interact with PI3K-AKT and MAPK signaling [48]. In our case, BMP5 was inversely correlated with NOX4 and let-7a-5p and positively correlated with miR-129-2-3p and miR-335-3p (Figures 4 and 7).
Recent studies have demonstrated that epigenetic alterations also play important roles in EMT [49,50]. Our study reveals a connection between epigenetic mechanisms and EMT, with an emphasis on a direct interconnection between the EMT genes predicting overall survival and DNMT1 (Figure 6), a key gene involved in epigenetic regulation. Another EMT-related gene with prognostic value is GCNT2 [51], reported in the literature as methylated and correlated with lymph node metastasis of colorectal cancer [51,52].
Although the detailed role of EMT in the metastatic cascade, especially in COAD, remains poorly understood [53], alterations in the miRNA regulatory network are essential for the activation of EMT [11,54,55]. miR-27b-3p is overexpressed in COAD, as the qRT-PCR data show. A previous study revealed that miR-27b-3p is a potential indicator of chemotherapy efficiency and a therapeutic target [56]. This transcript was proven to promote migration and invasion in colorectal cancer [57]. miR-146a-5p has been shown to have a dual role in metastasis and disease progression [58]. In our patient cohort, it was found to be overexpressed in tumor tissue versus normal tissue in COAD. Other studies observed its overexpression in colon cancer cells, related to induced immune suppression and drug resistance in addition to EMT [59]. This is also the case for let-7a, which is related to suppressing antitumor immunity and has been proposed as a potential target of immunotherapy in COAD [60]. Another study reveals that let-7a and let-7b expression is dependent on TP53, a gene frequently mutated in COAD [61] that plays an important role in cancer [62].
Conclusions
In conclusion, our results showed that EMT is not only a cellular tool for reacting and becoming resistant to environmental changes, particularly those related to treatment response, but is also related to patient prognosis in COAD. Our study reveals regulatory interactions between the EMT genes and miRNAs. The identified EMT signature was proved to interact with key signaling pathways sustaining tumor progression; therefore, these genes can be considered not only prognostic markers but also therapeutic targets. Additionally, it is tempting to speculate that several of the identified miRNAs could serve as biomarkers with clinical implications. These results need further validation in a larger study on an additional group of patients with COAD to confirm their capacity to convey prognostic information in a pre-treatment setting. | 5,922 | 2021-05-26T00:00:00.000 | [
"Biology"
] |
Automatic Targetless Monocular Camera and LiDAR External Parameter Calibration Method for Mobile Robots
With the continuous development and popularization of sensor-fusion technology for mobile robots, the application of camera and light detection and ranging (LiDAR) fusion perception has become particularly important. Moreover, the calibration of extrinsic parameters between the camera and LiDAR is a crucial prerequisite for fusion. Although traditional target-based calibration methods have been widely adopted, their cumbersome operation and high costs necessitate the development of more efficient and flexible calibration methods. To address this problem, this study proposed a two-stage calibration method based on motion and edge matching. In the first stage, the preliminary estimation of the extrinsic parameters between the camera and LiDAR was performed by matching visual odometry and LiDAR odometry using a hand–eye calibration method. In the second stage, the calibration results from the first stage were further refined by matching the image edges and discontinuous depth point clouds. The calibration system was then tested in both simulated and actual environments. The experimental results showed that this method, which did not require specially structured targets, could achieve highly flexible and robust automated calibration. Compared to other advanced methods, the accuracy of the proposed method was higher.
Introduction
With the rapid development of robots, artificial intelligence, and multi-sensor fusion perception technologies, the perception capabilities of mobile robots have greatly expanded, resulting in more convenient, intelligent, and safe equipment, such as unmanned vehicles and aerial drones [1,2]. Similar to the "eyes" of robots, vision sensors play an important role in environmental perception. However, in a complex environment, the data acquired by a single sensor cannot satisfy the necessary requirements. Consequently, mobile robots are equipped with multiple sensors. As mainstream visual sensors, cameras and light detection and ranging (LiDAR) sensors complement each other in terms of providing information, so their fusion can overcome their limitations and improve their environmental perception abilities.
Robots with visual and LiDAR perception have been widely used in important fields such as target detection and tracking [3][4][5], self-driving [6][7][8][9], indoor navigation [10], and simultaneous localization and mapping (SLAM) [11][12][13][14]. However, the extrinsic parameters of different sensors must be determined before they can be effectively fused. Extrinsic calibration is the process of estimating the rigid-body transformation between the reference coordinate systems of two sensors and is a prerequisite for many scene-based applications. In the field of target detection and tracking, by obtaining the extrinsic parameters between the camera and LiDAR sensor, the data from the two can be aligned in the same coordinate system to achieve multi-sensor fusion target detection and tracking tasks, thereby improving tracking accuracy. In the field of 3D reconstruction [15][16][17], obtaining an accurate spatial relationship between the two is conducive to obtaining richer and more accurate 3D scene information. Combining the high-precision 3D information from the LiDAR sensor and the RGB information from the camera ensures more efficient and reliable 3D reconstruction results. Additionally, extrinsic calibration between the camera and LiDAR sensor is an important preliminary step in visual-LiDAR fusion SLAM. For example, visual-LiDAR odometry and mapping (VLOAM) [11] combines the visual information of a monocular camera with that of LOAM. To perform visual-LiDAR fusion SLAM, the spatial calibration between the two sensors must be determined. The extrinsic parameters between the camera and LiDAR sensor provide important prior information to the system, accelerating backend optimization, while the LiDAR sensor can supplement the scale loss of monocular vision to provide more complete spatial information and correct motion drift.
In recent years, scholars have proposed several methods for the joint calibration of cameras and LiDAR. In traditional camera and LiDAR calibration, the calibration objects are typically observed simultaneously by both sensors [18,19]. These methods focus on various designs of calibration objects and the algorithms that match them. However, extracting the corresponding features from a calibration object can be difficult, and special calibration sites are required, which makes the process vulnerable to human and noise interference. By contrast, motion-based calibration methods can automatically estimate the extrinsic parameters between cameras and LiDAR sensors without requiring structured targets; however, their calibration accuracy needs to be further improved [20].
In this study, we propose a targetless calibration method that could be applied to mobile robot sensor systems. The contributions of this study are as follows:
1. We propose a two-stage calibration method based on motion and edge features that could achieve the flexible and accurate calibration of cameras and LiDAR sensors in general environmental scenes.
2. We designed an objective function that could recover the scale factor of visual odometry while achieving the initial estimation of extrinsic parameters, providing a good initial value for the fine calibration stage.
3. Experiments were conducted on both simulated and real-world scenes to validate the proposed method. Compared with other state-of-the-art methods, the proposed method demonstrated advantages in terms of flexibility and accuracy.
The remainder of this paper is organized as follows. Section 2 reviews the research status of scholars in this field. Section 3 describes the workflow of the proposed method. Section 4 provides a detailed description of the research methods and optimization techniques. Section 5 introduces the experimental setup in both simulation and real-world scenarios and presents the results of various experiments and visualizations. Section 6 discusses the experimental results and their rationale. Finally, Section 7 concludes the study.
Related Work
Currently, the calibration methods for LiDAR sensors and cameras can be categorized into two groups, that is, target-based and targetless methods. Target-based methods involve the use of specific calibration targets during the calibration process to extract and match features, thereby obtaining the extrinsic parameters between the cameras and LiDAR sensor. Zhang et al. [21] calibrated the internal parameters of cameras and LiDAR sensors using a chessboard pattern and obtained their extrinsic parameters. Because of their clear planar features and advantages-such as their high accuracy, ease of implementation, and good stability-the use of chessboard patterns has been extensively studied [22][23][24][25][26]. Cai et al. [27] customized a chessboard calibration device that provided local gradient depth information and plane orientation angle information, thereby improving calibration accuracy and stability. Tóth et al. [28] used a smooth sphere to solve the extrinsic parameters between the cameras and LiDAR sensors by calculating the corresponding relationship between the sphere centers. Beltrán et al. [29] proposed a calibration method that could be applied to various combinations of monocular cameras, stereo cameras, and LiDAR sensors. They designed a special calibration device with four holes, which provided shared features for calibration and calculated the extrinsic parameters between different sensors. In summary, the advantages of such calibration methods are their high accuracy and being able to design different calibration targets for different application scenarios. However, target-based methods require special equipment and complex processes, resulting in increased costs and complexity.
However, targetless methods do not require specific targets during the calibration process. Instead, they statistically analyze and model the spatial or textural information in the environment to calculate the extrinsic parameters between the cameras and LiDAR sensors [30]. Targetless methods can be roughly classified into four groups, that is, information-theoretic, motion-based, feature-based, and deep-learning methods. Information-theoretic methods estimate the extrinsic parameters by maximizing the similarity transformation between cameras and LiDAR sensors [20,31]. Pandey et al. [32] used the correlation between camera image pixel grayscale values and LiDAR point-cloud reflectivity to optimize the estimation of extrinsic parameters by maximizing the mutual information. However, this method was sensitive to factors such as environmental illumination, leading to unstable calibration results.
Motion-based methods solve the extrinsic parameters by matching odometry or recovering the structure from motion (SfM) during the common motion of the cameras and LiDAR sensors. Ishikawa et al. [33] used sensor-extracted odometry information and applied a hand-eye calibration framework for odometry matching. Huang et al. [34] used the Gauss-Helmert model to constrain the motion between sensors and achieve joint optimization of extrinsic parameters and trajectory errors, thereby improving the calibration accuracy. Wang et al. [35] computed 3D points in sequential images using the SfM algorithm, registered them with LiDAR points using the iterative closest point (ICP) algorithm, and calculated the initial extrinsic parameters. Next, the LiDAR point clouds were projected onto a 2D plane, and fine-grained extrinsic parameters were obtained through 2D-2D edge feature matching. This method had the advantages of rapid calibration, low cost, and strong linearity; however, the estimation of sensor motion often exhibited large errors, resulting in lower calibration accuracy.
Feature-based calibration methods calculate the spatial relationship between the camera images and LiDAR point clouds by extracting common features. Levinson et al. [36] calibrated the correspondence between an image and point-cloud edges, assuming that depth-discontinuous LiDAR points were closer to the image edges. This calibration method has been continuously improved [37][38][39] and is characterized by high accuracy and independence from illumination. It is commonly used in online calibration.
Deep-learning-based methods use neural-network models to extract feature vectors from camera images and LiDAR point clouds, which can then be input into multilayer neural networks to fit the corresponding extrinsic parameters. This method requires a considerable amount of training and has high environmental requirements; therefore, it has not been widely used [40]. In summary, these methods have the advantages of a simple calibration process, no requirement for additional calibration devices, and rapid calibration performance. However, in complex environments with uneven lighting or occlusions, the calibration accuracy can be diminished.
In certain application scenarios-such as autonomous vehicles-more extensive calibration scenes and efficient automated calibration methods are required. Consequently, we proposed a targetless two-stage calibration strategy that enabled automatic calibration during motion in natural scenes while ensuring a certain degree of calibration accuracy.
Workflow Overview of the Proposed Method
This section introduces a method for the joint calibration of the camera and LiDAR sensor. Figure 1 shows the proposed camera-LiDAR joint calibration workflow. In the first stage, the camera and LiDAR sensors synchronously collect odometric information and motion trajectories. Specifically, the camera motion trajectory is obtained by visual odometry based on feature extraction and matching [41], whereas the LiDAR motion trajectory is obtained from edge point-cloud registration-based LiDAR odometry [42]. Based on the timestamps of the LiDAR motion trajectory, the camera poses are interpolated, and new odometry information is used to construct a hand-eye calibration model to estimate the initial extrinsic parameters. In the second stage, calibration is performed by aligning the edge features in the environment. By finding the minimum inverse distance from the projected edge 3D point cloud to the 2D camera edge image given the initial extrinsic parameters of the camera and LiDAR sensor, an optimization strategy combining the basin-hopping [43] and Nelder-Mead [44] algorithms can be used to refine their extrinsic parameters.
Methodology
In this section, we elaborate on the proposed camera-LiDAR joint calibration method based on motion and edge features.
Initial External Parameter Estimation
The camera and LiDAR sensors were fixed on a unified movable platform, their relative positions remaining unchanged. However, there was no direct correspondence between the observations of the two sensors. Consequently, the motion of each sensor had to be estimated simultaneously, the motion trajectories being used to calculate the extrinsic parameters between the camera and the LiDAR sensor. This method is a hand-eye calibration model, as shown in Figure 2. The mathematical model can be expressed as $A X = X B$, where $A$ denotes the pose transformation of the LiDAR from timestamp $t_k$ to $t_{k+1}$, $B$ denotes the pose transformation of the camera from $t_k$ to $t_{k+1}$, and $X$ denotes the transformation from the LiDAR coordinate system to the camera coordinate system, that is, the extrinsic parameter. In the calibration process, the primary task is to estimate the motion of each sensor. The monocular visual odometry method in ORB-SLAM [41] was used for camera motion estimation. Its front-end includes feature extraction, feature matching, and pose estimation modules. By tracking feature points, it can calculate the camera motion trajectory in real time and return the motion trajectory in the form of rotational angles. The LOAM algorithm [42] was used for LiDAR motion estimation. By matching the edge and plane features in the point cloud, we could obtain a set of consecutive relative pose transformations and return them as rotation angles.
Because the numbers of frames in the camera and LiDAR trajectories do not correspond one to one, preprocessing of the two odometry datasets is required. Here, we utilized the concept of spherical linear interpolation (Slerp) to construct novel trajectories. Specifically, we assumed that the timestamp of a certain LiDAR frame was $t$, satisfying $t_i < t < t_{i+1}$, where $t_i$ and $t_{i+1}$ were the timestamps of two adjacent camera pose frames, with poses denoted by $P(t_i)$ and $P(t_{i+1})$. We used $s = (t - t_i)/(t_{i+1} - t_i)$ as the interpolation coefficient to estimate the new position and orientation. The interpolated camera position and orientation at time $t$ can be calculated as follows:

$$P(t) = \mathrm{Slerp}\big(P(t_i),\, P(t_{i+1});\, s\big) \tag{1}$$

Using Equation (1) with the LiDAR trajectory timestamps as a reference, all camera pose frames can be interpolated to obtain a new camera trajectory. Once the LiDAR trajectory $A \in \{A_1, A_2, \cdots, A_n\}$ and the corresponding camera trajectory $B \in \{B_1, B_2, \cdots, B_n\}$ are determined, based on the hand-eye calibration model, we can obtain:

$$R_{A_i} R = R\, R_{B_i} \tag{2}$$

$$(R_{A_i} - I)\, t = R\, t_{B_i} - t_{A_i} \tag{3}$$

where $R$ and $t$ denote the extrinsic rotation matrix and translation vector between the camera and LiDAR sensor.
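The following sketch illustrates the interpolation in Equation (1) using SciPy's Slerp for orientation and linear interpolation for position; the timestamps and poses are placeholders, not the recorded trajectories.

```python
# Sketch of interpolating camera poses to LiDAR timestamps (Equation (1)).
# Timestamps, rotations and positions below are synthetic placeholders.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

cam_times = np.array([0.00, 0.10, 0.20, 0.30])                   # camera frame timestamps (s)
cam_rots = Rotation.from_euler("xyz", [[0, 0, 0], [0, 0, 5],
                                       [0, 0, 10], [0, 0, 15]], degrees=True)
cam_pos = np.array([[0, 0, 0], [0.05, 0, 0], [0.10, 0, 0], [0.15, 0, 0]])

slerp = Slerp(cam_times, cam_rots)

def interpolate_pose(t):
    """Camera pose at LiDAR timestamp t (t must lie within the camera time range)."""
    i = np.clip(np.searchsorted(cam_times, t) - 1, 0, len(cam_times) - 2)
    s = (t - cam_times[i]) / (cam_times[i + 1] - cam_times[i])    # interpolation coefficient
    pos = (1.0 - s) * cam_pos[i] + s * cam_pos[i + 1]             # linear position interpolation
    rot = slerp([t])[0]                                           # spherical rotation interpolation
    return rot, pos

lidar_times = [0.05, 0.17, 0.26]
new_traj = [interpolate_pose(t) for t in lidar_times]
print([r.as_euler("xyz", degrees=True) for r, _ in new_traj])
```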
Because monocular visual odometry has scale ambiguity, there exists a scale difference between $t_{B_i}$ and the metric camera translation. Introducing the scale factor $s$ allows for scaling the positional relationship between the camera and the LiDAR to align it with the scale in the real world. Consequently, multiplying the translation terms in Equation (3) by the unknown scale $s$ gives:

$$(R_{A_i} - I)\, t = s\, R\, t_{B_i} - t_{A_i} \tag{4}$$

Because the correct rotation matrix is orthogonal, $R^{\mathsf{T}} R = I_{3\times3}$, where $I_{3\times3}$ is a $3 \times 3$ identity matrix. Using the orthogonality of the rotation matrix as a constraint, the extrinsic parameter solution can be converted into an optimization problem with equality constraints, as follows:

$$\min_{R,\, t,\, s}\; \sum_{i=1}^{n} \big\| R_{A_i} R - R\, R_{B_i} \big\|^2 + \lambda \sum_{i=1}^{n} \big\| (R_{A_i} - I)\, t - s\, R\, t_{B_i} + t_{A_i} \big\|^2 \quad \text{s.t.}\ \ R^{\mathsf{T}} R = I_{3\times3} \tag{5}$$

where $\lambda$ denotes a weighting parameter used to balance the influence between the two objective functions. The pure equality-constrained optimization problem in Equation (5) can be solved using the sequential quadratic programming (SQP) method with the joint iterative optimization of the extrinsic parameters $R^{*}$, $t^{*}$, and scale $s$. The essence of the SQP algorithm lies in transforming the optimization problem with equality constraints into a quadratic programming sub-problem and iteratively updating the Lagrange multipliers to optimize the objective function. This approach effectively handles equality constraints and exhibits a certain level of global convergence. When the change in the objective function value is below a certain threshold, the algorithm converges, and a relatively accurate initial extrinsic parameter $X'$ can be obtained. The corresponding rotation vector of $X'$ can also be calculated using the inverse Rodrigues formula, providing important initial guidance for the subsequent refinement calibration stage.
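A minimal sketch of how the constrained problem in Equation (5) can be solved with SciPy's SLSQP routine (an SQP implementation) is given below, under stated assumptions: the rotation is parameterized by a quaternion with a unit-norm equality constraint, and the relative motions in A_list/B_list are synthetic stand-ins for the recorded LiDAR and camera odometry.

```python
# Sketch of the first-stage joint solve for rotation, translation and monocular scale.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def make_T(R, t):
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
    return T

# Synthetic ground-truth extrinsics, scale and relative motions, only to keep the sketch runnable
rng = np.random.default_rng(1)
R_gt = Rotation.from_euler("xyz", [1.0, -2.0, 30.0], degrees=True).as_matrix()
X_gt = make_T(R_gt, np.array([0.10, -0.05, 0.20]))
scale_gt = 2.0
A_list, B_list = [], []
for _ in range(30):
    Ra = Rotation.from_euler("xyz", rng.uniform(-20, 20, 3), degrees=True).as_matrix()
    A = make_T(Ra, rng.normal(0.0, 0.2, 3))
    B = np.linalg.inv(X_gt) @ A @ X_gt          # hand-eye relation A X = X B holds exactly
    B[:3, 3] /= scale_gt                        # monocular translation known only up to scale
    A_list.append(A); B_list.append(B)

def cost(x, w=1.0):
    q, t, s = x[:4], x[4:7], x[7]
    R = Rotation.from_quat(q / np.linalg.norm(q)).as_matrix()
    total = 0.0
    for A, B in zip(A_list, B_list):
        Ra, ta, Rb, tb = A[:3, :3], A[:3, 3], B[:3, :3], B[:3, 3]
        total += np.sum((Ra @ R - R @ Rb) ** 2)                              # rotation residual
        total += w * np.sum(((Ra - np.eye(3)) @ t - s * R @ tb + ta) ** 2)   # translation residual, Eq. (4)
    return total

x0 = np.r_[0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]   # identity rotation, zero translation, unit scale
res = minimize(cost, x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda x: x[:4] @ x[:4] - 1.0}])
print("recovered scale:", round(res.x[7], 3))
```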
Refined External Parameter Estimation
A calibration method based on motion can be realized by matching the odometry information. However, because the sensor continuously moves without loop closure detection, the pose information inevitably accumulates errors, limiting the accuracy of the calibration results. Specifically, there may also be potential errors caused by mis-synchronization between the visual and LiDAR odometry. To enhance the quality of the extrinsic parameter calibration, an edge-matching method was employed to further optimize the extrinsic parameters. The proposed method can be divided into two modules. In the first module, edge features of the image and discontinuous depth points from the LiDAR channel data are extracted, preparing for the subsequent edge-matching task. In the second module, discontinuous depth points are projected onto the image plane using the camera's intrinsic parameter matrix $K$ and the initial extrinsic parameters. The intrinsic parameter matrix $K$, which includes the focal length, principal point, and image coordinate origin, was derived from calibration data provided by the device manufacturer; we considered it reliable. Ultimately, an optimization algorithm minimizes the distance between the edge images, thereby achieving refined extrinsic parameters.
Edge Feature Extraction
For the synchronized sample data obtained from different scenarios, a series of processing steps must be performed on the images and point clouds to extract common edge features from the scenes. Because image and point-cloud edge feature extractions are performed separately, the specific processing methods are explained below.
Edge features in images typically appear in regions with substantial changes in pixel values. Hence, the classic Canny algorithm [45] was adopted in this study to detect the image edge features using the edge extraction scheme shown in Figure 3. First, Gaussian filtering was applied to smooth regions with weak edge features. We increased the kernel size to reduce unnecessary image edge features. By doing so, our objective is to focus on the most relevant and distinctive edge features for calibration while minimizing the impact of noise and irrelevant details to the maximum extent possible. Subsequently, non-maximum suppression and double thresholding were used to retain strong edge points with local maxima and suppress the surrounding noisy points. In this article, we set the high threshold to 1.7 times the low threshold. Finally, the broken-edge features are connected via the Hough transform to generate an edge binary mask. These steps can effectively filter out minor edges and retain significant edges, resulting in improved performance.
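A sketch of this image-edge pipeline is given below (Gaussian smoothing with an enlarged kernel, Canny with the high threshold at 1.7 times the low one, and probabilistic Hough lines to reconnect broken edges into a binary mask). OpenCV is assumed; the input file name, kernel size and thresholds are illustrative, not the tuned values used in the experiments.

```python
# Sketch of the image edge-mask extraction step.
import cv2
import numpy as np

def image_edge_mask(img_bgr, low_thresh=60, blur_ksize=7):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)   # larger kernel suppresses weak edges
    edges = cv2.Canny(blurred, low_thresh, int(1.7 * low_thresh))   # high threshold = 1.7 x low threshold

    mask = np.zeros_like(edges)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=20, maxLineGap=5)         # link broken edge segments
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(mask, (x1, y1), (x2, y2), 255, 1)
    return mask

edge_mask = image_edge_mask(cv2.imread("frame.png"))                # hypothetical camera frame
```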
To encourage alignment of the edge features, a distance transform [46] can be applied to the edge binary mask to extend the radius of the edges. The closer a pixel is to an edge, the stronger the edge energy. In subsequent edge-matching optimization, the distance-transformed image facilitates smooth pixel-wise matching and avoids becoming stuck in a local optimum. For point-cloud processing, considering the depth differences between individual LiDAR channel points, a point-cloud depth-discontinuity method [36] can be used to extract the edge point clouds. This method detects locations with considerable depth changes in the point cloud to extract edge information. Based on the sequence number of the LiDAR channel to which each point belongs, LiDAR points are assigned to the corresponding channel $C_k$ ($k = 1, \dots, N$) in sequential order, where $N$ denotes the number of LiDAR channels. The depth value $d_{k,j}$ of each point $j$ in every channel can then be calculated along with the depth difference between each point and its adjacent left and right points, as follows:

$$\Delta d_{k,j} = \max\big(d_{k,j-1} - d_{k,j},\; d_{k,j+1} - d_{k,j},\; 0\big) \tag{6}$$

At this point, each point is assigned a depth difference value $\Delta d_{k,j}$. Based on our testing experience, we consider a depth difference exceeding 50 cm as indicating an edge point. All points identified as edges are aggregated into a point cloud, as shown in Figure 4b.
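The sketch below illustrates both preparations under stated assumptions: a distance transform of the inverted edge mask, so that pixels near an edge receive small values (the quantity later minimized during edge matching), and the per-channel depth-discontinuity test of Equation (6) with the 0.5 m threshold; `points` is assumed to be an (N, 4) array of x, y, z and ring index.

```python
# Sketch of edge-energy preparation (image side) and edge-point extraction (LiDAR side).
import cv2
import numpy as np

def edge_distance_map(edge_mask):
    # Distance (in pixels) from every pixel to the nearest edge pixel of the binary mask
    return cv2.distanceTransform(cv2.bitwise_not(edge_mask), cv2.DIST_L2, 5)

def lidar_edge_points(points, num_channels=16, depth_jump=0.5):
    xyz, ring = points[:, :3], points[:, 3].astype(int)
    edges = []
    for c in range(num_channels):
        ch = xyz[ring == c]
        if len(ch) < 3:
            continue
        d = np.linalg.norm(ch, axis=1)                    # per-point range (depth)
        left = np.r_[d[0], d[:-1]] - d                    # d_{k,j-1} - d_{k,j}
        right = np.r_[d[1:], d[-1]] - d                   # d_{k,j+1} - d_{k,j}
        jump = np.maximum(np.maximum(left, right), 0.0)   # Equation (6)
        edges.append(ch[jump > depth_jump])               # keep points with a depth jump > 0.5 m
    return np.vstack(edges) if edges else np.empty((0, 3))
```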
External Parameter Calibration for Edge Feature Matching
As described in the previous section, the depth-discontinuity method obtains the edge point cloud $P_E$. To project this onto the image plane, the initial extrinsic parameters from the first stage can be used. The projected pixel coordinates of each point $p_i \in P_E$ on the image plane can be calculated as follows:

$$[u_i,\, v_i,\, 1]^{\mathsf{T}} = K\,(R^{*} p_i + t^{*}), \quad i = 1, \dots, n \tag{7}$$

where $K$ denotes the intrinsic camera matrix, $R^{*}$ denotes the initial rotation matrix, and $t^{*}$ denotes the initial translation vector. The pixel points obtained from Equation (7) can be used to generate a binary mask projected onto the image to obtain the binary image $M_P$. Combined with the distance-transformed edge mask $M_D$ described in Section 4.2.1, the average chamfer distance between the two edge features can be calculated. This converts the extrinsic calibration into an optimization problem that minimizes the chamfer distance, which can be expressed as follows:

$$\hat{\theta} = \arg\min_{\theta}\; \frac{1}{n} \sum_{u,v} \big( M_P(\theta) \odot M_D \big)(u, v) \tag{8}$$

where $\odot$ denotes the Hadamard product, $M_P$ denotes the binary mask formed by the projected edge point cloud, and $\theta$ denotes the vector form of the extrinsic parameters.
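A minimal sketch of Equations (7) and (8) follows: the edge point cloud is projected with candidate extrinsics and scored by the mean distance-transform value at the projected pixels (an average chamfer distance). K, edge_cloud and dist_map are assumed to come from the earlier sketches, and theta is a 6-vector holding a rotation vector and a translation.

```python
# Sketch of the projection and chamfer-distance cost used for refinement.
import numpy as np
from scipy.spatial.transform import Rotation

def chamfer_cost(theta, edge_cloud, K, dist_map):
    R = Rotation.from_rotvec(theta[:3]).as_matrix()
    t = theta[3:6]
    cam_pts = (R @ edge_cloud.T).T + t                    # LiDAR frame -> camera frame
    cam_pts = cam_pts[cam_pts[:, 2] > 0.1]                # keep points in front of the camera
    if len(cam_pts) == 0:
        return 1e6
    uvw = (K @ cam_pts.T).T                               # Equation (7)
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = dist_map.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    if not np.any(inside):
        return 1e6
    u, v = uv[inside, 0], uv[inside, 1]
    return float(dist_map[v, u].mean())                   # average distance to the nearest image edge
```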
Redundancy often exists in image edges, resulting in a non-one-to-one correspondence between the 2D edges. Consequently, multiple solutions may exist for this problem. Additionally, when the projected edge mask $M_P$ is far from the radiation range of $M_D$, correct gradient guidance is lacking, and it is easy to get stuck in a local minimum. In this study, we employed a hybrid optimization strategy that combines the basin-hopping and Nelder-Mead algorithms. The basin-hopping algorithm was utilized to identify the optimal region within the global context. By introducing random perturbations around the initial estimated external parameters, it enables continuous jumps and searches for the global optimal solution. Then, the Nelder-Mead algorithm was used to find the local optima within the optimal region until the distance was minimized. This integrated approach effectively balances the requirements of global exploration and local refinement, thereby enhancing the precision of the optimization parameters. The extrinsic parameters at this point could be considered to be the final optimized results.
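The hybrid search can be sketched with SciPy's basinhopping driver wrapped around a Nelder-Mead local minimizer, applied to the chamfer cost from the previous sketch; the step size and iteration count below are guesses, and theta0 stands for the first-stage extrinsics.

```python
# Sketch of the basin-hopping + Nelder-Mead refinement loop.
import numpy as np
from scipy.optimize import basinhopping

def refine_extrinsics(theta0, edge_cloud, K, dist_map, n_hops=50):
    result = basinhopping(
        chamfer_cost, theta0,
        niter=n_hops,
        stepsize=0.05,                     # scale of the random perturbations (rad / m), a guess
        minimizer_kwargs={
            "method": "Nelder-Mead",       # local descent inside each basin
            "args": (edge_cloud, K, dist_map),
        },
    )
    return result.x, result.fun

# Usage (with theta_init from the first stage):
# theta_refined, final_cost = refine_extrinsics(theta_init, edge_cloud, K, dist_map)
```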
Experimental Results
In this section, we first introduce the sensor system of the movable robot platform and then list the main sensor configurations.
Virtual calibration scenarios were built on a virtual simulation platform, and simulation sensor systems were constructed based on real sensor parameters. Qualitative and quantitative evaluations of the proposed calibration method were then conducted using simulations and real experiments. We experimentally compared the accuracy of the motion-based, manual point selection, and mutual-information calibration methods. Additionally, point-cloud projection results were demonstrated under different scenarios. Using this experimental design, the flexibility of the proposed calibration method and accuracy of the results could be demonstrated.
Setup
The multi-sensor platform used in this study is shown in Figure 5a. Figure 6 shows the environment and sensor settings used in the simulation experiment. In this study, the Gazebo simulation platform was used to build a calibration environment rich in objects. The simulated camera and LiDAR sensors are associated with a unified TurtleBot2 robotic platform. The parameters of the LiDAR sensor and camera were set based on the real sensor specifications. After launching the corresponding nodes, the robot could be controlled through the ROS to collect data during motion.
Simulation Experiment
The camera and LiDAR sensor were installed on the TurtleBot2 robotic platform. By controlling the robot with commands to move in different directions, this experiment performed circular trajectory motions to excite movements in multiple orientations while simultaneously recording data from the camera and LiDAR sensor [33]. By extracting odometry information, 1100 frames of pose data could be randomly selected for the initial extrinsic calibration. The model converged after 20 iterations.
Figure 7 shows the alignment of the visual and LiDAR motion trajectories. Specifically, Figure 7a depicts the original trajectories of the camera and LiDAR sensor, where pink indicates the LiDAR motion trajectory and green indicates the scale-recovered visual motion trajectory. Figure 7b shows the results of the aligned trajectories after the spatial transformation. It is evident that the two trajectories become more closely aligned after transformation, demonstrating that the calibration result obtained from the optimization iteration is fundamentally correct. This calibration result serves as the initial value for the next-stage calibration (referred to as the "initial value" below). During the motion data collection, eight groups of synchronized point clouds and image data were randomly selected from the scene for refinement calibration. After 500 iterations, the final extrinsic parameters could be obtained (referred to as the "final value" below). The calibration results and root mean square errors are presented in Tables 1 and 2. Tables 1 and 2 present the preset reference truths, initial values, and final values for both the rotation and translation parts. The results show that the initial rotation values have a root mean square error of 1.311°, and the initial translation values have a root mean square error of 8.9 cm. Although the accuracy is relatively low, using these initial values to guide the subsequent optimization is sufficient. From the final calibration results, it is evident that the rotation achieves a root mean square error of 0.252°, and that of the translation is 0.9 cm. This demonstrates that accurate calibration results can be obtained using the proposed method.
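As a side note on how such errors can be computed, the sketch below evaluates the rotation error as the geodesic angle between estimated and ground-truth rotation matrices and the translation error as a Euclidean distance; the matrices and vectors are placeholders, not the paper's results.

```python
# Sketch of rotation/translation error metrics between estimated and reference extrinsics.
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_error_deg(R_est, R_gt):
    dR = R_est @ R_gt.T
    angle = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))   # geodesic angle
    return np.degrees(angle)

def translation_error_cm(t_est, t_gt):
    return 100.0 * np.linalg.norm(t_est - t_gt)

R_gt = Rotation.from_euler("xyz", [0.0, 0.0, 90.0], degrees=True).as_matrix()
R_est = Rotation.from_euler("xyz", [0.2, -0.1, 90.3], degrees=True).as_matrix()
print(rotation_error_deg(R_est, R_gt))                                   # roughly 0.37 degrees
print(translation_error_cm(np.array([0.101, -0.05, 0.207]), np.array([0.10, -0.05, 0.20])))
```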
Based on our calibration results, we compared them with the results from motion-based [33], manual [47], and mutual-information methods [32], as shown in Figure 8. For the motion-based method, the data from the first stage of our method were used; for the manual method, 20 2D-3D correspondences were manually selected; and for the mutual-information method, eight groups of data from the second stage of the proposed method were used for statistical analysis. The results show that both the rotation and translation errors of the proposed method are lower than those of the other methods. In particular, on the same dataset, the proposed method demonstrates higher accuracy than the mutual-information method, validating the superiority of the proposed method in terms of accuracy.
Once the initial extrinsic parameters were obtained, the refinement calibration was executed automatically. However, more data samples led to higher computational complexity, requiring a longer computational time. To validate the impact of sample size on the calibration accuracy in refinement calibration, we collected four groups of data of different sample sizes for the experiments. Different iteration numbers and disturbances were set to the initial values. The purpose was to explore the variation trends of the calibration results with different sample sizes, as well as the effects of different iteration numbers and initial values on the final accuracy.
The experimental results are shown in Figure 9. It is evident that when the sample size increases from two to four, both the rotation and translation errors decrease markedly until the sample size reaches eight, where the trend levels off. When the sample size is small, the disturbance effect of the initial values on the calibration accuracy is greater. As shown in Figure 9b, when the sample size is two, the mean absolute translation errors after the initial value disturbance range from 2 to 5 cm. However, when the sample size is greater than or equal to four, the effects of different iteration numbers and initial value disturbances on the final accuracy are limited. This indicates that the proposed calibration method demonstrates strong robustness. From a qualitative perspective, Figure 10 provides two types of visualization results. In Figure 10b, the point clouds at different distances are colored differently, making it easy to observe the alignment between the point clouds and objects. Figure 10c shows the fusion results of the point cloud and image using extrinsic parameters. The reliability of the calibration results can be determined by observing the RGB point clouds. Taken together, the visualization results shown in Figure 10 demonstrate the high quality of the extrinsic parameters between the camera and LiDAR sensor, further validating the accuracy of the proposed method.
Real Experiment
In real-world experiments, we tested different environments. In the initial extrinsic calibration process, we collected 1400 and 3000 frames of pose data indoors and outdoors, respectively, and performed 30 optimization iterations to solve the parameters. The optimization results are shown in Figure 11, where green depicts the scaled visual trajectory and pink depicts the LiDAR trajectory. Figure 11a shows the original trajectories of the camera and LiDAR sensor. There was a scale difference between the visual trajectory and LiDAR trajectory, which can be attributed to the scale uncertainty of monocular visual odometry. Through continuous iterations, the two trajectories gradually converge, the final result being as shown in Figure 11b. The optimization results demonstrate that both calibrations are correct and reliable and can serve as initial external parameters for the refinement calibration stage. In the refinement calibration stage, based on experience from the simulation experiments, we randomly collected four, six, and eight samples of data indoors and outdoors, respectively. After 200 iterations, the final calibration parameters were obtained, the final calibration results being as listed in Table 3. As verified by the simulation experiments, when the number of samples is greater than or equal to four, the calibration results tend to stabilize for both scenarios. The rotation calibration results differ by no more than 0.4°, and the translation calibration results differ by less than 5 cm, demonstrating the excellent performance of the experiments indoors and outdoors. The standard deviation results shown in Table 4 demonstrate small fluctuations in the calibration outcomes, further validating the stability of the calibration system. As the true extrinsic parameters of the camera and LiDAR sensor were unknown, the specific calibration accuracy could not be calculated. Instead, the relative differences were compared by visualizing the calibration results. In this study, a point cloud was projected onto an image plane using the average of the calibration results to visually judge the accuracy of the extrinsic parameters. Figure 12 shows the projection results for three different scenarios compared to the motion-based [33], manual [47], and mutual-information-based methods [32]. As seen in the red boxes in Figure 12, compared with the other methods, the correspondence between the LiDAR point cloud and the image is better for the proposed method as the projected LiDAR points are closer to the image edges.
Discussion
The initial calibration stage used the hand-eye calibration model. When aligning the motion of the camera and LiDAR sensor, owing to differences in the odometry extraction accuracy and data collection synchronization deviations between different sensors, these deviations are propagated into extrinsic calibration errors, noise and error accumulation being the main causes of deviations in odometry accuracy. In particular, the scale ambiguity of monocular visual odometry results in an inability to determine the precise scale relationship between the camera and LiDAR sensor, even if the scale is optimized. The initial values exhibit an unsatisfactory calibration accuracy (Tables 1 and 2).
Consequently, in the refinement calibration, we used the edge information in the scene to improve the calibration accuracy. The initial extrinsic parameters obtained from the initial calibration reduce the search space in the subsequent refinement calibration, thus lowering the calibration difficulty and improving efficiency. This coarse-to-fine calibration strategy is effective because it makes full use of prior information and diverse features by decomposing the calibration into two subproblems. Refinement calibration considers the initial calibration results as a starting point, which effectively reduces error propagation and accumulation, resulting in a more accurate calibration. The final errors listed in Tables 1 and 2 confirm this.
As shown in Figure 8, compared to the other targetless methods, the proposed calibration method achieves the smallest errors. Manual calibration requires tedious manual operations, making the calibration more subjective and complex. The mutual-information method estimates the extrinsic parameters by calculating the correlation between the grayscale image and point-cloud reflectance using kernel density estimation and maximizing this correlation. This method can be susceptible to environmental illumination and material reflectance, but the influence of environmental factors can be minimized by fixing the focus and exposure. Consequently, it can achieve good calibration accuracy in most environments.
During the experiments, although the proposed method exhibited superiority in various scenarios, some challenges remain under extreme conditions, such as low textures and complex environments. Once the edge features become difficult to identify or redundant, multiple local optima tend to appear, causing the calibration to fall into local optimum traps, making it necessary to improve the initial calibration accuracy to reduce the search space complexity. Similarly, selecting features from specific objects as the algorithm input can be challenging.
Conclusions
A camera-LiDAR extrinsic calibration method based on motion and edge matching was proposed. Unlike most existing solutions, this system could be calibrated without any initial estimation of the sensor configuration. Additionally, the proposed method did not require markers or other calibration aids. The process comprised two stages. First, the motion poses of the camera and LiDAR sensor were estimated using visual and LiDAR odometry techniques, and the resulting motion poses were interpolated. Furthermore, a hand-eye calibration model was applied to solve the extrinsic parameters, whose results served as the initial values for the next stage of calibration. Subsequently, by aligning the edge information in the co-observed scenes, the initial extrinsic parameters were further optimized to improve the final calibration accuracy.
The performance of the system was verified through a series of simulations and real-world experiments. Experimental evaluations validated the proposed method, demonstrating its ability to achieve precise calibration without manual initialization or external calibration aids. Compared to other targetless calibration methods, the proposed method achieved higher calibration accuracy.
However, several issues remain to be addressed in future studies:
1. Although we achieved calibration in various scenes-such as indoor and outdoor scenes-feature loss still exists in high-speed motion and weak-texture scenes. Deeper data mining is the focus of future research.
2. Extracting useful semantic edges using deep-learning methods could be a solution to reduce the complexity of optimization [48]. This methodology offers novel insights that can effectively overcome the limitations encountered in specific calibration scenarios.
3. The extrinsic calibration between the camera and LiDAR sensor is the basis for higher-level fusion between the two sensors. However, more algorithms are required to realize feature- and decision-level fusion. We intend to further investigate the field of multi-sensor fusion technology through our ongoing research efforts.
Figure 1. Technical flow chart of the proposed calibration method.
Figure 3. Image data processing process. (a) The original image. (b) The Gaussian-filtered image. (c) The edge binary mask obtained by non-maximum suppression, bilateral threshold suppression, and Hough transform. (d) The distance transformation of the edge feature.
Figure 4. Extraction of point-cloud edges. (a) The original point cloud. (b) The detected edge point cloud.
Figure 5. Multi-sensor system of the experimental platform. (a) The mobile multi-sensor assembly robot platform. (b) The camera and LiDAR sensor to be calibrated.
Figure 6. Settings of the Gazebo virtual simulation platform. (a) A simulation of calibrating experimental scenarios. (b) A simulation of multi-sensor platforms.
Figure 7. Pink indicates the LiDAR motion trajectory and green indicates the scale-recovered visual motion trajectory. (a) The trajectory after scale restoration. (b) The trajectory registration diagram.
Figure 8. Average absolute error between the calibration results and ground truth for different methods. (a) Rotation error. (b) Translation error.
Figure 9. Average absolute error variation with the number of samples when iteration times and initial parameters change. (a) Rotation error variation. (b) Translation error variation.
Figure 11. Pink indicates the LiDAR motion trajectory and green indicates the scale-recovered visual motion trajectory. (a) Original motion trajectories. (b) Trajectories after iterative optimization.
Figure 12. Comparison of projection results between the proposed method and three other methods in three different scenarios. Significant differences are evident in the content highlighted in the red box.
Table 1. Rotation calibration results of the simulation experiments.
Table 2. Translation calibration results of the simulation experiments.
Table 3. Calibration results from real experiments.
Table 4. Standard deviations of calibration results. | 7,708.4 | 2023-11-29T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Rasch Model Analysis on the Effectiveness of Early Evaluation Questions as a Benchmark for New Students Ability
This paper discusses the effectiveness of the early evaluation questions conducted to determine the academic ability of new students in the Department of Electrical, Electronics and Systems Engineering. The questions designed are based on knowledge the students have learned at pre-university level. The results show that the students have weak basic knowledge, in contrast to the results obtained during their application for admission to Year 1 of university. Thus, the early evaluation questions were examined for their relevance in assessing student ability using Rasch analysis (WinSteps). The findings show that the initial assessment is an effective and appropriate method to assess the ability of students, where the Cronbach-α is 0.69 and the items achieve the acceptable ranges of PT-Measure, Outfit Mean Square (MNSQ) and Outfit z-standardized (ZSTD) values. This shows that Rasch analysis can be used to classify the questions and the students according to their performance level accurately and thus reveal the true level of the students' ability, despite the small number of samples.
Education is the main driving force for the development of the nation. The growth of higher education institutions is important not only to produce graduates who are knowledgeable in a particular field, but most importantly, to produce graduates who have excellent soft skills such as thinking skills, communication skills, team work, problem solving and other skills that are essential to meet the challenges of the 21st century (Lee & Tan, 2003). At school, the education system employed is more text-book oriented. However, the education system at university is more tailored to students who have the skills to search for, understand and analyze information critically. In addition, over the last decade, the university has shifted to a more outcome-based education system, where problem-based learning has been introduced.
Another factor that contributes to the declining performance of students is the learning environment. When they were at school, students were supervised and controlled by their parents or caretakers, and there were also extra classes available for those who could afford them. Meanwhile, students who stayed at a hostel were allocated specific times to study. This is different from the university environment, where students have more freedom in determining the kinds of activities they want to concentrate on.
Therefore, with a different environment and different study methods, very few students were able to adapt themselves to the university environment. Meanwhile, the rest faced such problems adjusting to the new environment that their academic performance was affected.
Another important skill that has to be acquired by UKM students is learning skills (Mohamed Amin, 2010). This is also one of the contributing factors to academic performance. It has been found that students with poor study habits are more likely to withdraw from university or to have academic performance problems during the transition from secondary school to university (Pantages & Creedon, 1975; Abbott-Chapman, Hughes & Wyld, 1992).
Pre-university student results have shown marked improvement from year to year, but the results of some of these outstanding students plunge at the university level. Why are these students not able to continue their prior excellence? Is the scale used in assessing the results of the students different at pre-university and university levels? Are the methods of teaching and learning (T&L) at the two institutions vastly different? A study was undertaken to answer these questions.
For this preliminary study, initial evaluation questions were formulated and used to measure the level of basic knowledge and the ability of new students enrolled in the Department of Electrical, Electronic and Systems Engineering (JKEES). This is to see the actual performance of these students before they go further with their university studies. This study is important in examining other factors that can contribute to the deterioration of students' performance while studying at university. In addition, the results also give an early warning so that monitoring of weak students can begin as early as possible to ensure that students are continuously motivated and do not drift into activities that could distract them from their studies. A study conducted by Hafizah, Norbahiah, Norhana, Wan Mimi Diyana and Sarifah Nurhanum (2011) in Seminar Pendidikan Kejuruteraan & Alam Bina 2011 (PeKA '11) found that the achievements of students from the Department of Electrical, Electronic and Systems Engineering (JKEES) are on the decline and some students have been disqualified from their degree program as a result of such performance.
However, are the questions set for the evaluation sufficient and suited to the objective of the study? Do the questions cover all the required analysis factors? The appropriateness and effectiveness of the initial evaluation questions can be tested with Rasch analysis using the WinSteps software. This is important to ensure that the questions can yield useful information for analysing the performance of undergraduates who previously excelled academically. A good assessment recognizes the value of information for the process of improvement. Assessment approaches should produce evidence that relevant parties will find credible, suggestive and applicable to the decisions that need to be made; assessment is a process that starts with the questions of decision-makers and involves them in the data gathering and subsequent analysis (Saidfudin et al., 2007).
Rasch analysis is a 'modern' alternative method of measurement that creates a measurement platform matching the criteria of an SI unit, acting as an instrument with a clear unit of measurement, and can therefore serve as a good model (Saidfudin & Ghulman, 2009). The analysis is performed on empirical data obtained directly from lecturers' assessment of an assignment given to students and converted to a logit scale with equal intervals; Rasch uses the logit as its measurement unit, and the results are then approximated by a linear relation. Rasch shifts the concept of reliability from finding the 'most compatible data line' towards producing repeatable measurement instruments that can be trusted; it focuses on building a measurement instrument rather than forcing the data to fit the measurement model (Osman et al., 2011). The results can therefore give an accurate picture of the designed questions as a benchmark for new students, which in turn allows follow-up actions to be taken by the department.
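For reference, the dichotomous Rasch model that underlies this kind of analysis can be stated in its standard form (the formula is supplied here for clarity and is not quoted from the study): the probability that person $n$ with ability $\beta_n$ answers item $i$ with difficulty $\delta_i$ correctly is
$$P(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}, \qquad \log\frac{P(X_{ni} = 1)}{P(X_{ni} = 0)} = \beta_n - \delta_i .$$
Because the log-odds of a correct response is simply the difference between ability and difficulty, persons and items can be placed on the same logit scale, which is what the Person-Item Distribution Map discussed below visualises.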
Method
The initial evaluation questions were formulated through discussions among the members of the Student Development Committee (JPPEL) of the department. The questions varied according to the Bloom taxonomy levels set for new students. They were distributed to the new students during the orientation week held one week before lectures began, and students were required to answer all questions within one hour. A total of 34 new students admitted for Semester 1 of the 2012/2013 session to the Department of Electrical, Electronics and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia took part in this initial assessment.
To evaluate student performance, the questions were arranged according to the Bloom taxonomy levels suitable for first-year students, as determined in the CO-PO list of the department. There are six domains, from the simplest to the most complex, Tier 1: Knowledge, 2: Comprehension, 3: Application, 4: Analysis, 5: Evaluation, and 6: Synthesis; in this way, each level of ability can be measured.
Table 1 shows the learning topics and Bloom taxonomy domains assessed for each question. Part A: Self-assessment consists of Question 1 only, which reflects the student's ability in the English language. Part B: Basic Knowledge contains 7 questions that test the students' basic understanding of electrical and electronic engineering, whilst Part C: General Knowledge examines the students' general knowledge with 2 questions. All of these questions can be found in (Norhana et al., 2012).
Results
Through Rasch analysis with the WinSteps software, the responses are summarised in the Person-Item Distribution Map (PIDM) shown in Figure 1, where 'person' refers to the student and 'item' refers to the examination topics/questions, plotted on the same logit scale in line with Latent Trait Theory. The PIDM considers how a person x, with ability β as their latent trait, responds to items i that have difficulty level δ (Saifudin et al., 2008). From the figure, we can get an overview of the likely overall achievement of a student in the final examinations later in their study. The separation distance between the locations of students and questions on this map shows the level of a student's ability, where the farther the separation, the more accurate the response. The difficulty level of an item is also shown: the higher an item is located above the item mean (+M), the more difficult it is compared to items located lower down. The item mean thus acts as a threshold and is set to zero on the logit scale (Saifudin et al., 2008, 2010).
Overall, the questions are fairly uniformly distributed, being neither too difficult nor too easy. Only one of the male students (M20) was able to answer all the questions in the initial assessment. According to the admission records, this student obtained very good results in the pre-university examinations; therefore, he is not expected to have problems with the programme and his studies in the department. The female students, however, form the lowest-scoring cluster, circled at the bottom of the figure. This suggests that they studied the topics only in general terms and were less prepared, having been given no guidance on the topics included in this evaluation. These results can thus identify the capability of individual students, and the members of the Student Development Committee in the department should take steps to prevent students from dropping out of their university studies.
Figure 1 shows that almost half of the entry questions were found difficult by the students, with item I0009 (Question 9, Knowledge (K) - Current Knowledge) being the most difficult question. In this question, students were asked to describe the latest technology in the field of electrical and electronic engineering. Item I0001 (Question 1, Analysis - understanding) was found to be the easiest question, as it concerns the students themselves; this question can be ignored because it does not test the students' ability to think. Question I0009, on the other hand, assesses students' knowledge of developments outside the scope of university teaching and their attitude towards current world events. This is important because graduates of the department must meet the fourth programme outcome (PO4), namely the ability to understand professional and ethical responsibilities, environmental knowledge and contemporary issues.
Figure 2 shows the summary statistics for the categories of students (persons) and questions (items). The Cronbach α is 0.69, which exceeds the acceptable level of 0.6 (Saifudin et al., 2008, 2010). Rasch analysis finds that Person (Student) Reliability is low at 0.69, while Item (Question) Reliability is high at 0.89. This means that the instrument has high reliability in measuring what it is supposed to measure; the assessment questions outlined earlier are therefore appropriate and effective in measuring the ability of the new students.
Figure 2. Summary statistics
Statistical analysis of the questions in Figure 3 shows that, with one exception, the entry questions meet all the criteria for quality questions and thus require no revision. An item is accepted if its Point Measure Correlation (PT-MEASURE) value x satisfies 0.4 < x < 0.8. For item I0001 (Question 1), PT-Measure = 0.28 < 0.4; this low value is expected because the question is about the students themselves. This question can therefore be ignored and excluded from the assessment, serving only as additional information for the department. A question is considered unsuitable only when it falls outside the PT-Measure range, outside 0.5 < y < 1.5 for the Outfit Mean Square (MNSQ) value y, or outside -2 < z < 2 for the Outfit z-standard (ZSTD) value z (Saifudin et al., 2008). As can be seen from Figure 3, the initial assessment questions achieve all three criteria and are therefore suitable for assessing students, once Question 1, which does not measure students' understanding and ability to think, is excluded.
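As an illustration only, the three cut-offs above (point-measure correlation between 0.4 and 0.8, Outfit MNSQ between 0.5 and 1.5, and Outfit ZSTD between -2 and 2) could be applied to exported item statistics with a short script such as the hypothetical sketch below. The item values shown are invented for demonstration; the actual analysis in this study was carried out in WinSteps.

```python
# Hypothetical sketch: flag items that fall outside the Rasch quality ranges
# described in the text. The item statistics below are invented example values.
ACCEPT = {
    "pt_measure": (0.4, 0.8),    # Point Measure Correlation
    "outfit_mnsq": (0.5, 1.5),   # Outfit Mean Square
    "outfit_zstd": (-2.0, 2.0),  # Outfit z-standard
}

items = [
    {"id": "I0001", "pt_measure": 0.28, "outfit_mnsq": 1.10, "outfit_zstd": 0.4},
    {"id": "I0009", "pt_measure": 0.55, "outfit_mnsq": 0.95, "outfit_zstd": -0.2},
]

def failed_criteria(item):
    """Return the names of the acceptance criteria that an item does not meet."""
    return [name for name, (lo, hi) in ACCEPT.items()
            if not (lo <= item[name] <= hi)]

for item in items:
    problems = failed_criteria(item)
    print(item["id"], "OK" if not problems else "review: " + ", ".join(problems))
```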
Discussion
The initial assessment questions used as the instrument in this study have been shown statistically, using Rasch analysis, to be appropriate and to have high reliability in measuring the ability of new students enrolled in the department. The statistics reflect the academic achievements of the new students and also confirm that the designed questions are effective in assessing the level of students' ability. Comparing pre-university results with the results of the initial assessment questions can guide the prediction of students' performance while studying at the department. This, in turn, enables the Student Development Committee of the department and student mentors to take the next steps in guiding students towards excellence in teaching and learning at university.
Figure 1. Person-Item Distribution Map (PIDM) (Student-Question map) for the entry questions
Table 1. Topics and domains of Bloom Taxonomy assessed for each question | 3,050.6 | 2013-05-30T00:00:00.000 | [
"Engineering",
"Education"
] |
zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm
Background Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. Results We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. Conclusions We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
Because matrix multiplication is associative, the terms of the forward recursion can be computed in any order, so different parts of the computation can be handled independently. This then makes it possible to reuse computations whenever the input contains repeated substrings [10] or to parallelise the algorithm across a number of independent threads [11].
The main contribution of this paper is a software library that uses both of these ideas to greatly speed up the forward algorithm. We present a preprocessing of the observed sequence that finds common substrings and constructs a data structure that makes the evaluation of the likelihood close to two orders of magnitude faster (not including the preprocessing time). The preprocessing of a specific sequence can be saved and later reused in the analysis of a different HMM topology. The algorithms have been implemented in a C++ library, zipHMMlib, available at http://birc.au.dk/software/ziphmm/. The library also provides an interface to the Python programming language.
Much of the theory used in zipHMMlib was also developed by Lifshits et al. [10], but while they developed the theory in the context of the Viterbi algorithm, where the preprocessing cannot be reused, we concentrate on the forward algorithm and introduce a data structure to save the preprocessing for later reuse. We furthermore extend the theory to make the computations numerically stable and introduce practical measures to make the algorithm run fast in practice and make the library accessible.
Our implementation is tested on simulated data and on alignments of chromosomes from humans with chimpanzees, gorillas and orangutans analysed with the CoalHMM framework [7,8,12], a framework which uses changes in coalescence trees along a sequence alignment to make inference in population genetics and phylogenetics and which has been used in a number of whole-genome analyses [13][14][15][16]. Using an "isolation-with-migration" CoalHMM [17], we train the model using the Nelder-Mead simplex algorithm and measure the preprocessing time and total optimisation time. Looking at the time required to perform the entire training procedure, we achieve up to 78 times shorter wall-clock time compared to the previously fastest implementation of the forward algorithm. Even for data of high complexity and with few repetitions we achieve a speedup of a factor 4.4.
Hidden Markov models
A Hidden Markov Model (HMM) describes a joint probability distribution over an observed sequence $Y_{1:T} = y_1 y_2 \ldots y_T \in O^*$ and a hidden sequence $X_{1:T} = x_1 x_2 \ldots x_T \in H^*$, where $O$ and $H$ are finite alphabets of observables and hidden states, respectively. The hidden sequence is a realisation of a Markov process which explains hidden properties of the observed data. We can formally define an HMM [18] as consisting of: $H = \{h_1, h_2, \ldots, h_N\}$, a finite alphabet of hidden states; $O = \{o_1, o_2, \ldots, o_M\}$, a finite alphabet of observables; $\pi = (\pi_1, \ldots, \pi_N)$, where $\pi_i$ is the probability of the model starting in hidden state $h_i$; $A = \{a_{ij}\}$, the transition probabilities, where $a_{ij}$ is the probability of moving from hidden state $h_i$ to hidden state $h_j$; and $B = \{b_i(o_j)\}$, the emission probabilities, where $b_i(o_j)$ is the probability of emitting observable $o_j$ from hidden state $h_i$. An HMM is thus parameterised by $\pi$, $A$ and $B$, which we will denote by $\lambda = (\pi, A, B)$.
The classical forward algorithm
The forward algorithm [18] finds the probability of observing a sequence $Y_{1:T}$ in a model $\lambda$ by summing the joint probability of the observed and hidden sequences over all possible hidden sequences:
$$\Pr(Y_{1:T} \mid \lambda) = \sum_{X_{1:T}} \Pr(Y_{1:T}, X_{1:T} \mid \lambda).$$
It does this by filling in a table $\alpha$ of size $N \times T$, where $\alpha_t(x_t) = \Pr(Y_{1:t}, x_t \mid \lambda)$, column by column from left to right, using the recursion
$$\alpha_1(x_1) = \pi_{x_1} b_{x_1}(y_1), \qquad \alpha_t(x_t) = b_{x_t}(y_t) \sum_{x_{t-1}} \alpha_{t-1}(x_{t-1})\, a_{x_{t-1} x_t}. \qquad (1)$$
After filling out $\alpha$, $\Pr(Y_{1:T} \mid \lambda)$ can be computed as
$$\Pr(Y_{1:T} \mid \lambda) = \sum_{x_T} \alpha_T(x_T).$$
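A direct transcription of this recursion may help make the notation concrete. The following is a minimal NumPy sketch written for this summary (it is not code from zipHMMlib); pi, A and B are the parameters $\lambda = (\pi, A, B)$ and obs is the observed sequence encoded as integer symbols.

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Classical forward algorithm: returns Pr(Y_{1:T} | lambda).

    pi  : (N,)   initial state probabilities
    A   : (N, N) transition probabilities, A[i, j] = Pr(state j | state i)
    B   : (N, M) emission probabilities,  B[i, o] = Pr(symbol o | state i)
    obs : length-T sequence of integer observation symbols
    """
    alpha = pi * B[:, obs[0]]             # first column of the alpha table
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]     # recursion (1), one column per symbol
    return alpha.sum()                    # Pr(Y_{1:T} | lambda)

# Toy example with N = 2 states and a binary alphabet
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.4, 0.6]])
print(forward_likelihood(pi, A, B, [0, 1, 1, 0]))
```

Note that this unscaled version underflows for long sequences; the numerical-stability section below addresses that.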
The algorithm as linear algebra
In the classical forward algorithm, we compute the columns of $\alpha$ from left to right by the recursion in equation (1). If we can compute the last of these columns, $\alpha_T$, efficiently, we can compute $\Pr(Y_{1:T} \mid \lambda) = \sum_{x_T} \alpha_T(x_T)$. Now let $\alpha_t$ be the column vector containing the $\alpha_t(x_t)$'s, let $B_{o_i}$ be the diagonal matrix having the emission probabilities of $o_i$ on the diagonal, and let
$$C_{o_i} = B_{o_i} A^{*}, \qquad (2)$$
where $A^{*}$ is the transpose of $A$. Then $\alpha_t$ can be computed using only $C_{y_t}$ and the previous column vector $\alpha_{t-1}$: $\alpha_t = C_{y_t}\, \alpha_{t-1}$. Thus the classical forward algorithm can be described as a series of matrix-vector multiplications of length $T-1$, as illustrated in Figure 1:
$$\alpha_T = C_{y_T} C_{y_{T-1}} \cdots C_{y_2}\, \alpha_1. \qquad (3)$$
The classical forward algorithm corresponds to computing this product from right to left, but since matrix-matrix multiplication is associative the product can be computed in any order. Since repeated substrings correspond to repeated matrix-matrix multiplications, the running time can be improved by reusing shared expressions [10,11]. In the following sections we will show how we precompute a grouping of the terms based on $Y_{1:T}$ in order to minimise the workload in the actual computation of the likelihood. To mimic this in the computation described above, we introduce a new $C$ matrix for a frequently occurring pair of symbols $o_i o_j$:
$$C_{o_{M+1}} = C_{o_i} C_{o_j}.$$
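Before turning to the compressed computation, the plain (uncompressed) matrix formulation of equations (2) and (3) can be sketched as follows (again an illustrative NumPy sketch for this summary, not library code); each symbol o is mapped to a matrix C_o, and α_T is obtained by multiplying these matrices onto α_1.

```python
import numpy as np

def forward_matrix_form(pi, A, B, obs):
    """Forward algorithm expressed as matrix-vector products (unscaled)."""
    # One C_o = diag(B[:, o]) @ A^T matrix per observable symbol, equation (2)
    C = [np.diag(B[:, o]) @ A.T for o in range(B.shape[1])]
    alpha = pi * B[:, obs[0]]            # alpha_1
    for y in obs[1:]:
        alpha = C[y] @ alpha             # alpha_t = C_{y_t} alpha_{t-1}
    return alpha.sum()
```

Because the product in equation (3) is associative, identical substrings of the input give rise to identical sub-products of C matrices, which is exactly what the compression described next exploits.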
Exploiting repetitions in the observed sequence
Now notice that we only need to compute this matrix once and can substitute it for all occurrences of $C_{o_i} C_{o_j}$ in equation (3). Hence we can save $n_{o_i o_j}$ matrix-vector multiplications by introducing one matrix-matrix multiplication, potentially saving us a large amount of work. These observations suggest that we can split the computation of the likelihood of a given observed sequence into a preprocessing of the sequence and the actual computation of the likelihood. In the preprocessing phase we compress the observed sequence by repeatedly finding the most frequent pair of symbols $o_i o_j$ in the current sequence and replacing all occurrences of this pair by a new symbol. This is repeated until $n_{o_i o_j}$ becomes too small to gain a speedup (see the next section). The result is a compressed sequence $Y'_{1:T'}$ over an extended alphabet of $M'$ symbols. The compression is identical regardless of the HMM, meaning that we can save it along with the observed sequence and reuse it for any HMM.
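A minimal sketch of this pair-replacement compression is given below (illustrative only; the library's actual preprocessing additionally records the maps and stopping criterion described later). It repeatedly replaces the most frequent pair of adjacent symbols with a new symbol and remembers the constituents of each new symbol.

```python
from collections import Counter

def compress(seq, max_rounds=None):
    """Repeatedly replace the most frequent adjacent pair with a new symbol.

    seq is a list of integer symbols; returns the compressed sequence and a
    map from each new symbol to the pair (left, right) it stands for.
    """
    seq = list(seq)
    symbol2pair = {}
    next_symbol = max(seq) + 1
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        pair_counts = Counter(zip(seq, seq[1:]))
        if not pair_counts:
            break
        (left, right), count = pair_counts.most_common(1)[0]
        if count < 2:                       # nothing worth replacing any more
            break
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == left and seq[i + 1] == right:
                out.append(next_symbol)     # replace the pair by the new symbol
                i += 2
            else:
                out.append(seq[i])
                i += 1
        symbol2pair[next_symbol] = (left, right)
        seq, next_symbol, rounds = out, next_symbol + 1, rounds + 1
    return seq, symbol2pair

compressed, symbol2pair = compress([1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1])
print(compressed, symbol2pair)
```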
The actual computation of the likelihood is then split into two stages. In the first stage we compute $\alpha_1$ and the matrices $C_{o_i}$ for $i = 1, \ldots, M$ using (2). We then compute $C_{o_i}$ for increasing $i = M+1, \ldots, M'$ by $C_{o_i} = C_{l_i} C_{r_i}$, where $l_i$ and $r_i$ are the two constituent symbols of $o_i$. In the second stage, we compute $\alpha_T$ by multiplying the $C$ matrices of the compressed sequence onto $\alpha_1$ from right to left,
$$\alpha_T = C_{y'_{T'}} C_{y'_{T'-1}} \cdots C_{y'_2}\, \alpha_1, \qquad (4)$$
where $y'_2, \ldots, y'_{T'}$ are the symbols of the compressed sequence. This is illustrated in Figure 2, where the actual computation is drawn in solid black, while the work saved due to redundancy is shown in gray.
Compression stopping criterion
While the first iterations of the preprocessing procedure compress the sequence very effectively, the last iterations do not decrease the sequence length by much, since most pairs are uncommon once more characters have been introduced. This is illustrated in Figure 3, where we see that the number of occurrences of the most frequent pair of symbols decreases superexponentially as a function of the number of iterations performed on an alignment of the human and chimpanzee chromosome 1. This means that we potentially save a lot of time on the likelihood computation by performing the first iterations, but as the slope of the curve approaches 0 we risk spending a long time on the preprocessing while saving very little time on the actual likelihood computation. To overcome this problem, we do not compress the input sequence all the way down to a single character. Assume we know that the preprocessing will not be reused for an HMM with fewer than $N_{\min}$ states, and let $t_{mv}$ be the time required for an $(N_{\min} \times N_{\min}) \times N_{\min}$ matrix-vector multiplication and $t_{mm}$ the time required for an $(N_{\min} \times N_{\min}) \times (N_{\min} \times N_{\min})$ matrix-matrix multiplication. In iteration $i$ of the preprocessing we replace the most frequent pair of two symbols in the current sequence and find the most frequent pair of two symbols in the resulting sequence. Thus if $p_i$ is the number of occurrences of the pair found in iteration $i$, $pre_i$ is the time required for iteration $i$, and $e$ is an estimate (given by the user) of the number of times the preprocessing is going to be reused (for example in a number of training procedures each calling forward several times), then, assuming that the matrix-vector and matrix-matrix multiplications dominate the runtime of the actual likelihood computations, the amount of time saved by running iteration $i$ is $e(t_{mv}\, p_{i-1} - t_{mm}) - pre_i$, as we save $p_{i-1}$ matrix-vector multiplications in each likelihood computation at the cost of one new matrix-matrix multiplication. This means that the optimal time to stop the preprocessing is before iteration $j$, where $j$ is the minimal value of $i$ making $e(t_{mv}\, p_{i-1} - t_{mm}) - pre_i$ less than or equal to 0. However, we do not know $pre_i$ before iteration $i$ has been completed, so we estimate it by $pre_{i-1}$. Thus we stop the preprocessing just before iteration $j$, where $j$ is the minimal value of $i$ making $e(t_{mv}\, p_{i-1} - t_{mm}) - pre_{i-1}$ less than or equal to 0. The values $t_{mv}$ and $t_{mm}$ are measured prior to the preprocessing, whereas the user has to supply the estimate $e$ of the number of reuses of the preprocessing and $N_{\min}$. If a single value of $N_{\min}$ cannot be determined, we allow the user to specify a list of state space sizes $(N^1_{\min}, N^2_{\min}, \ldots)$ for which the preprocessing should be saved. If no $N_{\min}$ values are provided, the compression is stopped the first time $p_i = p_{i-1}$.

Figure 1. Classical approach to the forward algorithm. The classical forward algorithm, as described by Rabiner [18]. The rectangles represent matrices and vectors, and the black lines denote matrix-vector multiplications. The top row contains the $C_{o_i}$ matrices; $\alpha_i$ is obtained from $\alpha_{i-1}$ and $C_{o_i}$. Note that the input sequence is inverted to illustrate that the series of matrix-vector multiplications should be carried out from right to left.

Figure 2. Reusing common expressions to speed up the forward algorithm. The actual computation of the likelihood of the observed sequence 10100010011001 given a specific HMM. The rectangles represent matrices and vectors, and the lines between them represent dependencies. In stage 1 the $C_{o_i}$ matrices are computed; the solid black lines show the work performed, while the grey lines show the work saved due to redundancy. $C_2$ is, for example, computed as the product $C_1 C_0$, and this multiplication is saved three times. In the second stage $\alpha_T$ is computed from the compressed sequence and the $C_{o_i}$ matrices; as in Figure 1, $\alpha_i$ is computed from $\alpha_{i-1}$ and $C_{o_i}$.
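The stopping rule described above amounts to a one-line check between compression rounds; a hedged sketch (the variable names are illustrative):

```python
def should_stop(e, t_mv, t_mm, p_prev, pre_prev):
    """Stop before the next round if the estimated time saved by one more new
    symbol, e * (t_mv * p_prev - t_mm), no longer exceeds the estimated cost
    pre_prev of running that round."""
    return e * (t_mv * p_prev - t_mm) - pre_prev <= 0
```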
Numerical stability
All our matrices contain probabilities, so all entries are between 0 and 1. This means that their products will tend towards 0 exponentially fast. The values of these products will normally be stored in one of the IEEE 754 floatingpoint formats. These formats have limited precision, and if the above was implemented naïvely the results would quickly underflow.
If we can make do with $\log \Pr(Y_{1:T} \mid \lambda)$, we can prevent this underflow by repeatedly rescaling the matrices, much in the same way as the columns are rescaled in the numerically stable version of the classical forward algorithm [18]. To make this work in our case, we normalise the result of each matrix-matrix or matrix-vector multiplication performed throughout the algorithm and work with the normalised matrices instead. We first take care of the rounding errors that can propagate through the first stage of the likelihood computation if the dependency graph between the new symbols is deep. For each symbol $o_i$ in the original observed sequence, let $s_{o_i}$ be the sum of all entries in $C_{o_i}$ and let $\bar{C}_{o_i} = C_{o_i} / s_{o_i}$ be the corresponding normalised matrix. For each new symbol $o_i$ with constituents $l_i$ and $r_i$, let $s_{o_i}$ be the sum of all entries in $\bar{C}_{l_i} \bar{C}_{r_i}$ and let $\bar{C}_{o_i} = (\bar{C}_{l_i} \bar{C}_{r_i}) / s_{o_i}$. Unfolding these definitions shows that each $C_{o_i}$ equals $\bar{C}_{o_i}$ multiplied by the product of the $s$ values of $o_i$ and of all the symbols it is (recursively) composed of; we denote this accumulated factor $m_{o_i}$. Thus, to handle the underflow in the first stage, we compute $s_{o_i}$ along with $\bar{C}_{o_i}$ for $i = 1, \ldots, M'$ (see Figure 4) and account for the accumulated factors in the second stage of the likelihood computation. However, the $\bar{C}_{o_i}$ matrices still only contain values between 0 and 1, and their product will therefore still tend towards 0 exponentially fast, causing underflow. To prevent this we introduce a scaling factor $d_i$ for each of the $T'-1$ matrix-vector multiplications in (4), set to the sum of the entries in the resulting vector. Each $d_i$ is used twice: first we normalise the corresponding resulting vector by dividing each entry by $d_i$, and then we use it to restore the correct result at the end of the computation. Assume that $\bar{\alpha}_T$ is the result of the $T'-1$ normalised matrix-vector multiplications. Then $\Pr(Y_{1:T} \mid \lambda)$ is the sum of the entries of $\bar{\alpha}_T$ multiplied by all the scaling factors that were divided out along the way. Notice, however, that we now risk getting an underflow when computing this product of scaling factors if $T$ is big. We handle this by working in log-space, computing
$$\log \Pr(Y_{1:T} \mid \lambda) = \log\Big(\sum_{x_T} \bar{\alpha}_T(x_T)\Big) + \sum_{i} \log d_i + \sum_{t=2}^{T'} \log m_{y'_t},$$
where $m_{y'_t}$ is the factor accumulated in the first stage for compressed-sequence symbol $y'_t$.
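A numerically stable variant of the earlier unscaled sketch, rescaling after every matrix-vector product and accumulating the scaling factors in log-space, could look as follows (an illustrative sketch of the general technique, not the library's code; for brevity it works directly on the uncompressed sequence and therefore does not show the first-stage $s_{o_i}$ bookkeeping):

```python
import numpy as np

def forward_loglikelihood(pi, A, B, obs):
    """Scaled forward algorithm returning log Pr(Y_{1:T} | lambda)."""
    C = [np.diag(B[:, o]) @ A.T for o in range(B.shape[1])]
    alpha = pi * B[:, obs[0]]
    d = alpha.sum()                  # scale alpha_1 as well
    alpha = alpha / d
    log_likelihood = np.log(d)
    for y in obs[1:]:
        alpha = C[y] @ alpha
        d = alpha.sum()              # scaling factor d_i for this step
        alpha = alpha / d
        log_likelihood += np.log(d)  # accumulate in log-space to avoid underflow
    return log_likelihood
```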
Practical implementation details
In our implementation of the preprocessing phase described above, we simply build a map symbol2pair, mapping each new alphabet symbol $o_i$ to its two constituents $(l_i, r_i)$. In each scan every pair of symbols is counted, and the most frequent pair in the previous round is replaced by a new symbol. The data structure being saved in the end is symbol2pair along with two other maps: nstates2alphabetsize and nstates2seq.
When the likelihood is later computed for an HMM with $N$ states, the compressed sequence to use and the size of the corresponding extended alphabet are looked up using the two maps created in the first stage (nstates2seq and nstates2alphabetsize). To obtain maximal performance, we use a BLAS implementation for C++ to perform the series of matrix multiplications. Our implementation uses $O(Tk)$ space in the preprocessing phase and $O(N^2 (T + M'))$ space in the actual computation, where $k$ is the number of $N_{\min}$ values supplied by the user, $N$ is the number of states in the HMM used in the actual computation, and $M'$ is the number of symbols in the extended alphabet corresponding to $N$ in nstates2alphabetsize. If the preprocessed data structure is saved to disk, it will take up $O(Tk)$ space.
We have also implemented the algorithm in a parallelised version. In this version, stage 2 is parallelised much like the implementation in parredHMMlib [11], where the series of matrix multiplications in the second stage is split into a number of blocks which are then processed in parallel. Stage 1 could clearly also be parallelised by computing independent $\bar{C}_{o_i}$ matrices in parallel. However, we found that this does not work well in practice, as the workload in stage 1 is not big enough to justify the parallelisation; stage 1 is therefore not parallelised in the library. The parallelisation of stage 2 gives the greatest speedup for long sequences that are not very compressible. This is because the parallelisation in general works best for long sequences [11], and if the input sequence is very compressible then the compressed sequence will be short and more work will be done in the non-parallelised stage 1. The experiments presented in the next section have all been run single-threaded to get a clearer picture of how the runtime of the basic algorithm is influenced by the characteristics of the input sequence and model. In general, however, a slightly faster running time can be expected if parallelisation is enabled, especially for long sequences of high complexity.
Results and discussion
We have implemented the above algorithms in a C++ library named zipHMM. The code provides both a C++ and a Python interface to the functionality of reading and writing HMMs to files, preprocessing input sequences and saving the results, and computing the likelihood of a model using the forward algorithm described in the previous section. The library uses BLAS for linear algebra operations and pthreads for multi-threaded parallelisation.
Using the library
The library can be used directly in C++ programs or through Python wrappers in scripts.
Using zipHMM from C++
When using the library in C++ the most important objects are from the Forwarder class, which is responsible for preprocessing sequences, reading and writing the preprocessed data structure, and computing the likelihood of a hidden Markov model. The code snippet in Figure 5(a) shows a complete C++ program that reads in an input sequence, f.read_seq(...), preprocesses it (as part of reading in the sequence), stores the preprocessed structure to disk, f.write_to_directory(...), reads in an HMM from disk, read_HMM(...), and computes the likelihood of the HMM, f.forward(...).
The sequence reader takes the alphabet size as a parameter. This is because we cannot necessarily assume that the observed symbols in the input sequence cover all the possible symbols the HMM can emit, so we need to know the alphabet size explicitly. It furthermore takes an optional parameter in which the user can specify an estimate, e, of the number of times the preprocessing will be reused; the default value of this parameter is 1. If the preprocessed sequence is already stored on disk, we can simply read that instead, like this: Forwarder f; number_of_states = 4; f.read_from_directory("example_preprocessed", number_of_states); This will cause the saved sequence matching $N_{\min} \le 4$ to be read from the directory example_preprocessed together with additional information on the extended alphabet used in this sequence.
In the library, HMMs are implicitly represented simply by a vector and two matrices, the π vector of initial state probabilities and the transition, A, and emission, B, matrices as described in the Implementation section. These are all represented in a Matrix class, and in the program in Figure 5(a) these are read in from disk. They can also be directly constructed and manipulated in a program. In our own programs we use this, together with a numerical optimisation library, to fit parameters by maximising the likelihood.
The f.forward(...) method computes the likelihood sequentially using the preprocessed structure. To use the multi-threaded parallelisation, one simply calls the f.pthread_forward(...) function with the same parameters instead.
For completeness the library also offers implementations of the Viterbi and posterior decoding algorithms. To use these in C++ the headers viterbi.hpp and posterior_decoding.hpp should be included and the functions viterbi(...) and posterior_decoding(...) should be called as described in the README file in the library.
Using zipHMM from Python
All the C++ classes in the library are wrapped in a Python module so the full functionality of the zipHMM is available for Python scripting using essentially the same API, except with a more Python flavour where appropriate, e.g. reading in data is handled by returning multiple values from function calls instead of pass-by-reference function arguments and with a more typical Python naming convention. Figure 5(b) shows the equivalent of the C++ code in Figure 5(a) in Python.
Performance
To evaluate the performance of zipHMM we performed a number of experiments using a hidden Markov model previously developed to infer population genetics parameters of a speciation model. All experiments were run on a machine with two Intel Sandy Bridge E5-2670 CPUs, each with 8 cores running at 2.67 GHz and having access to 64 GB of main memory. We compare the performance of our forward algorithm to the performance of the implementations of the forward algorithm in HMMlib [19] and in parredHMMlib [11] and to a simple implementation of equation (3) using BLAS to perform the series of matrix-vector multiplications. HMMlib is an implementation that takes advantage of all the features of a modern computer, such as SSE instructions and multiple cores. The individual features of HMMlib can be turned on or off by the user, and we recommend only enabling these features for HMMs with large state spaces. In all our experiments we enabled the SSE parallelisation but used only a single thread. The parredHMM library implements equation (3) as a parallel reduction, splitting the series of matrix multiplications into a number of blocks and processing the blocks in parallel. The parredForward algorithm was calibrated to use the optimal number of threads.
For performance evaluation we wanted to evaluate how well the new algorithm compares to other optimised forward implementations, evaluate the trade-off between preprocessing and computing the likelihood, and explore how the complexity of the input string affects the running time.
Our new implementation of the forward algorithm is expected to perform best on strings of low complexity because they are more compressible. To investigate this we measured the per-iteration running time of the forward algorithm for parredHMMlib, HMMlib and the simple implementation of equation (3) on random binary sequences (over the alphabet {0, 1}) of length $L = 10^7$ with the frequency of 1s varying from 0.0001 to 0.05, and divided it by the per-iteration running time for zipHMMlib (excluding the preprocessing time) to obtain the speedup factor. This experiment is summarised in Figure 6, where we note that the speedup factor decreases linearly with the complexity of the input sequence; however, speedup factors of more than two orders of magnitude are obtained for less complex sequences, and even for sequences of high complexity a (modest) speedup is obtained.
In the rest of our experiments, we used a coalescent hidden Markov model (CoalHMM) from [17] together with real genomic data. A CoalHMM [7,8,12] exploits changing gene-trees along a genomic alignment to infer population genetics parameters. The "Isolation-with-Migration" CoalHMM from [17] considers a pairwise alignment as its observed sequence and a discretisation of the time to the most recent common ancestor, or "coalescence time", of the aligned sequences as its hidden states. The coalescence time can change from any point to another, so the transition matrix of the CoalHMM is fully connected, and the number of hidden states can be varied depending on how fine-grained we want to model time. Varying the number of states lets us explore the performance as a function of the number of states. The performance as a function of the length of the input was explored by using alignments of varying length. Finally, to explore how the complexity of the string affects the performance we used alignments of sequences at varying evolutionary distance, since more closely related genomes have fewer variable sites and thus the alignments have lower complexity. The CoalHMM model uses a Jukes-Cantor model in its emission probabilities and thus only distinguishes whether a specific site has two identical or two different nucleotides in the alignment. We therefore also varied the complexity of the strings by compressing either the actual sequence alignment or a summary of it over an alphabet of size three. For all experiments we trained the model using the Nelder-Mead simplex algorithm and measured the preprocessing time and total optimisation time, and the expected number of likelihood computations was set to e = 500. Figure 7(a) shows how the performance of zipForward changes when the size of the model is increased. We note that the total time, as expected, depends very heavily on the number of states (the time complexities of stage 1 and stage 2 are cubic and quadratic in the number of states, respectively), while the time used on preprocessing, also as expected, varies very little and shows no clear pattern as the number of states is increased. Figure 7(b) shows how the runtime for training the CoalHMM with zipForward changes when the sequence length is varied. We expected the runtime to increase with the sequence length; however, this is not what the results show for the shorter sequences. This is due to the optimisation procedure, which required more iterations of the likelihood computation for the shorter sequences than for the longer sequences. For the longest sequences the runtime grows sublinearly, which was expected, since longer sequences often compress relatively more than shorter sequences.
We expected alignments of sequences at short evolutionary distance to be more compressible than alignments of sequences at longer evolutionary distance, and therefore expected the training procedure to be faster for alignments of sequences at short evolutionary distance.
We see this in Figure 7(c), except for the sequences with M = 16, where the human-orangutan alignment was processed faster than the human-gorilla alignment. However, this was caused by the trade-off between the time for the preprocessing and the time for the actual training procedure: the preprocessing took significantly longer for the human-gorilla alignment (because it was more compressible) than for the human-orangutan alignment, but this extra time was not entirely gained back in the training procedure, although the compressed sequence was indeed shorter (the per-iteration running time was 7.913 s for the human-gorilla alignment and 8.488 s for the human-orangutan alignment).
We also expected the total time of the training procedure to increase as the number of symbols in the initial alphabet was increased, because sequences with small initial alphabets are expected to be more compressible than sequences with larger initial alphabets. But as Figure 7(c) shows, the sequences with an initial alphabet of size M = 25 were processed faster than the sequences with an initial alphabet of size M = 16. This is again caused by the optimisation procedure, which converges faster for the sequences with M = 25 than for the sequences with M = 16 (e.g. for the human-gorilla alignments, the number of evaluations of the likelihood was 860 and 1160 for M = 25 and M = 16, respectively). This may be a result of the sequences with M = 25 containing more information than the sequences with M = 16. The lengths of the compressed sequences and the per-iteration running times match our expectations, and for the sequences with M = 3 and M = 25, which contain the same data, the algorithm behaves as expected. Figure 7(d) shows the performance of the four different implementations of the forward algorithm. Each of the four algorithms was used to train the CoalHMM on an alignment of the entire human and chimpanzee chromosome 1, using an HMM with 16 states and an initial alphabet of size 3. The training procedure was finished in 7.4 minutes (446 seconds) using zipForward, including the preprocessing time. This gives a speedup factor of 77.7 compared to the previously fastest implementation using parredHMMlib, which used 9.6 hours (34,657 seconds). It is therefore evident that zipForward is clearly superior to the three other implementations on this kind of input. The time used per iteration of the likelihood computation was 0.5042 seconds for zipHMMlib, while it was 46.772 seconds for parredHMMlib, leading to a speedup of a factor 92.8 on the actual optimization procedure (excluding preprocessing time). Repeating the same experiment on full alignments over alphabets of size 16 and 25 (not shown here), where zipForward clearly performs worse than for sequences with alphabets of size 3 (see Figure 7(c)), we still obtained total speedup factors of 4.4 for both experiments.
Conclusions
We have engineered a variant of the HMM forward algorithm to exploit repetitions in strings to reduce the total amount of computation, by reusing shared subexpressions. We have implemented this in an easy-to-use C++ library, with a Python interface for use in scripting, and we have demonstrated that our library can be used to achieve speedup factors of 4-78 for realistic whole-genome analyses with a reasonably complex hidden Markov model. | 6,728.4 | 2013-11-22T00:00:00.000 | [
"Computer Science"
] |
Energy-aware adaptive resource management model for software-defined networking-based service provider networks
Service provider networks (SPNs) seek to implement the resource management mechanisms of load balancing and energy saving to satisfy today's networking demands. Software-defined networking (SDN) allows SPNs to perform agile and efficient networking operations, yet how to implement these resource management mechanisms in SDN-based SPNs is still an open issue. An energy-aware adaptive resource management model (ERM) for SDN-based SPNs is proposed. Load balancing aims to distribute the overall network traffic volume as fairly as possible among available links. Energy saving aims to turn off as many network devices and links as possible. Unlike the works in the literature, our model utilises the trade-off between load balancing and energy efficiency. To do so, the controller establishes paths among edge pairs. The controller aggregates traffic load between edge pairs onto as few links as possible based on a pre-defined load level, and unused network elements then switch to sleep mode. In the case of load balancing, the controller distributes traffic load among active network resources. Experimental results show that ERM can successfully increase both energy saving and resource utilization even under heavy traffic loads.
| INTRODUCTION
Advancements in both wired and wireless networks and the proliferation of mobile devices and applications have changed the way we use the Internet. Within the last two decades, applications and services such as e-commerce, online gaming, social media, cloud services, and the Internet of Things (IoT) have appeared. Accordingly, the volume of Internet traffic has increased, and Internet traffic characteristics have changed dynamically [1,2]. Due to the lack of smart and autonomous systems, service providers (SPs) usually provision their network resources, such as bandwidth, at full capacity to satisfy diverse user demands and accommodate traffic bursts. Thus, network devices such as routers and switches are active all the time, even when network traffic is relatively low compared to peak times [3]. This results in 2%-7% of the world's electricity being consumed by Information and Communication Technologies (ICTs) [4,5]. Moreover, the energy consumption of ICTs has been growing by 3% every year [6]. SPs therefore need smart network management systems that not only increase resource utilization but also decrease energy consumption.
Software-defined networking (SDN) [7] is a networking paradigm that emerged to satisfy today's networking demands. It separates the control and data planes to provide centralised management, virtualisation, agility, programmability, and efficiency. These abilities attract the attention of SPs, since they aim to increase resource utilization, decrease operational costs, and protect the environment. SDN-based resource management models proposed in the literature perform either traffic load balancing, to increase resource utilization, or energy saving, to decrease costs and protect the environment. The works that focus on load balancing distribute the network traffic as fairly as possible among links. Those that focus on energy saving aggregate network traffic in order to put as many links and switches as possible into sleep mode. These two operations are therefore usually considered opposites. To the best of our knowledge, our work is the first to focus on utilising the trade-off between these two resource management mechanisms in SDN-based networks. Apart from this, the SDN controller may become a bottleneck in terms of decision making and signalling as the network size and the number of flows increase. How to perform these operations scalably in SDN-based Service Provider Networks (SPNs) is still an open issue.
In this work, we propose an energy-aware adaptive resource management model (ERM) that utilises the trade-off between energy saving and load balancing. The proposed model is built upon our previous works [8,9]. Briefly, pre-established multi-paths between edge (ingress-egress) nodes abstract the complex physical network into a simple virtual one. On top of this abstraction, the controller performs routing, admission control, signalling, and load balancing in a scalable manner by taking paths into account. Unlike the works in the literature, we focus on utilising the trade-off between energy saving and load balancing; to the best of our knowledge, this is the first work in the literature in which energy-saving and load balancing operations work in harmony. To make this happen, we introduce a trade-off value that determines the importance of these two operations relative to each other. If the trade-off value favours energy saving, the controller aggregates edge traffic onto fewer paths and puts unutilised network elements such as links and switches into sleep mode. If the trade-off value favours load balancing, the controller saves less energy and focuses on distributing the traffic among paths. In line with the explanations above, we summarise the contributions of the paper below.
• To the best of our knowledge, this is the first work in the literature that balances the trade-off between load balancing and energy saving. Depending on the traffic volume of the network, the network administrator can adjust the level of energy saving.
• Adaptive energy-saving and load balancing mechanisms are proposed to provide scalable resource management.
• The novel load balancing mechanism works in harmony with the proposed energy-saving mechanism. To make this happen, the controller balances the traffic of edge pairs among active paths.
• The controller can communicate with data plane elements over the modified OpenFlow [10] protocol to put them to sleep and wake them up.
• The controller performs routing, admission control, signalling, and resource management operations in a scalable manner by exploiting pre-established multi-paths.
• The proposed model can be applied to non-SDN-based SPNs by replacing the switches at the edge with SDN-capable ones.
The rest of this work is organised as follows. Section 2 discusses related works. Section 3 describes background information about the proposed model. Section 4 presents design and implementation details of the proposed model. Section 5 presents the experimental results and the evaluation. Finally, the work is concluded in Section 6.
| RELATED WORKS
This section briefly reviews the related works on energy saving in SDN-based networks. Basically, two approaches have been adopted for energy efficiency in SDN-based networks: the link rate adaptation approach (LRAA) and the power-down approach (PDA). In the first approach, the link rate is adapted to the traffic load. In the power-down approach, ports, line cards, and integrated chassis of routers and switches are either turned off or put into sleep mode. The LRAA contributes less energy saving than the power-down approach [11,12]; however, the power-down approach causes routing oscillation and delay [13]. There are also some works that exploit both approaches.
The idea in Reference [14] is to reroute flows on existing paths to adjust link loads in a way that maximises LRAA-based energy saving. The idea is formulated as a mixed integer linear program (MILP), and three greedy algorithms as well as one genetic-algorithm-based heuristic (GA) are proposed for redirecting flows over existing paths. Experimental results show that GA outperforms the greedy algorithms. In this work, several paths are computed per pair before rerouting whenever the controller executes the proposed algorithms; thus, control plane and signalling scalability are limited. Although two real networks are used for the assessment of the proposed algorithms, real traffic traces are not used. In addition, there are no admission control or load balancing mechanisms.
The following works utilise the PDA to save energy. The work in [15] provides a 0-1 Integer Linear Programming (ILP) model that maximises energy saving globally by taking the energy consumption of integrated chassis, line cards, and ports of routers into account. It proposes two greedy algorithms, namely the Alternative Greedy Algorithm (AGA) and the Global Greedy Algorithm (GGA), for energy saving. These two algorithms mainly reroute flows on different paths when the network has a relatively low traffic load. This work has routing and signalling scalability issues: the controller performs rerouting per flow and communicates with all the switches along the new path for each flow.
The work in [16] combines IEEE 802.3az, that is, Energy Efficient Ethernet, with SDN-based Segment Routing (SR) to save energy. The SDN controller computes link costs based on two metrics, EAGER and CARE. The EAGER metric takes link utilization into account, while the CARE metric takes both link utilization and a congestion threshold into account. The two energy-saving algorithms proposed in the work are Tunnelling TE (TTE) and Non-tunnelling TE (NTE). In TTE, the controller performs path computation per source-destination pair in the first phase, then establishes SR tunnels, and in the final phase puts unused links into sleep mode. In NTE, the controller computes paths and decides which links will be put into sleep mode; in the second phase, there is no tunnel establishment and the controller only puts the designated links into sleep mode. Finally, the controller computes ECMP routes between source and destination pairs using awake links. This work is in fact similar in idea to our preliminary work [9]. Routing and signalling scalability are achieved via pre-determined paths. However, there is no contribution regarding admission control, and although the authors state that load sharing is performed, they do not explain how this is achieved.
The Software Defined Green Traffic Engineering (SDGTE) framework [17] minimises the number of active links and switches in backbone networks without knowing the future traffic demand. The authors provide an ILP definition of the energy-efficient routing problem. Whenever a flow arrives, the controller performs an ILP-based routing computation. Apart from energy-efficient routing, SDGTE reroutes flows on both under-utilised and over-utilised links to further minimise energy consumption and to reduce link utilization. In the under-utilization case, links whose utilization is below a predefined threshold are identified, the flows on those links are rerouted, and the controller then puts these links into sleep mode. In the over-utilization case, the controller identifies the over-utilised links and re-routes the flows on those links to reduce link utilization. Apparently, the controller performs a routing computation per flow whenever a new flow arrives and also performs re-routing in cases of over- and under-utilization.
The authors in [18] propose a multi-objective routing approach in a multi-controller control plane. To achieve these objectives, a multi-objective evolutionary algorithm, called SPEA2, is developed. SPEA2 performs routing in a way that minimises energy consumption and traffic delay without degrading performance, while taking both controller-to-switch and switch-to-switch loads into account. Whenever the controller receives a flow request, SPEA2 calculates a path taking data and signalling load into account, and the controller subsequently establishes a path for the requested flow. The control plane in this work has multiple controllers and thus satisfies routing and signalling scalability to some extent.
The Green Application Layer (GAL) is an ETSI standard [19] that provides a framework for exchanging information between the control and data planes. The work in [20] integrates GAL with SDN to exchange information regarding the power management of data plane entities with a controller. This model adopts both the PDA and the LRAA. It proposes an ILP model for the optimal allocation of resources based on the network load and the actions in flow tables. The experimental network in this study is small, and the controller may become a bottleneck as the network size, the number of flows, and the number of actions in the flow tables of switches increase. Besides, a signalling scalability issue appears as flows are rerouted.
The authors in [21] aim to minimise the number of active links and to adapt discrete link rates to the traffic load in order to save energy. Firstly, they provide a mixed-integer programming formulation of the problem. Secondly, they propose a heuristic algorithm, which identifies the most energy-consuming flows and reroutes them on alternative paths to reduce energy consumption. The proposed algorithm first computes the shortest path for each flow and then calculates the energy consumption of the whole network; the value found at this stage is the upper bound on energy consumption. In the third stage, the algorithm removes each flow one by one to identify its impact on energy consumption, generating a weighted graph for each flow. Finally, the controller computes the k shortest paths and selects the one with the least energy consumption as the route. Obviously, the scheme has routing and signalling scalability issues, since the controller performs per-flow routing and rule installation.
Researchers in [22] propose two algorithms for the allocation of Virtual SDNs (VSDNs) in a reliable and energy-efficient manner. Relative Disjoint Path (RDP) generates two trees based on a redundancy factor and then merges the two trees to obtain a graph in which links and nodes exist in both. In this way, there is only one path if the redundancy factor is 0; similarly, there are two disjoint paths if the redundancy factor is 1, and otherwise one or more links are shared between the two paths. The State-Aware Bandwidth and Energy Efficiency (SA-BEE) algorithm allocates VSDNs based on available bandwidth and an energy consumption factor. It first adaptively increases or decreases the energy consumption factor based on the network state, then generates a weighted graph from a source node to all other nodes, and finally looks for lower-energy-consuming paths between the source node and all other nodes.
In GreSDN [23], the controller performs energy-efficient routing without re-routing flows. It maintains two topologies: the physical topology, which contains all the links and switches, and the virtual topology, which contains only awake links. The controller performs per-flow routing with these two topologies in mind. If there is an inactive link along the path, the controller sends a signal to the corresponding switch to wake the link up. Meanwhile, the device management module within the controller periodically checks the state of links; if a link is not used, it puts the link into sleep mode. The authors propose two routing algorithms, the Constant Weight Greedy Algorithm (CWGA) and the Dynamic Weight Greedy Algorithm (DWGA). Both algorithms perform path computation on two graphs, which are generated per request. At graph generation time, the controller removes any link on which the sum of the current load and the requested bandwidth exceeds the capacity, on both the physical and virtual graphs. Finally, CWGA and DWGA perform the routing computation based on static and dynamic link costs, respectively. Apparently, the controller performs per-flow routing and thus suffers from a severe routing scalability issue. Besides, the controller must communicate with all the switches along the path during path establishment, which also results in a signalling scalability issue.
In the work [24], the authors propose a heuristic algorithm (ETALSA) for energy saving via powering links up/down only. ETALSA, which runs on the controller, takes energy prices into account, unlike the other works. During the execution of ETALSA, the controller iteratively selects switches from the one with the highest energy cost to the one with the lowest. Then, it computes a utility value per link connected to the selected switch. The utility value is computed based on the connectivity of switches, traffic demand, and energy prices.
Finally, the controller powers a link down based on the utility value if doing so reduces energy consumption. This work resembles our preliminary work [9] in that pre-established multi-paths exist both for scalability and for resource management. The main differences are that the work in [24] powers down only the links and takes energy prices into account.
Consequently, the works in the literature mostly perform either load balancing or energy saving. Our work differentiates itself from the works in the literature by performing energy saving in harmony with load balancing.
| BACKGROUND
ERM relies on pre-established paths (PEPs) that connect ingress and egress nodes. The controller performs the resource management operations, namely energy saving, load balancing and path capacity updates, based on PEPs. Routing, admission control, and signalling scalability are also achieved by means of PEPs, as presented in our previous work [8]. Beyond that previous work, we propose ERM for SDN-based SPNs. The proposed model exploits the trade-off between energy saving and load balancing. We propose a novel energy-saving algorithm that adapts to the traffic load. Additionally, the previously proposed load balancing and resizing algorithms are revised so that the controller performs these operations in harmony with the energy-saving operation.
For the purpose of clarification, Tables 1 and 2 summarise main acronyms and notations used in this work, respectively.
| Architectural overview
The proposed model is illustrated in Figure 1. The data plane resides at the bottom and consists of provider edge and core switches. The ingress SDN switch (ISS) and egress SDN switch (ESS) are the switches where traffic enters and leaves the network, respectively. An edge switch is both an ISS and an ESS at the same time because traffic is bidirectional. In addition to edge switches, there are core switches, which we name core SDN switches (CSSes); a set of CSSes connects the edge switches and solely performs packet forwarding. The proposed controller resides in the upper layer. It is logically centralised and mainly responsible for resource management (i.e. load balancing, active capacity resizing, and energy saving), routing, and admission control operations. To do so, the controller abstracts the single physical network into multiple virtual networks using PEPs between edge switches, and all operations are performed based on these paths. The two layers in Figure 1 communicate with each other via a signalling protocol (e.g. OpenFlow [10]).
The proposed model has PEPs between each edge pair (EP), such as ISS 1 -ESS 1 in Figure 1. These paths are virtual and simplify the complex physical network: the simpler virtual network consists only of edge switches and PEPs. The controller performs resource management, routing, admission control, and signalling based on the virtual network. Briefly, the controller aggregates flows into fewer paths adaptively to the traffic load and then puts unused links and switches into sleep mode for energy saving. It also equalises the cost of paths that belong to the same EP to achieve load balancing. Additionally, the controller adjusts the capacity of PEPs according to the traffic load to further improve resource utilization.
Notice that the controller does not perform routing and admission control operations per flow. For routing, the controller simply assigns a flow to a path. For admission control, the controller decides based on the state of the paths, which is stored in both the link state database (LSDB) and the path state database (PSDB). PEPs also allow the controller to manage the whole network with less signalling, because the controller does not communicate with each node along the path per flow during routing, admission control, and resource management operations; it only communicates with the edge switches. Therefore, scalable routing, admission control, and signalling are achieved thanks to PEPs.
| Network abstraction
The physical network is represented by a graph G = (V, E). The network administrator decides how many PEPs each pair has. In the scope of this work, we assume that PEPs already exist in the network. Paths between EPs can be established using Virtual LAN (VLAN), MPLS, or SR. Accordingly, we do not develop any multi-path computation algorithm, as several algorithms are already available in the literature, such as [25][26][27]. |P^{s,d}|, the number of PEPs per pair (V_s, V_d), is defined by the network administrator. Note that PEPs can be established with respect to any cost metric, such as residual bandwidth or latency.
Notice that any two paths belonging to different pairs may share one or more physical links. We call such paths and pairs neighbour paths and neighbour pairs, respectively. For the sake of simplicity, the controller virtualises links based on the number of neighbour paths per link. We denote the virtual link that serves a path P_k^{s,d} as L_i^{s,d,k}. The capacity of a physical link, denoted φ(E_i), is portioned out to its virtual links based on the load of the neighbour paths. Accordingly, a path capacity φ(P_k^{s,d}) becomes the capacity of its virtual link with the minimum capacity, min_i φ(L_i^{s,d,k}). For example, two paths share the physical link E_6 in Figure 1, so its capacity is split between the corresponding virtual links.
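To make the capacity-sharing rule concrete, the following is a minimal Python sketch of how a physical link's capacity could be portioned among its neighbour paths and how a path capacity then follows as the minimum over its virtual links. The function names, the load-proportional split, and the dictionary bookkeeping are assumptions made for illustration, not the authors' implementation.

```python
def virtual_link_capacities(phys_capacity, path_loads):
    """Share a physical link's capacity among its neighbour paths in
    proportion to their loads (equal split when all loads are zero)."""
    total = sum(path_loads.values())
    if total == 0:
        share = phys_capacity / len(path_loads)
        return {pid: share for pid in path_loads}
    return {pid: phys_capacity * load / total for pid, load in path_loads.items()}


def path_capacity(pid, path_links, virtual_caps):
    """phi(P) = minimum, over the path's links, of the virtual-link capacity
    assigned to this path; virtual_caps[e][pid] is that slice for link e."""
    return min(virtual_caps[e][pid] for e in path_links)
```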
| Controller design
The controller has two databases; four modules for routing, admission control, and resource management; and three helper modules. The two databases are maintained in the background without interfering with the operations performed by the modules. The routing, admission control, and resource management modules have their own main tasks but can cooperate with the helper modules.
The LSDB keeps track of the physical network elements, namely links and switches. Its fields include the identifier of E_i, the datapath identifiers (DPIDs) and port numbers of the switches that E_i connects, the maximum transfer rate, the power consumption, and the link load. Note that, unlike the others, link load is not a static value; it changes over time with the network load. The PSDB keeps track of PEPs and related information. This database is the virtualised version of the physical network, and the controller mostly uses it for network management purposes. The PSDB contains both static and dynamic data. The static information includes the path identifier (PID), the datapath identifiers of the edge nodes (i.e. ISS and ESS), and the links that form the path. The dynamic data include path capacity and path load.
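As a rough illustration of the two databases, the records below mirror the fields listed above. The field names and types are assumptions for this sketch; the actual schema is not specified in the text.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LinkRecord:                 # one LSDB entry per physical link E_i
    link_id: int
    endpoints: Tuple[str, str]    # DPIDs of the switches it connects
    ports: Tuple[int, int]        # port numbers on those switches
    max_rate_mbps: float          # maximum transfer rate
    power_watts: float            # power consumption when active
    load_mbps: float = 0.0        # dynamic: changes with the network load

@dataclass
class PathRecord:                 # one PSDB entry per pre-established path
    pid: int                      # path identifier
    iss_dpid: str                 # ingress edge switch
    ess_dpid: str                 # egress edge switch
    links: List[int] = field(default_factory=list)  # static: member links
    capacity_mbps: float = 0.0    # dynamic: re-sizeable virtual capacity
    load_mbps: float = 0.0        # dynamic: current path load
```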
The routing and admission control module (RACM) allows flows to be routed over the associated edge nodes. For routing, the controller assigns the incoming flow to a path of the corresponding pair. For admission control, the controller checks the available resources before making a routing decision, thereby preventing resource overload. If resources are available, the controller routes the flow and follows the standard routing procedure.
Otherwise, it simply rejects the flow. The load balancing module (LBM) equalises the cost of paths per pair. It first computes the path costs of the corresponding pair and calculates the equalisation cost; it then shifts flows from overloaded paths to under-loaded paths. The path resizing module (PRM) updates the capacity of paths depending on the loads of neighbour paths and pairs; whenever the load of a path exceeds a certain threshold, the resizing procedure is invoked. The energy saving module (ESM) aggregates flows into fewer paths and then puts network elements that are not needed into sleep mode.
The path management module (PMM) establishes, updates, or removes PEPs. The information collection module (ICM) collects data from the underlying switches and performs the calculations needed to extract the information required for routing, admission control, load balancing, capacity resizing, and energy-aware resource management; the controller maintains all this information in its databases. The inter-controller communication module (ICCM) is responsible for communicating with other controllers in the control plane. This module protects the network from the single point of failure problem: if a single controller stops working, all routing, admission control, and resource management operations stop. In this work, we do not address this issue further, as related solutions are discussed in [28]. A simple solution for our model is to deploy two or more controllers in the control plane. One of them is the master, and the others, a.k.a. backup controllers, periodically check whether the master is alive. If the master controller fails, one of the backup controllers takes over. As the proposed model maintains the network state in databases, a backup controller can directly connect to the LSDB and PSDB to manage the whole network. Thus, the overhead of taking control in the case of controller failure is minimal.
| Energy saving
There is a trade-off between optimal energy saving and computation overhead. Under today's highly dynamic Internet traffic, the controller may become a bottleneck while computing the optimal energy-saving configuration. Additionally, the optimal energy state may not last long due to traffic fluctuations. Thus, we prioritise scalability over optimal energy saving. To do so, our controller performs EP-based energy-saving computation. Upon completion of the pair-based optimisation, the controller aggregates the pair traffic onto the designated paths. Subsequently, it deactivates idle links and switches across the whole network. This local energy saving converges to global energy saving over time.
To clarify how energy can be saved by aggregating traffic over fewer paths, consider the network illustrated in Figure 1. The paths P_2^{1,1} and P_1^{2,1} are neighbours because they share the links E_6 and E_9. Assuming that these paths have enough capacity to carry their pair loads, the controller aggregates the whole network traffic onto P_2^{1,1} and P_1^{2,1}. It then deactivates the remaining four core switches and six links to save energy.
The first step of energy-aware path management is to determine the most energy-efficient subset of the paths of a pair P^{s,d} while keeping the total capacity φ(P^{s,d}) greater than or equal to the pair load ϑ(P^{s,d}). The time complexity of the brute-force solution, generating all combinations (Θ(2^n)) and searching each for the best one (Θ(n)), is exponential. We therefore propose a polynomial-time algorithm called the energy-aware path management algorithm (EPMA), given as Procedure 1.
PP is a predefined utilization ratio (e.g. 70%). It drives the controller to aggregate as much of the desired load onto a path as possible. For instance, setting PP to 70% pushes the controller to bring the path utilization to around 70% while preventing over-utilization. According to Equation (1), PDV(P_k^{s,d}) changes exponentially with the distance between the path utilization and PP, which forces the controller to reach the desired utilization as fast as possible.
The time complexity of EPMA is O(n · |φ(P^{s,d}) − ϑ(P^{s,d})|). Notice that EPMA is designed for a single pair; the total time complexity for the whole network therefore becomes O(n · |φ(P^{s,d}) − ϑ(P^{s,d})| · B), where B is the number of pairs. As soon as the controller determines the future active and passive paths, it may need to shift flows from the paths that will become passive to the paths that will remain active or have just been activated. In such cases, the controller first determines the current and new states of the paths. There are four states, defined in Table 3. If there are paths in State 1, the controller shifts the flows in these paths to paths that are either in State 0 or State 2.
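One plausible greedy realisation of the per-pair selection performed by EPMA is sketched below: paths are considered in order of increasing assumed energy cost (standing in for the PNR·PDV value used by the authors) until the selected capacity covers the pair load. This is an illustrative sketch under those assumptions, not the published Procedure 1.

```python
def epma_greedy(paths, pair_load, energy_cost):
    """Select an energy-efficient subset of a pair's paths (greedy sketch).

    paths       : list of (pid, capacity) tuples for one edge pair
    pair_load   : current load of the pair, same unit as capacity
    energy_cost : callable pid -> assumed energy cost (stands in for PNR*PDV)
    Returns the pids chosen to stay active; the rest can be put to sleep.
    """
    selected, remaining_load = [], pair_load
    for pid, cap in sorted(paths, key=lambda pc: energy_cost(pc[0])):
        if remaining_load <= 0:
            break                       # selected capacity already covers the load
        selected.append(pid)
        remaining_load -= cap
    return selected
```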
After the shifting process, the controller tries to deactivate all network devices along paths that do not share switches and links with other paths; since the whole path is then passive, the energy saving increases dramatically. Otherwise, the controller tries to deactivate only the switches and links along the path that do not carry traffic. Before deactivating a switch, the controller makes sure that none of the links entering and leaving the switch carries traffic. In addition, two new configuration values can be added to the configuration field of the OFPT_PORT_MOD message for activating or deactivating ports.
EPMA finds the optimal path combination per pair under the assumption that all paths are active. In reality, however, some paths may be in passive mode and their capacity may have been taken over by neighbour paths. To handle this, the ESM computes the future capacities of passive paths, because a passive path can be selected as a result of EPMA. It calculates the unused capacity of each link by subtracting the link load from its capacity and shares this unused capacity equally among the passive paths that the link serves. The capacity of the passive path then becomes the minimum of these shares over the links that serve it. Let us illustrate this procedure with an example. Suppose that the ESM executes EPMA for the pair ISS 1 -ESS 1 illustrated in Figure 1, that the capacities of all links are 1 Gbps, and that all paths except P_2^{1,1} and P_1^{2,1} are active. The controller must find the future capacity of P_2^{1,1} in case EPMA selects it. There are one, two, and two passive paths on links E_2, E_6, and E_9, respectively. The loads of E_2, E_6, and E_9 are all 0 because P_2^{1,1} and P_1^{2,1} do not carry traffic, so the unused capacity of each of these links is 1 Gbps. The capacity shares of P_2^{1,1} on links E_2, E_6, and E_9 are therefore 1, 0.5, and 0.5 Gbps, respectively. The minimum of these shares is 0.5 Gbps, so the future capacity of P_2^{1,1} becomes 0.5 Gbps.
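The future-capacity computation just described maps directly to a few lines of code. The sketch below follows the worked example; the dictionary-based bookkeeping and the function name are assumptions for illustration.

```python
def future_passive_path_capacity(path_links, link_capacity, link_load, passive_count):
    """Future capacity of a passive path if EPMA re-activates it.

    For each link on the path, the unused capacity (capacity - load) is shared
    equally among the passive paths the link serves; the path's future
    capacity is the minimum share over its links.
    """
    shares = []
    for e in path_links:
        unused = link_capacity[e] - link_load[e]
        shares.append(unused / passive_count[e])
    return min(shares)

# Worked example from the text: E2, E6, E9 all 1 Gbps and unloaded;
# one passive path on E2, two on E6 and E9  ->  min(1, 0.5, 0.5) = 0.5 Gbps
cap  = {"E2": 1.0, "E6": 1.0, "E9": 1.0}
load = {"E2": 0.0, "E6": 0.0, "E9": 0.0}
npas = {"E2": 1,   "E6": 2,   "E9": 2}
assert future_passive_path_capacity(["E2", "E6", "E9"], cap, load, npas) == 0.5
```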
| Adaptive load balancing
Procedure 2: Active balancing algorithm

The controller performs load balancing adaptively to the traffic for two reasons. First, the controller has limited computational resources, so computationally intensive actions, such as optimal (network-wide) load balancing, may degrade its performance. Second, an optimal load balance may not last long because traffic is highly dynamic. For these reasons, the controller balances the load of a pair among its active paths by equalising their costs (i.e. load, utilization, congestion level). Although pairs are virtual and individual, pair-based load balancing converges to network-wide load balancing, because the cost computation is based on the costs of the physical links.
The active balancing algorithm is presented in Procedure 2. In the first step, the controller calculates the cost of the active paths and the pair equilibrium cost μ(V_s, V_d), the cost towards which all paths of the pair converge after the load balancing process. As a path consists of links, we define the path cost as the sum of the costs of the links that form it; we denote the cost of a path P_k^{s,d} and the cost of one of its links E_i^{s,d,k} as λ(P_k^{s,d}) and λ(E_i^{s,d,k}), respectively. In the second step, the states of the paths are identified based on μ(V_s, V_d): a path whose cost is greater than μ(V_s, V_d) is classified as overloaded, a path whose cost is lower is classified as under-loaded, and a path whose cost equals μ(V_s, V_d) is already balanced. In the third step, the algorithm obtains information such as ISS, ESS, and PID for each flow in the unbalanced paths. Finally, some of these flows are shifted from overloaded paths to under-loaded paths. The controller selects which flows to shift in a greedy manner (e.g. from heavily to lightly loaded flows). Flow shifting is performed by updating the PID of the flows in the forwarding rules of the edge switches.
The computation of the pair equilibrium cost takes Θ(n). Sorting flows by size takes O(f · log f), where f is the number of flows. Shifting flows to under-utilised paths takes O(n · log f). Thus, the time complexity of the active load balancing algorithm is O(n · (f · log f + n · f)) in the worst case, where all paths are active. The controller performs load balancing for all pairs, so the time complexity for overall network load balancing becomes O(B · n · (f · log f + n · f)).
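A minimal sketch of the per-pair balancing step is given below. It assumes the equilibrium cost μ is the mean of the active path costs and that shifting a flow simply moves its cost from the source to the destination path; both are simplifications for illustration rather than the exact Procedure 2.

```python
def balance_pair(path_links, link_cost, flows, shift_flow):
    """Greedy sketch of the per-pair active balancing step.

    path_links[pid] : physical links forming active path pid
    link_cost[e]    : current cost of link e (e.g. its load or utilization)
    flows[pid]      : list of (flow_id, flow_cost) carried by path pid
    shift_flow      : callable(flow_id, new_pid) updating the edge-switch rule
    """
    cost = {pid: sum(link_cost[e] for e in links) for pid, links in path_links.items()}
    mu = sum(cost.values()) / len(cost)          # equilibrium cost (assumed: mean)
    over = sorted((p for p in cost if cost[p] > mu), key=cost.get, reverse=True)
    under = sorted((p for p in cost if cost[p] < mu), key=cost.get)
    for src in over:
        # move the heaviest flows first, until the path reaches the equilibrium
        for flow_id, flow_cost in sorted(flows[src], key=lambda f: f[1], reverse=True):
            if cost[src] <= mu or not under:
                break
            dst = under[0]                       # currently lightest path
            shift_flow(flow_id, dst)             # rewrite the flow's PID at the ISS
            cost[src] -= flow_cost
            cost[dst] += flow_cost
            under.sort(key=cost.get)             # keep the lightest path first
    return cost
```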
| Active capacity resizing
The traffic load of some active paths may be higher than that of others even when load balancing is in place, because pair-based load balancing takes some time to balance the overall network load. As paths are virtual, their capacities are resizable. Dynamically adjusting active path capacities protects the network from congestion and increases resource utilization. Note that this process is performed on the PSDB, not in the data plane; the controller therefore does not explicitly alter path rules on the switches.
As mentioned before, neighbour paths may share physical links. A physical link is split into one or more virtual links, each of which serves a different path of a different pair. Although the computation of a new path capacity is simple, the controller iterates over all virtual links and paths twice, so the time complexity of the resizing algorithm becomes O((|E| · |P|)^2). This indicates that resizing is a heavy process. For this reason, the PRM does not perform resizing frequently unless the overall network is heavily loaded (e.g. at 80% of its total capacity). In an even more heavily loaded network (e.g. at 95% of its total capacity), the controller would perform consecutive resizing operations because the threshold is exceeded even after resizing; such repeated resizing must be avoided. To handle this problem, the controller stops resizing if the number of consecutive resizing operations within a predefined time interval exceeds a threshold (e.g. 3). In this case, the PRM backs off so that the ESM can take over and expand the capacity, if possible. Therefore, the PRM and ESM work in harmony to increase resource utilization.

Procedure 3: Active capacity resizing algorithm
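The invoke-and-back-off behaviour of the PRM described above (and listed as Procedure 3) can be sketched as follows; the 80% invocation threshold, the limit of 3 consecutive resizes, and the time window follow the example values in the text and are otherwise assumptions.

```python
import time

class ResizePolicy:
    """Sketch of when the PRM triggers, or backs off from, a resize."""

    def __init__(self, invoke_util=0.80, max_consecutive=3, window=60.0):
        self.invoke_util = invoke_util        # network utilization that triggers PRM
        self.max_consecutive = max_consecutive
        self.window = window                  # seconds over which resizes are counted
        self.recent = []                      # timestamps of recent resizes

    def should_resize(self, network_util, now=None):
        now = time.time() if now is None else now
        self.recent = [t for t in self.recent if now - t <= self.window]
        if network_util < self.invoke_util:
            return False                      # network not heavily loaded
        if len(self.recent) >= self.max_consecutive:
            return False                      # back off; let the ESM expand capacity
        self.recent.append(now)               # assume the caller resizes when True
        return True
```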
| EXPERIMENTAL RESULTS
In this section, we present the experimental results of our model. The proposed model is implemented using Floodlight [29] as the controller, Mininet [30] as the network emulator, Open vSwitch [31] as the SDN-capable switch, TG [32] as the traffic generator, and OpenFlow [33] as the signalling protocol. Several scenarios have been created and various tests carried out in order to evaluate the proposed model using different topologies and traffic traces [8].
The test scenarios are performed to analyse the effect of path number and traffic load on energy-saving performance, the effect of node connectivity on energy-saving performance, and the comparison of the proposed model with existing work in terms of energy saving.
| Effects of path number per pair and traffic load on energy saving performance
In this test case, we investigate the energy-saving performance of the proposed model for various numbers of paths per pair under light, moderate, and heavy traffic loads. The path number per pair is 2 (P2) or 5 (P5), and Net-M is chosen as the test topology. There are three test runs; during each run, the active link and switch ratios are recorded, and the average of each run is calculated. Figures 2 and 3 illustrate the energy-saving performance of P2 and P5 in terms of links and switches.
We observe that the active link ratio drops to around 50%-60% in P2 for all traffic loads, as illustrated in Figure 2. P5 achieves lower power consumption for light and moderate traffic loads, since its active link ratio drops to around 35%-45% by the end of the experiment. Note that the energy-saving performance of P2 and P5 is closest under heavy traffic load, because the controller needs more resources to carry the heavy load. Also notice that the active link ratio of P2 decreases to around 60%-70% within almost 50 s, faster than P5, and does not change much afterwards; the reason is that the controller has fewer choices in terms of paths per pair. However, the energy-saving performance of P5 is better after 300 s. The difference between the active switch ratios of P2 and P5 is much more apparent, as shown in Figure 3, for all traffic loads. P2 shows poorer performance even under light traffic load, and under heavy traffic load the controller fails to make any switch passive. This is not the case for P5: its active switch ratio drops to around 60% for light and moderate traffic loads, and although its energy-saving performance under heavy traffic load is not as good, the controller still manages to make a few switches passive.
| Effect of connectivity on energy saving performance
In this test case, we investigate the energy-saving performance of the proposed model on different topologies. The average degrees of connectivity of NSFNET, Net-M, and Net-L are 3, 6, and 3.53, respectively. The path number per pair is two, and a moderate traffic load is applied. There are three test runs; during each run, the active link and switch ratios are recorded, and the average of each run is calculated.
We observe that the degree of connectivity does not affect the energy-saving performance of the proposed model on links, as illustrated in Figure 4: the active link ratios in all topologies are close to each other. This is not the case for switches, however. NSFNET has the best performance among the topologies because its paths share more links and switches, whereas Net-L has the worst performance because its paths are more diverse due to the high connectivity. This test case shows that the way paths are established affects the energy-saving performance of the proposed model.
| Comparison of energy saving performance
In this test case, we investigate the energy-saving performance of the proposed model by comparing it with the approach defined in Reference [16]. In our model, as mentioned, we define path energy consumption as the product of PNR and PDV. In Reference [16], the authors define energy consumption based on link utilization and a congestion threshold of 80%. We call this model the CARE model. In this test case, we run EPMA for both the proposed and CARE models. The path number per pair is 2 (P2) or 5 (P5), Net-M is chosen as the test topology, and moderate traffic is applied in all three test runs. During each run, the active link and switch ratios are recorded and the average of each run is calculated. Figures 5 and 6 illustrate the energy-saving performance of the proposed and CARE models in terms of links and switches.
On one hand, the two models have almost the same performance for P2 in terms of active link ratio, as depicted in Figure 5. On the other hand, the proposed model outperforms the CARE model in the P5 case. The active switch ratio results resemble the active link ratio results: the two models perform almost the same for P2, and the proposed model outperforms the CARE model for P5.
| Comparison of load balancing performance
In this test case, we investigate the load balancing performance of the proposed model by comparing it with the previous model, SRRM [8]. In the proposed model, load balancing is performed among active paths. The path number per pair is 2 (P2) or 5 (P5), Net-M is chosen as the test topology, and moderate traffic is applied in all three test runs. During each run, load rates are recorded and the average of each run is calculated. Figure 7 illustrates the load balancing performance of both models.
In both P2 and P5, the utilization of links is higher in ERM compared to SRRM. This is because the same load is distributed over less capacity due to passive paths in ERM.
Additionally, utilization increases in line with energy saving. In the previous test cases, we observed that energy-saving performance improves with the number of paths per pair; hence, the load utilization of P2 (Figure 7a) is lower than that of P5 (Figure 7b). The trade-off between energy saving and load balancing indicates that aggressive energy saving may cause over-utilization and, consequently, significant network congestion.
Finally, this test case shows that an adequate path number per pair and PDV selection allows efficient usage of network resources in terms of both energy saving and resource utilization.
| CONCLUSION AND FUTURE WORKS
In a nutshell, this study proposes ERM for SDN-based SPNs. In the proposed model, multi-paths are pre-established between each EP, which turns a complex physical network into a simple virtual one. The controller performs energy saving, load balancing, and capacity updates based on this virtual network. To save energy, the controller aggregates the flows belonging to an EP into fewer paths while taking neighbour paths and the load level into account; pair-based power saving converges to global power saving over time. Apart from energy saving, the controller also performs adaptive load balancing by equalising the path costs of individual pairs, and it adjusts path capacities based on network congestion, even among active paths, for further resource utilization.
In future work, we aim to increase the energy-saving performance of the proposed model using deep learning techniques. Additionally, we aim to improve load balancing performance by estimating incoming traffic.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Advances and Trends in Real Time Visual Crowd Analysis
Real-time crowd analysis represents an active area of research within the computer vision community in general and scene analysis in particular. Over the last 10 years, various methods for crowd management in real-time scenarios have received immense attention due to large-scale applications in people counting, public event management, disaster management, safety monitoring, and so on. Although many sophisticated algorithms have been developed to address the task, crowd management in real-time conditions is still a challenging problem that is far from being completely solved, particularly in wild and unconstrained conditions. In this paper, we present a detailed review of crowd analysis and management, focusing on state-of-the-art methods for both controlled and unconstrained conditions. The paper illustrates both the advantages and disadvantages of state-of-the-art methods. The methods presented range from the seminal research works on crowd management and monitoring to state-of-the-art methods based on the newly introduced deep learning paradigm. A comparison of the previous methods is presented, together with a detailed discussion of directions for future research. We believe this review article will contribute to various application domains and will also augment the knowledge of crowd analysis within the research community.
Introduction
Crowds or mass gatherings at venues such as entertainment events, airports, hospitals, sports stadiums, and theme parks are faced by individuals on an almost daily basis. The activities are quite diverse and range from social and cultural to religious. Unlike social and sports-related events, the crowd situations experienced by people at important religious events like Hajj and Umrah may not be avoidable. It is therefore important to have an intelligent Crowd Monitoring System (CMS) to ensure public safety, maintain a high throughput of pedestrian flow to prevent stampedes, provide better emergency services in case of crowd-related emergencies, and optimize resources to provide good accessibility by avoiding congestion.
• People counting in densely populated areas: The population of the world is growing day by day, and maintaining public order in crowded places such as airports, carnivals, sports events, and railway stations is essential. In a crowd management system, counting people is an essential factor: particularly in smaller areas, an increase in the number of people creates problems such as fatalities and physical injuries, and early detection of such crowds avoids these problems. In this sort of crowd management, counting the number of people provides accurate information about certain conditions, such as blockages at specific points. Despite large-scale research work, counting methods still face various challenges such as varying illumination conditions, occlusion, high clutter, and scale variations due to varying perspectives. Thanks to considerable development in the design of CMSs, the difficulties of people counting have now been reduced to some extent. Some excellent works addressing people counting through an efficient CMS are proposed in [15][16][17][18][19].
• Public Events Management: Events such as concerts, political rallies, and sports events are managed and analysed to avoid disastrous situations. This is specifically beneficial in managing available resources such as crowd movement optimization and spatial capacity [20][21][22]. Similarly, crowd monitoring and management in religious events such as Hajj and Umrah is another issue to be addressed. Each year, millions of people from different parts of the world visit the Mosque of Makkah for Hajj and Umrah. During Hajj and Umrah, Tawaf is an essential activity to be performed, and at peak hours the crowd density in the Mataf is extremely intense. Kissing the Black Stone is also a daunting task due to the large crowd. Controlling such a big crowd during Hajj and Umrah is challenging, and an efficient real-time crowd management system is badly needed on such occasions. Some works proposing Hajj monitoring systems can be explored in [4][5][6][7][8][23][24][25].
• Military Applications: The number of fighter jets, soldiers, and moving drones, as well as their motion, can be estimated through proper crowd management systems; thus, the strength of armed forces can be estimated through such a system [26][27][28].
• Disaster Management: In various overcrowded settings, such as music concerts and sports events, a portion of the crowd may charge in random directions, causing life-threatening conditions. In the past, large numbers of people have died from suffocation in crowded areas at various public gathering events. Better crowd management at such events can help avoid these accidents [29][30][31].
• Suspicious-Activity Detection: Crowd monitoring systems are used to minimize terror attacks at public gatherings. Traditional machine learning methods do not perform well in these situations; some methods used for proper monitoring of such detection activities can be explored in [32][33][34][35].
• Safety Monitoring: A large number of CCTV monitoring systems are installed at places such as religious gatherings, airports, and other public locations, enabling better crowd monitoring. For example, [36] developed a system which analyses behaviours and congestion time slots to ensure safety and security. Similarly, [37] presents a new method to detect danger through analysis of crowd density. A better surveillance system is also proposed which generates a graphical report through analysis of the crowd and its flow in different directions [38][39][40][41][42][43].
Motivations
Efficient crowd monitoring and management contributes to various applications and has further potential for the computer vision (CV) paradigm; however, crowd management in real time is far from being solved, particularly in wild conditions, and still faces many open challenges. The literature reports some success stories, and some convincing research has been reported, especially under constrained conditions. Under uncontrolled scenarios, however, the task of crowd management remains open for the research community. Several factors contribute to a robust real-time CMS and also affect the performance of an accurate CMS, including occlusions, changes in illumination conditions, noise in various forms, and changes in facial expressions and head poses. Moreover, the number of publicly available datasets for crowd management is minimal; only a few datasets are available for research work. We summarize some of these challenges as follows:
• When two or more objects come close to each other and, as a result, merge, it is hard to recognize each object individually. Consequently, monitoring becomes difficult and the measured accuracy of the system drops.
• These systems also face non-uniform arrangements of objects that are close to each other. Such an arrangement is called clutter. Clutter is closely related to image noise, which makes recognition and monitoring more challenging [43].
• Irregular object distribution is another serious problem faced by a CMS. When the density distribution in a video or image varies, the condition is called irregular object distribution, and crowd monitoring under it is challenging [44].
• Another main problem faced in real-time crowd monitoring systems is the aspect ratio. In real-time scenarios, a camera is normally attached to a drone which captures videos and images of the crowd under observation. To address the aspect ratio problem, the drone is flown at a specific height above the ground and the camera is installed so that it captures the top view of the crowd under observation; the top view properly addresses the aforementioned aspect ratio problem.
In machine learning tasks based on specific model learning paradigms, the availability of data for training and testing is of crucial importance and an essential requirement for the success of a particular task. The unavailability of public datasets is one major problem for the development of an efficient and mature real-time CMS. Although datasets are available for counting purposes, very few datasets are available for behaviour analysis and localization research. In addition, over the last 10 years, some excellent methods have been introduced and developed for CMSs; however, the research community still needs to make immense efforts to develop an optimal and accurate real-time CMS. Such issues, factors, and variables in the SOA motivate us to address the crowd management area with interest and to analyse the approaches, developments, applications, and future directions in the crowd management domain. Moreover, the shift from traditional to deep learning approaches motivates us to provide a comprehensive and up-to-date review, which will help researchers and also contribute to numerous applications and domains.
Contributions
In this paper, we present a detailed review of crowd management systems, focusing on methods for both controlled and uncontrolled environmental conditions. We present the merits and demerits of SOA approaches, focusing first on seminal work and then on SOA methods based on deep learning frameworks. A comparison of the previous methods leads us to potential future directions for research on the topic. We believe that such a single review article will recap and contribute to various application domains and will also augment the topic knowledge of the research community.
This article combines literature on the topic from the last 10 years. We focus particularly on SOA CMSs introduced over this period and on the shift occurring in the SOA from traditional machine learning methods towards the new paradigm of deep learning.
We organize the rest of the paper as follows: Section 5 provides a description of different databases available for CMS. Section 6 presents the crowd management and monitoring methods reported so far. Section 7 gives a detailed comparison of SOA methods reported to date. Finally, we conclude the paper in Section 8 with a fruitful discussion and potential future directions.
Databases
The performance of a CMS is evaluated with available crowd datasets. Crowd management is a relatively less explored area with little publicly available data. Most of the datasets have one or sometimes two scenes and hence cannot be used for generic crowd understanding. In this section, we discuss the available crowd monitoring databases for the topic. The datasets are available in the form of videos and images. A summary of the datasets is presented in Table 1.
• WWW [51]: The Who do What at some Where (WWW) dataset is particularly designed for densely crowded scenes. It is collected from very diverse locations such as shopping malls, parks, streets, and airports. WWW consists of 10,000 videos captured from 8257 different scenes, with eight million frames, and contains data from almost all real-world scenarios. The authors further define 94 attributes for better elaboration of the data. Specific keywords were used to search for videos in different search engines, including YouTube, Pond, and Getty Images.
• Mall [52]: The Mall dataset is collected through surveillance cameras installed in a shopping mall. The total number of frames is the same as in the UCSD dataset, whereas the size of each frame is 320 × 240. Compared to UCSD, little variation in the scenes can be seen. The dataset has various density levels, and different activity patterns can also be noticed; both static and moving crowd patterns are included. Severe perspective distortions are present in the videos, resulting in variations in both the appearance and the size of objects. Some occlusion is also present due to scene objects such as indoor plants and stalls. Training and testing sets are defined for the Mall dataset as well: the training set consists of the first 800 frames, whereas the remaining 1200 frames are used for testing.
• PETS [53]: This is a comparatively old dataset but is still used for research due to its diverse and challenging nature. The videos are collected through eight cameras installed on a campus. The dataset is intended for surveillance applications, so complex videos can be seen, and it is mostly used for counting applications. Labelling is provided for all video sequences. PETS contains three kinds of movement, and each movement includes 221 frame images; the pedestrian level covers light and medium movement.
• UCSD [54]: The UCSD dataset was the first dataset used for counting people in a crowded place. The data are collected through a camera installed over a pedestrian walkway; all recording was done at the University of California at San Diego (UCSD), USA. Annotation is provided for every fifth frame, and linear interpolation is used to annotate the remaining frames. To ignore unnecessary objects (for example, trees and cars), a region of interest is also defined. The total number of frames in the dataset is 2000, and the number of pedestrians is 49,885. The training and testing sets are defined, with the training set spanning frame indices 600 to 1399 and the testing set containing the remaining 1200 frames. The dataset is comparatively simple, and an average of 15 people can be seen in a video. It is collected from a single location, hence less complexity can be seen in the videos, and no variation in scene perspective across the videos can be noticed.
Approaches
Crowd counting provides an estimate of the number of people or of certain objects, but it does not provide any information about location. Density maps are computed at different levels and provide only very weak information about a person's location. Localization, on the other hand, provides accurate information about location; however, due to its sparse nature, it is a comparatively difficult task. Therefore, the best approach is to handle all three tasks simultaneously, exploiting the fact that each task is related to the others.
In this section, we discuss the various methods used to address crowd control and management systems. We do not claim any generic taxonomy for CMSs; instead, we organize each real-time CMS based on the fundamental method underlying its implementation. We also provide references where these methods have previously been used, and we discuss the merits and demerits of each method. A summary of all the methods reported in the literature is presented in Figure 5.
We divide crowd monitoring into three categories: localization, behaviour, and counting. Each of these categories is then further divided.
Localization
We divide localization into two sub-categories: localization and counting, and anomaly detection. Rodriguez et al. [55] propose a method for localization in crowded scenes using density maps. The authors optimize an objective function which prefers density maps generated from the specific detected locations and close to the estimated density map [56]; better precision and recall values are obtained with this approach. A Gaussian kernel is placed at each detection location and the density map is generated. A density map is obtained by Zheng et al. [57] through a sliding window over the image [56]; in a later stage, integer programming is used to localize objects on the density maps. Similarly, Idrees et al. [43] present a method for crowd analysis addressing all three tasks, counting, density estimation, and localization, through a composition loss function. The formulation in [43] is based on the observation that the three tasks are related to each other, which makes the loss function decomposable for better optimization of DCNNs. As localization needs comparatively better-quality images, a new dataset known as UCF-QNRF is also introduced by the authors. Some recently introduced papers addressing anomaly detection can be explored in [58][59][60].
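Density-map-based localization and counting, as used in the works above, start from a map built by placing a Gaussian kernel at each annotated or detected head location. The sketch below is a generic illustration of that construction; the kernel width and image size are arbitrary choices, not values taken from any cited paper.

```python
import numpy as np

def density_map(points, height, width, sigma=4.0):
    """Build a crowd density map by placing a 2-D Gaussian at each head point.

    points : list of (row, col) head locations; the resulting map sums to the
    number of people, which links density estimation to counting.
    """
    rows = np.arange(height)[:, None]        # column vector of row indices
    cols = np.arange(width)[None, :]         # row vector of column indices
    dmap = np.zeros((height, width), dtype=np.float64)
    for r, c in points:
        g = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()                  # normalise so each person adds 1
    return dmap

# e.g. two annotated heads in a 64x64 frame; the map sums to 2
dm = density_map([(10, 12), (40, 50)], 64, 64)
```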
Crowd Behaviour Detection
Behaviour analysis of large crowds has become a primary component of peaceful event organization [61]. In video processing in particular, behaviour analysis and identification is of crucial importance [10], and researchers have proposed various algorithms over time. The authors in [10,62] use optical flow to detect crowd behaviour, while the method in [63] combines optical flow with a support vector machine (SVM) for crowd behaviour analysis. Similarly, [64] uses a deep learning method with optical flow for crowd behaviour detection. Additional methods using Isometric Mapping [65], spatio-temporal features [66], and spatio-temporal texture [44] can also be explored for details.
Counting
A gathering of people for a specific reason, such as a political gathering, a religious occasion, or a sports event, is called a crowd, and estimating the number of people in videos or images is called crowd counting. We divide crowd counting into two types, supervised and unsupervised counting. In supervised counting, the input data are normally labelled and a machine learning tool is then used for prediction; in unsupervised counting, the labels are unknown and a machine learning tool is used for categorization. These two categories are further divided into other types, as shown in Figure 5. Supervised crowd counting is further divided into the following types:
• Supervised learning based methods:
- Counting by detection methods: A window of suitable size slides over the entire scene (video/image) to detect people. For detection, researchers came up with various methods using the concepts of histograms of oriented gradients (HOG) [67], shapelets [68], Haar features [69], and edgelets [70]. Various machine learning strategies have been exploited by researchers [71,72], but most of these methods fail on highly crowded scenes. An excellent 3D shape modelling approach is used by Zhao et al. [73], reporting much better results compared to the SOA; the same work is further enhanced by Ge and Collins [74]. More papers addressing counting by detection can be explored in [75][76][77].
These methods fail when the crowd density is high; similarly, the performance of detection-based methods drops when a scene is highly cluttered.
- Regression based methods: The high-density and clutter problems faced by detection-based methods are addressed well by regression-based methods, which work in two steps: feature extraction and regression modelling. Feature extraction methods include background subtraction, which is used to extract foreground information; better results are also reported using blobs as features [39,54,78]. Local features include edge and texture information extracted from the data, such as grey-level co-occurrence matrices (GLCMs), local binary patterns (LBP), and HOG. In the next stage, a mapping is learned from the extracted features through regression methods, including Gaussian process regression, linear regression, and ridge regression [79] (a minimal sketch of such a pipeline is given after this list). An excellent strategy is adopted by Idrees et al. [43] by combining Fourier transform and SIFT features. Similarly, Chen et al. [39] extract features from sparse image samples and then map them to a cumulative attribute space, which helps in handling imbalanced data. More methods addressing the counting problem can be explored in [15][16][17][39][80].
The occlusion and cluttering problems faced by the earlier methods are thus solved by regression-based methods; however, these methods still fail to capitalize on spatial information.
- Estimation: A method incorporating spatial information through a linear mapping of local features is introduced by Lempitsky et al. [56]. In these methods, local patch features are mapped to object density maps. The authors build the density maps via convex quadratic optimization using a cutting-plane optimization algorithm. Similarly, Pham et al. [40] suggest a non-linear mapping method from image patches using Random Forest (RF) regression; this latter method solves the variation-invariance challenge faced previously. The work of Wang and Zou [38] tackles the computational complexity problem through a subspace learning method, and Xu and Qiu [81] apply an RF regression model for head counts. More estimation-based algorithms can be explored in [56,82].
We divide the density-level algorithms into three further categories. Low-level density estimation methods: these include approaches such as optical flow, background segmentation, and tracking [83,84]. They are based on motion elements, obtained from a frame-by-frame modelling strategy, which paves the way for object detection. More low-density methods can be explored in [85][86][87].
Middle-level density estimation methods: At this mid level of density estimation, the patterns in data become dependent upon the classification algorithms.
High-level density estimation methods: In high-level density estimation techniques, dynamic texture models are utilized [88]; these are the dominant crowd modelling methods [98]. Similarly, Xu et al. [81] utilize the information at a much deeper level for counting in complex scenes.
• Unsupervised learning based methods:
- Clustering: These methods rely on the assumption that some visual features and motion fields are uniform, and they group similar features into categories. For example, the work proposed in [18] uses the Kanade-Lucas-Tomasi (KLT) tracker to obtain comparatively low-level features; after feature extraction, Bayesian clustering [99] is employed to approximate the number of people in a scene. Such algorithms model appearance-based features, and false estimates are obtained when people are static. In a nutshell, clustering methods perform well on continuous image frames. Additional methods can be found in [18,99,100,101].
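To illustrate the two-step regression pipeline mentioned in the list above (hand-crafted features followed by a regression model), here is a toy sketch using a foreground mask, a few simple features, and ridge regression. The chosen features and regressor are assumptions for illustration and do not correspond to any specific published method.

```python
import numpy as np
from sklearn.linear_model import Ridge

def extract_features(fg_mask):
    """Toy low-level features from a binary foreground mask: blob area,
    an edge-pixel count (perimeter proxy) and gradient energy (texture proxy)."""
    area = fg_mask.sum()
    gy, gx = np.gradient(fg_mask.astype(float))
    edges = (np.hypot(gx, gy) > 0).sum()
    texture = (gx ** 2 + gy ** 2).sum()
    return np.array([area, edges, texture])

def train_counter(train_masks, train_counts, alpha=1.0):
    """Fit a ridge regressor mapping frame features to annotated head counts."""
    X = np.stack([extract_features(m) for m in train_masks])
    return Ridge(alpha=alpha).fit(X, np.asarray(train_counts))

def predict_count(model, fg_mask):
    """Predict the count of an unseen frame from its foreground mask."""
    return float(model.predict(extract_features(fg_mask)[None, :])[0])
```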
Crowd counting and abnormal behaviour detection are among the hottest issues in the field of crowd video surveillance. In the SOA, several articles discuss abnormal behaviour detection in crowds; to the best of our knowledge, it can be divided into two main categories, global and local anomaly representations. The authors in [102] report two novelties for abnormal behaviour detection: first, a texture extraction algorithm based on spatial-temporal information; second, an approach that uses the motion patterns of the crowd, termed signatures, to identify unusual events. An enhanced grey-level co-occurrence matrix is employed for these signatures, and the authors report superior performance compared to other approaches. For abnormal event detection in crowds, the research in [103] considers both appearance and motion flow information. Swarm-theory-based Histograms of Oriented Swarms (HOSs) are introduced as a novelty; the HOS creates a signature of the dynamics of crowded environments. The motion and appearance features are employed only for local noise suppression, to improve the detection of non-dominant local anomalies and to lower the processing cost, so the approach achieves increased accuracy for pixel-based event recognition in crowds. Ref. [104] proposes a Point-based Trajectory Histogram of Optical Flow (PT-HOF) for abnormal event detection in crowded environments; the PT-HOF captures the temporal and spatial information of point trajectories in crowd scenes and encodes the relevant features using a deep learned model. The work in [15] proposes a Markov Random Field (MRF) that takes space-time peculiarities into account: local regions in video sequences are represented by the nodes of the MRF graph, and the links correspond to neighbouring nodes in space-time. For normal and abnormal activities, the authors employ optical flow and take advantage of probabilistic PCA, so the model optimally captures normal and abnormal actions both locally and globally. The authors of [16] present an integrative pipeline approach which combines the output of pixel analysis and trajectory analysis to differentiate normal and abnormal events; behaviours are detected based on the trajectories and speeds of objects, taking complex actions in sequences into account. The work in [17] presents three contributions for localized video-based anomaly detection: first, augmenting the scene's dynamics and appearance and its detection ability, and, second and third, handling temporal- and spatial-based abnormal events; the approach is demonstrated to outperform existing methods. In [18], local motion-based video descriptors are used for feature extraction to model abnormal events, achieving superior accuracy in localization tasks and in video abnormal event detection. The work in [19] uses the motion history of consecutive frames in sequences for anomaly detection.
These motion histories are termed Short Local Trajectories (SLTs). The SLTs are extracted from the super-pixels of the foreground objects in the scene and thus encode the temporal and spatial information of the moving subjects; the authors report the feasibility of the approach on three datasets. Concerning global anomalies, the authors in [4] present a framework that takes into account the spatio-temporal structure of the sequences and thus exhibits an optimal decision rule; for local anomalies, local optimal decision rules are extracted, and these work even when the behaviour has spatial, global, and temporal statistical properties and dependence. To differentiate abnormal and normal events, the authors in [5] present the Sparse Reconstruction Cost (SRC); by using a prior weight for each basis, the SRC provides a robust generalization of the events into normal and abnormal classes. In [7], an approach that is novel in three aspects is demonstrated: it uses particle trajectories to model crowded scenes, introduces chaotic dynamics to capture and model crowd motion, and formulates a probabilistic model for abnormal event detection. The results show that the proposed approach efficiently models, recognizes, and differentiates normal and abnormal events in sequences.
Crowd video surveillance is not limited to crowd counting and anomaly detection; many new directions have emerged, such as saliency detection and congestion detection. Saliency detection refers to the process of imitating the human visual system using computer vision methods. Nguyen et al. [105] use the knowledge-driven gaze of the human visual system to find saliency in crowds; they use a CNN with a self-attention mechanism to find the salient areas in human crowd images. Similarly, Zhang et al. [106] detect salient crowd motion using direction entropy and a repulsive force network. The frames of the crowd video sequence are evaluated with an optical flow technique, followed by computation of the crowd velocity vector field. The authors work on three video sequences from the Crowd Saliency dataset, namely a train station scene, a marathon scene, and a Hajj pilgrimage scene, and identify retrograde and unstable areas of the crowd. In the paper by Lim et al. [107], the authors discuss how the temporal variations in the flow of a crowd can be exploited to identify salient regions. Salient regions have high motion dynamics and are found in different scenarios such as occlusions, evacuation planning at entry and exit points, and identification of bottlenecks; in an irregular flow, the motion dynamics of people differ from one another. For Mecca, their method identified the salient regions produced by the bottlenecks observed near the Black Stone and the Yemeni corner. Furthermore, their method does not need to track each object separately or to learn the scene in advance. Lim et al. [108] identify salient regions in crowd scenes using an unsupervised algorithm; their approach identifies crowding sources and sinks, corresponding to areas in a scene where people enter and exit, respectively, and detects salient motion regions by ranking the intrinsic manifold obtained from similarity feature maps. Khan [109] studied individuals stuck in congested areas of a crowd; such individuals experience lateral oscillations and are unable to move freely. The pedestrians' trajectories are used to determine an oscillation feature, and an oscillation map is used to find critical locations and congestion in videos. Furthermore, a novel dataset consisting of 15 crowd scenes for evaluating congestion detection methods was proposed.
Quantification of Tasks
• Counting: We denote the estimated count for a crowded image i by c_i. This single metric does not provide any information about the distribution or location of people in a video or image, but it is still useful for various applications such as predicting the size of a crowd spanning many kilometres. A method proposed in [110] divides the whole area into smaller sections, finds the average number of people in each section, and computes the mean density of the whole region. However, it is extremely difficult to obtain counts for many images at several locations, which would permit a more precise integration of density over the specific area covered. Moreover, cartographic tools are required for counting through aerial images, which map the crowd images onto the earth for computing ground areas. Given this complexity, the mean absolute error (MAE) and mean squared error (MSE) are used for the evaluation of counting in a crowded scene. Following their names, the two evaluation metrics can be written as MAE = (1/N) Σ_i |x_i − x̂_i| (Equation (1)) and MSE = (1/N) Σ_i (x_i − x̂_i)² (Equation (2)), where N represents the number of test samples, x_i the ground truth count, and x̂_i the estimated count for the i-th sample. A short sketch of these metric computations is given after this list.
• Localization: In many applications, the precise location of people is required, for example to initialize a tracking method in a high-density crowded scene. To calculate the localization error, each predicted location is associated with a ground truth location by performing 1-1 matching; this is done with greedy association and is followed by the computation of Precision, Recall, and F-measure. Moreover, the overall performance can also be summarized by the area under the Precision-Recall curve, known as L-AUC. We argue that precise crowd localization is a comparatively less explored area, and evaluation metrics for the localization problem have not been firmly established. The only work that proposes 1-1 matching is [43]; however, we observe that the metric defined in [43] is optimistic in some cases, since no penalty is applied for over-detection. For instance, if a true head is matched with multiple detections, only the nearest one is kept and the remaining detections are ignored without any penalty. We believe that, for fair comparison, this metric has yet to be widely adopted. With t_p denoting true positives, f_p false positives, and f_n false negatives, the three metrics are Precision = t_p / (t_p + f_p), Recall = t_p / (t_p + f_n), and F-measure = 2 · Precision · Recall / (Precision + Recall). For the crowd localization task, box-level Precision, Recall, and F-measure are normally used.
• Density estimation: Density estimation refers to calculating the per-pixel density at a particular location in an image. It is different from counting, since an image may have a count within safe limits while containing some regions of comparatively higher density; this can happen when a scene contains empty regions such as sky, walls, or roads in aerial views. The metrics used for count estimation are also used for density estimation; however, MAE and MSE are measured on a per-pixel basis.
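The counting and localization metrics defined above can be computed as follows. Note that MSE is taken here as the plain mean of squared errors, following the names used in the text; some crowd-counting papers report its square root under the same label.

```python
import numpy as np

def mae_mse(gt_counts, est_counts):
    """Counting metrics of Equations (1) and (2): MAE and MSE over N test images."""
    gt, est = np.asarray(gt_counts, float), np.asarray(est_counts, float)
    mae = np.mean(np.abs(gt - est))
    mse = np.mean((gt - est) ** 2)
    return mae, mse

def precision_recall_f1(tp, fp, fn):
    """Localization metrics after greedy 1-1 matching of predicted and true heads."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```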
Data Annotation
Tools: Annotation is the process of creating ground truth data for a machine learning task. The data may be in the form of images, video, audio, text, etc. The ground truth data are used by a computer to recognize similar patterns in unseen data. Annotation categories include line annotation, 3D cuboids, bounding box annotation, landmark annotation, and dot annotation. In crowd counting scenarios, dot annotation was the initial step for creating ground truth and was carried out with different tools such as LabelMe, RectLabel, and LabelBox.
An online annotation tool was developed based on Java, HTML, and Python. This tool creates ground truth data by labelling head points and normally supports two kinds of labels, bounding boxes and points. Each image was zoomed to label heads at the desired scales and was then divided into small patches of size 16 × 16. This allowed annotators to create ground truth at five different scales (2^i, i = 0, 1, 2, 3, 4) times the original image size. The tool sped up the annotation process and improved its quality; for more information, we refer readers to [43].
Point-wise annotation: The annotation process can be divided into two sub-stages, labelling followed by refinement. Several annotators are normally involved. Creating the ground-truth data in this way is time consuming, particularly when a single person carries out all of the labelling; after the preliminary annotation has been created, other individuals refine it, which takes comparatively less time.
Annotation at box level: The box-level annotation was performed in three steps. First, for each image, typically 10-20% of the points were selected and a bounding box was drawn for each of them. Second, for the points without a box label, a linear regression method was adopted to obtain the nearest box and predict a box size. In the last step, the predicted box labels were refined manually. A small sketch of this regression step is given below.
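The following sketch illustrates one way the second step could be implemented: fit a linear regression that predicts a head-box size from a local-density feature computed on the manually boxed heads. The feature choice (mean distance to the nearest boxed neighbours) and all names are our own assumptions, not the procedure of any cited paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def mean_neighbour_dist(p, pts, k=3, skip_self=False):
    """Mean distance from point p to its k nearest points in pts."""
    d = np.sort(np.linalg.norm(np.asarray(pts, float) - np.asarray(p, float), axis=1))
    if skip_self:
        d = d[1:]                     # drop the zero distance to the point itself
    return d[:k].mean() if len(d) >= k else d.mean()

def predict_box_sizes(boxed_pts, boxed_sizes, unboxed_pts):
    """Fit box size against local crowd density (mean neighbour distance) on the
    manually boxed heads, then predict sizes for the remaining head points."""
    X_train = np.array([[mean_neighbour_dist(p, boxed_pts, skip_self=True)]
                        for p in boxed_pts])
    model = LinearRegression().fit(X_train, np.asarray(boxed_sizes, float))
    X_query = np.array([[mean_neighbour_dist(p, boxed_pts)] for p in unboxed_pts])
    return model.predict(X_query)

sizes = predict_box_sizes(
    boxed_pts=[(10, 10), (12, 11), (50, 52), (80, 90)],
    boxed_sizes=[14, 15, 22, 30],
    unboxed_pts=[(11, 12), (60, 60)])
```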
In a nutshell, ground-truth labels were mostly produced manually, without any automatic labelling tool. Such labelling depends entirely on the subjective perception of the individual performing it. Providing accurate ground-truth labels for an image is therefore difficult and time consuming.
Comparative Analysis
We compared the existing SOA approaches on crowd management datasets. All results are summarized in Tables 2 and 3. Some concluding remarks are given in the following paragraphs.
• In the last few years, significant research has been reported in the area of crowd analysis, as can be seen from Tables 2-4, and many datasets have been introduced. However, most of these datasets address the counting problem; less attention has been paid to localization and behaviour analysis. The only datasets with sufficient information for localization and behaviour analysis are UCF-QNRF and NWPU-crowd. There is therefore still considerable room for new publicly available datasets in crowd analysis.
• Most of the labelling for creating ground-truth data was performed manually, using commercial image-editing software rather than an automatic tool. Such labelling depends entirely on the subjective perception of the single participant involved, so errors are possible, and certain regions can be difficult to differentiate.
• Compared with counting and behaviour analysis, localization is a less explored area. Some authors report 1-1 matching [43]; however, we believe the metric defined in [43] produces overly optimistic results, since no penalty is applied when multiple detections are matched to a single head. Hence, a proper performance metric for localization has still not been defined.
• Crowd analysis is an active area of research in CV. Table 4 summarizes the research conducted on crowd analysis between 2010 and 2020, and a more detailed picture is presented in Tables 2 and 3. The MAE, MSE, Precision, Recall, and F-1 measure values are reported from the original papers. As can be seen from Tables 2 and 3, all metric values have improved on the standard databases, particularly with the recently introduced deep learning methods.
• A closer look at crowd counting, localization, and behaviour analysis reveals that traditional machine learning methods still perform better in some reported cases than newly introduced deep learning methods. We do not claim from this comparison that hand-crafted features outperform deep learning; rather, a better understanding of deep learning architectures for crowd analysis is still needed. Most cases of poor deep learning performance occurred in limited-data scenarios, a major drawback of deep learning methods. Conventional machine learning methods perform acceptably on data collected in simple, controlled scenes, but their performance drops significantly in complex scenarios. Deep learning methods, in contrast, learn a higher level of abstraction from the data, outperform the earlier methods by a large margin, and greatly reduce the need for feature engineering. They do, however, raise serious concerns in the research community: deep learning is a complicated procedure that requires many choices and inputs from the practitioner, who mostly relies on trial and error, so these models take longer to build than conventional machine learning models. In a nutshell, deep learning is the natural choice for addressing crowd management and monitoring properly, but to date its use is still sporadic. Training a crowd-monitoring model with several hidden layers and flexible filters is a good way to learn high-level features, but if the training data are insufficient the whole process may underperform.
• We notice that DCNN models with relatively complex structures still do not handle the multi-scale problem well, and improvement is needed. Moreover, existing methods focus mainly on overall accuracy, while the correctness of the density distribution is ignored. From the results, we also notice that the reported accuracies appear close to optimal because the numbers of false negatives and false positives are nearly the same.
• We observe that most existing methods for crowd monitoring and management are CNN based. These methods employ pooling layers, which lowers the resolution and loses features. Deeper layers extract high-level information, whereas shallower layers extract low-level features, including spatial information. We argue that combining the information from shallow and deep layers is the better option (a minimal sketch is given after the comparative tables below): it reduces the counting error and produces more reasonable density maps.
• Traditional machine learning methods perform acceptably under controlled laboratory conditions, but their performance drops significantly on datasets collected under unconstrained, uncontrolled conditions, whereas deep learning based methods perform much better in the wild.
• Crowd analysis is an active area of research in CV, and tremendous progress has been made in the last 10 years; the results reported to date show that all the metrics (MAE, MSE, F-1 measure) have improved, as summarized in Tables 2 and 3. Nevertheless, given how rapidly CV is moving towards deep learning, progress in crowd analysis is not yet satisfactory. Given the difficulty of the training phase of deep learning methods, particularly for crowd analysis, knowledge transfer [111,112] is an option to be explored in future work: it takes advantage of models that have already been trained. A less investigated direction within transfer learning is the adoption of heterogeneous strategies for deep learning based crowd analysis; the relevant keywords are temporal pooling, 3D convolution, LSTMs, and optical flow frames. Similarly, better engineering practices, such as data augmentation, are also needed to improve SOA results.
Table 3. CMS performance in the form of mean absolute error (MAE) and mean squared error (MSE).
Year | Reported Paper | Approach Used | Task Performed
Fradi et al. [65] | deep learning | counting
Rao et al. [66] | detection | counting
Zhang et al. [50] | deep learning | counting
Jackson et al.
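Picking up the point above about fusing shallow and deep features, a minimal PyTorch sketch of such a density-map regressor is shown below; the layer sizes, the skip-connection layout, and all names are illustrative assumptions rather than the architecture of any surveyed method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionDensityNet(nn.Module):
    """Toy density-map regressor that fuses shallow (spatial) and deep
    (semantic) features before predicting a per-pixel density map."""
    def __init__(self):
        super().__init__()
        self.shallow = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.deep = nn.Sequential(
            nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16 + 64, 1, 1)   # fuse, then 1x1 conv to density

    def forward(self, x):
        s = self.shallow(x)                     # keeps spatial detail
        d = self.deep(s)                        # higher-level, lower resolution
        d_up = F.interpolate(d, size=s.shape[2:], mode="bilinear",
                             align_corners=False)
        fused = torch.cat([s, d_up], dim=1)     # combine shallow + deep features
        return torch.relu(self.head(fused))     # non-negative density map

# the crowd count is the integral (sum) of the predicted density map
img = torch.randn(1, 3, 128, 128)
count = FusionDensityNet()(img).sum().item()
```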
Summary and Concluding Remarks
Crowd image analysis is an essential task for several applications: it provides information for counting, localization, behaviour analysis, and related tasks. Crowd analysis is extremely challenging when data are collected in the wild, although good work over the last 5 years in particular has produced many achievements. Because of the diverse range of applications, we believe that a complete solution to crowd analysis is still out of reach, and we therefore call on researchers to improve the existing methods presented in Section 6.
One major problem facing crowd analysis is the unavailability of databases for some tasks, such as crowd localization and behaviour analysis. We hope the CV research community will contribute challenging datasets on these topics, and we also look forward to thorough evaluations of deep learning techniques, particularly on data collected under unconstrained conditions, as future work. An efficient crowd analysis system would have a profound effect on very large scale crowd monitoring applications.
We have presented a detailed survey of crowd analysis methods, including details of all available databases. We investigated various aspects of existing solutions, starting from hand-crafted representations and moving towards recently introduced deep learning techniques. Finally, we provided a comparative analysis of the results obtained so far for crowd image analysis, identified some open problems, and presented an outlook on the future of crowd image analysis.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 9,444.4 | 2020-09-01T00:00:00.000 | [
"Computer Science"
] |
Synthesis, Crystal Structure, and DFT Study of Two New Dinuclear Copper(I) Complexes Bearing Ar-BIAN Ligands Functionalized with NO 2 Groups
Two new bis(aryl-imino)-acenaphthene, Ar-BIAN (Ar = 2,4,6-trimethylphenyl = mes) ligands, bearing the NO 2 group in the naphthalene moiety of the iminoacenaphthene at the para-(5-NO 2 ) and meta-(4-NO 2 ) positions, of formulations 1,2-bis(mes-imino)-5-nitroacenaphthene, 1, and 1,2-bis(mes-imino)-4-nitroacenaphthene, 2, were synthesized. Their respective dinuclear iodide-bridged copper(I) complexes [Cu 2 (μ-I) 2 (mes-BIAN-5-NO 2 ) 2 ], 3, and [Cu 2 (μ-I) 2 (mes-BIAN-4-NO 2 ) 2 ], 4, were obtained in good yields by treatment with an equimolar amount of CuI. All compounds were characterized by elemental analysis, single-crystal X-ray diffraction, 1 H-NMR, 13 C-NMR, FTIR, and UV/Vis spectroscopy. DFT calculations helped to understand the different molecular structures observed in the crystals of 3 and 4 and the determining role of packing forces. TDDFT revealed that the absorption bands in the visible were essentially MLCT (Metal to Ligand Charge Transfer), with some n→π* character (intra ligand). The shift to the red compared to the spectrum of the Cu(I) complex analogue without the NO 2 group, [Cu 2 (μ-I) 2 (mes-BIAN) 2 ], 6, could be explained by the stabilization of the ligand unoccupied π* orbitals in the presence of NO 2 .
Introduction
The design and synthesis of copper complexes is a subject of current interest since they can be applied in a large variety of metal-mediated transformations. [1] Moreover, copper is a cheap, abundant and non-toxic metal. d 10 transition metal complexes have been extensively studied because of their unique photophysical and photochemical properties which led to applications for light emitting devices, sensing devices, solar cells, and artificial photosynthesis. [2] Among those, 3d 10 copper(I)
complexes have been studied since the last century to explore their photophysical properties. [3] The well-known {Cu 2 (μ-X) 2 } core (X = halide) can coordinate to different types of ligands to form a wide variety of complexes, resulting in tetracoordination around the Cu(I) atom. Their general formulation is [Cu 2 (μ-X) 2 L 4 ], in which L are mainly N or P ligands, either monodentate (L) or bidentate chelating ligands (L-L). [4] The synthesis and photophysical properties of Cu(I) compounds of this type, bearing chelating chiral bis(phosphines), have been reported. [5] α-Diimine ligands are well known and have been extensively used owing to their ability to stabilize organometallic complexes. [6,7] Elsevier et al. [8] described the synthesis and full characterization of a new family of rigid chelating bidentate ligands of the type Ar-BIAN (bis(aryl)acenaphthenequinonediimine) by condensation of acenaphthenequinone with two equivalents of an appropriate aryl-amine. Many late transition metal complexes bearing α-diimine ligands have been extensively employed in several catalytic reactions. [9][10][11][12][13][14][15][16] Using this synthetic route, the backbone and the aryl substituents can easily be varied, thus enabling the steric and electronic effects at the metal centre to be tuned. We have been engaged during the last decade in the synthesis of α-diimine transition metal compounds, either for structural studies [17][18][19][20][21] or catalytic applications. [22][23][24][25][26][27] Furthermore, copper(I) complexes bearing Ar-BIAN ligands have been reported by us [18,[22][23][24]26] and other authors. [28] The excellent redox properties and the steric and electronic tunability of the bis(aryl)acenaphthenequinonediimine (Ar-BIAN) ligands
Chemical Studies
The first step in the synthesis of the ligands is the functionalization of acenaphthenequinone with the NO 2 group, which has been described in the literature under different experimental conditions (room temperature, 0 °C, and 80 °C) using NaNO 3 or HNO 3 as nitration agents. [38,39] Although the authors claimed that the nitro-acenaphthenequinone bore the NO 2 group at the para-position, our attempts to repeat the synthesis following the three described methods afforded in all cases a mixture of products functionalized at the para- and meta-positions in roughly 1:2 meta-:para- proportion. From now on we will use para-position and 5-NO 2 , and meta-position and 4-NO 2 , interchangeably (Scheme 1).
The second step is diimine formation, for which there are two major synthetic strategies to obtain Ar-BIANs: (i) the template method using ZnCl 2 or (ii) using an organic acid as a catalyst. The second method, using EtOH as solvent and formic acid as catalyst, proved quite efficient since it allowed us to separate the two isomers owing to their different solubilities. The two separated Ar-BIANs (Scheme 2) are: one with the NO 2 group at the para-position, 1, which precipitates from ethanol in 83 % yield, and another with the NO 2 group at the meta-position, 2, which remains dissolved in ethanol and is isolated after workup in 78 % yield.
Formation of ligands can be confirmed by FTIR spectroscopy. No C=O stretching vibrations of the starting diketones, in the 1700-1800 cm -1 region, are observed, discarding thus the formation of monosubstituted species. Characteristic vibrations of NO 2 group are observed for free ligands in the range of 1524-1536 cm -1 for symmetric stretching and in the range of 1329-1338 cm -1 for the asymmetric stretching of N-O bond. The C= N stretching frequency in the free ligands cannot be assigned unambiguously because of the presence of C=C stretching frequencies from the naphthalene backbone in this region, [8,40] however it is reported in the literature [41] that two bands in the range of 1617-1675 cm -1 can be assigned to C=N stretching of Ar-BIANs.
Both ligands 1 and 2 are asymmetric molecules with respect to the functionalized naphthalene moiety. 1 H-NMR characterization of ligands 1 and 2 was performed in CDCl 3 . The main differences between the two compounds are in the naphthalene moiety bearing the NO 2 substituent. In the case of ligand 1, with the para-NO 2 , we observe two doublets for the protons at positions 3 and 4, at 6.83 and 8.31 ppm, respectively. As for ligand 2, with the meta-NO 2 , two singlets corresponding to the protons at positions 3 and 5 were observed, with chemical shifts of 7.39 ppm and 8.86 ppm, respectively.
When we look at the protons of the aryl moieties of both compounds, we observe that in the case of ligand 1 they are not strongly affected by the presence of the NO 2 group. Just one singlet, integrating for four protons, at 6.99 ppm, was observed, similar to the unsubstituted ligand 5. In the case of compound 2, we see that the protons of the aryl group spatially closer to the NO 2 substituent, undergo some influence from the nitro group. Two singlets, both integrating for two protons at 7.04 ppm and 6.99 ppm were observed. For ligand 2, the two para-methyl groups are not equivalent, contrary to ligands 1 and 5, showing a clear influence of the meta-NO 2 group.
Indications of the presence of possible (E,Z) isomers were found in the 1 H-NMR spectra. In order to confirm their presence, we performed variable temperature 1 H-NMR experiments in C 6 D 6 .
In solution, the Ar-BIAN ligands may appear in two stereoisomeric forms, (E,E) and (Z,E), since steric repulsion prevents (Z,Z) from existing. Ragaini et al. [42] showed that the ratio of the two isomers depends on both temperature and solvent, with complete assignment of the signals possible through two-dimensional NMR experiments (COSY, NOESY; Figures S1-S10 in the Supporting Information, SI). While asymmetric Ar-BIAN (Ar,Ar′-BIAN) molecules appear in both forms in solution, symmetric ones seem to occur only as the (E,E) isomer. Ligands 1 and 2, despite bearing the same Ar (2,4,6-trimethylphenyl), are asymmetric molecules with respect to the functionalized naphthalene moiety, so they can exist as (E,E), (Z,E) and (E,Z) forms in solution (see computational studies). While the presence of the isomers is not visible in CDCl 3 , when the solvent was changed to C 6 D 6 (Figures S11 and S12 in SI) all three isomers could be observed, particularly in the case of 1 (S11 in SI). At room temperature in C 6 D 6 the three isomers coexist, while at higher temperatures the isomers interconvert rapidly and converge to one set of peaks (see Figure 1).
Complexes 3 and 4 were synthesized using the same strategy adopted for complex [Cu 2 (μ-I) 2 (Mes-BIAN) 2 ] 6, [25] (see Scheme 3), by adding CuI to the stoichiometric amount of ligand 1 or 2 in CH 3 CN and refluxing the mixture for 3 h. After removal of the solvent under vacuum, dark solids were isolated, washed with pentane and dried under vacuum. Suitable crystals for single-crystal X-ray structure determination were obtained by slow diffusion of pentane in a CH 2 Cl 2 solution, yielding compound 3 and 4 in 66 % and 59 % respectively.
Two different isomers can be formed. In one of them, the ligands are in trans conformation (isomer A) and in the other one the ligands are in cis conformation (isomer B) (see Scheme 3). In the synthesis of the Cu(I) dimers with ligands 1 and 2, only isomer A could be isolated as revealed by singlecrystal X-ray diffraction (see below). Characteristic vibrations of the NO 2 group are observed for complexes 3 and 4 at 1530 and 1536 cm -1 , respectively, for symmetric stretching, and for the asymmetric stretching of N-O bond at 1331 and 1339 cm -1 , respectively. A band at 1643 cm -1 for complex 3 and two bands, at 1649 and 1647 cm -1 for complex 4, can be assigned to C=N stretching. 1 H-NMR and 13 C-NMR studies of complexes 3 and 4 show similar patterns to those of the ligands, only differing on the chemical shifts (Figures S13-S24 in SI). The highest difference is for the methyl groups in the ortho-position of the aryl moieties, that are high field shifted in comparison to the free ligands, at 2.15 ppm for 3, 2.15 and 2.17 ppm for 4. For complex 4, not only the two para-methyl groups of the aryl are nonequivalent, like in ligand 2, but we also observe two peaks assigned to the four ortho-methyl groups by opposition to only one peak in complexes 3 and 6. The para-methyl groups appear as two singlets, at 2.44 and 2.36 ppm, and the four ortho-methyl groups appear as two singlets at 2.17 and 2.15 ppm.
Crystallography
Crystals of 1-4 suitable for single-crystal X-ray diffraction were obtained as described in the synthetic procedures. Ligands 1 and 2 crystallize in the monoclinic system, space groups P2 1 /n and C2/c, respectively. The molecular structures of 1 and 2 are depicted in Figure 2 and selected structural parameters are listed in Table S1 (SI). Structural parameters of the non-substituted 2,4,6-Me 3 C 6 H 2 -BIAN analogue, 5, reported in the literature, [41] are also presented in Table S1 for comparison. The molecular structures of 1 and 2 show that the bis(imino) fragment exhibits the (Z,E)-configuration, as in some other Ar-BIAN compounds, [18,43] instead of the more common (E,E)-configuration. [44][45][46][47] Compound 1, with the nitro substituent in the para-position, presents a more planar bis(imino)acenaphthene skeleton than the non-substituted 2,4,6-Me 3 C 6 H 2 -BIAN, 5. The position of the nitro substituent in the naphthenic rings seems to influence this deviation from planarity, possibly due to C-H···O and C-H···π weak hydrogen bonds, as observed in the packing diagram (Figure S25, SI). The angle between the two mesityl rings is about 78° in compound 1 and 92° in compound 2. The imine C=N bonds N(1)-C(29) in 1 and N(2)-C(19) in 2 are longer than the corresponding bond lengths in 2,4,6-Me 3 C 6 H 2 -BIAN, 5, (1.280(5) Å and 1.276(3) Å vs. 1.2662(16) Å), but overall they are comparable with those observed in other Ar-BIAN compounds. [4][5][6][7]47] The breaking of symmetry by the NO 2 substituents seems to disfavour the (E,E) configuration that is observed when symmetry is present.
The molecular structures of 3 and 4 are depicted in Figure 3 and Figure 4, respectively, and selected structural parameters are listed in Table S2 (SI). Structural parameters of the related {CuI(Ar-BIAN)} 2 , 6, with the non-substituted 2,4,6-Me 3 C 6 H 2 -BIAN ligand, 5 (Figure S26 in SI), [26] previously reported by us, are also presented in Table S2 (SI) for comparison. Compound 3 crystallizes in the monoclinic system, space group C2/c, with one half molecule of 3 and one co-crystallized molecule of CH 2 Cl 2 in the asymmetric unit. Compound 4 crystallizes in the monoclinic system, space group P2 1 /n, with one molecule in the asymmetric unit. In both structures the Cu(I) atoms are bridged by two iodide ions and coordinated by two imine nitrogen atoms of the BIAN ligands, presenting a distorted tetrahedral geometry. The I-Cu-I angles in compound 4 are wider (about 115°) than in compound 3 (106.7(1)°). As a result, the Cu···Cu distance is notably shorter in compound 4 (2.651(1) Å vs. 3.070(1) Å in 3) and it is also shorter than the sum of the van der Waals radii of two copper atoms (2.80 Å), suggesting a "cuprophilic" interaction in compound 4. [29,48] Another difference between compounds 3 and 4 is that the CuI dimer is perfectly planar in 3, while in 4 it displays a butterfly conformation with an angle of 160.9(2)° between the triangles formed by Cu(1)-I(1)-Cu(2) and Cu(1)-I(2)-Cu(2) (Figure 4). This distortion of the Cu-(I) 2 -Cu core has been reported in the literature for other dimeric Cu compounds and has been related mainly to packing forces. [49,50] The structural differences between compounds 3 and 4 may be due either to steric constraints caused by the nitro meta-substituent or to the packing arrangement. The packing diagram of 4 shows that all the molecules are arranged in the same direction, with the naphthalene moieties lying on parallel planes (Figure S27, SI, right). For compound 3, the crystal presents molecules packed perpendicularly to each other (Figure S27, SI, left), similar to the packing arrangement observed for the analogue compound 6 (with the non-substituted ligand) {CuI(Ar-BIAN)} 2 . [26] In fact, the latter compound presents two crystallographically different molecules in the asymmetric unit (molecules a and b in Figure S26, SI) whose structural differences may only be attributed to packing forces. In the case of compound 3, there are no structural differences between the molecules despite the perpendicular arrangement. Interestingly, the structural parameters of 3 are very similar to those of molecule a of {CuI(Ar-BIAN)} 2 (Figure S26, SI), while the structural parameters of 4 are very similar to those of molecule b of {CuI(Ar-BIAN)} 2 (Figure S26, SI), as can be seen in Table S2 (SI). In light of the discussion above, it is plausible to attribute the structural differences between compounds 3 and 4 to packing forces rather than to steric constraints caused by the position of the NO 2 substituent. The Cu-N and Cu-I bond lengths are in agreement with those observed in the two related {CuI(Ar-BIAN)} 2 compounds found in the literature. [26,28]
Computational Studies
The geometries of the ligands and the complexes were optimized using DFT calculations as implemented in Gaussian16, [51] with the PBE1PBE functional, a 6-31G** basis set for the light atoms, and LANL2DZ with polarization for I and Cu, considering the solvent (chloroform) effect and dispersion corrections (more details in Experimental). The ligand 5 without substituents may adopt two conformations, (E,E) and (Z,E). The (E,E) is slightly more stable (ΔG = 0.6 kcal mol -1 ). When nitro substituents are introduced, there is another conformation, (E,Z). The three are shown, with the more relevant distances and relative energies, in Figure S28, SI. The two ligands have similar energies, the lowest energy form of 2 (E,E) being ca. 2.4 kcal mol -1 more stable than the lowest energy form of 1 (E,Z). The most stable conformation is different for the two ligands. When they bind to a metal, they must rearrange to the (E,E) conformation.
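As a quick illustration of what these small free-energy differences imply for the solution populations discussed above, the short sketch below converts a ΔG of 0.6 kcal mol-1 into a Boltzmann population ratio at room temperature; the ΔG value is taken from the text, while the script itself is only an illustrative aid, not part of the reported methodology.

```python
import math

R = 1.987204e-3   # gas constant in kcal mol^-1 K^-1
T = 298.15        # room temperature in K

def population_ratio(delta_g_kcal):
    """Boltzmann ratio of the higher-energy conformer to the lower-energy one."""
    return math.exp(-delta_g_kcal / (R * T))

# (E,E) vs. (Z,E) of the unsubstituted ligand 5: ΔG ≈ 0.6 kcal/mol favours (E,E)
ratio = population_ratio(0.6)
print(f"(Z,E)/(E,E) ≈ {ratio:.2f}")   # ≈ 0.36, i.e. both forms coexist in solution
```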
The optimization of the geometry of complexes 3 and 4 was more difficult and it required taking into account dispersion corrections in order to improve the agreement of the geometry of the {Cu 2 (μ-I) 2 } core with the experimental one, since M-M interactions in d 10 -d 10 systems are difficult to reproduce.
The optimized geometry of 3 is shown in Figure 5 (top) with relevant distances (values are only given for non-equivalent bonds). The {Cu 2 (μ-I) 2 } core is planar and the Cu-I and Cu-N bonds are very similar to the experimental ones (Table S2), the only significant difference being observed in the Cu-Cu distance, calculated as 2.566 Å, but determined as 3.071 Å. To check this aspect, the Cu-Cu distance of the {Cu 2 (μ-I) 2 } in the core of the complex 3, was varied from 2.566 Å to 3.071 Å in several steps and the energy increased by 2.9 kcal mol -1 (electronic energies, see also below). Considering the size of the molecule and the small energy difference (very flat potential energy surface), it might be possible, but resources consuming, to modify the methodology to get a better agreement.
The optimized geometry of complex 4 ( Figure 5, bottom) is very similar to the experimentally determined. Even the Cu-Cu distance is very close, and the I-CuCu med -I butterfly angle was calculated as 166.1°(experimental 166.8°). CuCu med is the point in the centre of the line joining the copper atoms. Complex 4, with NO 2 in the meta-position, is more stable (6.5 kcal mol -1 ) than complex 3 with this group in para-position, following the order of stability of the ligands.
The two complexes exhibit different {Cu 2 (μ-I) 2 } cores, namely flat with a long Cu-Cu distance as seen in 3, and puckered with a Cu-Cu bond in 4. Also, the phenyl groups of the two Ar-BIAN ligands are parallel in 3 (Figure 3) and are not parallel in 4 (Figure 4). One might be tempted to assign the preference to the substitution pattern, since the NO 2 group occupies the para and meta positions in 3 and 4, respectively. It is not that simple, because the analogous complex without substituents, 6, has been structurally characterized and both forms are also present. When the geometry is optimized, starting from either of them, the energy minimum corresponds to the flat {Cu 2 (μ-I) 2 } core, though the Cu-Cu distance is too short, as seen above for 3. Other distances are very comparable to those in 3 and 4. Since there are no substituents, the origin must be found elsewhere. A closer look at the two independent molecules present in the crystal structure of 6, emphasizing the positions they occupy (Figure 6), shows on the left the flat core (Cu-Cu distance 3.08 Å) with parallel phenyl rings, while the unit on the right is puckered, with a Cu-Cu bond and non-parallel phenyl groups (not so easily seen in this view). Also, the BIAN moiety of the right-hand side unit is sandwiched between two of the phenyl groups of the left unit. In order to maximize this interaction, the arrangement must be planar. The packing forces the geometry to adapt, distorting the {Cu 2 (μ-I) 2 } core, which requires a small amount of electronic energy (2.9 kcal mol -1 , see Figure S29 in SI), compensated by intermolecular interactions. The packing arrangement of 3 and 4 is probably responsible for the molecular structures observed. Indeed, in complex 3 a very similar π-π stacking with three aryl rings is observed, reinforcing the previous interpretation.
A related question is prompted by the representation of a Cu-Cu bond in the crystal structure of 4, but not in the structure of 3 (Figure 3 and Figure 4). In complexes 3, 4, and 6, iodine atoms bridge the copper ones. As distances are not reliable indicators for detecting bonds, we calculated Wiberg indices (see Computational studies), which scale as bond strength indicators. For calibration, a structure with a non-supported Cu(I)-Cu(I) bond was searched for in the CSD [52] (see the molecular representation in Figure S30, SI). The Wiberg index (WI) is 0.189 for the experi-
Absorption Spectra
Ligands and complexes absorb in the visible and UV regions, but the complexation leads to a shift of the lower energy bands to longer wavelengths, especially when the NO 2 substituents are present, as shown in Figure 7 and Table S3 (SI). (See also Figure S31 in SI for comparison purposes).
TDDFT calculations were used to obtain the absorption spectra of all the compounds. The frontier orbitals of the complexes were influenced by the pattern of substitution, as can be seen in Figure 8, where the relative energies of orbitals between HOMO-2 and LUMO+2 are shown, with a 3D representation of the HOMO and the LUMO.
In complex 6, the HOMO is mainly localized in the {Cu 2 (μ-I) 2 } core, being antibonding between all atoms. There is a very small participation of the nitrogen lone pairs which are also Cu-N antibonding. The LUMO, on the other hand, is localized in the right side of the BIAN ligand. Note that the nitrogen atoms also participate in this π-system. The LUMO+1, not shown, looks the same, but is localized in the left side, and these two levels have practically the same energy (Figure 8). The introduction of the NO 2 substituents leads to a stabilization of the LUMO, since the π-orbitals extend to the NO 2 groups, and this effect is more pronounced in 3 (p-NO 2 ). The effect on the HOMOs is not so significant and is very similar for 3 and 4.
The HOMO still consists of the σ network [Cu 2 (μ-I) 2 N 4 ] and remains antibonding. The main contribution of the unoccupied orbitals is from the π-orbitals ranging from the nitrogen atoms to the NO 2 groups over the whole ligand.
The TDDFT calculated lower energy absorption bands for the three complexes 3, 4 and 6 are listed in Table S4 in SI, and all the orbitals are depicted in Figures S32, S33 and S34 in SI. The nature of the transitions is more clearly seen in the electron density difference maps (EDDM), which are shown in Figure 9 and Figure 10 for complexes 3 and 4, respectively. All the transitions are essentially metal-to-ligand charge transfer (MLCT) from orbitals including the {Cu 2 (μ-I) 2 } cores augmented by the nitrogen lone pairs (σ), represented in red in all figures, to the ligand π-orbitals, represented in light blue. The participation of the nitrogen atoms in occupied (σ, n) and unoccupied (π*) orbitals adds to the essentially MLCT transitions an n→π* character (intra ligand, IL). The nature of the {Cu 2 (μ-I) 2 } σ and the ligand π* orbitals changes with the transition. As the energy increases, the {Cu 2 (μ-I) 2 } σ orbitals become more stable and less antibonding, the opposite being observed for the ligand π* orbitals (higher energy, more antibonding), as is shown in the molecular orbitals of the three complexes in Figures S32, S33 and S34 in SI.
The EDDM plots show that the low energy transitions involve the {Cu 2 (μ-I) 2 } cores and the ligands, but as the energy increases (entries 5 and 6, Table S4) the participation of the phenyl substituents increases, adding a π→π* character (intra ligand, IL). The calculated spectra are shifted relative to the experimental ones, but their outline is very comparable.
Figure 9. TDDFT simulated electronic spectrum of complex 3 (red) and experimental spectrum (black). The EDDM plots are also shown, red and blue corresponding to a decrease and increase in electron density, respectively.
The same type of plot is shown in Figure 10 for complex 4. The EDDM plots are very similar to those of complex 3, showing MLCT transitions, with some n→π* character (intra ligand, IL) in the lower energy bands, with added π→π* character (intra ligand, IL) in the higher energy bands.
The spectra of complex 6 are shown in Figure S35 in SI and display the same features, being mostly MLCT with added n→π* character. As expected, the π orbitals extension is smaller, owing to the absence of NO 2 substituents. On the other hand, the participation of phenyl contribution to the orbitals remains very reduced compared to what was observed for 3 and 4 in similar energy range, so that IL π→π* is practically non-existent.
In order to study the possible emission, complexes 3, 4 and 6 were excited in the lower lying bands (620-630 nm) and also in higher lying bands (330-350 nm), but no steady-state emission was observed. All complexes appear to be non-luminescent both in solution and in the solid state.
Conclusions
Two new dimeric Cu(I) complexes [Cu 2 (μ-I) 2 (LL) 2 ] of two new bis(aryl-imino)-acenaphthene (LL), Ar-BIAN (Ar = 2,4,6-trimethylphenyl = mes) ligands, bearing the NO 2 group in the naphthalene moiety of the iminoacenaphthene at the para-(3) and meta-(4) positions, were synthesized. Despite the similarities, the single-crystal X-ray diffraction structures showed that the {Cu 2 (μ-I) 2 } cores displayed different arrangements, being planar for 3 and puckered for 4. Although it was possible to reproduce the main features of the geometries of these complexes using a DFT approach (adding Grimme D3 corrections to the functional), the fact that the analogous complex with the unsubstituted Ar-BIAN ligand also shows the same isomerism led us to propose that packing effects play a relevant role in determining the molecular structure. Indeed, the arrangement of the Ar groups is similar between the two puckered and the two planar complexes but differs between the two groups.
The nature of the TDDFT calculated absorption spectra in the visible is MLCT, in agreement with the experimental behaviour, but there is an n→π* character resulting from transitions involving the N atom lone pairs and their p orbitals present in
Synthesis of Ligands 1 and 2: To a suspension of the nitro-acenaphthenequinone (1.14 g, 5.02 mmol) in ethanol, 2 equiv. of 2,4,6-trimethylaniline (1.44 mL, 10.04 mmol) were added together with 0.5 mL of formic acid. The mixture was stirred at R.T. overnight. A dark red solid precipitated and was separated from the solution by filtration, affording 1.28 g (83 % yield) of 1. The remaining red solution was evaporated to dryness, affording a red oil which was redissolved in CH 2 Cl 2 (15 mL) and washed with 0.1 M HCl (4 × 5 mL) in order to remove the unreacted free aniline. The organic phase was separated and the solvent removed under vacuum, yielding 0.60 g of a red solid, ligand 2 (78 % yield).
Crystallographic Data: Crystals of 1, 2, 3 and 4 suitable for single-crystal X-ray analysis were grown as described in the synthetic procedures. Selected crystals were covered with Fomblin (polyfluoroether oil) and mounted on a nylon loop. The data were collected at 110 (2) K using the [53] software packages. All non-hydrogen atoms were refined anisotropically and all the hydrogen atoms were inserted in idealized positions and allowed to refine riding on the parent carbon atom. The crystals of 3 and 4 presented disordered solvent molecules and the PLATON/SQUEEZE [54] routine was applied as it was not possible to obtain a good disorder model. The crystals had poor diffracting power, leading to poor quality data. The molecular diagrams were drawn with ORTEP-3 for Windows [55] and Mercury, [56] included in the software package. Table S5 contains crystallographic experimental data and structure refinement parameters.
Deposition Numbers 1979445-1979448 contain the supplementary crystallographic data for this paper. These data are provided free of charge by the joint Cambridge Crystallographic Data Centre and Fachinformationszentrum Karlsruhe Access Structures service www.ccdc.cam.ac.uk/structures.
Computational Studies: DFT calculations were performed with Gaussian16, [51] using the double-basis set augmented with an f polarization function, LANL2DZ, for copper and extra p and d polarization function for iodine, with the associated effective core potential (ECP), [57] all downloaded from the EMSL Basis Set Library. [58,59] For the remaining elements, the standard 6-31G** basis set, comprising polarization functions, was employed. The hybrid PBE1PBE functional (also known as PBE0), [60] was used with a Grimme D3 dispersion correction. [61] The dispersion correction was necessary to reproduce relatively well the experimental structures, which were taken as starting geometry guesses. Geometry optimizations were performed without symmetry constraints in chloroform with the PCM solvation method. [62] TDDFT calculations, as implemented in Gaussian16, were performed to calculate the absorption spectra in chloroform. Wiberg indices were calculated with the NBO implementation in Gaussian16. [63][64][65][66] The reference compound with a non-supported Cu-Cu bond [67] was retrieved from the CSD. [52] The electron density difference maps (EDDMs) were obtained from scripts in the GaussSum package. [68] Molecular structures orbitals and electron density were drawn using Chemcraft. [69] ASSOCIATED CONTENT Supporting Information (see footnote on the first page of this article): CCDC reference numbers 1979445-1979448. NMR spectra for all compounds (Fig. S1-S24). Mercury packing diagrams of 1 and 2 (Fig. S25) and 3 and 4 (Fig. S27). Molecular Structure of 6 ( Fig. S26). Selected bond lengths and angles for 1, 2 and 5 (Table S1) and for 3, 4 and 6 (Table S2). Crystal data and structure refinement for 1, 2, 3 and 4 (Table S5). Optimized geometry of ligands 1 and 2 in the three possible conformations (Fig. S28). UV/Vis spectra of 5 and 6 (Fig. S31). Maxima and shoulders in the visible and near-UV absorption spectra of the ligands 1, 2 and 5 and complexes 3, 4, and 6 in chloroform (Table S3). DFT calculated change in energy associated with lengthening the Cu-Cu distance from 2.566 to 3.071 Å in complex 3 (Fig. S29). The X-ray structure of a complex with a non-supported Cu(I)-Cu(I) bond (top) and a view of the ligand (bottom) (Fig. S30). TDDFT calculated excitation energies and oscillator strengths (OS) in the visible absorption spectra of the complexes 3, 4, and 6 in chloroform (Table S4). Frontier molecular orbitals of complexes 3, 4 and 6 ( Fig. S32-S34). TDDFT simulated | 6,951 | 2020-07-13T00:00:00.000 | [
"Chemistry"
] |
Synthesis and Crystal Structure of 1-(3-fluorophenyl)-3-(3,4,5- Trimethoxybenzoyl)thiourea
The title thiourea was synthesized by reaction of 3,4,5-trimethoxybenzoyl isothiocyanate with 3-fluoroaniline. The 3,4,5-trimethoxybenzoyl isothiocyanate was produced in situ by reaction of 3,4,5-trimethoxybenzoyl chloride with ammonium thiocyanate in dry acetonitrile. The structure was confirmed by spectroscopic, elemental analysis and single-crystal X-ray diffraction data. It crystallizes in the monoclinic space group P2 1 /c with unit cell dimensions a = 13.
Synthesis of the title thiourea was carried out in continuation of our interest in thioureas as intermediates towards novel heterocycles and for the systematic study of their bioactivity and complexation behavior. In the IR spectrum, absorptions were observed at 3350 cm -1 and 3280 cm -1 for free and associated NH, at 1638 cm -1 for the carbonyl, at 1240 cm -1 for the thiocarbonyl, and at 1586 cm -1 and 1150 cm -1 for the C=C and C-N stretchings, respectively. The characteristic broad singlets at δ 9.17 and 4.61 for HN(1) and HN(3) were observed in the 1 H-NMR spectrum. The carbonyl and thiocarbonyl peaks were observed at 176.3 and 178.2, respectively, in the 13 C-NMR spectrum. In the mass spectrum, the molecular ion peak and the base peak derived from the aroyl group were observed.
Crystal Structure Determination
Bond lengths and angles of the title compound are in the usual ranges. The dihedral angle between the two aromatic rings is 89.9°. Two of the three methoxy groups lie in the plane of the ring to which they are attached [torsion angles: C27-O2-C23-C22 0.12(19)°, C29-O4-C25-C26 13.69(18)°], whereas the third one is twisted out of the ring plane [C28-O3-C24-C25 76.18(15)°]. The molecular structure of the title compound 1, along with the atom-numbering scheme, is depicted in Figure 1.
Experimental Section
Melting points were recorded using a digital Gallenkamp (SANYO) model MPD BM 3.5 apparatus and are uncorrected. 1 H NMR spectra were determined as CDCl 3 solutions at 300 MHz using a Bruker AM-300 spectrometer. FT-IR spectra were recorded using an FTS 3000 MX spectrophotometer, and mass spectra (EI, 70 eV) on a GC-MS instrument. All compounds were purified by thick layer chromatography using silica gel from Merck.
X-ray data collection and structure refinement
Crystallographic data were recorded on a STOE IPDS-II diffractometer [13] using Mo Kα radiation (λ = 0.71073 Å) at T = 173 K. An absorption correction was applied using the MULABS [14] option in PLATON [15]. The structure was solved by direct methods [16] and refined by full-matrix least-squares using SHELXL-97 against F 2 using all data [16]. All non-H atoms were refined anisotropically. H atoms were positioned geometrically at distances of 0.95 Å (aromatic CH) and 0.98 Å (methyl groups) from the parent C atoms; a riding model was used during the refinement process and the Uiso(H) values were constrained to be 1.2 Ueq(aromatic C) or 1.5 Ueq(methyl C). The H atoms bonded to N were freely refined.
Crystal data.
Conclusions
The synthesis, characterization and crystal structure determination of a novel thiourea derivative, an intermediate towards a diversity of heterocycles, have been carried out.
Figure 1. Perspective view of the title compound.
Figure 2. A packing diagram of the title compound 1 with view onto the ab plane. H atoms bonded to C are omitted for clarity. Hydrogen bonds are drawn as dashed lines.
"Chemistry"
] |
The Study on the Text Classification for Financial News Based on Partial Information
The goal of this paper is to study the text classification of financial news based on partial information. Since text classification is an indispensable step for the efficient use of the topic information embedded in financial news, a new neural network called "All Dataset based on CharCNN (Character Convolutional Neural Networks) and GRU (Gated Recurrent Unit)" (in short, AD-CharCGNN), which extracts a part of each financial article and incorporates both the time domain and the spatial domain to classify financial texts, is proposed. We first build a character-level vocabulary by reading all characters of the financial dataset; the part of each financial text to be classified is mapped to a high-dimensional spatial vector based on this vocabulary. The vectors are then convolved in the spatial domain to obtain local text features, and the features are processed by gated recurrent units to obtain features containing temporal information. Finally, the features containing spatial and temporal information are classified through a softmax function to obtain the text classification results. Our experiments confirm that the proposed network works effectively, with an accuracy of 96.45%, and suggest that a text classification algorithm that uses features from only part of each text is better suited to practical applications. Moreover, because the input is a character-level vector, the network is suitable not only for Chinese but also for other languages.
I. INTRODUCTION
We know that there is an interplay between financially related news and the financial market [1], and thus it has become an indispensable step to classify the massive internet financial news.
It is well known that the classification of financial news is the most fundamental step in helping financial individuals or institutions to make decisions [2]. In particular, after classification, professional financial experts can use detailed and relevant financial texts to keep track of current research and possible future directions, and to gain a comprehensive view of the financial information available online. The goal of this paper is to study and propose a new neural network called "All Dataset based on CharCNN and GRU (AD-CharCGNN)", which extracts a part of each financial text and incorporates both the time domain and the spatial domain to classify financial news. With 65,000 crawled Chinese financial news articles as the data set, the classification is accomplished with an accuracy of 96.45%.
In studies of the traditional classification algorithms, Kalra and Prasad applied the classifier Naive Bayes (NB) to categorize the financial news text. They proposed a daily prediction model and used the historical data and news articles to predict the stock market movements [3]. By combining Naive Bayes with the decision tree C4.5 algorithm, Lungan et al. proposed multiple model hybrid methods to consider different models for different text structures [4]. By combining a one-vs-one (OVO) strategy with the Support Vector Machine (SVM), Liu et al. proposed an optimized method to classify multiple emotions [5].
At the same time, traditional classification methods based on machine learning have encountered many difficulties, especially long training times and limited computing power. For example, SVM requires exponentially more learning time and memory as the volume of data increases [6]. Because of the complexity of text itself, there are also problems with the corpora, such as a lack of generality and the absence of a unique evaluation criterion.
Meanwhile, classification methods based on deep learning have been widely used because of their excellent ability to extract text features [7]. In the modern economic and social era, where data volumes are large and the number of financial texts is growing rapidly, digesting the large amount of professional financial text published on the Internet while improving classification efficiency has become a heavy task. Kanungsukkasem et al. introduced a topic model based on latent Dirichlet allocation (LDA) to discover features from news articles and financial time series; their financial LDA, applied to data mining for financial time series prediction, obtains better results than the common LDA [8]. Shi et al. presented DeepClue, a system built to bridge text-based deep learning models and end users by visually interpreting the key factors learned in a stock price prediction model; DeepClue predicts the stock price from financial news and company-related tweets from social media [9]. The amount of information on the Internet is now huge, and it is common for the quality of financial texts to be uneven or for the text information to be incomplete, so it becomes crucial to consider extracting features from only part of each article. However, most current deep learning methods take the entire content of an article as the training and testing data for classification, paying little attention to parts of each financial text; in particular, few studies to date use only part of each article for financial text classification. The method proposed in this paper fills this gap: a new neural network for text classification called "All Dataset based on CharCNN and GRU (AD-CharCGNN)". The network extracts features from parts of articles via convolution operations and GRU, and classifies the features via a fully connected layer and the softmax function. Experiments confirm that the network works effectively with high accuracy. Because it takes full advantage of partial text, the network is more suitable for practical application and fills the gap concerning classification based on partial content of financial texts.
The innovations and contributions of this paper are shown as below: • In this paper, a text classification based on partial text is proposed. At present, on the Internet, it is a common phenomenon that the information quality of financial texts is not uniform, and text information is not complete. So, the text classification based on the partial text can simulate the incomplete information scene, and that means it is more suitable for real situations. Compared with the text classification of the whole text, the text classification of partial text is more difficult and challenging.
• In this paper, a neural network, AD-CharCGNN, combining charCNN and GRU is proposed, which means that it can obtain information from both the time domain and the spatial domain. AD-CharCGNN operates at the character level. Texts frequently contain Chinese, English, numbers, or special characters; the network can handle all of them, and a character-level network can read the data set directly without preprocessing such as stopword removal.
• In this paper, a Chinese data set in the financial field is built. Crawler technology was used to collect 65,000 texts from two authoritative financial websites in China. Compared with English text classification, Chinese text classification is more difficult because of the complexity of Chinese. Moreover, the classification in this paper is a sub-class classification within a single professional field: as is well known, the differences among subclasses of the same field are smaller than those among broad classes from different fields. This makes the classification harder, but it is beneficial to individuals and institutions in the financial field.
A. DEEP LEARNING MODEL FOR TEXT CLASSIFICATION
With the development of deep learning, related models for text classification have also been emerging. It is well known that the Convolutional Neural Network (CNN) model has made outstanding achievements in image processing [10], [11], target detection [12], [13], and speech recognition [14], [15]. The textCNN algorithm proposed by Yoon Kim applied the CNN model to text classification [16]. Using convolutional calculations, textCNN can obtain local features and extract key n-gram-like information from a sentence. In spite of this outstanding performance in text classification, CNN's fixed filter size prevents it from capturing the full-range context of an article. Hence, recurrent neural networks (RNN) have become one of the most popular architectures for NLP problems, because their recurrent structure is well suited to processing variable-length text [17]. Deep learning architectures for text classification are diverse, but they share some common ground. For example, most of them preprocess the data sets and clean the text content via text segmentation and stopword deletion. The feature words are mapped into a high-dimensional spatial vector model, with word frequency as an important indicator for text classification, and the importance of feature words is expressed through characteristic weight calculation, so that the whole text is mapped into a matrix to be processed by the CNN. The above models classify at the word level. Inspired by the pixel level in the computer vision field, the Yann LeCun team proposed a model based entirely on charCNN to classify text [18]. The charCNN trains the neural network from a character perspective. Experiments show that when the training set is large enough, the convolutional network can achieve excellent results. At the same time, the network needs neither word-level information nor the grammatical structure of the language. The charCNN can be applied to text classification in different languages because any language is made up of characters.
B. LSTM AND GRU METHOD
The Long Short-Term Memory (LSTM) network, which handles sequences better than a general RNN, can take the full context of a sentence into account. LSTM can be applied not only in text classification [19], but also in many scenarios such as image segmentation [20], speech recognition [21], trajectory prediction [22], and so on. In the example of the LSTM structure shown in Fig. 1, the output generated at the last word is sent into a fully connected layer and classified by a softmax function. Bidirectional LSTM (BiLSTM) is a variant of LSTM used in many applications, such as sentiment analysis [23], entity recognition [24], relationship extraction [25], and so on. Liu et al. unified BiLSTM, attention mechanisms, and convolution layers into one network [26]; it forms a two-way long-term memory that captures both the local characteristics of phrases and the global semantics of sentences. The BiLSTM text classification architecture is shown in Fig. 2: first, the texts are mapped to vectors in the embedding layer; then, features are extracted in the two-way LSTM layer to generate the final sequence; finally, the final sequence is classified in the fully connected layer with a softmax function.
The Gated Recurrent Unit (GRU) is another variant of LSTM whose applications are also quite extensive [27]. Zhao et al. applied GRU to monitoring machine health [28], Yuan et al. applied GRU to speech recognition [29], and Zhang et al. combined RNN with GRU to recognize Chinese characters [30]. There are three types of gates in the LSTM: the forget gate, the input gate, and the output gate. GRU merges the forget gate and the input gate into a single "update gate". Having only an update gate and a reset gate, GRU is computationally simpler than LSTM. The GRU architecture is shown in Fig. 3. The gate_u and gate_r in Fig. 3 are the update gate and the reset gate, respectively. h_{t−1} is the hidden state from the previous time step and h_t is the hidden state at the current time step; x_t is the input at the current time step. The degree to which state information is carried over from the last time step to the current one is controlled by the update gate, and the reset gate determines how much of the previous state information is ignored.
So, the forward propagation and state update of the GRU are as follows. The state of gate_u is obtained from h_{t−1} and x_t:
u = σ(W_u · [h_{t−1}, x_t]),   (1)
where [ ] indicates that the two vectors are concatenated, W_u is a parameter that needs to be learned, and σ is the sigmoid function. The formula converts the input data to a gate value in the range 0-1: the closer u is to 1, the stronger the memorizing ability of gate_u; the closer u is to 0, the stronger its forgetting ability.
The state of gate_r is likewise obtained from h_{t−1} and x_t:
r = σ(W_r · [h_{t−1}, x_t]),   (2)
where the operators have the same meaning as in formula (1) and W_r is also a parameter that needs to be learned. h̃_{t−1} represents the information of h_{t−1} that remains after the reset:
h̃_{t−1} = r ⊙ h_{t−1},   (3)
where ⊙ is the element-wise multiplication of the two matrices.
The newly proposed h′ represents the hidden memory information of the current input x_t. Through the tanh activation function, the value of h′ is squashed to the range −1 to 1:
h′ = tanh(W_h · [h̃_{t−1}, x_t]),   (4)
where W_h is also a parameter that needs to be learned. Update stage: using the previously obtained state u, the update formula is
h_t = u ⊙ h_{t−1} + (1 − u) ⊙ h′,   (5)
where + represents vector addition. The operation u ⊙ h_{t−1} implements selective forgetting of the previous hidden state information, i.e., it can discard some unimportant information of h_{t−1}. The operation (1 − u) ⊙ h′ implements selective memorizing of the current hidden state information, discarding some unimportant information of h′. Combined, this step passes the retained information of h_{t−1} and h′ to the current hidden state h_t.
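To make the above equations concrete, a minimal NumPy sketch of one GRU forward step, following the convention h_t = u ⊙ h_{t−1} + (1 − u) ⊙ h′ used in this description, is given below; the weight shapes, the random toy inputs, and the absence of bias terms are simplifying assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(h_prev, x_t, W_u, W_r, W_h):
    """One GRU forward step.  Each weight matrix acts on the concatenation
    [h_{t-1}, x_t] (bias terms omitted for brevity)."""
    hx = np.concatenate([h_prev, x_t])          # [h_{t-1}, x_t]
    u = sigmoid(W_u @ hx)                       # update gate, Eq. (1)
    r = sigmoid(W_r @ hx)                       # reset gate,  Eq. (2)
    h_reset = r * h_prev                        # Eq. (3)
    h_cand = np.tanh(W_h @ np.concatenate([h_reset, x_t]))   # Eq. (4)
    return u * h_prev + (1.0 - u) * h_cand      # Eq. (5)

# toy dimensions: hidden size 4, input size 3
hidden, inp = 4, 3
rng = np.random.default_rng(0)
W_u, W_r, W_h = (rng.standard_normal((hidden, hidden + inp)) for _ in range(3))
h_t = gru_step(np.zeros(hidden), rng.standard_normal(inp), W_u, W_r, W_h)
```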
In summary, as shown in Fig. 4, CNN is an unbiased model that captures only the most significant features of the input text in the spatial domain, whereas RNN is a biased model that describes only the output of a continuous state in the time domain. Unlike a single CNN or RNN, the network proposed in this paper, AD-charCGNN, combines charCNN with an RNN built from GRUs and classifies professional financial texts in both the spatial and time domains. In addition, the model does not require data cleaning when preprocessing the dataset. Financial texts from the Internet may contain Chinese, English, numbers, or special characters, which makes AD-charCGNN better suited to this practical setting.
III. THE AD-CharCGNN NETWORK
A. THE DESIGN OF NETWORK
The network architecture of AD-charCGNN is shown in Fig. 5. A professional Chinese financial text dataset is employed in this network. In the text processing stage, charCNN reads into its vocabulary only the characters of the texts to be classified, whereas AD-charCGNN reads all characters in the dataset and places them in a character-level vocabulary. Characters are converted into numbers through the vocabulary, which makes it easy to map texts into high-dimensional spatial vectors. After the text processing stage, a part of each text is loaded into the network, corresponding to the step from text vector to part-text vector in Fig. 5. The features of the text are obtained through a convolutional neural network. The width of the convolution filter should equal the width of the text vector. In the example of Fig. 5, a 3 * 5 convolution filter acts on a 6 * 5 text vector; for this one-dimensional convolution, the result is a feature map of size 4 * 1. Multiple feature maps are obtained from multiple filters in the same way. Next, a max-pooling layer extracts a salient feature of the text part; the entry of the feature map marked "MAX" in Fig. 5 is the maximum value of that map. The max-pooling layer automatically determines which feature plays the more crucial role in the classification process. Then the GRU structure is used to obtain important contextual information. Finally, the classification task is completed through the softmax function.
B. DATA PROCESSING
1) TEXT PROCESSING
As described above, to classify text parts with the AD-charCGNN network, the first step is to process the text and map it into a high-dimensional space in the form of vectors.
All texts in the entire dataset are read character by character (as shown in Fig. 6), including Chinese characters, English characters, punctuation, special characters, and so on. The frequency of each character is then counted, and a mapping is formed based on frequency. For example, in Fig. 6 the character "," occurs most frequently in the dataset, so its id is the smallest: the higher the frequency of a character, the smaller its id. The mapping relation f between character c and number id is
f: c → id,
where → denotes the transformation from character to number. The id is also used as the index for building the vocabulary. The texts in the dataset are then read again, and the characters of each text are converted into numbers according to the mapping f:
A_n = [f(c_n1), f(c_n2), …, f(c_nm)],
where A_n is the nth text and c_nm is the mth character of the nth text. The number list A_n is then processed by the embedding layer. Embedding is a transformation method in NLP. One-hot encoding is a common alternative, but the matrix obtained by one-hot encoding is sparse and consumes substantial resources when the vocabulary is large. Embedding turns a sparse matrix into a dense matrix through linear functions; this dense matrix represents all characters with a small number of features and turns independent vectors into related vectors with internal connections. The embedding layer therefore converts the integers in A_n into high-dimensional relational vectors of fixed size. In this paper, only a part of each text is intercepted as the input Ā_n of the convolution layer:
Ā_n = a_n1 ⊕ a_n2 ⊕ … ⊕ a_nk,
where Ā_n denotes the part-text vector of the nth article in the high-dimensional space, a_nj is the embedded vector of the jth retained character, and ⊕ is the vector connector.
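A minimal Python sketch of this character-level preprocessing, written independently of the paper's implementation: characters are ranked by frequency, more frequent characters receive smaller ids, and each text is truncated or zero-padded to a fixed part length (assumed here to be 250 characters, with id 0 reserved for padding).

from collections import Counter

def build_vocab(texts):
    # count every character in the whole dataset, including punctuation and special characters
    freq = Counter(ch for text in texts for ch in text)
    # more frequent characters get smaller ids; id 0 is reserved for padding
    return {ch: i + 1 for i, (ch, _) in enumerate(freq.most_common())}

def encode(text, vocab, part_len=250):
    ids = [vocab.get(ch, 0) for ch in text[:part_len]]  # keep only the leading part of the text
    ids += [0] * (part_len - len(ids))                  # pad short texts with 0
    return ids

texts = ["股市今日大涨,基金净值同步更新。", "多家银行下调存款利率,债券价格走高。"]
vocab = build_vocab(texts)
encoded = [encode(t, vocab) for t in texts]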
2) CONVOLUTION CAPTURES LOCAL INFORMATION
The convolution operation not only captures local information in the vectors but also reduces their dimensionality and the computational cost of the model. The convolution layer applies a one-dimensional convolution kernel. As shown in Fig. 7, the width of the convolution filter equals the width of the text vector, so the one-dimensional kernel slides row by row and detects features at different positions. The output after convolution is
A* = Ā ⊗ F + b,
where F is the convolution template, b is the offset, and ⊗ is the convolution operation. Multiple convolution outputs A* can be generated from multiple convolution kernels to capture more feature information, and these outputs become the input of the max-pooling layer. If the number of convolution kernels is l, the output T after pooling is
T = [max(A*_1), max(A*_2), …, max(A*_l)],
where T is the feature vector obtained by connecting the max-pooled responses of the l convolution kernels. It is also the input of the gated recurrent unit in the following RNN.
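The convolution-and-pooling step can be sketched with NumPy as follows; the filter height, embedding width, and number of kernels are illustrative values rather than the settings of Table 3, and each kernel contributes one max-pooled feature to T.

import numpy as np

def conv_maxpool(A, filters, b):
    """A: (seq_len, embed_dim) part-text matrix; filters: (l, k, embed_dim) kernels."""
    l, k, _ = filters.shape
    seq_len = A.shape[0]
    T = np.empty(l)
    for j in range(l):
        # slide the kernel down the rows: one response per window position
        fmap = np.array([np.sum(A[i:i + k] * filters[j]) + b[j]
                         for i in range(seq_len - k + 1)])
        T[j] = fmap.max()        # max pooling keeps the strongest response of this kernel
    return T

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 5))              # a 6 x 5 text vector, as in Fig. 5
filters = rng.standard_normal((8, 3, 5))     # eight 3 x 5 kernels
T = conv_maxpool(A, filters, np.zeros(8))    # pooled feature vector passed on to the GRU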
3) INTRODUCTION OF GRU METHOD
After the convolution layer, the contextual feature information is read by the GRU for dynamic modeling. The GRU hidden state at a given time is activated from the state at the previous time. Of the two gates in the GRU, the update gate controls how much state information is carried from the previous time step to the current one, and the reset gate determines how much of the previous state is written to the current candidate set.
Meanwhile, the GRU also involves three kinds of values: the input value, the output value, and the candidate set. The particular GRU architecture used in AD-charCGNN is shown in Fig. 8.
With T_t defined as the input T at time t, the update gate u_t and the reset gate r_t are calculated as
u_t = σ(W_uh h_{t−1} ⊕ W_ut T_t + b_u),
r_t = σ(W_rh h_{t−1} ⊕ W_rt T_t + b_r),
where u_t and r_t are the states of gate_u and gate_r, respectively, σ is the sigmoid function, W_uh, W_ut, W_rh, and W_rt are parameters to be learned, h_{t−1} is the state passed from the previous time step, b_u and b_r are the offsets of the update gate and reset gate, and ⊕ is again the vector connector. The candidate set h̃_t at time t is calculated as
h̃_t = tanh(W_h̃ * (r_t ⊙ h_{t−1} ⊕ T_t) + b_h̃),
using the tanh activation function, where W_h̃ is the parameter that the candidate set needs to learn, b_h̃ is the offset of the candidate set, and * represents the matrix product.
At the same time, h_t is updated as
h_t = u_t ⊙ h_{t−1} + (1 − u_t) ⊙ h̃_t.
Finally, the output Y_t of the output layer at time t is
Y_t = W_y h_t + b_y,
where W_y is the parameter to be learned and b_y is the offset of the output layer.
In the AD-charCGNN model, the last output sequence of the GRU is selected as the output Y. Y is passed through dropout and a ReLU activation to obtain Y*, which serves as the input of the output layer.
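For orientation only, a compact end-to-end sketch of a charCNN-plus-GRU pipeline written with the Keras API is given below. The hyperparameters (embedding size, number of filters, pooling width, GRU units, dropout rate) are placeholders rather than the values of Table 3, and local max pooling is used here so that the GRU still receives a sequence.

import tensorflow as tf

vocab_size, num_classes = 5000, 10   # assumed sizes for illustration

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),                      # char ids -> dense vectors
    tf.keras.layers.Conv1D(256, kernel_size=3, activation='relu'),  # local (spatial-domain) features
    tf.keras.layers.MaxPooling1D(pool_size=3),                      # keep salient responses
    tf.keras.layers.GRU(128),                                       # contextual (time-domain) features
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),       # class probabilities
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])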
4) CLASSIFIER FOR CLASSIFICATION
After the preceding parts of the network, the feature vectors are passed to a classifier to predict the class probabilities of the financial texts. The chosen classification function is softmax, which converts the output values into relative probabilities and predicts the probability of each class:
y_i = exp(z_i) / Σ_j exp(z_j),
where z is the vector of output values. Finally, the class with the maximum probability in y is selected as the predicted label of the final output.
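A numerically stable version of this softmax step, written as a standalone sketch rather than the authors' code:

import numpy as np

def softmax(z):
    z = z - z.max()            # shift logits for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(np.array([2.0, 0.5, -1.0]))
predicted_label = int(np.argmax(probs))   # the maximum-probability item is the output label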
IV. EXPERIMENTAL COMPARISON AND ANALYSIS
A. THE EFFECTIVENESS EXPERIMENT OF THE AD-CHARCGNN NETWORK
As is well known, a deep learning model can yield different results depending on factors such as the hardware environment and parameter tuning. Therefore, to confirm the effectiveness of the model proposed in this paper, and to ensure that the state-of-the-art comparison models achieve their best performance, we first conducted a comparison on several public datasets and compared against the optimal accuracy reported in the original references of the comparison models.
The public datasets used in the effectiveness experiment are listed below: (1) Reuters-21578: a corpus that is often used for text classification and related research. Documents in Reuters-21578 were marked up with SGML tags, and a corresponding SGML DTD was produced. It is available from David D. Lewis' professional home page, currently: http://www.research.att.com/∼lewis.
(2) MR: Movie Review Dataset [31]. This dataset contains movie reviews and their associated binary sentiment polarity labels. No more than 30 reviews are included for any given movie, because reviews of the same film tend to have correlated ratings. The overall distribution of labels is balanced.
(3) SST: Stanford Sentiment Treebank Dataset [32]. The dataset was published by the NLP group at Stanford University. It is a standard emotion dataset, mainly used for emotion classification, in which each node of the sentence analysis tree has fine-grained emotion annotation.
The state-of-the-art comparison models used in the effectiveness experiment are listed below: (1) CNN: Convolutional Neural Network [16]. A model in which all words are randomly initialized and then modified during training.
(2) DCNN: Dynamic Convolutional Neural Network [33]. This network can use dynamic k-max pooling, a global pooling operation over linear sequences, and can also handle input sentences of varying lengths.
(3) RNN: Recurrent Neural Network [17]. The multitask learning framework is used to jointly learn across multiple related tasks.
(4) LSTM: The LSTM model uses the last hidden state as the representation of the whole text [34].
(7) TextGCN: Text Graph Convolutional Network [34]. It is initialized with a one-hot representation for each word and document; it then jointly learns the embeddings of both words and documents, supervised by the known document class labels.
(8) DBN: Deep Belief Network [36]. After the feature extraction with DBN, softmax regression is employed to classify the text in the learned feature space.
With accuracy (%) as the experiment standard, the comparison results are shown in Table 1.
For the Reuters dataset, TextGCN performs best, with an accuracy of 97.07%, while the network proposed in this paper reaches 96.31%, which is 0.76 percentage points lower than TextGCN. The reason can be analyzed from the dataset itself. In Reuters the texts are divided into 8 classes, but the number of texts per class is unbalanced; for example, fewer than 100 texts are labeled "grain." The network proposed in this paper is complex and deep, which inevitably leads to poor learning when the amount of data is small. This is a common shortcoming of complex neural networks.
The same problem appears on SST. SST is a sentiment dataset: SST-1 denotes the five-class split and SST-2 the two-class split. As shown in Table 1, the accuracy of charCGNN on SST-1 is 47.44%, which is 2.16% lower than RNN. This result is as expected, and the reason is again the small amount of data per class. When SST is divided into two classes, both the positive and negative labels have sufficient data, and Table 1 shows that the accuracy of charCGNN on SST-2 is 88.06%, higher than the other models.
The MR dataset is also divided into positive and negative labels. The labels are evenly distributed and there is enough data for the network to learn, so the accuracy of charCGNN is 77.82%, again higher than the other models. We do not deny that the network learns incompletely when the data are insufficient; but when there is enough data, the network performs well.
In summary, the network proposed in this paper is effective. This validated network is then used to classify financial texts in finer detail.
B. CONSTRUCTION AND DIVISION OF FINANCIAL DATASET
In most datasets, the differences between broad classes from different domains are obvious, which makes such texts easy to classify. The classification of fine-grained subclasses within the same domain, however, is rarely addressed. On the one hand, the differences among subclasses in the same field are smaller than the differences among broad classes from different fields, so classification is harder. On the other hand, most current text classification work targets public datasets of broad classes, and few studies focus on classification within a specific subject area. Subcategory text classification in a specific subject area can help people working in that area to sort different kinds of information. In the financial field, subcategory text classification can help individuals or institutions obtain more detailed news and make better decisions. For professional financial experts in particular, classified, detailed, and reliable financial texts make it easier to follow current research, anticipate possible future directions, and comprehensively understand financial information on the Internet.
Because no suitable publicly available financial dataset of this kind exists, financial news covering 10 subclasses was crawled from SouthMoney and Hexun to build the dataset. SouthMoney (http://www.southmoney.com) is a well-known comprehensive financial and economic website in China; it covers the financial field with authoritative industry analysis and multidirectional information, has 30 million users, and is growing by 10,000 users a day. Hexun (http://www.hexun.com) is the first vertical financial information website in China and is representative of professional, high-end, quality financial websites. It works with many banking institutions, fund companies, and media outlets, and has also worked with Thomson Reuters, which is the world's largest provider of financial information data and analytics.
The financial dataset contains 10 subclasses: (1) Insurance: It is an important pillar of the financial system and the social safety net. It contains insurance industry dynamics, as well as related views and comments.
(2) Stock Market: It is one of the main long-term credit instruments in the capital market and an indispensable part of the financial area. It includes the expert interpretation of the stock market, also includes the individual stock, state shares, and other kinds of news about the stock.
(3) Companies: It includes the company's hot news, the company's operations, and the impact of national policies on the company.
(4) Funds: It includes fund information, fund analysis, fund knowledge, fund assessment, and other fund news.
(5) Futures: It includes oil, coal, gold, and other futures trend analysis news, also includes all kinds of futures operation strategy, market preview, and other relevant news.
(6) Automobiles: It includes dynamic news of all kinds of auto brands, such as the release of new cars, the sales volume of cars, and so on.
(7) Foreign Exchange: It includes the currency situation of each country, including but not limited to the RMB, US dollar, pound sterling, euro, yen, and so on.
(8) Trust: It is a crucial part of the modern financial system. It contains the information of trust products, the dynamics of trust industry, the comments of the trust study, and other relevant news.
(9) Banks: It includes the deposit interest rate of each bank, loan analysis, industry policy, and other relevant contents.
(10) Bonds: It includes all kinds of news related to bond buybacks, bond knowledge, bond prices, and so on.
The overall distribution of labels is balanced. There are 6500 texts of each class, for a total of 65,000. The specific division of the dataset is shown in Table 2.
C. THE SETTING OF FINANCIAL EXPERIMENTAL PARAMETERS
The dataset used in this financial experiment is the professional financial dataset described above. According to the crawled results, the financial texts on the Internet vary considerably in length: they can reach 1000 words, but some contain only about 100 words. First, to distinguish this task from short-text (100 to 200 words) classification, text parts of 250/300/350/400 words were tested. The results show that the classification accuracy is highest for 400-word parts and lowest for 250-word parts. The subsequent parameter tuning uses 250-word text parts as the input part of each article in the experiment. When an article is shorter than 250 words, the missing positions are filled with the number "0".
The parameters of the AD-charCGNN network are set as in Table 3 after several rounds of tuning. The "Max of Total Batch" setting is used to prevent overfitting: if the accuracy on the validation set does not improve for a long time, that is, for more than 1000 rounds, training is terminated early.
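A minimal sketch of this early-stopping rule; evaluate_on_validation is a stand-in for a real validation pass, and the patience value of 1000 follows the description above.

import random

def evaluate_on_validation():
    # placeholder standing in for a real validation-accuracy computation
    return random.random()

best_val_acc, last_improved, patience = 0.0, 0, 1000

for total_batch in range(100000):
    # ... one training batch would run here ...
    val_acc = evaluate_on_validation()
    if val_acc > best_val_acc:
        best_val_acc, last_improved = val_acc, total_batch
    if total_batch - last_improved > patience:
        # no improvement for more than 1000 rounds: terminate training early
        break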
The neural network uses the cross-entropy error as its loss function. Cross entropy is commonly used as a loss function in neural network classification; it measures the discrepancy between the predicted class probabilities and the true class.
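For reference, the categorical cross-entropy between a predicted probability vector and a one-hot true label can be computed as in the following standard sketch:

import numpy as np

def cross_entropy(p_pred, y_true):
    """p_pred: predicted class probabilities; y_true: one-hot encoded true class."""
    eps = 1e-12                                # avoid taking log(0)
    return -float(np.sum(y_true * np.log(p_pred + eps)))

loss = cross_entropy(np.array([0.7, 0.2, 0.1]), np.array([1.0, 0.0, 0.0]))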
D. COMPARISON AND ANALYSIS OF FINANCIAL EXPERIMENTAL RESULTS
The dataset used is the Chinese financial text dataset, and the inputs are text parts. The baselines for the comparison experiments are as follows: (1) LR: Logistic Regression. Logistic regression models the relationship between a categorical dependent variable and one or more independent variables.
(2) NB: Naive Bayes. A classification technique based on Bayes' theorem that assumes a particular feature in a class is independent of the other features.
(3) RF: Random Forest. An ensemble learning method based on bagging.
(4) Xgboost: Another tree-based ensemble model, which boosts weak learners into a strong learner.
With Accuracy (ACC) as the experiment standard, the comparison results are shown in Table 4.
In Table 4, "Count Vectors" means that the feature engineering for text processing uses count vectors, "Word Level TF-IDF" means word-level TF-IDF features, and "N-Gram Vectors" means n-gram-level frequency features. Table 4 shows that the neural network proposed in this paper can effectively classify subclasses in the financial field. Compared with traditional machine learning methods, AD-charCGNN performs well with an accuracy of 96.45%, which is 1.33% higher than charCNN. Both AD-charCGNN and charCNN capture more key features through convolution than the traditional algorithms and therefore achieve higher accuracy; in particular, when the input is only a part of the text, the traditional algorithms cannot extract the crucial information from the incomplete text. charCNN and AD-charCGNN are both neural network models, but charCNN is in essence still a convolutional neural network with a fixed filter window, which limits its ability to capture the full context. AD-charCGNN combines CNN and GRU: it performs convolution in the spatial domain and also takes the context, which belongs to the time domain, into consideration. In other words, the proposed network convolves the input text with a convolution template to extract important features, an operation in the spatial domain; at the same time, because each text has content before and after the current position, the GRU treats the context of the text as a time series, an operation in the time domain. The network can therefore obtain information in a relatively complete way and achieve a better classification result.
The next analysis visualizes the accuracy and loss values. As the number of iterations increases, the training-set accuracy of the AD-charCGNN model is shown in Fig. 9 and the training-set loss in Fig. 10; the validation-set accuracy is shown in Fig. 11 and the validation-set loss in Fig. 12. No overfitting is observed in this model: overfitting is considered to occur when a model fits the training data well but cannot fit data outside the training set. As shown in Figs. 9-12, the accuracy of both the training and validation sets fluctuates around 96%, and the loss fluctuates around 0.14. Compared with the training set, the accuracy and loss curves of the validation set are smoother and more stable, so it can be concluded that the model is not overfitted.
At the same time, the experiment also uses precision, recall, and F1-score to evaluate the performance of AD-charCGNN in each class [37], as is shown in Table 5 and Fig. 13.
As shown in Fig. 13, the network performs well on the Insurance, Stock Market, Foreign Exchange, and Trust classes, while the scores for the Companies subclass are relatively low. The reason can be analyzed from the news content itself. Take Stock Market news as an example: its focus is very clear, and when an article is written from the stock market perspective its content revolves around stocks, such as the rise and fall of the market. The focus of Companies news, by contrast, can be unclear; an article about a company may also cover what has happened to the company's funds or whether the company has good credit with its bank. As a result, texts in the Companies subclass may be mistakenly assigned to other subclasses. On the whole, however, the precision, recall, and F1-score of all classes are above 0.9, and the proposed neural network achieves good results.
V. CONCLUSION
The classification of financial texts is an indispensable step to take advantage of financial news. But the quality of Internet financial texts is uneven. To classify incomplete information, the AD-charCGNN network is proposed.
First, all characters in the financial texts are read into the network to build a character-level vocabulary of the financial dataset we created. Second, the part of each text to be classified is mapped to a high-dimensional spatial vector based on this vocabulary. The vectors are then convolved in the spatial domain to obtain local text features, and these features are processed by the gated recurrent units to obtain features that carry temporal information. Finally, the features are classified with a softmax function to produce the text classification results. A text classification algorithm that uses only a part of the text is better suited to practical applications, and for financial texts, which may contain Chinese, English, digits, and other types of characters, the character-level network makes it easier to process such mixed text.
The dataset used in this paper is based on Chinese financial text. In the future, the dataset will be extended to professional texts in other fields or other languages, which would make it possible to build a cross-language text classification model. WENJIE ZHAO was born in Tangshan, Hebei, China. She received the B.E. degree from the College of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, Shanxi, in 2014. She is currently pursuing the master's degree with the Shanghai University of Engineering Science, Shanghai, China. Her major research interest includes natural language processing.
"Computer Science",
"Business"
] |
Role of pulsatility on particle dispersion in expiratory flows
Expiratory events, such as coughs, are often pulsatile in nature and result in vortical flow structures that transport respiratory particles. In this work, direct numerical simulation (DNS) of turbulent pulsatile jets, coupled with Lagrangian particle tracking of micron-sized droplets, is performed to investigate the role of secondary and tertiary expulsions on particle dispersion and penetration. Fully developed turbulence obtained from DNS of a turbulent pipe flow is provided at the jet orifice. The volumetric flow rate at the orifice is modulated in time according to a damped sine wave, thereby allowing for control of the number of pulses, duration, and peak amplitude. Thermodynamic effects, such as evaporation and buoyancy, are neglected in order to isolate the role of pulsatility on particle dispersion. The resulting vortex structures are analyzed for single-, two-, and three-pulse jets. The evolution of the particle cloud is then compared to existing single-pulse models. Particle dispersion and penetration of the entire cloud are found to be hindered by increased pulsatility. However, the penetration of particles emanating from a secondary or tertiary expulsion is enhanced due to acceleration downstream by vortex structures.
I. INTRODUCTION
Particle transport by turbulent free-shear jets plays a crucial role in many engineering and environmental applications. For example, atomization of liquid fuels leads to complex droplet size distributions and dispersion patterns that strongly influence internal combustion engine efficiency, 1 while pyroclastic density currents, generated by large density differences between gas-particle mixtures, feed explosive volcanic eruptions. 2 Of particular importance during the COVID-19 pandemic is the transmission of liquid droplets and aerosols (referred to interchangeably as particles herein) due to coughing, sneezing, or continuous speech. 3,4 Recent studies considering the airborne transmission of COVID-19 have largely determined that the prescribed social distancing rules may not be sufficient to protect against host-to-host transmission. 5-8 Additionally, when environmental factors such as temperature, humidity, and wind speed are incorporated, aerosols are seen to disperse and travel far beyond the typical 6 ft social distancing guideline. 9,10 Accurately describing particle dispersion from expiratory events is a critical aspect of defining physics-informed guidelines for social distancing best practices. While remarkable insight has been gained from analytical, 11,12 experimental, 13,14 and computational 14-18 works, the vast majority of studies are restricted to single expulsion events. However, realistic coughing is often characterized by multiple expulsions that lead to vortex-vortex interactions, which can have significant consequences on particle dynamics 19 (see Fig. 1).
Experimental measurements have demonstrated that realistic coughs are pulsatile, involving a sequence of coughing events, sometimes referred to as "cough epochs." 21,22 The flow rate associated with a typical human cough is shown in Fig. 1. Multiple pulses are observed over a duration of approximately 1 s, with the peak amplitude occurring at 0.1 s. 20 Gupta et al. 23 experimentally characterized the flow dynamics of coughs from human subjects, showing that the flow rate variation of a cough with time can be defined as a combination of gamma probability functions. While single-pulse expiratory events have been well-studied, the influence of pulsatility on both particle and fluid physics has received significantly less attention.
To understand the influence of pulsatility, it is important to first consider the modes of particle generation and sites of origination 24 during speech, coughing, or sneezing: bronchiolar (droplet size <1-5 μm), laryngeal (5-20 μm), and oral (>50 μm). 25 If the site of severe infection is deeper in the lungs (bronchiolar), the expelled aerosols/droplets are generated by a "fluid-film burst" mechanism 26 during the collapse and reopening of the small airways, resulting in small particles. Since several respiratory infections, including H5N1 and SARS-CoV, replicate primarily in the bronchioles and alveoli, 24,27 the aerosols/droplets generated in the lower airways are likely to contain higher doses of virus particles. In this case, secondary and tertiary pulses of a multipulse cough will expel a volume of air originating predominantly from deeper within the lungs and are expected to contain a higher concentration of virus particles. As such, based on the sites of severe infection in the respiratory tract, we hypothesize that the volume of air expelled by secondary and tertiary pulses could contain a higher viral load (illustrated in Fig. 1) and that the resulting vortex-vortex interactions could significantly influence the dispersion of these more infectious particles.
It is now well established that interactions between turbulence and particles can give rise to preferential concentration, which describes the accumulation of particles away from highly vortical regions of the turbulent flow. [28][29][30][31][32] When the Stokes number, defined as the ratio of particle-to-fluid time scales, is near unity, particles are directed by coherent vortical structures to create nonhomogeneities in concentration and the onset of clusters. Large-scale velocity gradients present in free-shear flows affect the transport of small (Kolmogorovscale) heavy particles and the clustering process at small scales. 33,34 Gualtieri et al. 35 showed that free-shear flows generate anisotropic velocity fluctuations which, in turn, arrange particles in directionally biased clusters. In the presence of gravity, preferential concentration by turbulence has been observed to cause particles to further accumulate near the downward moving side of vortices, referred to as preferential sweeping. [36][37][38] The gravitational settling of aerosol particles can be enhanced by this mechanism by as much as 50%. 37 Turbulent transport in statistically stationary, axisymmetric, free jets has been well characterized experimentally [39][40][41] and numerically. [42][43][44] Chein and Chung 19 demonstrated that particles with relatively small Stokes numbers disperse laterally at approximately the same rate as fluid particles, while particles with larger Stokes numbers exhibit significantly less dispersion. In particular, particles with intermediate Stokes numbers are transported laterally farther than fluid particles due to enhanced entrainment by vortex structures. Shortly after, Longmire and Eaton 45 showed that particles become clustered in the saddle regions downstream of vortex rings and are propelled away from the jet axis by the outwardly moving flow. More recently, direct numerical simulations (DNS) of particle-laden round jets by Li et al. 46 showed that all particles, regardless of their size, tend to preferentially accumulate in regions with larger-than-mean fluid streamwise velocity. Particle dispersion was found to be directly associated with three-dimensional vortex structures. While incredibly valuable, the aforementioned studies are restricted to jets with inflow characteristics that remain constant in time. By contrast, the transient characteristics of turbulent pulsatile jets are far less understood.
In this work, a realistic human cough is investigated computationally through DNS of pulsatile, turbulent, particle-laden jets. Fully developed turbulence is provided at the orifice exit (mouth) using data obtained from an auxiliary simulation of turbulent pipe flow. The flow rate of the incoming turbulence is modulated in time according to a prescribed profile that controls the number of pulses, its duration, and peak amplitude. Particles are seeded in the flow with diameters sampled from a lognormal distribution informed by experimental measurements from the literature. Two-phase statistics, in particular fluid entrainment and particle evolution, are then reported for each case, with emphasis on the effect of pulsatility on the resulting vortex structures and particle dispersion.
II. SIMULATION DETAILS
A. Flow configuration
The present work considers a three-dimensional pulsatile jet laden with liquid droplets expelled into an ambient surrounding. Particles are considered to be well characterized as water droplets, and thus their density is held constant at ρ_p = 998 kg/m³. The fluid is considered to be air with a density of ρ = 1.172 kg/m³ and kinematic viscosity of ν = 1.62 × 10⁻⁵ m²/s. The diameter of the orifice exit (mouth) is taken to be D = 0.02 m. A Cartesian domain with lengths in the x (streamwise), y (spanwise, gravity-aligned), and z (spanwise) directions of L_x = 40D, L_y = 20D, and L_z = 20D, respectively, is used (see Fig. 2). The domain is discretized using N_x = 1024 and N_y = N_z = 420 grid points, with exponential grid stretching in the y- and z-directions. The spanwise grid spacing varies between 4.98 × 10⁻⁴ m ≤ Δy, Δz ≤ 2.1 × 10⁻³ m such that the minimum grid spacing at the jet centerline is D/40. Previous work has shown this level of resolution is sufficient for free-shear jets at similar Reynolds numbers. 46 A Dirichlet boundary condition is enforced at the jet inlet, a convective outflow is enforced at the downstream boundary, and all other boundaries are treated as slip walls. To prevent fluid recirculation within the computational domain, a coflow is introduced along the positive x-direction with a velocity magnitude of 0.32 m/s. The coflow is approximately 7% of the peak inflow velocity U_0 and was observed to have a negligible effect on the particle dynamics.
B. Pulsatile inflow
Fully developed turbulence is fed into the jet orifice using an auxiliary simulation of a turbulent pipe flow. The auxiliary simulation was performed using 256 grid points across the diameter with a bulk velocity of U_0 = 4 m/s (a typical peak velocity associated with expiratory events 13,47), corresponding to a bulk Reynolds number Re_b = U_0 D/ν = 4938. Further details on the pipe flow simulation are provided in the Appendix. Here, we note that the turbulent pipe simulation is statistically stationary and evolves to a constant bulk velocity U_0 defined by an imposed pressure gradient. To obtain a pulsatile turbulent inflow in the main simulation, the fluid velocity at the jet inlet u(x = 0, y, z, t) is adjusted dynamically to control the volumetric flow rate in time. Building upon the experimental observations of Gupta et al., 23 we propose a self-similar profile for the bulk velocity resulting from multiple expulsions.
The proposed functional form for the pulsatile volumetric flow rate Q(t) is a damped sine wave,
Q(t) = A U_0 e^(−t/τ) |sin(ωt)|,   (1)
where A = πD²/4 is the area of the orifice exit. The fluid velocity is read in from the auxiliary pipe flow simulation and rescaled such that ∫ u(x = 0, y, z, t) · n dy dz = Q(t), where n = [1, 0, 0]^T is the outward surface normal. In the present study, we consider three profiles corresponding to one, two, and three pulses (see Fig. 3). The relaxation time is chosen to be τ = [0.63, 0.42, 0.36] s, and the frequency is ω = [7.18, 10.77, 12.57] s⁻¹. For each case, the maximum velocity of exhaled airflow occurs at approximately 100 ms, consistent with measurements of coughing from human subjects. 23 The total duration of each profile varies, but the inputs are defined to yield the same volume of expelled air so that a fair comparison can be drawn between cases with different numbers of pulses.
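The modulation can be reproduced with a few lines of Python; the reconstruction of Eq. (1) as Q(t) = A U_0 e^(−t/τ)|sin(ωt)| is an assumption consistent with the momentum profile quoted in Sec. III C, and the relaxation times and frequencies are those listed above.

import numpy as np

D, U0 = 0.02, 4.0                      # orifice diameter [m] and peak bulk velocity [m/s]
A = np.pi * D**2 / 4.0                 # orifice area

def flow_rate(t, tau, omega):
    # pulsatile volumetric flow rate, Eq. (1)
    return A * U0 * np.exp(-t / tau) * np.abs(np.sin(omega * t))

t = np.linspace(0.0, 1.0, 2000)
cases = {"one-pulse": (0.63, 7.18), "two-pulse": (0.42, 10.77), "three-pulse": (0.36, 12.57)}
Q = {name: flow_rate(t, tau, w) for name, (tau, w) in cases.items()}
# the expelled volumes should be comparable between the three cases
volumes = {name: float(np.sum(q) * (t[1] - t[0])) for name, q in Q.items()}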
C. Particle injection
To accurately characterize the particle size distribution generated by coughing, we employ a lognormal distribution fit to the experimental measurements of Duguid. 48 The particle diameter ranges between 1 ≤ d_p ≤ 100 μm with a mean of 24 μm and a standard deviation of 17.9 μm, as shown in Fig. 4. At each simulation time step, particles are introduced at the inflow plane by assigning them a random position within the orifice and a diameter sampled from this lognormal distribution. The number of particles per time step is adjusted dynamically to achieve the same mass flow rate used for the fluid. Given the prescribed expulsion volume and particle size distribution, approximately 15 000 particles are generated by the end of a coughing spell, representative of the typical quantity observed in experiments. 49 For a three-pulse case, this corresponds to roughly 8200, 4600, and 2200 particles being injected during the first, second, and third pulses, respectively.
FIG. 4. Lognormal size distribution used to sample particle diameters in the DNS for the pulsatile jet (red line) and experimental measurements by Duguid 48 for droplets generated in realistic coughs. Color shading denotes three size ranges corresponding to small (green), intermediate (blue), and high (red) Stokes numbers.
The turbulence Stokes number, St_η = τ_p/τ_η, may be used to gauge the role of particle inertia, where τ_p = ρ_p d_p²/(18 ρ ν) is the particle response time, τ_η = (ν/ε)^(1/2) is the Kolmogorov time scale of the fluid phase, and ε is the turbulence dissipation rate. When analyzing the results in Sec. III, particles are demarcated into three size ranges, d_p ∈ [1, 30], [30, 60], and [60, 100] μm, which yield the following Stokes number ranges: St ∈ [0.0054, 4.87], [4.87, 19.50], and [19.50, 54.16]. We note that the Stokes numbers are defined using values taken at the orifice exit and therefore will decay as particles evolve in time and downstream. Nevertheless, these Stokes number ranges provide insight into the relative inertia of the fluid and particle phases. Specifically, particles behave ballistically in the large Stokes limit but act as fluid tracers in the small Stokes limit.
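A short sketch of how diameters and Stokes numbers consistent with these values could be generated; the lognormal parameters are converted from the quoted mean and standard deviation, and the Kolmogorov time is back-solved from the quoted Stokes-number ranges rather than taken directly from the paper.

import numpy as np

rng = np.random.default_rng(42)

# lognormal parameters reproducing a mean of 24 um and a standard deviation of 17.9 um
m, s = 24e-6, 17.9e-6
sigma2 = np.log(1.0 + (s / m) ** 2)
mu = np.log(m) - 0.5 * sigma2
d_p = np.clip(rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=15000), 1e-6, 100e-6)

rho_p, rho_f, nu = 998.0, 1.172, 1.62e-5
tau_p = rho_p * d_p**2 / (18.0 * rho_f * nu)   # particle response time
tau_eta = 5.4e-4                               # Kolmogorov time [s] inferred from the quoted St ranges
St = tau_p / tau_eta                           # turbulence Stokes number at the orifice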
In real expiratory events, exhaled particles will exhibit a distribution of velocities that may deviate from the local airflow due to complex interactions in the upper respiratory tract. Motivated by the fact that the majority of particles lie within the first size bin, where Stokes numbers are small (St ∈ [0.0054, 4.87]), we treat the particles as fluid tracers at the orifice exit and specify their initial velocity to be the fluid velocity interpolated at the particle position. Further details are provided in the Appendix. We emphasize that this assumption of zero interphase slip velocity at the orifice exit is a significant assumption made within the present work. Future experimental studies will be required to determine the extent to which this assumption is valid.
D. Governing equations
The simulations are solved in an Eulerian-Lagrangian framework, where individual particles are treated in a Lagrangian manner and the gas phase is solved on a background Eulerian mesh. Due to the low concentrations considered in this study, volume fraction effects and two-way coupling between the phases are neglected. The governing equations for the incompressible carrier phase are
∇ · u = 0,
∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ν∇²u + g,
where u = [u, v, w]^T is the fluid velocity, p is the hydrodynamic pressure, and g = [0, −g, 0]^T is the gravitational acceleration with g = 9.8 m/s². The equations are implemented in the framework of the NGA code. 50 The Navier-Stokes equations are solved on a staggered grid with second-order spatial accuracy for both convective and viscous terms, and the semi-implicit Crank-Nicolson scheme is used for time advancement, maintaining overall second-order accuracy.
Particles are treated in a Lagrangian manner, where the translational motion of an individual particle "i" is governed by
dx_p^(i)/dt = v_p^(i),
m_p dv_p^(i)/dt = f_drag^(i) + V_p ∇ · τ[x_p^(i)] + m_p g,
where x_p^(i) and v_p^(i) = [u_p, v_p, w_p]^T are the instantaneous particle position and velocity, respectively, and m_p = ρ_p π d_p³/6 is the particle mass. Here τ[x_p^(i)] is the resolved fluid stress at the particle location, V_p is the particle volume, and f_drag^(i) accounts for the unresolved stress due to drag. In this work, the classic Schiller and Naumann drag correlation 51 is used to account for finite Reynolds number effects,
f_drag^(i) = (m_p/τ_p)(1 + 0.15 Re_p^0.687)(u[x_p^(i)] − v_p^(i)),
where u[x_p^(i)] is the fluid velocity at the location of particle "i" and Re_p = ‖u[x_p^(i)] − v_p^(i)‖ d_p/ν is the particle Reynolds number. The particle equations are advanced in time using a second-order Runge-Kutta scheme.
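A minimal point-particle update with the Schiller-Naumann correction is sketched below; a forward-Euler step replaces the second-order Runge-Kutta scheme for brevity, the resolved-stress term is omitted, and the fluid velocity seen by the particle is a placeholder rather than an interpolated DNS value.

import numpy as np

rho_p, rho_f, nu, g = 998.0, 1.172, 1.62e-5, 9.8

def drag_acceleration(u_fluid, v_p, d_p):
    # Schiller-Naumann corrected Stokes drag acceleration on a single particle
    tau_p = rho_p * d_p**2 / (18.0 * rho_f * nu)
    slip = u_fluid - v_p
    Re_p = np.linalg.norm(slip) * d_p / nu       # particle Reynolds number
    return (1.0 + 0.15 * Re_p**0.687) * slip / tau_p

def advance_particle(x_p, v_p, d_p, u_fluid, dt):
    a = drag_acceleration(u_fluid, v_p, d_p) + np.array([0.0, -g, 0.0])
    v_new = v_p + dt * a
    return x_p + dt * v_new, v_new

# toy usage: a 30-micron droplet released from rest into a 4 m/s streamwise flow
x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = advance_particle(x, v, 30e-6, np.array([4.0, 0.0, 0.0]), dt=1e-4)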
We briefly note that the present work does not consider thermodynamic effects, such as evaporation and buoyancy, in order to isolate the role of pulsatility on particle dispersion and minimize the parameter space under study. Thus, this study does not consider particle-particle interactions such as coalescence, and the size of each particle is held constant throughout the duration of the simulated cough. Recent experiments showed that thermal effects are small until the jet speed has decayed to ambient levels. 47 Thus, the present work focuses on the near-mouth region, where the unsteadiness of the expiratory event is expected to have a more pressing role on particle dynamics. In addition, it was recently demonstrated that preferential concentration can increase local humidity, which in turn prevents evaporation and extends the lifetime of droplets significantly (by as much as two orders of magnitude). 52
III. RESULTS AND DISCUSSION
A. Pulsatile free-shear jet
1. Flow visualization
Figure 5 shows visualizations of the single-, two-, and three-pulse jets at t = 0.75 s, immediately after the final pulse is complete (cf. Fig. 3). Inspection of the vorticity magnitude ‖ω‖, with ω = [ω_x, ω_y, ω_z]^T, reveals distinct differences between the three cases. Vortical structures are visualized using the Q-criterion, 53 defined from the second invariant of the velocity gradient tensor as
Q = (1/2)(‖Ω‖² − ‖S‖²),
where Ω = (∇u − ∇u^T)/2 and S = (∇u + ∇u^T)/2 are the antisymmetric and symmetric components of the velocity gradient, respectively. Physically speaking, the Q-criterion represents a local balance between shear strain and vorticity, with vortices being defined by regions where rigid-body rotation is greater than the rate of strain.
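The pointwise Q-criterion evaluation can be sketched as follows for a 3 × 3 velocity-gradient tensor; in the DNS, the gradients would come from finite differences of the resolved velocity field.

import numpy as np

def q_criterion(grad_u):
    # Q = 0.5 * (||Omega||^2 - ||S||^2) with ||.|| the Frobenius norm
    S = 0.5 * (grad_u + grad_u.T)        # rate-of-strain (symmetric part)
    Omega = 0.5 * (grad_u - grad_u.T)    # rotation tensor (antisymmetric part)
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))

# solid-body rotation gives Q > 0 (vortex core); pure strain gives Q < 0
grad_u = np.array([[0.0, 2.0, 0.0],
                   [-2.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(q_criterion(grad_u))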
To demonstrate the effect of pulsatility on the fluid phase, we first consider the location of vortical structures at the end of the third pulse (see Fig. 5). For the single-pulse case, a primary vortex ring structure is generated at the downstream edge of the jet while vorticity is minimal near the orifice exit. By contrast, the two-pulse and three-pulse cases exhibit multiple vortex ring structures, corresponding to the number of pulses, with comparatively higher regions of vorticity upstream near the orifice. Vortical structures in the near-orifice region, which are absent from the single-pulse case, will impact the transport of low inertia particles. Specifically, the higher vorticity levels observed in two- and three-pulse cases are expected to accelerate and entrain late-stage injected particles to a larger degree when compared to the single-pulse case. However, the strength of the leading vortex structure for two- and three-pulse cases is significantly attenuated from the single-pulse case. The role of pulsatility on particle dynamics is reserved for Sec. III B.
Entrainment
Entrainment of the surrounding air into the jet plays a key role in its transport properties. In the seminal paper by Morton et al., 54 it was suggested that entrainment, defined as the mean radial velocity at the edge of an axisymmetric boundary layer (in this case the edge of the jet), is proportional to the axial velocity, i.e., ⟨u_r⟩ = α⟨u⟩, where u_r = (vy + wz)/r is the radial velocity and r = (y² + z²)^(1/2) is the radial position. Due to the lack of statistical stationarity in the present configuration, angled brackets used herein denote an average in the circumferential direction but not in time. Pham et al. 55 suggested that the entrainment coefficient α can be estimated in terms of the momentum balance length scale δ. Taub et al. 44 showed that α is approximately constant when the jet has achieved self-similarity. For the pulsatile transient jets considered here, the centerline velocity ⟨u⟩|_{r=0} is zero near the orifice once the pulses are complete, resulting in an ill-defined α. To this end, we propose an alternative definition based on the jet bulk velocity U_0. The entrainment coefficient α as a function of streamwise location x/D at t = 1 s, 1.5 s, and 2 s for each case is summarized in Fig. 6. The corresponding specific momentum M = ∫ ⟨u⟩² r dr, normalized by the maximum value M_0 = ∫ U_0² r dr = 8 × 10⁻⁴ m⁴/s², is also reported to indicate the instantaneous location of each pulse. It can be seen that the single-pulse case generates significantly more momentum downstream at the jet front, while the two-pulse and three-pulse cases exhibit a wider distribution of momentum in the streamwise direction. In addition, the momentum profile is bi-modal for the two-pulse case and tri-modal for the three-pulse case at t = 1 s, where each peak coincides with the peak amplitude of each pulse. As a result, larger values of momentum are observed near the orifice for the two-pulse and three-pulse cases, which proceed to decay as the flow propagates downstream.
The entrainment coefficient is seen to be positive for all three cases at x/D < 20 when t = 1 s, which indicates a net entrainment of ambient air into the jet. Beyond this point (x/D > 20) α becomes negative, with the largest magnitude in concert with the first pulse. Note that unlike in the more traditional statistically stationary jet, where α is positive for all x, here the primary vortex ring generated by the first pulse induces a net momentum flux from the jet into the ambient air, resulting in α < 0. Comparing the three cases at t = 1 s, α is larger in the upstream region (x/D < 16) for the multipulse cases than for the single-pulse case due to the vortical structures generated by subsequent pulses. In addition, the three-pulse case exhibits significantly larger entrainment at the jet front, where the primary vortex ring resides.
Pulsatility is also observed to affect the late-time single-phase dynamics of the jet. Comparing Figs. 6(a)-6(c), the primary vortex of the two-pulse jet is seen to travel at a higher speed, followed by the single-pulse case and then the three-pulse case. This results in noticeable differences in the entrainment coefficient between the cases at late times. Sections III B-III C seek to understand how these combined effects influence particle entrainment and dispersion.
B. Role of pulsatility on particle dispersion
Visualizations of the particle cloud at three instances in time are shown in Fig. 7. Particles are colored by the pulse during which they were injected. At each time step, the total number of particles associated with each pulse, N_p(t), is identified and used to define the geometric center of the cloud according to x_c = (1/N_p) Σ_{i=1}^{N_p} x_p^(i). Qualitative differences in the cloud evolution can be seen between the three cases. Careful inspection of Fig. 7 (Multimedia view) reveals that the overall penetration of the cloud is slightly hindered with increased pulsatility, although this effect is minor. More pronounced is the penetration of particles associated with subsequent pulses. This is best seen in the three-pulse case, where particles emanating from the second pulse (colored blue) penetrate to the cloud front when t > 1.5 s, despite being injected with lower velocity. In addition, particles from the third pulse (colored red) in the three-pulse case penetrate nearly as far as particles from the second pulse in the two-pulse case. In summary, the geometric centers associated with particles of later pulses travel further downstream with increased pulsatility and advance upon particles injected earlier. The observed increase in penetration by secondary and tertiary pulses has important connotations, as later expulsions could contain higher viral concentrations depending on where in the respiratory tract the infection resides. Therefore, the aforementioned phenomena may prove significant when determining distances at which an infectious person can pose a risk to others, as the enhanced transport of high-viral-load droplets is expected to increase the probability of transmission.
Particle dispersion is characterized herein by the root mean square (rms) of lateral particle position,
z_rms = [(1/N_p) Σ_{i=1}^{N_p} (z_p^(i))²]^(1/2).
The temporal evolution of the cloud penetration, x_c, and dispersion, z_rms, associated with each pulse are shown in Figs. 8 and 9. During the early-stage injection, x_c and z_rms are seen to oscillate in the two- and three-pulse cases. Additionally, the final displacements of x_c and z_rms at t = 2 s are lower than those of the single-pulse case. These observations can be attributed to the decreasing volumetric flow rate between each pulse; as a consequence, the average injection velocity is lower compared to the single-pulse case. After the pulsatile injection completes, however, the streamwise displacement of the cloud associated with later pulses is seen to "catch up" with the earlier pulses despite being injected at a later time and with significantly lower velocity. For example, the geometric center of the second-pulse particles nearly coincides with that of the overall particle cloud for the three-pulse case at t = 2 s, indicating once again that the penetration of potentially more contagious particles from later pulses is accelerated by earlier pulses. On the contrary, such an effect of pulsatility is not observed for the lateral dispersion, for which particles from earlier pulses disperse further.
The velocity of the geometric center of each pulse in the three-pulse case, u_c = dx_c/dt, is shown in Fig. 10. Due to the finite inertia of the particles, u_c lags the fluid velocity during the early-stage injection. It can be seen that the peak velocity of the third pulse is only reduced by a factor of two compared to the first pulse, despite its injection velocity being reduced by a factor of four. In addition, the velocity associated with the second pulse exceeds the velocity of the first pulse when t > 0.3 s, further demonstrating the ability of particles injected at later times to catch up to particles near the front of the cloud.
Further downstream of the inlet, the velocities of each geometric center converge, consistent with observations from human subjects 47 that the unsteadiness of expiratory events diminishes far from the mouth. In studying the entrainment characteristics of particle-laden channel flows, Marchioli and Soldati 56 presented a framework highlighting the use of instantaneous joint correlations of non-vanishing components of the fluctuating velocity gradient tensor to determine locations of particle preferential concentration. In addition, the relationship between particle size and the corresponding fluid topology was exploited to identify particle preferential sampling. The present work extends these techniques to correlate the range of particle sizes seen in a pulsatile cough with their immediate fluid environment and vortex structures. This is accomplished by correlating the value of the Q-criterion against a particle's directional velocity (u_p, v_p, w_p) (see Fig. 11). The sign of Q_crit describes the fluid environment in which the particle currently resides, with Q_crit > 0 corresponding to regions of high vorticity, Q_crit ≈ 0 corresponding to regions of no flow or constant strain, and Q_crit < 0 corresponding to regions of high strain rate. The particle directional velocity describes the dispersive characteristics of the particles, with a large spread in velocity being associated with high directional dispersion.
Particles are grouped into three size ranges as depicted in Fig. 4. It is first observed that small particles (d_p ∈ [1, 30] μm) sample all values of Q_crit equally, exhibiting no preferential location with respect to fluid vortical structures. Mid-sized particles (d_p ∈ [30, 60] μm) tend to sample regions of high strain rate, indicating their ejection from vorticity-dominated regions, i.e., classical preferential concentration. Large particles (d_p ∈ [60, 100] μm) are seen to sample regions of constant strain (Q_crit = 0) as a consequence of falling out of the cloud due to gravity. Note that the distribution of mid-sized particles is skewed toward negative values of v_p, while this is not observed for small particles, indicating that gravity has an effect on the former but not the latter.
It can also be seen that the distribution of lateral velocity, w_p, associated with mid-sized particles is narrower in the cases with multiple pulses than in the single-pulse case. Specifically, mid-sized particles preferentially sample near-zero values of w_p over a wider range of Q_crit compared to the one-pulse case. The distribution in the gravity-aligned (y) direction remains unchanged between the three cases, which is expected since gravity plays a more important role for mid- and large-sized particles in this direction. For the scatter plots in the x-direction, although most particles in all three cases preferentially sample the right half plane (u_p > 0), since the net particle flux is positive in the streamwise direction, more mid-sized particles are seen to have negative u_p over a wider range of negative Q_crit.
These differences in x and z are likely a result of mid-sized particles being entrained by the vortices generated by the subsequent pulses, as they respond most effectively to turbulent eddies due to their intermediate Stokes numbers. Consequently, this also explains the decreasing penetration (x_c) and dispersion (z_rms) with increasing pulsatility, as seen in Figs. 8 and 9.
C. Theoretical modeling of respiratory emissions
Bourouiba et al. 13 proposed a theoretical model for the cloud penetration based on conservation of cloud momentum. Their experiments generally indicate that the cloud evolution has two phases. The first phase is dominated by jet-like dynamics, corresponding to the high-speed release of the fluid-particle mixture. In this phase, penetration is modeled as a function of the specific momentum flux M according to
x_c(t) = (4 C_M² M/α'₁²)^(1/4) t^(1/2),   (11)
where C_M is a constant coefficient of the first regime. The second phase is dominated by "puff-like" dynamics, characterized by the self-similar growth of the puff cloud. The puff penetration evolves as
x_c(t) = (4 C_I I/α'₂³)^(1/4) t^(1/4),   (12)
where C_I is a constant coefficient of the second regime and I is the total specific momentum of the cloud. Here, α'₁ and α'₂ are particle entrainment coefficients, which satisfy r_c = α'₁ x_c and r_c = α'₂ x_c in the respective regimes, where r_c = (1/N_p) Σ_i r^(i) is the geometric mean radius of the particle cloud.
The particle entrainment coefficients of the two regimes for all three cases are determined from the slope of r_c versus x_c, as shown in Figs. 12(a)-12(c).
The extracted value of the first-regime coefficient is comparable to that reported by Gupta et al. 23 (≈0.21). The entrainment coefficient of the second regime, α'₂, is observed to be larger and more sensitive to pulsatility than that of the first regime, α'₁, indicating that vortex interactions may be more significant in the puff-like regime. Penetration can be predicted by combining Eqs. (11) and (12) using these values of α'₁ and α'₂. In contrast to Bourouiba et al., 13 here M is time-varying due to the pulsatile nature of the expiratory flow. To this end, the time-averaged specific momentum M̄ is used in Eq. (11), where M is assumed to follow the pulsatile profile given by Eq. (1), i.e., M(t) = M₀|e^(−t/τ) sin(ωt)|. The coefficients C_I and C_M are determined by least-squares fitting of the puff regime and by solving (4 C_M² M̄/α'₁²)^(1/4) t_cr^(1/2) = (4 C_I I/α'₂³)^(1/4) t_cr^(1/4), respectively, with t_cr the intersection time of the two regimes.
The model predictions are displayed in Figs. 12(d)-12(f). It can be seen that the second, puff-like regime for all three cases scales as x_c ∼ t^(1/4). For the first, jet-like regime, however, the penetration profiles deviate from x_c ∼ t^(1/2) and instead exhibit oscillations for the pulsatile cases, which is not correctly captured by the model.
Here, we extend the model to account for pulsatility. Instead of applying the conservation law to the entire cloud, the particle entrainment coefficient of each pulse is extracted separately for the pulsatile cases. As shown in Figs. 13(a)-13(c), the particle cloud from each pulse follows its own two-stage evolution. In addition, α'₁ is observed to be similar between the different pulses, whereas α'₂ of the second and third pulses is significantly larger than that of the first pulse, indicating a larger dispersion for late-injected particles, as facilitated by the earlier pulses, even though the overall dispersion is hindered. Using these values of α'₁ and α'₂, the penetration of each pulse is then modeled by following the same procedure described earlier. Let x_c¹, x_c², and x_c³ denote the penetration of the first, second, and third pulses, and t_1, t_2, and t_3 denote the times when the first, second, and third pulses complete. The penetration of the entire cloud for the two-pulse case is then modeled as a weighted average of the penetrations of the individual pulses, and similarly for the three-pulse case.
Pulsatility is now incorporated in the model by explicitly superimposing the two-stage dynamics of each pulse. Figures 13(d)-13(f) show the corrected model predictions. It can be seen that the oscillatory trend of the first, jet-like regime is accurately predicted by leveraging the particle entrainment coefficients obtained from each pulse instead of from the entire particle cloud. Note that this modified model can be readily applied to different coughing profiles or extended to speech patterns (which are essentially a continuous train of expulsions47,57) to investigate other effects on particle penetration.
IV. CONCLUSIONS
In this work, direct numerical simulations of particle-laden turbulent pulsatile jets were conducted to assess the role of pulsatility on particle dynamics. Realistic turbulence was provided at the jet orifice using data obtained from an auxiliary simulation of a turbulent pipe flow. The flow rate of the incoming turbulence was modulated in time according to a damped sine wave that provides control over the number of pulses, their duration, and peak amplitude. Particles were injected in the flow with diameters sampled from a lognormal distribution informed by experimental measurements from the literature.
Vortex structures were analyzed for single-, two-, and three-pulse jets. Qualitative comparison of the Q-criterion revealed that the two-pulse and three-pulse cases exhibit multiple vortex-ring structures, with high-vorticity regions persisting for longer times near the orifice. Entrainment coefficients were found to be larger for the multi-pulse cases than for the single-pulse case due to the vortical structures generated by subsequent pulses, with their largest values occurring in concert with the pulses.
Particle dispersion and penetration were found to be hindered by increased pulsatility. However, particles emanating from later pulses traveled further downstream with increased pulsatility due to acceleration by vortex structures. The observed increase in penetration by later pulses may prove to be significant when determining the physical distances at which an infectious person can pose a risk to others, especially since later expulsions have been found to contain higher viral concentrations. Specifically, because particles from subsequent pulses were observed to penetrate the cloud front, this work recommends larger-timescale studies of realistic/pulsatile coughs to verify particle dispersion against single-pulse events. Additionally, measures that dampen the strength of vortex structures in a pulsatile cough, such as home-grade cloth masks, may be beneficial in reducing the penetration distances of subsequent pulses with potentially higher viral concentrations. As such, future work should focus on investigating the efficacy of common flow barriers when exposed to pulsatile particle-laden flows. The evolution of the particle cloud penetration was then compared to an existing single-pulse model by Bourouiba et al.13 While the penetration for all three cases is well predicted by the puff-like regime ($x_c \sim t^{1/4}$) at late time, the pulsatile cases deviate from the jet-like regime ($x_c \sim t^{1/2}$) at early time and instead exhibit oscillations. A modified model was therefore proposed to account for pulsatility by leveraging the particle entrainment coefficient of each pulse; it has been shown to accurately predict the oscillatory trend of the early-stage penetration.
ACKNOWLEDGMENTS
The computing resources and assistance provided by the staff of the Advanced Research Computing at the University of Michigan, Ann Arbor are greatly appreciated. We would also like to acknowledge the National Science Foundation for partial support from Award Nos. CBET 2035488 and 2035489.
APPENDIX: TURBULENT INFLOW GENERATION
To accurately model an inlet condition resembling the expiratory turbulent flow exiting from a human mouth, a direct numerical simulation (DNS) of single-phase flow through a cylindrical pipe is performed. The pipe diameter D = 0.02 m is representative of a typical mouth opening. The fluid-phase equations are discretized on a Cartesian mesh, and a conservative immersed boundary (IB) method is employed to model the cylindrical pipe geometry without requiring a body-fitted mesh. The method is based on a cut-cell formulation that requires rescaling of the convective and viscous fluxes in these cells and provides discrete conservation of mass and momentum.58,59 We consider a domain of size 10D × D × D, discretized using 326 × 256 × 256 grid points (see Fig. 14). The grid spacing is chosen such that $\Delta y^+ = \Delta z^+ = 1.25$ and $\Delta x^+ = 9.8$ to satisfy the resolution criteria of DNS for pipe flows,60,61 where $(\cdot)^+ = (\cdot)\,u_\tau/\nu$ denotes frictional wall units, with $u_\tau$ the friction velocity and $\nu$ the kinematic viscosity. Periodic boundary conditions are applied in the streamwise direction. A uniform source term resembling a mean pressure gradient is added to the right-hand side of Eq. (3) and adjusted dynamically to maintain the desired flow rate. The flow is initialized with a bulk velocity $U_0 = 4.0$ m/s with 10% sinusoidal fluctuations to accelerate the transient process. A statistically stationary state is reached after $240\,D/U_0$. A comparison of the velocity statistics against DNS data from the literature62 is provided in Fig. 15.
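As a rough consistency check of the quoted wall-unit spacings, the sketch below estimates the friction velocity from the Blasius correlation (an assumption made here only for illustration; the paper obtains the friction velocity from the DNS itself) and converts the grid spacings of the stated mesh to wall units.

```python
import numpy as np

# Assumed fluid/flow parameters
nu = 1.5e-5          # kinematic viscosity of air [m^2/s] (assumed)
U_b = 4.0            # bulk velocity [m/s]
D = 0.02             # pipe diameter [m]

Re = U_b * D / nu                # bulk Reynolds number
f = 0.316 * Re ** -0.25          # Darcy friction factor (Blasius correlation, assumed)
u_tau = U_b * np.sqrt(f / 8.0)   # friction velocity estimate

# Grid spacings of the 326 x 256 x 256 mesh on the 10D x D x D domain
dx = 10.0 * D / 326
dy = dz = D / 256

def to_plus(length):
    """Convert a physical length to wall units."""
    return length * u_tau / nu

print(f"Re = {Re:.0f}, u_tau ~ {u_tau:.3f} m/s")
print(f"dx+ ~ {to_plus(dx):.1f}, dy+ = dz+ ~ {to_plus(dy):.2f}")
```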
The velocity field at steady state is saved and used to prescribe the boundary condition in the main simulation. At each simulation time step, the velocity in the yz-plane is interpolated from the fine-scale auxiliary simulation onto the coarser domain boundary of the main simulation. The fluid velocity is then rescaled to achieve the desired time-dependent flow rate according to Eq. (1). The number of particles injected into the main simulation is adjusted each time step to obtain the same mass flow rate as the fluid. The velocity assigned to each particle is equal to the fluid velocity interpolated to its location. This assumes that particles expelled from the orifice have zero interphase slip velocity, and thus zero initial drag.
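A minimal sketch of the rescaling step is given below; the function names, orifice geometry, and the amplitude and time constants of the damped-sine flow-rate profile are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rescale_inflow(u_plane, mask, dA, Q_target):
    """Rescale the streamwise velocity on the inlet plane so that the flow rate
    integrated over the orifice equals Q_target.

    u_plane : 2D array of streamwise velocity interpolated from the pipe DNS
    mask    : boolean array marking cells inside the circular orifice
    dA      : cell face area of the (coarser) main-simulation boundary
    """
    Q_current = np.sum(u_plane[mask]) * dA
    if Q_current <= 0.0:
        return np.zeros_like(u_plane)
    return u_plane * (Q_target / Q_current)

def Q_of_t(t, Q0=1.0e-3, tau=0.2, omega=3.0 * np.pi):
    """Damped-sine flow-rate profile of Eq. (1); parameter values are illustrative."""
    return Q0 * abs(np.exp(-t / tau) * np.sin(omega * t))

# Example usage with a synthetic inlet plane standing in for interpolated DNS data
ny = nz = 64
y, z = np.meshgrid(np.linspace(-0.01, 0.01, ny), np.linspace(-0.01, 0.01, nz), indexing="ij")
mask = y**2 + z**2 <= 0.01**2                                        # circular orifice, D = 0.02 m
u_plane = np.where(mask, 4.0 + 0.4 * np.random.randn(ny, nz), 0.0)   # stand-in velocity field
dA = (0.02 / ny) ** 2
u_scaled = rescale_inflow(u_plane, mask, dA, Q_of_t(t=0.05))
```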
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 8,934.8 | 2021-02-28T00:00:00.000 | [
"Physics",
"Engineering"
] |
The pathogenesis of post-stroke osteoporosis and the role oxidative stress plays in its development
Cardiovascular disease and osteoporotic fractures (OF) are major diseases affecting the health of middle-aged and elderly people. With the progressive aging of the population in China and worldwide, the incidence of both conditions and the size of the high-risk groups continue to rise. The relationship between the two, especially the impact of cardiovascular disease on the risk and prognosis of OF, has attracted increasing attention. It is therefore of great significance to fully understand the pathogenesis of cardiovascular and cerebrovascular diseases and the resulting osteoporosis, and to provide targeted interventions to prevent the occurrence of disease and fractures. By reviewing relevant domestic and international literature from recent years, this article summarizes the relationship between stroke (one of the cardiovascular diseases), its related therapeutic drugs, and the risk of OF, as well as the role of oxidative stress in the underlying pathophysiological mechanisms, in order to provide a more comprehensive understanding of the association between stroke and OF and a basis for screening high-risk fracture groups and reducing the burden that the disease places on the health system.
JinYan Li 1, Lin Shi 2 and JianMin Sun 1,2*; 1 School of Clinical Medicine, Weifang Medical University, Weifang, China; 2 Weifang People's Hospital, Weifang, China. KEYWORDS: cardiovascular disease, stroke, bone mineral density, osteoporosis, oxidative stress, vitamin D
Highlights
The following criteria were used to select studies on the relationship between stroke and osteoporosis. Inclusion criteria: comparative studies including stroke patients and healthy controls; studies reporting bone mineral density (BMD) levels in the case and control groups at the onset of post-stroke osteoporosis; studies on the relationship between oxidative stress, stroke and osteoporosis; studies on the current status and prospects of post-stroke osteoporosis treatment; studies evaluating the correlation between stroke and osteoporosis; case-control studies published in Chinese or English. Exclusion criteria: Chinese- and English-language case reports, meeting minutes, abstracts, communications and news items; studies on maxillofacial osteoporosis; studies with incomplete data supporting their conclusions; outdated documents (where necessary).
Introduction
Stroke is the second leading cause of death in the world (1) and a major health problem that seriously harms human health. Stroke mortality increases rapidly with patient age. As shown in Table 1, Zhang (2) reported the incidence of osteoporosis in patients with and without stroke. The risk of fracture, disability and death is higher in elderly stroke patients. Studies have found that the fracture risk of the stroke population is four times higher than that of the non-stroke population, and according to statistics, 3-6% of stroke patients sustain fractures within 1 year after stroke (2). In addition, the increased risk of fractures after stroke has also been observed in other ethnic groups. Lisabeth et al. (3) found that, among non-Hispanic white people and Mexican Americans, the risk of fracture was increased by 3% at 1 year and by 10% at 5 years after stroke. Benzinger et al. (4) found in a German cohort study that the risk of OF after stroke was higher: the fracture incidence density was 21.4/1000 person-years in non-stroke patients and 33.6/1000 person-years in stroke patients. It can be seen that cerebrovascular disease (stroke) is closely related to OF, and the risk of OF may be further increased after stroke.
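As a simple back-of-the-envelope check (not reported in the cited study), the incidence densities from the German cohort imply roughly a 1.6-fold higher fracture rate after stroke:

```python
# Incidence densities reported for the German cohort (per 1000 person-years)
rate_stroke = 33.6
rate_no_stroke = 21.4

incidence_rate_ratio = rate_stroke / rate_no_stroke
print(f"Incidence rate ratio ~ {incidence_rate_ratio:.2f}")  # ~1.57, i.e. ~57% higher fracture rate
```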
Most patients have mild hemiplegia after stroke, and the increased risk of OF in stroke patients is related to osteoporosis, decreased bone mineral density and an increased risk of falls on the hemiplegic side (4). In addition to increasing the risk of falling due to the loss of mobility of the hemiplegic limbs, hemiplegia also reduces the stress stimulation received by bone and increases the functional activity of osteoclasts, which in turn leads to bone loss. Studies have confirmed that within 1 year after a stroke, the bone mineral density of the hemiplegic side drops by 12-17% (5). In addition, stroke in some vascular territories of the brain stem (6) may impair visual, motor, sensory or cognitive function and balance, and may also lead to falls, thus increasing the risk of fracture. Malnutrition, decreased sun exposure and subsequent vitamin D deficiency all aggravate bone loss in stroke patients; common treatments for ischemic stroke, such as oral anticoagulants, may also increase the risk of fracture. In summary, as one of the common complications of stroke, fracture can further hinder functional recovery, prolong disability and increase the risk of death. It is therefore imperative to formulate prevention strategies for osteoporosis and fractures for stroke survivors (7).
Numerous scholars have conducted research on this disease before, covering the etiology, pathogenesis and treatment of post-stroke osteoporosis, and all of them have achieved good results.This paper systematically summarizes the etiology, pathogenesis and treatment methods of post-stroke osteoporosis by summarizing previous studies, and innovatively introduces the role of oxidative stress in the whole process of the disease, which provides new ideas for the related research and treatment of the disease in the future.
The etiology and mechanism of osteoporosis after stroke
Osteoporosis is a known consequence of stroke.The pattern of bone loss observed in patients with stroke is different from that usually encountered with postmenopausal osteoporosis, which is limited to the paralyzed side and more obvious in the upper limbs.There are many reasons for osteoporosis in stroke patients, including limited exercise and reduced load due to paralysis, insufficient nutrition intake due to eating disorders, intake of various drugs and reduction of vitamin D due to insufficient sunshine (8), as shown in Figure 1.The pathogenesis of post-stroke osteoporosis is not clear.Mild paralysis, decreased mobility and decreased bone load seem to play a major role, and other factors such as nutrition and iatrogenic factors may also play an important role (9).
Osteoporosis after stroke and bone density
People at high risk of stroke are already at risk for osteoporosis and fracture (10). However, research on changes in BMD in patients with osteopenia after stroke is limited, and research on the appropriate treatment and management of osteopenia is also rare. In addition, there are few studies comparing changes in BMD between stroke patients with osteopenia and stroke patients with osteoporosis. According to the World Health Organization, a BMD T-score of −2.5 or less is the defining characteristic of osteoporosis. Several studies have investigated the relationship between low BMD and stroke, suggesting that low BMD is a potential risk factor for stroke and affects the long-term prognosis of stroke (10)(11)(12)(13). Osteoporosis is a metabolic bone disease characterized by an imbalance between bone resorption and bone formation, resulting in micro-architectural disruption, decreased BMD and increased bone fragility (14). Accelerated loss of bone mineral density after stroke (15,16) can lead to fractures in stroke survivors, and post-stroke weight-bearing limitation of the affected limb inevitably leads to bone loss. After a stroke, most fractures occur on the hemiplegic side of the body because the BMD of that side is 4.6-14% lower than that of the unaffected side (8). In addition, social deprivation, malnutrition, reduced sun exposure and subsequent vitamin D deficiency accelerate bone loss in stroke survivors (4). Bone loss starts immediately after the stroke, proceeds rapidly for 3-4 months, and continues at a slower rate for up to a year after the stroke (17). In addition, stroke patients may have various neurological deficits that reduce physical activity and the use of a paralyzed limb, which can lead to bone loss. Although researchers have not yet definitively established whether osteoporosis is a hallmark of stroke, a potentially complex causal relationship between stroke and osteoporosis has been reported (18).
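The WHO criterion quoted above can be made concrete with the standard T-score formula; the reference values used below are illustrative placeholders, not data from this review.

```python
def t_score(bmd_patient, bmd_young_adult_mean, bmd_young_adult_sd):
    """WHO-style T-score: patient BMD expressed in standard deviations relative to
    a healthy young-adult reference population (same skeletal site and densitometer)."""
    return (bmd_patient - bmd_young_adult_mean) / bmd_young_adult_sd

# Illustrative femoral-neck reference values (placeholders)
t = t_score(bmd_patient=0.62, bmd_young_adult_mean=0.85, bmd_young_adult_sd=0.12)
print(f"T-score = {t:.1f}")  # about -1.9 in this example
print("osteoporosis" if t <= -2.5 else "not osteoporosis by the WHO criterion")
```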
Osteoporosis after stroke and limb immobilization
There are many potential mechanisms leading to bone loss after stroke, and restricted movement is one of the important factors involved in this process. BMD studies of the upper and lower extremities consistently show more bone loss on the restricted side than on the unrestricted side. The exact mechanism of the reduction in BMD on the hemiplegic side after stroke is not fully understood, but the link between reduced mobility and bone loss has long been known. In a pivotal study published by Schneider and McDonald in 1984, serum and urinary calcium rose rapidly in 90 healthy young men placed on bed rest for 5-36 weeks, plateaued for several weeks from week 6, and then declined to a plateau above the baseline; this happened even when volunteers were given vitamin D supplements throughout the study (19). Reduction of mechanical stress on bone inhibits osteoblast-mediated bone formation and accelerates osteoclast-mediated bone resorption, leading to disuse osteoporosis. Mechanical stress on bone is one of the determinants of bone morphology, BMD and bone strength; disuse therefore accelerates bone resorption and inhibits bone formation, making bones atrophic and brittle (20). This explains why disability after stroke is related to abnormal motor function. It has been shown that bone mineral loss is more pronounced in the upper extremities than in the lower extremities, and in long-term hemiparesis after stroke the difference between the two sides is more pronounced. It is worth noting that the BMD of the non-hemiplegic side lies between that of the hemiplegic side and normal values. Because activities of daily living after stroke often require assistance and the mobility of all limbs is reduced, widespread mild osteoporosis may develop, and the severity of osteoporosis is consistent with the degree of limitation of the patient's overall activity (21). However, in some stroke patients the use of the paralyzed upper limb is greatly reduced, which is consistent with a large decrease in bone mass, whereas compensatory increased use of the non-paralyzed arm may result in no loss of bone mass on that side, or even a gain (22). The degree of bone loss depends on age, severity of hemiplegia, duration of total or partial immobilization, and the degree of reduction in bone loading and muscle stretch on the affected side. Not only may these factors act differently in the upper and lower extremities, but they may also act differently in the trabecular and cortical bone of the same bone, as shown by Liu et al. (23). Some studies have also shown that men who have had a stroke have a higher prevalence of osteopenia and osteoporosis and engage in less moderate or vigorous physical activity than men without a stroke, whereas in women BMD is not associated with stroke (24).
Post-stroke osteoporosis and serum vitamin D
Vitamin D is one of the fat-soluble vitamins and has anti-rickets activity. Vitamin D is not only abundant in nature but is also synthesized in the skin with the help of the sun's UV-B rays (25). Vitamin D deficiency and concomitant arterial disease may contribute to the increased severity and depth of bone loss. Vitamin D deficiency is also common after a stroke and can be caused by insufficient sunlight exposure as well as by dietary deficiencies (26). After stroke, an inflammatory response may also be triggered, affecting the intestinal absorption of vitamin D and its metabolism in the liver, thereby reducing serum vitamin D levels. In a study of 152 people by Wang et al., serum vitamin D levels in acute stroke patients were examined in relation to inflammatory markers, including hsCRP, white blood cell count, neutrophil/lymphocyte ratio, IL-6 and TNF-α: they found that vitamin D levels were negatively correlated with serum IL-6 levels and hsCRP, whereas serum vitamin D levels were not correlated with the other inflammatory markers, such as white blood cell count, neutrophil/lymphocyte ratio and TNF-α. In another observational study of 957 older adults, there was no association between vitamin D and TNF-α levels (27). Some medicines, including anti-epileptic drugs and glucocorticoids, can affect the metabolism of vitamin D; these drugs can cause vitamin D deficiency and reduce serum vitamin D levels. Therefore, physical status and the treatment regimen after stroke may affect serum vitamin D levels.
The relationship between lipid metabolism and bone metabolism after stroke
Osteoporosis is closely linked to hyperlipidemia, and its incidence increases with age. This may be due to an imbalance between bone cells and fat cells in the bone marrow. Research shows that among stroke patients admitted to rehabilitation, the proportion of patients with abnormal bone density is higher than that of patients with normal bone density. The levels of total cholesterol, high-density lipoprotein (HDL), low-density lipoprotein (LDL) and apolipoproteins A and B in the abnormal group were higher than those in the normal group, while the bone mineral density of L1-L4, the femoral neck and the proximal femur was lower than in the normal group, suggesting that the level of lipid metabolism in stroke patients may influence their bone metabolism (28). Other studies have found that total cholesterol and triacylglycerol are risk factors for decreased bone mineral density in stroke patients and that disruption of lipid metabolism increases osteoclast activity. A high-fat diet may promote the differentiation of precursor osteoclasts into osteoclasts, increasing the volume and activity of osteoclasts, thereby promoting bone resorption and increasing the risk of osteoporosis (29). In addition, the disruption of lipid metabolism leads to microcirculatory disturbances in the bone marrow cavity. In a hyperlipidemic rat model, the proportion of adipose tissue in the bone marrow cavity increased significantly, while the proportion of blood sinus tissue decreased. Fat accumulation also increases the pressure within the bone marrow space and compresses the blood vessels in the cavity, eventually leading to inadequate blood supply to bone tissue, reduced bone marrow microcirculation, reduced osteogenesis and ultimately reduced bone density (30).
Osteoporosis after stroke caused by medication
Proton pump inhibitors (PPIs) are powerful acid-suppressing drugs that are currently widely used to treat drug-related upper gastrointestinal disorders. Current research suggests a link between the use of PPIs and the risk of osteoporosis and bone fractures (31-33). However, not all studies support this link (34,35). In stroke survivors, antiplatelet drugs are commonly used for secondary prevention of stroke. However, antiplatelets have adverse effects on the upper gastrointestinal tract, ranging from heartburn and gastroesophageal reflux disease (GERD) to severe stomach ulcers (36). These adverse effects may lead to poor compliance or even discontinuation of antiplatelet therapy, which may ultimately lead to recurrence of ischemic stroke. Therefore, it is recommended that stroke patients who require continuous antiplatelet therapy use a PPI for gastric protection at the same time. In addition, hypertension is an important modifiable risk factor for preventing stroke recurrence, but previous studies have shown that patients treated for hypertension have an increased risk of GERD (37). Furthermore, reduced lower esophageal sphincter pressure is common in stroke patients, which may further contribute to GERD. Therefore, in clinical practice a significant proportion of stroke patients receive PPI therapy (38). In the study by Lin et al. (39), the incidence of osteoporosis, hip fracture and vertebral fracture was higher in patients using PPIs than in those not using them, and the incidence increased with increasing PPI dose. The exact biological mechanism underlying the association between PPIs and the risk of osteoporosis and fractures is unclear. One possibility is that PPIs reduce calcium absorption: since the acidic environment of the stomach promotes the dissociation of insoluble calcium salts into calcium ions, PPIs, as potent inhibitors of gastric acid secretion, may impair calcium absorption. In addition, PPIs may affect the function of osteoblasts and osteoclasts, interfering with osteoblast function by inhibiting tissue non-specific phosphatase in osteoclasts, and with osteoblast and osteoclast function by inducing the MEK and JNK pathways. Furthermore, PPIs have been linked to malabsorption of vitamin B12, which can lead to hyperhomocysteinemia; this affects collagen cross-linking, resulting in reduced bone strength (39).
Post-stroke depression is very common, with a reported prevalence of approximately 31% (40), which is higher than the prevalence of depression before stroke (11.6%). Depression can be effectively treated with antidepressants, most commonly selective serotonin reuptake inhibitors (SSRIs) (41). Jones et al. (42) found that the risk of fractures doubled within 6 months after a stroke, but there was no significant effect on the risk of falls, seizures or stroke recurrence. However, the above trials only confirmed that fluoxetine and citalopram may increase the risk of fracture in stroke patients, and the findings cannot be extended to other SSRIs. In addition, the mechanism by which SSRIs increase the risk of fracture is unclear, and it is uncertain whether the duration of treatment is a predictor of fracture. In the study by Richter et al., however, the proportion of falls resulting in injury did not increase with SSRI treatment, and the decrease in bone mineral density with SSRI treatment may explain this finding (43). It should be noted that the above only discusses the relationship between the use of antidepressants and osteoporosis after stroke from one point of view. As this relationship may be influenced by many factors, the relevant mechanisms should be investigated from multiple aspects, such as the sex and age of the patient and the type and dosage of the drug.
Oxidative stress and stroke
Cardiovascular and cerebrovascular diseases are among the leading causes of death, accounting for about 40% of deaths in China (44).Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids and inflammatory cells in the walls of large and medium-sized arteries.The pathogenesis of atherosclerosis involves the activation of pro-inflammatory signaling pathways, the expression of cytokines/chemokines and an increase in oxidative stress, an imbalance that is conducive to an increase in reactive oxygen species (ROS) production and/or a decrease in the innate antioxidant defense system in vivo (45).This process is long and involves many pathological processes, and the causes and molecular biological mechanisms are not fully understood.In recent years, however, much evidence has emerged that oxidative stress plays an important role in the development and progression of atherosclerosis.
Homocysteine (Hcy) and stroke
Studies have shown that serum homocysteine (Hcy) is an independent risk factor for acute cerebral infarction, and that changes in serum Hcy levels are closely related to changes in cerebral vascular endothelial function and the occurrence of cerebral infarction (46). Hcy is a normal metabolite in the human body. When its level rises, it damages the vascular endothelium, affects the proliferation of smooth muscle cells, reduces the levels of NO and endothelial nitric oxide synthase (eNOS) (47), and disrupts endothelial cell function (48). This in turn leads to atherosclerosis and/or plaque formation in the head and neck, and ultimately to stroke. Nitric oxide synthase (NOS) plays both antioxidant and pro-oxidant roles in atherosclerosis. eNOS is constitutively expressed in endothelial cells, and NO produced by activation of eNOS can inhibit low-density lipoprotein (LDL) oxidation, white blood cell adhesion and migration, vascular smooth muscle cell (VSMC) proliferation, and platelet aggregation (49).
In addition to the oxidative stress mechanism, Hcy may also play a neurotoxic role through microglia-mediated neuroinflammatory injury: treatment with Hcy can activate microglia, significantly increase the volume of cerebral infarction and induce cell injury.One possible mechanism by which Hcy enhances the inflammatory response of microglia is that JAK 2/STAT 3, a key immune signaling pathway, is normally expressed in the brain.It also plays an important role in regulating microglial activation and inflammatory response.The study by Chen S et al. tested whether the JAK 2 inhibitor AG 490 could affect the expression of pSTAT3, the microglia-specific markers Iba-1 and OX-42, and the pro-inflammatory mediators TNF-α and IL-6 induced by Hcy.The results showed that activation of STAT 3 and secretion of IL-6 in microglia were significantly upregulated in Hcy-treated ischemic brain tissue and this effect was reversed by AG 490.The experimental results indicate that the increased expression of STAT 3 after homocysteine treatment may be involved in microglial activation and neuroinflammatory injury in rats with middle cerebral artery ischemia-reperfusion.
For bone, there is limited evidence that homocysteine has a direct effect on bone, including bone density. The potential mechanism of the association between high homocysteine levels and fracture risk may involve the specific interference of homocysteine with collagen cross-linking and fibril formation in solution. As collagen cross-linking is very important for the stability and strength of the collagen network, interference with its formation may alter the bone matrix and thereby increase bone fragility (50).
Ferroptosis and ischemia-reperfusion injury after stroke
When blood flow is restored after cerebral ischemia, brain tissue can be damaged and deteriorate. Many mechanisms are involved in this process, such as inflammatory activation, oxidative stress and ferroptosis. Among these, dysregulation of iron metabolism is closely linked to the occurrence of ischemia-reperfusion injury (50): after reperfusion, excessive reactive oxygen species (ROS) and iron accumulation mediate ischemia-reperfusion injury after stroke. (1) The Fenton reaction between excess free iron and peroxides in the cytoplasm and mitochondria generates hydroxyl radicals, which are toxic; together with calcium ion overload, the mitochondrial respiratory chain is blocked and neurons die. (2) Lipids induce excessive ROS production through the Fenton reaction. ROS mainly include oxygen molecules, hydroxyl radicals, superoxide anions and hydrogen peroxide radicals, and excessive ROS destroy cell homeostasis. ROS are involved in oxidative stress, promoting lipid peroxidation, depleting the antioxidant capacity of cells and causing superoxide damage to the inner wall of cerebral arteries. At the same time, Fe2+ and Fe3+ mediate lipid peroxidation through hydroperoxides to form lipid free radicals, leading to DNA denaturation and further aggravating cell damage. (3) The accumulation of ROS can cause a large increase in macrophages, damage to the blood-brain barrier, hemorrhagic transformation and cerebral edema after stroke.
Among these, NOX (NADPH oxidase) is considered an important source of RONS in the vascular wall. There are many NOX isoforms, of which NOX4 is the most abundant at the vascular level, but it remains controversial whether it simultaneously promotes and protects against atherosclerosis: NOX4 releases more hydrogen peroxide than O2− (57), so the amount of peroxynitrite (ONOO−) formed is low and the bioavailability of NO is maintained; other studies have shown that increased NOX4 activity impairs vascular function in some diseases, such as diabetic cardiomyopathy (58). Increased NOX activity leads to eNOS uncoupling, decreased NO bioavailability and endothelial dysfunction. Uncoupled eNOS exhibits NOX-like activity and produces O2−, exacerbating oxidative stress in the vascular wall. Neuronal NO synthase (nNOS) is expressed in central and peripheral nerve cells and in blood vessel walls, contributes to vasodilation, and is considered anti-atherosclerotic. In contrast, inducible NOS (iNOS), which is induced by inflammation, oxidative stress and sepsis, is pro-atherosclerotic, possibly due to the formation of peroxynitrite (ONOO−), thereby increasing nitrosative stress (59).
Oxidative stress and osteoporosis after stroke
After a stroke, dyskinesia reduces the activity of the limbs and bone tissue loses the stimulus of mechanical stress.When the mechanical stress received by the bone is reduced, the activity of osteoclasts is increased, which leads to the easy absorption of bone tissue and the obvious reduction of bone mineral density in the body, which eventually leads to different degrees of osteoporosis.Increasing daily activity can reduce the incidence of osteoporosis, while immobilization after bed rest can cause hypercalcemia and hypercalciuria in patients with moderate and severe stroke, especially in the elderly, and accelerate bone absorption to cause osteoporosis.
Oxidative stress and osteoporosis
Mitochondria, the powerhouses of cells, maximize the use of intracellular oxygen while generating energy and ROS.Low levels of ROS can maintain bone homeostasis and balance between osteoclasts and osteoblasts (60).Abnormal ROS levels have been shown to lead to the death of osteoblasts and osteoclasts and the reduction of bone structure (61).Oxidative stress has been shown to shorten the lifespan of osteoblasts in mouse models of osteoporosis and to reduce trabecular bone density.Oxidative stress can promote the differentiation and proliferation of bone marrow mesenchymal stem cells (BMSCs) into osteoclasts.Bone marrow mesenchymal stem cells are bone marrow-derived progenitor cells that can differentiate into osteoblasts, chondrocytes, adipocytes, myoblasts and other cells.The dynamic balance of osteogenic differentiation, apoptosis and metabolism of bone marrow mesenchymal stem cells plays a key role in maintaining bone tissue structure and bone mass homeostasis.The lack of differentiation capacity of bone marrow mesenchymal stem cells is one of the mechanisms leading to osteoporosis.Oxidative stress can inhibit osteogenic differentiation and damage bone marrow mesenchymal stem cells.During oxidative stress, excessive accumulation of active oxygen in the body will have a negative effect on the differentiation of bone marrow mesenchymal stem cells into osteoblasts.This peroxidation not only damages molecular structures such as proteins, lipids, deoxyribonucleic acid (DNA) and cell structures such as mitochondria and endoplasmic reticulum, but also inhibits the ability of the bone marrow mesenchymal stem cells themselves to proliferate (62).The increase in ROS and the decrease in antioxidant levels lead to an increase in osteoclast activity and a decrease in the osteogenic potential of osteoblasts, resulting in bone degradation.
The negative effects of oxidative stress on osteoblasts
The maintenance of bone mass depends not only on the resorptive function of osteoclasts and the function of osteoblasts, but also on the difference in the production and apoptosis rates of osteoblasts and osteoclasts.Among them, osteoblasts play an important role in maintaining bone homeostasis, regulating cytoplasmic matrix mineralization, controlling bone remodeling and osteoclast differentiation (63).Osteoblast apoptosis promotes the development of osteoporosis, so inhibiting osteoblast apoptosis provides a new direction for the prevention and treatment of osteoporosis.Studies have shown that oxidative stress plays an important role in the pathological process of osteoporosis (64).During the pathogenesis of osteoporosis, osteoblast oxidative stress levels increase significantly, suggesting that osteoblast oxidative stress plays a critical role in pathological bone loss.Oxidative stress not only inhibits osteogenic differentiation but also promotes osteoblast apoptosis (65).Therefore, it is of great importance to explore the mechanism of oxidative stressinduced osteoblast apoptosis to understand the pathogenesis of osteoporosis.Osteoclast is a type of multinucleated cell derived from the monocyte/macrophage lineage and is the only cell with bone resorption capacity.Oxidative stress can activate the differentiation of osteoclast precursors and increase bone resorption, as shown by the increase in tartrate-resistant acid phosphatase (TRAP) activity in osteoclasts and the increase in bone resorption area on the bone surface (66).Autophagy is a catabolic process, which removes damaged organelles and some cell molecules including protein aggregates through lysosomal digestion (67).More and more evidence show that autophagy dysfunction leads to changes in osteoclast function and increased bone loss.Under oxidative stress, autophagy is activated, accompanied by an abnormal increase in osteoclast differentiation and bone resorption.Therefore, the interaction between oxidative stress and autophagy plays an important role in intracellular homeostasis and osteoclast survival.Therefore, inhibition of autophagy may delay osteoclast activation caused by excessive ROS.(68).
Treatment of osteoporosis after stroke
The management of osteoporosis post-stroke warrants meticulous attention. Osteoporosis significantly heightens fracture risk, and post-stroke patients typically require prolonged bed rest or wheelchair use, which may expedite bone loss. Hence, timely and effective treatment measures for stroke patients with osteoporosis are crucial. At present, the management of osteoporosis post-stroke encompasses drug therapy, calcium and vitamin D supplementation, exercise, and rehabilitation. The medication regimen typically involves estrogen drugs, bisphosphonates, and osteocalcin-related peptides, which have shown efficacy in reducing the rate of bone loss and the incidence of fracture. Calcium and vitamin D supplementation can improve bone mineral density, while exercise and rehabilitation can lower the risk of fracture by enhancing muscle strength and balance. The treatment plan needs to be tailored to the individual conditions of each patient; an individualized treatment plan helps to mitigate fracture risk and enhance patients' quality of life. However, a study of stroke patients revealed inadequate assessment and treatment of osteoporosis and fracture risk factors, and the percentage of stroke patients receiving osteoporosis medication or supplements is relatively low. It is essential to enhance comprehension, prevention, and management of bone loss in this vulnerable population (69).
Non-pharmacological treatment remains the cornerstone for maintaining bone health prior to the onset of osteoporosis or fragility fractures. Physical exercise is particularly advantageous in averting falls and fractures in the wider population (70)(71)(72). Gait defects and long-term immobilization are the primary risk factors for the development of osteoporosis after a stroke; it is therefore crucial to implement strategies to improve these issues. Techniques that enhance walking ability and the interaction between the lower limbs and the ground will limit bone loss. Treatment must aim to enhance motor function and strength, rectify deformities that limit walking, and suppress spasms. Aerobic recovery and weight-bearing exercise may enhance bone mass in individuals suffering from chronic stroke.
Another non-pharmacological approach involves increasing the duration of exposure to sunlight.Stroke patients frequently face social isolation, which may restrict their exposure to sunlight.That is one of the reasons behind the vitamin D deficiency in stroke patients.The scientific comprehension of the significance of vitamin D extends beyond its role in the absorption of calcium and phosphorus and the maintenance of healthy teeth and bones.Especially amongst the elderly, vitamin D assumes a crucial physiological function in numerous non-skeletal processes, such as regulating the normal thyroid function, blood clotting, providing muscle strength and flexibility, enhancing the production of endogenous antibiotics, preventing the onset of autoimmune and allergic diseases, combating infectious diseases, and discouraging the growth of tumors (73-81).According to the National Institutes of Health (NIH) recommendations, adults under the age of 70 should consume 15 µg (600 IU) of vitamin D daily, whereas those above 70 should consume 20 µg (800 IU).The body can produce vitamin D3 (cholecalciferol) by exposing the skin to UVB light and obtain provitamin D2 (ergocalciferol) through consumption.D3 is the more dominant form.Exposing uncovered skin to sunlight for 20-30 min daily can fulfill your vitamin D requirements.Nevertheless, the skin's capacity to generate vitamin D3 declines with age (82).This is one reason why stroke survivors may have insufficient levels of vitamin D. Making certain that these individuals receive adequate sun exposure daily is crucial in the prevention and treatment of osteoporosis after a stroke.In a study conducted by Hsieh, CY et al., it was suggested that postmenopausal women take supplements of both calcium and vitamin D to decrease the risk of fractures in those individuals who have osteoporosis and stroke (83).As vitamin D deficiency is a frequent occurrence following a stroke, administering vitamin D supplements (at a dosage of 800 to 1000 U/day) may prove advantageous in preventing fractures among patients with stroke and osteoporosis (84).In addition, vitamin D deficiency in stroke patients may cause various issues.For instance, it can lead to impaired absorption of calcium in the gastrointestinal tract, compromised bone mineralization and muscle strength, and is associated with decreased muscle mass, consequently increasing the risk of falls (85).Vitamin D also possesses neuroprotective, neuromuscular, and skeletal protective effects, potentially mitigating the cognitive and functional damage experienced by patients who have suffered a stroke (86) (88,89).In a long-term study of stroke survivors, a correlation has been found between low levels of vitamin D, low bone mineral density, and hip fractures post-stroke (90).They discovered that stroke survivors had a greater incidence of osteoporosis compared to nonstroke participants and a higher incidence of vitamin D deficiency.However, there was no direct link established between vitamin D deficiency and osteoporosis.Bone loss occurs in the initial stages post-stroke and escalates over time due to restricted physical activity.Existing data imply that other factors, like inactivity or osteoporosis itself, instead of vitamin D deficiency, may serve as explanations.The elevated prevalence of vitamin D deficiency in stroke survivors may be associated with malnutrition or decreased exposure to sunlight and skin production.Further research is required to assess the clinical importance of vitamin D deficiency in stroke 
survivors (91).A meta-analysis has indicated that daily intake of vitamin D and calcium cannot be strongly recommended to prevent fractures due to methodological concerns.Furthermore, the efficacy and safety of high-dose vitamin D in high-risk groups is uncertain (92).The previous research solely discusses the role of physical therapy in averting bone loss following a stroke.Further areas, including nutrition and vitamin supplements, have not been examined in enhancing bone health (93).Only one study, conducted by Han et al. (94) ensured that all participants received sufficient protein, vitamin D and calcium in their diet.However, the study did not mention monitoring intake through methods such as pill counting or food diaries.None of the studies included in the review monitored serum 25 (OH)D levels to detect vitamin D deficiency.In a Rotterdam-based study, vitamin D deficiency was discovered to be a result of stroke, which quickened the loss of proximal femur bone in post-stroke patients.The risk factors for osteoporosis following a stroke include physical inactivity, malnutrition, illness, and ageing, which all increase the likelihood of falls and fractures.The deterioration of muscle strength and mass after an acute stroke necessitates early intervention to maintain the bone density/strength index.Therefore, this must be addressed promptly as a functional unit.Acute stroke induces muscle hyper catabolism, which results in greater protein degradation than synthesis.Studies have demonstrated the ability of amino acid supplementation to reverse the effects of stroke.However, the impact that amino acid supplementation has on bone properties was not measured.Therefore, in order to create the best prevention and treatment strategy for bone loss and osteoporosis after stroke, it is advisable to adopt an array of interventions, including individualized physical and drug therapy, ample intake of protein, calcium and vitamin D, rather than relying on a single approach or treatment (95).
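The vitamin D intakes discussed above are quoted both in micrograms and in international units; the conversion uses the standard factor of 40 IU per microgram of vitamin D. A small sketch of this conversion is shown below.

```python
IU_PER_MICROGRAM = 40.0  # 1 microgram of vitamin D corresponds to 40 IU

def micrograms_to_iu(mcg):
    return mcg * IU_PER_MICROGRAM

def iu_to_micrograms(iu):
    return iu / IU_PER_MICROGRAM

# NIH reference intakes quoted above: 15 ug (adults <= 70 y) and 20 ug (> 70 y)
print(micrograms_to_iu(15.0), micrograms_to_iu(20.0))    # 600.0 IU, 800.0 IU
# Supplement doses discussed for stroke patients, converted the other way
print(iu_to_micrograms(800.0), iu_to_micrograms(1000.0)) # 20.0 ug, 25.0 ug
```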
Regarding the effect of statins on osteoporosis after stroke, Lin et al. (7) conducted a population-based matched cohort study and concluded that statin use was associated with a decreased risk of osteoporosis, hip fracture, and vertebral fracture in stroke patients. Furthermore, a dose-response relationship between statin cumulative defined daily dose (cDDD) and reduced risk of osteoporosis and fractures was observed. A recent meta-analysis, encompassing clinical trials and observational studies, demonstrated a significant correlation between statin use and increased bone mineral density alongside reduced hip fracture hazard. Statin users exhibited a lower odds ratio of 0.75 than non-users, with a reduction in vertebral fracture risk (OR = 0.81), although the latter tendency was not statistically significant (96). Another meta-analysis also found a noteworthy correlation between the use of statins and the reduction of overall fracture risk, with an odds ratio of 0.80 (97). On the other hand, a recent meta-analysis specifically examining clinical trials found that although statin use was associated with an increase in BMD, there was no statistically significant correlation with fracture risk (98). The link between statins and osteoporosis and fracture risk therefore remains unclear. However, some pathways through which statins may affect bone metabolism have been identified in prior research. Specifically, statins can upregulate bone morphogenetic protein-2 via the Ras/phosphoinositide 3-kinase/protein kinase B/mitogen-activated protein kinase signaling pathway; this in turn stimulates the expression of runt-related transcription factor 2, inducing osteoblast differentiation. The proliferation and differentiation of osteoblasts can also be promoted by statins through inhibition of the synthesis of farnesyl pyrophosphate and geranyl pyrophosphate. Furthermore, statins regulate the transforming growth factor-β/SMAD 3 signaling pathway to prevent apoptosis in osteoblasts. In addition, statins could potentially activate the expression of estrogen receptor-α via the osteoprotegerin/nuclear factor-κB receptor activator ligand/nuclear factor-κB receptor activator signaling pathway while simultaneously impeding the generation of osteoclasts, resulting in augmented bone formation (7). In summary, the use of statins is correlated with a decrease in the risk of osteoporosis, hip fracture, and vertebral fracture among stroke patients, with a noted dose-effect relationship. However, additional prospective clinical trials are necessary to confirm these findings, which can aid in the development and application of pharmaceuticals.
Bisphosphonates are frequently used as therapeutic drugs for osteoporosis following stroke, and previous studies have reported their beneficial effects on stroke patients (99). Bisphosphonate therapy is effective in diminishing the occurrence of spinal fracture after stroke by enhancing lumbar spine bone mineral density (LS BMD). A study indicates that bisphosphonates reduce the risk of spinal fractures by 35-50% and improve LS BMD by 1-6%. Given the literature indicating a correlation between decreased physical activity due to osteoporotic spinal fractures and increased risk of stroke, treating osteoporosis can also serve as a preventative measure against stroke recurrence. Furthermore, this study indicates that bisphosphonate use in the osteoporosis group significantly prevents a decrease in femoral neck BMD (FN BMD) compared with the osteopenia group. One such bisphosphonate is zoledronic acid. In the study by Poole et al. and others, an iliac bone biopsy was obtained within 3 months after an acute stroke and compared with a healthy control group; the study found a decrease in bone formation at the tissue level regardless of whether the patient was treated with zoledronic acid or placebo. The number of osteoclasts in patients treated with zoledronate was lower than in the control group treated with a placebo (100). In the study by Poole et al., the use of zoledronic acid was found to prevent bone mineral density (BMD) loss in the hemiplegic hip. Administered intravenously within 5 weeks of admission, the treatment effectively prevented bone loss in patients with hemiplegia who were unable to walk independently for at least a week after a stroke. This study was the first to confirm the effectiveness of intravenous bisphosphonates in preventing significant bone loss in the hemiplegic hip during the first year. They found that stroke patients treated with zoledronic acid had stable average femoral neck BMD within 12 months, with a minute 0.1% change (95% CI, 2.5, 2.7). Thus, zoledronate is an effective countermeasure to prevent hemiplegic hip bone loss in acute stroke treatment. Other research demonstrated, based on biomarkers, a significant increase in osteoclast-mediated resorption within 1 week of acute stroke (101), while zoledronic acid remains a strong and enduring suppressor of osteoclast resorption even after a year. Supporting this, histomorphometric analysis of biopsy samples taken from the iliac bone of hemiplegic patients in the zoledronic acid group confirmed a marked decrease in the number of osteoclasts and their progenitor cells. It should be noted that the use of zoledronate can result in adverse gastrointestinal reactions and renal function impairment, so caution should be exercised while administering this medication.
Discussion
Fracture is now recognized as a risk factor in stroke, and preventing the development of hemiplegic osteoporosis should be a priority in managing stroke patients.Post-stroke osteoporosis has not been fully recognized and treated compared to postmenopausal osteoporosis.Oxidative stress theory is a crucial theory in the process of stroke and osteoporosis.It is important to investigate the mechanism of oxidative stress in the cardiovascular, cerebrovascular, and bone metabolism systems to elucidate the causes of stroke and osteoporosis.As its research deepens, our comprehension of the physiological and pathological mechanisms and treatment measures increases.Controlling the serum homocysteine level, avoiding or relieving ischemia-reperfusion of cerebrovascular or limb blood vessels, and managing atherosclerosis caused by nitration stress (rather than oxidative stress) through medication are all important for the treatment of stroke and the subsequent osteoporosis.In the future, treatment can focus on two areas: (1) Blocking signal pathways associated with oxidative stress during cerebral atherosclerosis or mitigating ischemia-reperfusion injury; and (2) Addressing the impact of lipid metabolism disorder or oxidative stress on osteoclast activity post-stroke.It is expected that advancements in molecular biology, basic medicine, and clinical medicine technology will enhance the comprehension of the pathogenesis of stroke-induced osteoporosis, and facilitate the development of more effective drugs and technologies to treat and prevent it.Osteoporosis after stroke is a prevalent and incapacitating aftermath that impairs the quality of life of stroke victims.The precise mechanism underlying the occurrence of stroke and osteoporosis is yet to be fully understood, hampering progress in developing effective intervention strategies.Nonetheless, recent research suggests that the incidence of fracture can be effectively reduced by mitigating the risk of falls through activities such as drug therapy, physical exercise and targeted fall-prevention measures.Further research is required to establish the most efficient intervention methods for osteoporosis in stroke survivors and to determine the means of resolving complications simultaneously.
This study provides an overview of the pathogenesis and treatment options for stroke and post-stroke osteoporosis.It also explores the role of oxidative stress theory in the disease, presenting information on its etiology, symptoms, treatment methods, and suggestions for preventive measures.The study examines treatment outcomes and side effects, and offers guidance to doctors in formulating more appropriate treatment plans.Furthermore, it presents novel ideas for developing new drugs or treatment methods in the future, with potentially significant impacts on the prevention and treatment of post-stroke osteoporosis patients.
TABLE 1
Incidence and HRs of osteoporosis by demographic characteristics among patients with or without stroke.
A lack of vitamin D can cause mild secondary hyperparathyroidism. Mild deficiency of vitamin D can result in "type II" osteoporosis, which can cause hip fractures in individuals aged approximately 70, both male and female, among inpatients and outpatients after a stroke. | 9,772 | 2023-10-19T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Monitoring System for Remote Bee Colony State Detection
Honeybees are the main pollinators of agricultural and horticultural plants. They help at least 30% of the world's crops and 90% of the world's wild plants to thrive via cross-pollination. To minimise the disturbance caused by on-site colony inspections, Precision Beekeeping solutions are being applied increasingly often. Real-time, remote monitoring of the colonies using ICT can help beekeepers to detect abnormalities and identify colony states. Successful implementation of a Precision Beekeeping system requires development of the bee colony monitoring hardware and of computer software for data collection and further analysis. This paper describes the authors' bee colony monitoring system for remote bee colony state detection. Bee colony weight together with temperature are the key metrics for state and behaviour analysis. The hardware of the developed monitoring system is based on the popular ESP8266 low-cost Wi-Fi microchip. Weight is measured using a single-point load cell capable of measuring up to 200 kg, which is sufficient for bee colony measurements. Data transfer from the remote apiary is provided by an external 3G router. For data storage and analysis, a cloud-based data warehouse was developed. Collected data are accessible in the web system in real time. In addition, a web tool for evaluating system power consumption and battery life was developed to assess the sustainability of the monitoring system. The described monitoring system is developed within the Horizon 2020 project SAMS, which is funded by the European Union within the H2020-ICT-39-2016-2017
Introduction
Pollination of agricultural and horticultural plants is crucial for the human food supply. Insects are the main pollinators, and honey bees are the most widespread and active pollinators worldwide (Bradbear, 2009; Breeze et al., 2011).
Beekeeping is one of the traditional branches of agriculture, and recently Precision Beekeeping (Precision Apiculture) has been defined as an apiary management strategy based on individual bee colony monitoring. One of the main objectives of Precision Beekeeping is to assist beekeepers with timely detection of bee colony states. To detect different states of bee colonies, different sensors can be used, and data should be centrally collected and analysed. Behaviour and state of bee colonies can be monitored by the use of temperature, humidity, acoustic, video, weight and other sensors (Meikle and Holst, 2015). Continuous and real-time monitoring of colony parameters is becoming feasible also for smaller beekeepers, as the cost of the end systems is decreasing while their precision and valuable outcome are increasing. It has been estimated that implementation of a Precision Beekeeping system can lead to economic benefits for beekeepers. Another benefit of remote bee colony monitoring is the reduction of the number of manual on-site inspections, thus decreasing the effect of bee disturbance. Frequent physical inspections of bee colonies interfere with the bees' normal life and can cause additional stress that negatively affects the productivity of the whole colony (Komasilovs et al., 2019; Zabasta et al., 2019). In addition, apiaries are frequently distributed across several locations, which indicates the need to ease the monitoring of animals in a 24/7 mode; this can benefit from advanced intelligent ambiance technologies (Zgank, 2019).
There are many studies about bee colony parameter monitoring, and it has been concluded that weight and temperature are the main parameters, as their cost compared to the information they yield is adequate. Bee colony weight monitoring provides one of the most important kinds of data a beekeeper can have about the colonies (Fitzgerald et al., 2015). Automated weight systems can supply the beekeeper with important information on several important events in the honey bee colonies (Buchmann and Thoenes, 1990; Meikle et al., 2006; Meikle et al., 2008). Weight is related to such important events in the bee colony as the start of nectar collection and resource consumption indicating the need for additional feeding. Some developed solutions are described in scientific publications (Fitzgerald et al., 2015; Ochoa et al., 2019; Terenzi et al., 2019; Zabasta et al., 2019; Zacepins et al., 2017). The second important parameter of the bee colony is temperature, as bees can regulate the temperature inside the hive (Southwick, 1992). Temperature measurements of bee colonies have the longest history, and nowadays bee colony temperature measurement seems to be the simplest and cheapest way to monitor bee colonies (Zacepins and Karasha, 2013); a temperature sensor is usually added to every bee colony monitoring device. The monitoring of honeybee colonies over long periods of time can result in long-term data for better analysis and understanding of colony behaviour (Lecocq et al., 2015; Odoux et al., 2014; Simon-Delso et al., 2014).
The aim of this paper is to describe the developed monitoring system for honey bee colony state detection using a single point load cell and the ESP8266 low-cost Wi-Fi microchip.
Within this research the authors used several modern computing methods, including but not limited to hardware prototyping and 3D design, automatic collection of hardware power consumption data, web system development, and data flow control methods.
Development of the bee colony digital monitoring system is done within the Horizon 2020 project SAMS. The SAMS (Smart Apiculture Management Services) project (https://samsproject.eu/) takes a combined biological, sociological and technical approach. The SAMS project is funded by the European Union within the H2020-ICT-39-2016-2017 call. It enhances international cooperation on ICT and sustainable agriculture between the EU and developing countries in pursuit of the EU commitment to the UN Sustainable Development Goal "End hunger, achieve food security and improved nutrition and promote sustainable agriculture".
Developed digital system for bee colony monitoring
As discussed in the introduction, the main parameters of bee colonies are weight and temperature; thus, a digital system for monitoring those parameters was developed within this research. The high cost of continuously loaded general-purpose electronic scales and their inapplicability to outdoor conditions limit their use in Precision Beekeeping.
The authors of this research developed a prototype of a honeybee colony digital monitoring system based on a single point load cell (BOSCHE Wagetechnik single point load cell H30A, https://www.bosche.eu/en/scale-components/load-cells/single-point-load-cell/single-point-load-cell-h30a) with a maximum load of 200 kg and the ESP8266 low-cost Wi-Fi microchip. The ESP8266 has a full TCP/IP stack and microcontroller capability. Its CPU frequency and built-in memory are sufficient to perform the intended task of collecting bee colony data and transferring it to remote storage for further processing. In addition to the weight sensor, two other sensors are added to the system for bee colony temperature and for environmental humidity and temperature monitoring. Environmental parameter monitoring is an essential component too, because to correctly interpret the bee colony state it is sometimes necessary to know the outside conditions.
The chosen load cell is an analogue one, and for use in a digital monitoring system its analogue signal has to be converted to digital; the HX711 analogue-to-digital converter is used to obtain the weight data. The measurement node itself is battery powered (by 4 x 1.2 V NiMH rechargeable batteries). For testing purposes, the Wi-Fi router (HUAWEI E5330) was powered from the power grid (220 V AC) via a 5 V micro-USB adapter, which can also be substituted by a suitable 5 V battery (or a solar-powered solution). For sending data to the remote server, the Wi-Fi network (provided by the 3G router) is used, but it can easily be substituted by a mobile network by adding an additional module to the ESP microchip, or data transfer using LoRaWAN (Zacepins et al., 2018) can even be implemented. At this moment the system is assembled on a printed circuit board without any casing, but after the tests end a market prototype will be developed. The developed system's architecture is based on the approach proposed by Kviesis and Zacepins (2015), where an individual bee colony measurement node sends sensor data to the remote server via wireless or mobile network communication.
Mounting of the single-point load cell is performed according to the instructions provided by the manufacturer. The load cell is mounted between two metal plates (10 cm x 15 cm), and the metal plates are then screwed to plywood plates (50 cm x 50 cm). The beehive can be placed directly on the platform, or some additional wooden planks can be used. Mounting of the load cell is shown in Figure 1.
Usually beekeepers are not ready to invest much in digital solutions; thus the economic aspect of the system is crucial and system costs should be as low as possible. The proposed solution evolved from the authors' previous research: a solution based on four small load cells (Zacepins et al., 2017) and a weighing system based on a Raspberry Pi single-board computer (Komasilovs et al., 2019).
Usage of a cheaper microchip allowed the overall costs of the system to be decreased. The costs of the system components and additional materials are summarised in Table 1; the calculated cost of one developed system is 192.00 EUR. System installation, maintenance, data storage, a SIM card with an appropriate data plan, usage of the web system, and usage of an alternative power supply are not considered in these calculations. It should be mentioned that some components are optional; for example, the 3G router can be dismissed if there is a constant Wi-Fi connection at the apiary site.
The system takes measurements at configurable intervals, which can be set individually based on the information required from the system and on the bee colony states that should be detected. In the authors' case, as the system is powered from a central power supply for testing purposes, measurements can be taken more frequently than needed; here, measurements are performed every 60 seconds.
In the future it is planned to use battery power also for the networking part for real system deployment. The individual measurement node's current draw (at 3.3 V) during the different operational modes was evaluated as follows:
- Measurement mode (the device is taking measurements and reading values from the connected sensors): 25 mA for 1.2 s
- Wi-Fi power-up mode (the device is switching on the Wi-Fi module): 47 mA for 1.4 s
- Connection mode (the device is connecting to the Wi-Fi network and obtaining network configuration parameters): 69 mA for 2.3 s
- Data sending mode (the device sends measurement data): 79 mA for 1.8 s
- Going into sleep mode (switching off the modules): 36 mA for 1.4 s
- Sleep mode (no device activity, deep sleep state): 0.028 mA for 60 s
Current consumption was logged using the UNI-T UT181A True RMS Datalogging Multimeter. The current draw in the different operational states is represented below:
Fig. 2. Current draw during one measurement iteration (at 3.3V)
Battery life depends on the measurement interval. If the system performs measurements every two minutes and uses four 1900 mAh batteries, then theoretically the system can operate for 18 days. The authors also developed a web-based calculator for battery life estimation, see https://sams.science.itf.llu.lv/battery-life (Fig. 3).
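As a rough cross-check of such estimates, the duty-cycle arithmetic behind such a calculator can be reproduced in a few lines. The sketch below is a minimal approximation assuming the per-mode current draws listed above, a 1900 mAh pack, and no converter losses or battery self-discharge; it is not the project's actual calculator code, so real endurance will be somewhat shorter than it predicts.

```python
# Rough battery-life estimate for a duty-cycled measurement node.
# Per-mode currents follow the measurements listed above; converter
# losses and battery self-discharge are ignored.

ACTIVE_MODES = [  # (current_mA, duration_s)
    (25, 1.2),    # measurement
    (47, 1.4),    # Wi-Fi power-up
    (69, 2.3),    # connection
    (79, 1.8),    # data sending
    (36, 1.4),    # entering sleep
]
SLEEP_CURRENT_MA = 0.028

def battery_life_days(capacity_mah: float, interval_s: float) -> float:
    active_charge = sum(i * t for i, t in ACTIVE_MODES)   # mA*s per cycle
    active_time = sum(t for _, t in ACTIVE_MODES)         # s per cycle
    sleep_charge = SLEEP_CURRENT_MA * (interval_s - active_time)
    avg_current_ma = (active_charge + sleep_charge) / interval_s
    return capacity_mah / avg_current_ma / 24.0           # hours -> days

print(battery_life_days(1900, 120))  # ~21 days, close to the reported 18
```

The gap between this idealised figure and the reported 18 days is plausibly explained by the losses the sketch ignores.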
Measurement intervals are directly connected with the events a beekeeper would like to detect and the bee colony states that should be identified. Some bee colony states require constant and frequent (every 1 min) measurements, while for other states several measurements per day are enough. Table 2 summarises possible bee colony states and the needed measurement intervals (recommended and minimally required) for detection based on weight and/or temperature measurements. It should also be mentioned that for precise weight measurements the single point load cell should be calibrated before placing the beehive on it.
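The calibration itself reduces to a two-point linear fit between raw ADC counts and mass. The short sketch below illustrates the idea; the raw readings, the reference mass, and the helper names are hypothetical, not taken from the SAMS firmware.

```python
# Two-point calibration of a load cell read through an ADC such as the HX711.
# raw_empty and raw_known are hypothetical averaged raw readings.

raw_empty = 84213        # averaged raw reading with an empty platform
raw_known = 132870       # averaged raw reading with a known reference mass
known_mass_kg = 10.0     # reference mass placed on the platform

offset = raw_empty
scale = (raw_known - raw_empty) / known_mass_kg   # raw counts per kg

def to_kilograms(raw_reading: int) -> float:
    """Convert a raw ADC reading to kilograms using the fitted line."""
    return (raw_reading - offset) / scale

print(to_kilograms(157000))  # e.g. hive + bees + honey stores
```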
The developed PCB prototype of the authors' system is shown below: Fig. 4. 3D PCB prototype of the developed monitoring system's measurement node
Data transfer and the SAMS data warehouse
The next important aspect of the whole system, after the hardware development, is the implementation of the data transfer procedures and methods. For data storage and reporting, the specific SAMS data warehouse (DW) was developed. The DW is available online at https://sams.science.itf.llu.lv/. The SAMS DW is a universal system which is able to operate with different data inputs and has flexible data processing algorithms. The SAMS DW solution uses authentication and authorization services provided by the Auth0 universal platform (https://auth0.com/). Several steps have to be performed on the device to send data to the remote data warehouse:
1. Acquire an access token. The access token is used by the DW to authenticate and authorize the request. In order to acquire the token, the device should send a POST request to https://sams.science.itf.llu.lv/api/token with its Client ID and secret (requested individually). Each device has its own unique credentials.
2. Post the data to the DW. There is a single endpoint for posting data to the DW: https://sams.science.itf.llu.lv/api/data. The access token should be provided in the Authorization header, and the request body can contain multiple data packages.
3. Reports about posted measurements are immediately available in the UI under the Reports section. Additional debugging information is available in Dashboard and Devices (last events and errors).
A full instruction on how to connect general bee colony monitoring hardware to the SAMS data warehouse is available online: https://sams-project.eu/wp-content/uploads/2020/02/DW-data-sending-guide.pdf
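A client-side sketch of these two steps might look as follows. The endpoints are those named above, but the JSON field names in the token request and measurement payload are assumptions made for illustration; the authoritative format is in the linked data-sending guide, so treat this as the shape of the flow rather than a definitive client.

```python
# Minimal sketch of the token-then-post flow described above.
# JSON field names are illustrative assumptions; consult the SAMS
# data-sending guide for the authoritative payload format.
import requests

BASE = "https://sams.science.itf.llu.lv/api"

def acquire_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(f"{BASE}/token",
                         json={"client_id": client_id,
                               "client_secret": client_secret})
    resp.raise_for_status()
    return resp.json()["access_token"]        # assumed response field

def post_measurements(token: str, packages: list) -> None:
    resp = requests.post(f"{BASE}/data",
                         headers={"Authorization": f"Bearer {token}"},
                         json=packages)       # body may hold multiple packages
    resp.raise_for_status()

token = acquire_token("device-123", "secret")  # hypothetical credentials
post_measurements(token, [
    {"sensor": "weight", "value": 42.7, "ts": "2020-06-01T12:00:00Z"},
    {"sensor": "temperature", "value": 34.6, "ts": "2020-06-01T12:00:00Z"},
])
```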
Conclusions
Continuous, remote and real-time monitoring of bee colony weight and temperature is becoming a must-have procedure in beekeeping practice and acts as a first stage in the implementation of the Precision Beekeeping approach. Weight monitoring of at least one reference colony at the apiary can help to identify the start and end of the nectar flow and to evaluate the colony's foraging activity.
The developed monitoring system focuses on minimising manual bee colony inspections, which should reduce stress on the bee colony and increase its welfare.
The proposed honey bee monitoring system uses one single point load cell for weight measurements, one temperature sensor for internal colony temperature, and the ESP8266 for collecting data from the sensors and transferring it to the remote data warehouse. In the future the system can also be set up in a remote area, once an alternative power supply and mobile network capabilities are integrated.
The developed web tool for estimating monitoring system power consumption and battery life can help to evaluate the developed system's sustainability and applicability, and provides useful information for a potential solar-powered solution.
There are many possible bee colony states; therefore, it is crucial to choose the right measurement intervals for correct state detection.
Application of ICT solutions and remote monitoring systems facilitates beekeepers' knowledge gathering about the behaviour of individual bee colonies and improves the efficiency of beekeeping management.
"Computer Science",
"Environmental Science"
] |
Analytical Modeling of RRM-ATM Switch for Linear Increment and Exponential Decay of Source Data Rate
Abstract This paper describes analytical modeling of a relative rate marking (RRM) switch for the available bit rate (ABR) ATM service, considering linear increment and exponential decay of the data rate(s) of the source(s) to achieve faster congestion control of the network through resource management at the RRM switch. The theoretical performance of the switch has been evaluated with respect to link utilization factor and cell loss probability. It is shown that the switch achieves faster control over the source(s) along with an improved link utilization factor. This is particularly attractive for congestion control in ABR-ATM networks. The developed switch model can find potential application in the design of RRM switches specific to a network environment.
Introduction
Architecture and performance parameters of asynchronous transfer mode (ATM) switches [1,2,3] greatly influence the quality of service (QoS) of ATM networks. This has been the major driving force for continued interest in the development of a variety of ATM switches, each with inherent merits for specific applications. Essentially, relative rate marking (RRM) switches achieve speed superiority over explicit forward congestion indication (EFCI) by making use of the congestion indication (CI) and no increase (NI) bits of resource management (RM) cells for congestion control [4]. Further, RRM switches are an obvious choice and thus are extensively used in ATM networks, as they can operate with finer resolution and provide a good trade-off between congestion control time and hardware complexity [6,9]. In spite of these attractive features, RRM switches have not yet been fully explored or optimized under various operational network parameters. Mars et al. used an RRM switch to investigate the performance of a transmission control protocol (TCP) connection [7]; Lapsley and Rumsewicz [8] discussed advantages and disadvantages of using the NI bit in the feedback loop to control the source data rate; Mischa Schwartz [10] discussed another feedback method for congestion control in broadband networks; and Plotkin and Sydir proposed an implementation of an RRM switch [11].
This paper bridges the gap in knowledge about the performance of RRM switches by developing a mathematical model which provides a theoretical foundation for RRM switch analysis. Theoretical analysis of maximum and minimum queue length for determination of available bit rate (ABR) and resource management delays is presented, followed by computation of queue length, time delay parameters, link utilization (ρ) and cell loss probability (P_loss) for a given QoS. Figure 1 shows the basic RRM switch architecture, which contains a multiplexer (MUX) and de-multiplexer (DMUX) at the input and output respectively, and a shared memory used as a queue (Q) to store the data cells of sources. The Q is an important shared resource of the RRM switch, whose status is monitored for source data rate control using a feedback loop. Therefore the techniques used for Q implementation and its allocation have a significant impact on the overall ATM performance, which warrants development of an appropriate theoretical model for analysis of the switch and its effect on ATM network performance.
Analogy with Fluid Flow
Consider the shared memory of Figure 1 as a bucket with an inlet and an outlet, into which cells are poured as a fluid that flows out through the outlet. Assuming there are N sources, each having the same allowed cell rate (ACR), the total rate of incoming cells can be given as λ(t) = N·ACR(t), where ACR(t) is updated at the arrival times of forward resource management cells (FRMC). Considering that the output (or service) rate is μ, the fluid-flow approximation of the Q can be represented as in Figure 2, which shows the total incoming cell rate λ(t) and the resulting queue dynamics dQ(t)/dt = λ(t) − μ.
Analysis of Q(t) and ACR
The network model of the RRM switch having shared memory Q is shown in Figure 3, along with the definitions of the relevant parameters given in Table 1. Figure 4 gives the simplified data flow view of the ATM network indicating forward and backward resource management cells (FRMC and BRMC) and actual data cells. Under operating conditions, any change in the rate of data cells affects the rate of BRMCs, and the bounds of the source rate are MCR ≤ ACR(t) ≤ PCR, where MCR and PCR are the minimum and peak cell rates respectively. The step in which the cell rate of sources can be increased is 1/2^n [12], where n is a positive integer; this step size is called the rate increase factor (RIF). Therefore the increase in cell rate at any instant t is RIF × PCR, and per FRMC it is (PCR · RIF)/Nrm, where Nrm is the number of data cells between two FRMCs. The elemental increase in the allowed cell rate ACR(t) for one source can thus be written as a differential relation (equation (2)); integrating equation (2) and using the boundary conditions, the value of ACR(t) at any instant can be found subject to its bounds.
The variations of Q(t) and ACR(t), computed using equations (1) and (4) respectively, are plotted as functions of time in Figure 5, from which it is seen that Q(t) has six operational phases between its limiting values Q_min and Q_max. In phase (i), initially at t = 0, the source starts sending data and Q(t) starts building up after a delay τ_f until the queue fills to Q_L; the corresponding filling time τ_QL follows from equation (5), with a simplified special case obtained via equation (7). Considering the ACR(t) curve of Figure 5, the rate between two FRM cells cannot be decreased by more than the rate decrease factor (RDF) [12], and, referring to Figure 4, a rate reduction takes effect only when the BRMC arrives at the source. The final expressions for Q_L, Q_min and Q_max shown in Figure 5 follow from these constraints, and the time τ_L during which ACR(t) remains constant at ACR_min can be found accordingly.
Analysis of Network Parameters
This section describes the analysis of the link utilization factor and the cell loss probability, which are the important network parameters; they are determined using the proposed model. The link utilization factor can be computed from the status of Q(t) at any instant. For 0 < Q(t) < Q_max the shared Q buffer is fully utilized, but buffer underflow occurs when Q(t) ≤ 0, during which cells are wasted; their number over one cycle is N_waste. Therefore the wasted bandwidth, i.e., the number of cells wasted during one cycle of queue building and depletion over the time period T, can be expressed as BW_waste = N_waste / T.
The cycle time T of Q(t) can be expressed from the phase durations above, and the link utilization factor ρ can then be written as ρ = 1 − BW_waste / LCR (equation (37)). From equation (37) it is clear that the link utilization factor ρ varies in the range 0 to 1 as the wasted bandwidth BW_waste changes from 0 to LCR with changing network traffic conditions. However, cell loss occurs when Q(t) > Q_max, which causes overflow of the Q buffer resulting in loss of cells. Therefore the number of cells lost, N_loss, in one cycle time T of Q(t) can be expressed accordingly.
The cell loss probability P_loss can then be obtained from N_loss and the total number of cells offered during one cycle T.
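To make the behaviour of the model concrete, the following sketch numerically integrates the fluid-flow queue with a linearly incrementing and exponentially decaying source rate, counting underflow and overflow cells to estimate ρ and P_loss. All parameter values are illustrative assumptions rather than those of the paper's worked example, and the per-time-step rate update is a simplification of the per-FRMC update described above.

```python
# Illustrative fluid-flow simulation of the RRM-controlled queue:
# linear ACR increment when uncongested, exponential decay when congested.
# All parameters are assumed for illustration only.

N, PCR, MCR = 4, 1000.0, 10.0        # sources, peak/min cell rates (cells/s)
RIF, RDF = 1 / 16, 1 / 16            # rate increase / decrease factors
MU = 3000.0                          # service rate of the output link (cells/s)
Q_MAX, Q_H = 5000.0, 3500.0          # buffer size and congestion threshold
DT, T_END = 0.001, 5.0               # integration step and horizon (s)

q, acr = 0.0, MCR
offered = wasted = lost = 0.0
for _ in range(int(T_END / DT)):
    lam = N * acr                    # total arrival rate lambda(t) = N*ACR(t)
    offered += lam * DT
    q += (lam - MU) * DT             # fluid-flow dynamics dQ/dt = lambda - mu
    if q < 0:                        # underflow: link capacity goes unused
        wasted += -q
        q = 0.0
    if q > Q_MAX:                    # overflow: arriving cells are dropped
        lost += q - Q_MAX
        q = Q_MAX
    if q > Q_H:                      # congestion: exponential rate decay
        acr = max(MCR, acr * (1 - RDF))
    else:                            # no congestion: linear rate increment
        acr = min(PCR, acr + RIF * PCR)

rho = 1 - wasted / (MU * T_END)      # fraction of link capacity actually used
p_loss = lost / offered              # fraction of offered cells dropped
print(f"link utilization = {rho:.3f}, cell loss probability = {p_loss:.3e}")
```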
Analytical Results
The analytical results are computed for an illustrative example. The variation in Q_max as a function of N is shown in Figure 7.
Variations of ρ and P_loss are calculated using equations (38) and (40) respectively (utilizing the Q_min and Q_max expressions of equations (32) and (19)) as functions of RTT, as shown in Figure 8. ρ and P_loss are also calculated as functions of N in the range 1-10, as shown in Figure 9. The variation of P_loss determined as a function of Q_max using equations (40) and (19) is shown in Figure 10.
Referring to Figure 6, it can be inferred that the proposed model of the RRM switch offers the benefit of a reduced time delay τ_H between the instant when ACR attains its maximum value (ACR_max) and the time when Q(t) reaches Q_H as the number of sources increases; however, τ_ACRmax exhibits the inverse behaviour. This reduction in τ_H can be useful for faster control of the source data rate, reducing the cell loss probability due to Q buffer overflow. It may be seen from Figure 7 that Q_max increases rapidly for more than three sources. The link utilization ρ decreases moderately with increasing RTT while the cell loss probability P_loss increases, as shown in Figure 8. The variation of ρ and P_loss for different values of N is shown in Figure 9, from which it is clear that higher link utilization is achievable for a larger number of data sources; however, this is accompanied by an affordable increase in P_loss, which can of course be minimized by increasing Q_max, as shown in Figure 10.
Conclusion
Modeling of the RRM switch has been carried out to facilitate theoretical evaluation of the important parameters of the switch, such as Q length variation, link utilization factor and cell loss probability, as functions of the resource management round trip time and the number of sources, considering linear increment and exponential decay of source data rates. The expressions for the different parameters have been developed, and results of theoretical computations for an illustrative example are presented. It is shown that the RRM switch implementation can achieve faster congestion control of the network along with an improved link utilization factor, which is particularly attractive for a large number of sources. Further, the cell loss probability can be reduced by affording a higher value of Q_max. The presented model can find potential application, via the developed equations, in the design of an RRM switch specific to a network environment.
"Engineering",
"Computer Science"
] |
Information Feedback Between Size Portfolios in Boursa Kuwait
This paper examines the transmission of information between small and large sized portfolios within Boursa Kuwait between 2011 and 2020. The study documents a constant and steady stream of feedback which demonstrates a sizeable and significant impact on market volatility, albeit with varying degrees of effect on smaller portfolios as compared with larger ones. Evidence suggests more persistent volatility in smaller portfolios, indicating a disparity in the interpretation of transmitted information between the varied styles of investors in Boursa Kuwait.
Introduction
The transmission of information has attracted finance academics through many studies of financial market interdependence; however, information feedback in the context of portfolio allocation remains scarce by comparison and lags behind in terms of both the quality and quantity of research. This remains the case despite its strong implications for portfolio construction and management, as investors seek the optimal allocation of their investments on the basis of expected returns, with a view to maximizing those returns.
One important metric for investors is the cross-correlation of stocks, usually examined by employing GARCH specifications in order to account for information spillover to the conditional correlation between different stocks, whilst also enabling researchers to explore multidirectional information transmission between stock returns. Information feedback between large and small portfolios takes on added significance given its notable impact on the expected returns of investors' respective portfolios. Such information also has a profound impact on decision-making with reference to periodic reallocations and rebalancing of portfolios.
Indeed, many empirical papers covering this area of research assume that investors will typically adopt conventional, mean-variance optimization strategies designed to maximise expected returns within their respective risk appetites.
More specifically, when identifying the effects of information feedback on Boursa Kuwait, we can draw parallels with previous studies demonstrating its impact in other markets with larger data sets. Ross (1989) for example linked information flow to volatility, particularly in the case of smaller firms whose returns tend to lag behind larger ones. Similarly, Lo and Mackinlay (1990) found evidence that stock prices changes of larger stocks often preceded those of smaller stocks.
The relationship between volatility and the size of a portfolio was also investigated by Conrad et al (1991), who found an asymmetric effect in the transmission of volatility, with volatility appearing to transmit from larger firms to smaller ones, but not vice versa. McQueen et al (1996) seem to support the previous findings, going on to document significant asymmetry in the cross correlation of US stock portfolios of various sizes. Grieb and Reyes (2002), meanwhile, investigate the volatility spillover between large and small stock returns in UK stocks, seemingly confirming persistent correlation between the two size-based indices with a more consistent two-way flow of information.
The above studies, however, have relied on significantly more developed markets than Boursa Kuwait, especially in terms of settlement procedures. Many of the standard models used in modern finance are, in their current forms, ill-suited to account for many of the circumstances and idiosyncrasies specific to emerging markets.
As suggested by Bekaert and Harvey (2003), an opportunity therefore arises for finance models to be amended, modified, updated or completely redesigned in order to accommodate the structure and trading patterns of emerging markets such as Boursa Kuwait, and thereby take on added relevance in a quickly developing global scene. Given the above, this study seeks to investigate the following hypotheses:
(H1): There is no feedback effect between large and small size portfolio returns.
(H2): Returns of small and large portfolios are not correlated.
Data
This study uses daily closing index values that span the period 3 January 2012 to 31 December 2020, with a total of 2,367 daily observations. The index used is constructed by Global Investment House (a prestigious investment and brokerage firm in Kuwait) and is a capitalization-weighted index, which avoids the bias introduced by value-weighted indices. The index includes all firms traded in Boursa Kuwait.
Methodology
To obtain a consistent parameterization, all returns are calculated as the natural log of today's index value divided by the previous day's value, R_it = ln(I_it / I_i,t-1), where I_it is the closing value of index i on day t. For the purpose of this study, two indices are used. The large index portfolio (LI) includes the largest 10 firms in terms of market value, with monthly rebalancing, while the small index portfolio (SI) includes the smallest 10 firms in terms of market value, also with monthly rebalancing. Portfolios are formed at the beginning of each month based on the firms' market values and held for one month.
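A compact way to express this construction is sketched below on synthetic data. The column names, the synthetic prices and market caps, and the equal weighting within each size portfolio are assumptions made for illustration, since the paper does not spell out the intra-portfolio weighting.

```python
# Sketch: log returns and monthly size-sorted portfolio formation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2012-01-03", periods=500)
firms = [f"F{i:02d}" for i in range(30)]
prices = pd.DataFrame(                      # synthetic closing values
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (500, 30)), axis=0)),
    index=dates, columns=firms)
mcap = prices * rng.integers(1, 100, 30)    # synthetic market caps

log_ret = np.log(prices / prices.shift(1)).dropna()  # R_t = ln(I_t / I_{t-1})

def monthly_size_portfolios(log_ret, mcap, n=10):
    """Equal-weighted daily returns of the n smallest (RS) and n largest
    (RL) firms, re-sorted at the start of each month and held one month."""
    rl, rs = [], []
    for _, ret_m in log_ret.groupby(pd.Grouper(freq="M")):
        ranks = mcap.loc[ret_m.index[0]].sort_values()  # sort by firm size
        rs.append(ret_m[ranks.index[:n]].mean(axis=1))
        rl.append(ret_m[ranks.index[-n:]].mean(axis=1))
    return pd.concat(rl), pd.concat(rs)

RL, RS = monthly_size_portfolios(log_ret, mcap)
print(RL.head(), RS.head(), sep="\n")
```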
As in Hamao et al (1990) and Booth et al (1997), this paper employs the Exponential Generalized Autoregressive Conditional Heteroskedasticity (EGARCH) framework in order to investigate any information spillover between the large and small capitalization stocks in Boursa Kuwait. Koutmos (1996) also examined information transmission among several financial markets by employing the GARCH framework in order to account for heteroskedasticity and any possible interactions among markets. Others, such as Darber and Deb (1997), Longin and Solnik (1995), King et al (1994), Deb (1999, 2000), and Deb (2000), investigated the cross correlation and information spillovers among many financial markets. With this study focusing on micro-level factors of financial markets, as it strives to find evidence of time-varying correlation between stocks of contrasting capitalizations, the EGARCH model is well suited, as it reveals a two-way information flow from the large and small capitalization portfolios to the next period's correlation.
The specification of the EGARCH model employed in this study is in line with the original specification proposed by Nelson (1991), which assumes that the conditional variance of stock returns is a function of both its lagged innovation and the lagged conditional variance. The specification employed here is as follows:

R_{i,t} = b_0 + b_1 R_{i,t-1} + b_2 R_{j,t-1} + ε_{i,t}  (1)

ln(σ²_{i,t}) = ω + β ln(σ²_{i,t-1}) + α(|z_{i,t-1}| − E|z_{i,t-1}|) + γ z_{i,t-1}, where z_{i,t} = ε_{i,t}/σ_{i,t}  (2)

where, in equation (1), R_{i,t} (R_{j,t}) stands for the returns of the large (small) portfolio. The same equation is run for the small portfolio returns. Equation (2) models the log of the conditional variance, where the leverage effect is exponential, which means that the forecasts of the conditional variance will be non-negative.
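One way to estimate a specification of this type in Python is the `arch` package. The sketch below is a plausible setup on synthetic stand-in series; it is not the authors' actual estimation code, and the lag and order choices mirror the description above rather than any documented configuration.

```python
# Sketch: EGARCH(1,1) with an AR(1) mean and the other portfolio's lagged
# return as an exogenous regressor. rl/rs are synthetic stand-ins for the
# two daily return series (in percent, which helps the optimizer converge).
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(1)
idx = pd.bdate_range("2012-01-03", periods=2367)
rl = pd.Series(rng.normal(0, 1.0, 2367), index=idx, name="RL")
rs = pd.Series(rng.normal(0, 1.2, 2367), index=idx, name="RS")

x = rs.shift(1).dropna().to_frame("RS_lag")   # lagged small-portfolio return
y = rl.loc[x.index]                           # aligned large-portfolio return

# AR(1) mean with an exogenous RS lag; o=1 adds the EGARCH leverage term.
model = arch_model(y, x=x, mean="ARX", lags=1,
                   vol="EGARCH", p=1, o=1, q=1)
result = model.fit(disp="off")
print(result.summary())
```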
In order to corroborate the results, the feedback effect is also detected through a set of linear equations that include different variables whose effects are expected to be significant and hence show any potential contemporaneous spillover between the two sets of portfolio returns. The parameterization of the seemingly unrelated regression (SUR) takes the following form, where R_it denotes the returns of the large (RL) size portfolio; the same equation is run for the small (RS) size portfolio. The explanatory variables test any contemporaneous feedback between the two portfolio returns. Corr_{t-1} is the time-varying one-period lagged correlation between the two portfolio returns, while SD(.) refers to the time-varying risk of the two portfolios. Table 1 reports the results of a preliminary examination of the data, which includes the basic statistics and correlations. The table indicates that the mean return is highest for the small size portfolio, which conforms to the literature on the small-size effect. The risk is also higher for the small-size portfolio, which is also in line with results obtained in the literature, such as that of Black (1986). Return distributions for both portfolios are seen to be non-normal, as evidenced by the Jarque-Bera test for normality. The small-size portfolio returns are skewed to the left, whilst the larger portfolio's are skewed to the right. The autocorrelation test with lags of up to thirty (30) days indicates that the two series of returns show evidence of strong dependence and a strong ARCH effect, which must be tested formally and accounted for within the analyses.
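For the SUR step, one plausible implementation uses the `linearmodels` package. The regressor set below follows the description above, but the variable names and synthetic inputs are assumptions for illustration; in the study these series would come from the portfolio-construction and EGARCH stages.

```python
# Sketch: seemingly unrelated regressions for the two portfolio equations.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.system import SUR

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "RL": rng.normal(size=n), "RS": rng.normal(size=n),
    "Corr_lag": rng.uniform(-1, 1, n),   # lagged RL-RS correlation
    "SD_RL": rng.uniform(0.5, 2, n),     # time-varying risk proxies
    "SD_RS": rng.uniform(0.5, 2, n),
})

equations = {
    "RL": {"dependent": df["RL"],
           "exog": sm.add_constant(df[["RS", "Corr_lag", "SD_RL", "SD_RS"]])},
    "RS": {"dependent": df["RS"],
           "exog": sm.add_constant(df[["RL", "Corr_lag", "SD_RL", "SD_RS"]])},
}
res = SUR(equations).fit()  # GLS exploiting cross-equation error correlation
print(res)
```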
Results
Given this paper's intention to investigate feedback between the returns of large-size portfolios and smaller-sized ones in Boursa Kuwait, Table 2 contains the effects of the lagged values of RS and RL using the EGARCH specification. The results in the table show that, unlike the lagged values of RS in the RS equation, the effect of the lagged value of RL on RL is insignificant; i.e., the autoregressive term is insignificant. However, the autoregressive term of RS significantly affects RL. The EGARCH coefficient is significantly different from zero, which confirms the significance of the GARCH effect. This partially confirms the results of Grieb and Reyes for UK size-based stock portfolios, as their results document a significant autoregressive term in both large and small index returns. Moreover, as far as this parameterization is concerned, the results of the analysis show that the leverage effect exists for the RL model. On the other hand, the results differ for the RS portfolio: the lagged values of both RL and RS have a significant effect on RS. This result contradicts the no-spillover effect documented by Reyes (2001) when studying the Tokyo stock exchange. Past volatility also affects the RS portfolio at the 1% level. In addition, asymmetric volatility is statistically apparent in the data, which confirms the leverage effect of Black (1986) and supports Reyes (2001) for Japanese stocks. Whilst the coefficient estimates for both models suggest high volatility persistence, these estimates imply that volatility shocks to the returns on small-size portfolios persist longer than such shocks to the larger-sized portfolios, a result in line with the implications of the evidence documented by Grieb and Reyes (2002) and McQueen et al (1996). More specifically, whilst about 73.2% of a given shock remains after 3 days for returns of the small-size portfolios (persistence coefficient 0.9011, and 0.9011^3 ≈ 0.732), only about 36.6% of the same shock remains over the same 3-day period for the larger-sized portfolios.
Taken together, these results confirm the feedback effect between the two portfolio returns. Since the model describes historical portfolio behavior, it would appear that informed investors on Boursa Kuwait who construct portfolios using size criteria (most likely the institutional investors) reshuffle their portfolios based on information from different sources, among which are the return variations of both small and large portfolios. In order to further explore any feedback effect between the large and small-size portfolios, as well as to corroborate the results obtained in the EGARCH test, a seemingly unrelated regression (SUR) model is developed to investigate any contemporaneous effect from the many exogenous factors not considered in the study. The conjecture here is that since both portfolios are from the same market, they should suffer similarly from any external shocks that have not been considered in the analysis. That is, disturbances in the different equations at a given time are likely to reflect some common measurable or omitted factors, and hence could be contemporaneously correlated. Table 3 shows the results of the SUR model, where the two variables RL and RS are modeled as functions of some control variables. Many models have been run and the results are consistent. The results of model (1) show that the returns of the RS portfolio have a significant effect on the returns of the RL portfolio. Even if the lagged values of the RS returns are considered, we still observe a significant effect on the large portfolio returns. The implication is clear: there is a feedback effect between the large and small size portfolio returns in Boursa Kuwait. As for the lagged correlation between the returns of the two portfolios, it also has a significant effect on the returns of both portfolios. However, that effect is not symmetrical. Specifically, while the lagged correlation has a negative and significant effect on RL, it has a positive and statistically significant effect on RS. The same holds when considering the contemporaneous correlation (results are not shown). Although this result demonstrates that correlation is an important piece of information that can be used to improve the performance of Kuwaiti investors' portfolios, the inconsistency in the effect of the correlation on the two portfolios could be an indicator of noise in investment trading during the studied period. Furthermore, this result may lead to the formation of a style of portfolio management for investors in Boursa Kuwait. Model (2) tests only the effect of the lagged values of both RS and RL. The results are not different from those obtained in Table 2 when considering the 5% significance level, as both lagged values have no significant effect on RL. This model, though, has less predictive power than model (1). To corroborate the results of model (1), model (3) employs the same variables but adds the return volatility of both portfolios. The results show that this model has the same predictive power as model (1) and all variables retain their sign. The return variability of RS significantly and negatively affects RL, while the return variability of RL has no significant effect on the RL returns. This result seems eccentric, since it would be expected that the volatility of RL would affect large stock returns.
The results also contradict Lo and MacKinlay (1990), who document asymmetric cross-autocorrelation between size-based portfolios of the US stock market.
Models (4), (5), and (6) relate to the determinants of RS returns, employing the same proposed variables thought to affect RL returns. The results of model (4) show that information about both the RL returns and the one-period lagged returns of RS has a positive effect on the RS portfolio returns. One may surmise from this that Kuwaiti investors use this information to make their investment decisions when altering the assets in their portfolios. The one-period lagged correlation, whilst it shows a positive effect on the RL returns, negatively affects the RS returns (considering a 10% significance level). Model (5) replicates the analysis in model (4), and the results are the same as those obtained in Table 2 in terms of the positive effect of the lagged values of both the RL returns and the RS returns on the RS returns. Model (6) is analogous to model (3) for RL returns in terms of the included variables. The results of this model again show that the return variability of RL has no significant effect on the RS returns. However, the RS returns' variability has a positive effect on the RS portfolio returns. The reason appears clear from the results in Table 2, which show that the volatility persistence of RS is higher than that of RL; hence its effect would expectedly be higher and more closely watched by investors in Boursa Kuwait. This suggests that investors in Boursa Kuwait tend to place a higher value on information related to RS than on information related to RL. This contention is supported by the fact that the predictive power of model (6), as well as model (4), is the highest among those studied.
Conclusion
This study provides evidence of a feedback effect between the returns of large and small portfolios on Kuwait's stock exchange (Boursa Kuwait). The findings appear to support the hypothesis of information flow from one period to the next, as the correlation between the returns of the smaller and larger sized portfolios affects the returns of both sets of portfolios in the following period, albeit to varying degrees. The existence of this effect may primarily be caused by the continuous arrival of information which, in theory, should impact all stocks across Boursa Kuwait, a fact that is by and large supported by the literature.
It might also be volatility changes that lead investors to shift focus from one market to another (King and Wadhwani, 1990). Specifically, volatility persistence was detected, and that persistence lasts longer for the small-size portfolio than for the large-size one. The difference in the effect on the two portfolios implies that what is considered good news for small stocks may be considered bad news for larger ones. In other words, the study documents a sensitivity of changes in returns to the explanatory factors considered within this study. This sensitivity appears to support the conjecture that investors usually infer that the non-fundamental factors affecting large and small firms within the economy are different, which potentially manifests in varied decision-making when investors base their investment decisions on small and large stock fundamentals. Generally speaking, the results of this study suggest that studies of portfolio selection and investment behavior in Boursa Kuwait should account for both the contemporaneous and the one-period lagged volatility effects. Additionally, this study supports the well-documented and long-observed leverage effect of stock returns, where there is an asymmetric effect on volatility in rising and falling stock markets.
"Economics"
] |
CDE++: Learning Categorical Data Embedding by Enhancing Heterogeneous Feature Value Coupling Relationships
Categorical data are ubiquitous in machine learning tasks, and the representation of categorical data plays an important role in learning performance. The heterogeneous coupling relationships between features and feature values reflect the characteristics of real-world categorical data, which need to be captured in the representations. This paper proposes an enhanced categorical data embedding method, CDE++, which captures the heterogeneous feature value coupling relationships in the representations. Based on information theory and the hierarchical couplings defined in our previous work CDE (Categorical Data Embedding by learning hierarchical value coupling), CDE++ adopts mutual information and margin entropy to capture feature couplings and designs a hybrid clustering strategy to capture multiple types of feature value clusters. Moreover, an Autoencoder is used to learn non-linear couplings between features and value clusters. The categorical data embeddings generated by CDE++ are low-dimensional numerical vectors which can be directly applied to clustering and classification, and they achieve the best performance compared with other categorical representation learning methods. Parameter sensitivity and scalability tests are also conducted to demonstrate the superiority of CDE++.
Introduction
Categorical data with finite, unordered feature values are ubiquitous in machine learning tasks such as clustering [1,2] and classification [3,4]. Most machine learning algorithms, such as k-means and SVM, are built for numerical data based on algebraic operations and cannot be directly used for categorical data. These algebraic machine learning algorithms become applicable to categorical data only if we embed the categorical data into a numerical vector space. However, learning numerical representations of categorical data is not a trivial task, since the intrinsic characteristics of categorical data need to be captured in the embeddings.
As stated in [5], the hierarchical coupling relationships (i.e., correlation and dependency) between feature values in categorical data are a crucial characteristic which should be mined sufficiently. The sophisticated couplings between feature values also reflect the correlations between features. Take the simple dataset in Table 1 as an example. It is intuitive that the value (short for feature value) Female of feature Gender is highly coupled with the value Liberal arts of feature Major. Similarly, the value Engineering in feature Major is strongly coupled with the value Programmer in feature Occupation. Thus, the relation between the features Gender and Major can be expressed by a semantic cluster, i.e., {Female, Liberal arts}, as can that between Major and Occupation by {Engineering, Programmer}. These value clusters, which may themselves be coupled with one another, reflect higher-level relationships in the data. For most learning tasks, the more relevant information (i.e., the hierarchical couplings) the categorical data embedding captures, the better performance it has. However, besides CDE [5], other representation learning methods capture only limited or none of the couplings in categorical data. Generally, existing methods fall into two categories: embedding-based methods and similarity-based methods. Typical embedding methods, e.g., 1-hot encoding and Inverse Document Frequency (IDF) encoding [6,7], transform categorical data to numerical data directly by some encoding scheme, but these methods treat features independently and ignore the couplings between feature values. Several similarity-based methods, e.g., ALGO (clustering ALGOrithm), DILCA (DIstance Learning for Categorical Attributes), DM (Distance Metric), and COS (COupled attribute Similarity) [8-11], take value couplings into consideration. However, these methods do not take the intrinsic clusters of feature values and the couplings between clusters into account, so their representation capacities for categorical data are limited.
Learning the heterogeneous hierarchical couplings in categorical data is not a trivial task, and there is little work so far on representing hierarchical couplings in categorical data. To our knowledge, our previous work CDE (Categorical Data Embedding) [5] is the first work focusing on hierarchical coupling mining and categorical data representation. Compared with other existing representation methods, it achieves relatively better performance. However, CDE can only capture homogeneous value clusters through a single clustering strategy, and only linear correlations between value clusters through principal component analysis, which limits its performance on complex categorical data.
To address the above issues, we propose an enhanced Categorical Data Embedding method, CDE++, which can capture heterogeneous feature value relationships in categorical data. In the value coupling learning phase, we use mutual information and margin entropy to learn the interactions of features and feature values. To learn the value cluster couplings, we design a hybrid clustering strategy to obtain heterogeneous value clusters from multiple aspects. An Autoencoder is then applied to these value cluster indicator matrices to obtain lower-dimensional value embeddings which can capture complex non-linear relationships between value clusters. We finally concatenate the value embeddings to generate an expressive object representation. In this way, CDE++ captures the intrinsic data characteristics of categorical data in expressive numerical embeddings, which largely facilitates the subsequent learning tasks.
The contributions of this work are summarized as follows:
• By analyzing the hierarchical couplings in categorical data, we propose an enhanced Categorical Data Embedding method (CDE++), which can capture heterogeneous feature value coupling relationships at each level.
• We adopt mutual information and margin entropy to capture the couplings between features, and design a hybrid clustering strategy to capture more sophisticated and heterogeneous value clusters at the low level. CDE++ implements different metric-based clustering methods, including a density-based clustering method and a hierarchical clustering method, with various clustering granularities from different perspectives and semantics.
• We utilize an Autoencoder to learn the complex and heterogeneous value cluster couplings at the high level. With this, CDE++ maps the original value representation into a low-dimensional space, while learning both linear and non-linear value cluster coupling relationships.
• We empirically prove the superiority of CDE++ through both supervised and unsupervised learning tasks. Experimental results show that (i) CDE++ significantly outperforms the state-of-the-art methods and their variants in both clustering and classification; (ii) CDE++ is insensitive to its parameters and thus has stable performance; (iii) CDE++ is scalable w.r.t. the number of data instances.
The rest of this paper is organized as follows. Related work is discussed in Section 2. We introduce the proposed method, CDE++, in Section 3. The experimental setup and results analysis are provided in Section 4. We conclude this work in Section 5.
Related Work
Existing representation learning algorithms broadly fall into two categories: (i) embedding-based representation, which represents each categorical object by a numerical vector, and (ii) similarity-based representation, which uses an object similarity matrix to represent the categorical objects.
Embedding-Based Representation
Embedding-based representation, which is the most widely used approach for categorical data, generates a numerical vector to represent each categorical object. A popular embedding method called 1-hot encoding translates each feature value to a zero-one indicator vector [6]. It first counts the number of values of a feature f_i as |V_i|. A value of the feature is then represented by a 1-hot |V_i|-dimensional vector, where '1' corresponds to the value's entry and '0' to the others. 1-hot encoding treats each value equally and ignores the intrinsic couplings in real datasets. Our previous work CDE [5] is a state-of-the-art embedding-based representation which makes use of the coupling relationships in datasets. However, it cannot exploit heterogeneous coupling relationships comprehensively, due to its clustering method and its limited mining of non-linear relationships. CDE uses a dimension reduction method, principal component analysis (PCA) [12], to alleviate the curse of dimensionality. IDF encoding is another popular embedding-based representation method [7]; it utilizes the probability-weighted amount of information (PWI), calculated from value frequencies, to represent each value. IDF encoding learns couplings between values from the occurrence perspective; accordingly, its ability to mine the intrinsic coupling relationships of a dataset is very limited. The method in [13] has the same goal as our work, which is to transform categorical data into numerical representations. The main difference between the method in [13] and ours is that it needs class labels, while our method is unsupervised.
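For reference, the 1-hot baseline discussed above is a one-liner with scikit-learn; the toy objects below loosely follow the Table 1 example (the third-column values are made up for illustration).

```python
# 1-hot encoding of categorical features: each value becomes a 0/1 indicator.
from sklearn.preprocessing import OneHotEncoder

X = [["Female", "Liberal arts", "Teacher"],
     ["Male", "Engineering", "Programmer"]]
enc = OneHotEncoder(sparse_output=False)   # dense 0/1 matrix
print(enc.fit_transform(X))                # one indicator column per value
```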
Embedding-based representation methods are also used for textual data, and there are several effective embedding methods such as Skim-gram [14], latent semantic indexing (LSI) [15], latent Dirichlet allocation (LDA) [16], as well as some variants of them in [17][18][19]. Granular Computing paradigm [20][21][22] is an embedding method which is powerful especially when dealing with non-conventional data such as graphs, sequences, text documents. However, the embedding representation for textual data is significantly different from categorical data since categorical data is structured, whereas textual data is unstructured. Thus, we do not detail these embedding methods here.
Similarity-Based Representation
Similarity-based representation methods utilize an object similarity matrix to represent categorical data. Several similarity-based methods are inspired by learning the couplings in categorical data. For instance, ALGO [8] first takes advantage of the conditional probability between a pair of values to describe value couplings; DILCA [9] learns a context-based distance between feature values to capture feature couplings; DM [10] incorporates frequency probabilities and feature weighting to mine feature couplings; and COS [11] grasps couplings from two aspects, i.e., inter-feature and intra-feature. The above similarity measures learn feature couplings from value pairs. However, they cannot obtain comprehensive couplings, since value clusters and the couplings therein are not considered. Moreover, similarity-based methods are inefficient because they require calculating and storing the object similarity matrix.
There are several embedding methods that utilize a similarity matrix to optimize their embedding representations [23,24]. However, the performance of these embedding methods depends heavily on the underlying similarity methods.
Learning Process of CDE++
We aim to rebuild the representation of the categorical dataset so as to make it more convenient for subsequent learning tasks. Figure 1a illustrates the framework of our enhanced Categorical Data Embedding learning method (CDE++). The gray boxes in Figure 1a represent a series of learning methods, whereas the white boxes contain the intermediate data of our representation rebuilding. Figure 1b is an instance of the data flow in CDE++. The notation is illustrated in Table 2.
Table 2. Symbols and descriptions.
X, x: the dataset and a specific object.
F, f: the feature set in the dataset and a specific feature.
V, v: the whole feature value set in the dataset and a specific feature value.
v_f^x: the value in feature f of object x.
n: the number of objects in the dataset.
d: the number of features in the dataset.
m: the number of feature values in the dataset.
|C|: the number of ground-truth classes in the dataset.
p(v): the probability of v, calculated from its occurrence frequency.
p(v_i, v_j): the joint probability of v_i and v_j.
NMI(f_a, f_b): the relation between two features f_a and f_b.
I(f_a, f_b): the relative entropy between the joint distribution and the marginal distributions of two features f_a and f_b.
ξ_o: the occurrence-based value coupling function.
ξ_c: the co-occurrence-based value coupling function.
M_o: the occurrence-based relationship matrix.
M_c: the co-occurrence-based relationship matrix.
τ(eps, MinPts): the parameter pair of DBSCAN.
K: the number-of-clusters parameter of HC.
C: the cluster indicator matrix.
vc: the dimension of the cluster indicator matrix.
ε: the factor for dropping redundant value clusters.
λ: the hidden factor of the Autoencoder.
q: the dimension of a value embedding after the Autoencoder.
Ω: the general function to generate new object embeddings.
As shown in Figure 1, we first construct the value coupling matrices with occurrence-based and co-occurrence-based value coupling functions, which capture the interactions between values. Then we learn value clusters with a hybrid clustering strategy at multiple granularities. After obtaining the value clusters, we learn the couplings between value clusters with a deep neural network, the Autoencoder, to produce the value representation. Finally, we obtain the object representation by concatenating the value vectors for the subsequent learning tasks.
Preliminaries
Consider a dataset X with n objects, X = {x_1, x_2, ..., x_n}, where each object x_i is described by d categorical features from the feature set F; V denotes the whole set of m feature values. To describe how the joint probability of two values v_i and v_j is calculated, we need some notation. Let f_i denote the feature that v_i belongs to, and let v_f^x denote the value in feature f of object x. Let p(v) denote the probability of v, calculated from its occurrence frequency. The joint probability of v_i and v_j is then

p(v_i, v_j) = |{x ∈ X : v_{f_i}^x = v_i and v_{f_j}^x = v_j}| / n.

The normalized mutual information, denoted NMI, is a measurement of the mutual dependence between two variables [25]: when we observe one variable, the information we can obtain about the other is quantified by NMI. Accordingly, the relation between two features f_a and f_b can be defined as

NMI(f_a, f_b) = 2 I(f_a, f_b) / (H(f_a) + H(f_b)),

where I(f_a, f_b) is the relative entropy between the joint distribution and the marginal distributions,

I(f_a, f_b) = Σ_{v_i ∈ f_a} Σ_{v_j ∈ f_b} p(v_i, v_j) log( p(v_i, v_j) / (p(v_i) p(v_j)) ),

and H(f_a) and H(f_b) are the marginal entropies of features f_a and f_b respectively, with the marginal entropy of a feature f given by

H(f) = − Σ_{v ∈ f} p(v) log p(v).
Learning Value Couplings
The value couplings are learned to reflect the intrinsic relationships between feature values, as in our previous work [5], where this was proved effective and intuitive. The relation between values has two aspects: on the one hand, the occurrence frequency of one value is influenced by others; on the other hand, one value can be influenced by a paired value because of their co-occurrence in the same object. To capture the value couplings based on occurrence and co-occurrence, two coupling functions and their corresponding relation matrices (m × m) are constructed.
The occurrence-based value coupling function ξ_o represents the occurrence frequency of v_i as influenced by v_j; in this function, the NMI of the two corresponding features acts as a weight. From this coupling function, the occurrence-based relationship matrix M_o is constructed. The co-occurrence-based value coupling function ξ_c indicates the co-occurrence frequency of value v_i as influenced by value v_j. Note that f_i and f_j are never equal, since two values of the same feature cannot co-occur in one object. The co-occurrence-based relationship matrix M_c is built accordingly. The two matrices can be treated as new representations of the value couplings based on occurrence and co-occurrence, respectively, and serve as the input to the subsequent value clustering.
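A hedged sketch of the two m × m relation matrices, reusing feature_relation() from the sketch above. The paper's exact coupling functions are not reproduced here; as stand-ins, M_c stores empirical co-occurrence frequencies of value pairs, and M_o stores the occurrence frequency of one value conditioned on another, weighted by the NMI of their features.

import numpy as np

def coupling_matrices(X_cat):
    """X_cat: n x d numpy array of categorical values."""
    n, d = X_cat.shape
    value_index = {}                       # (feature index, value) -> global value index
    for f in range(d):
        for v in np.unique(X_cat[:, f]):
            value_index[(f, v)] = len(value_index)
    m = len(value_index)

    counts = np.zeros(m)                   # occurrence counts per value
    co = np.zeros((m, m))                  # co-occurrence counts (values of distinct features)
    for x in range(n):
        idx = [value_index[(f, X_cat[x, f])] for f in range(d)]
        for i in idx:
            counts[i] += 1
        for i in idx:
            for j in idx:
                if i != j:
                    co[i, j] += 1

    feat_of = np.zeros(m, dtype=int)
    for (f, _), i in value_index.items():
        feat_of[i] = f
    nmi = np.array([[feature_relation(X_cat[:, a], X_cat[:, b]) for b in range(d)]
                    for a in range(d)])

    M_c = co / n                                                    # co-occurrence frequency
    cond = np.divide(co, counts[None, :], out=np.zeros_like(co), where=counts[None, :] > 0)
    M_o = cond * nmi[np.ix_(feat_of, feat_of)]                      # NMI-weighted occurrence (stand-in form)
    np.fill_diagonal(M_o, 0.0)
    return M_o, M_c, value_index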
Hybrid Value Clustering
To capture the value clusters from different perspectives and semantics, we cluster the feature values at different granularities and use the new representations (M_o, M_c) as the input to the clustering algorithms. To make the clustering results more robust and reflect the data characteristics more precisely, we adopt a hybrid clustering strategy that combines the results of DBSCAN (Density-Based Spatial Clustering of Applications with Noise) and HC (Hierarchical Clustering).
Our motivation for the hybrid clustering strategy is as follows: (i) DBSCAN is density-based, whereas HC, like K-means, is partition-based, so combining the results of the two methods yields more comprehensive value clusters, which is crucial for capturing the intrinsic data characteristics. (ii) DBSCAN performs well on both convex and non-convex data sets, whereas K-means is not suitable for non-convex data sets; HC can also handle the non-spherical data sets that K-means cannot. (iii) DBSCAN is not sensitive to noisy points, which makes it stable. Consequently, the hybrid clustering strategy is suitable for the majority of data sets and yields better clustering results.
DBSCAN has a pair of parameters τ(eps, MinPts), where eps is the maximum radius of the neighbourhood centred on a cluster core and MinPts is the minimum number of objects within that neighbourhood. HC has a single parameter K, which, like K-means, specifies the number of clusters. Therefore, to cluster at different granularities, we set parameter lists {τ_1, τ_2, ..., τ_o} and {τ_1, τ_2, ..., τ_c} for clustering M_o and M_c with DBSCAN, respectively. Likewise, we set {k_1, k_2, ..., k_o} and {k_1, k_2, ..., k_c} for clustering with HC.
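A minimal sketch of this hybrid clustering step, assuming scikit-learn. Each row of M_o or M_c is treated as the representation of one feature value; DBSCAN and agglomerative (hierarchical) clustering are run at several granularities, and each run contributes a block of one-hot cluster indicators.

import numpy as np
from sklearn.cluster import DBSCAN, AgglomerativeClustering

def indicator(labels):
    """One-hot cluster indicator block for one clustering run (the DBSCAN noise label -1 keeps its own column)."""
    uniq = np.unique(labels)
    return (labels[:, None] == uniq[None, :]).astype(float)

def hybrid_value_clustering(M, dbscan_params, hc_ks):
    """M: m x m relation matrix; dbscan_params: list of (eps, min_pts); hc_ks: list of K values."""
    blocks = []
    for eps, min_pts in dbscan_params:
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(M)
        blocks.append(indicator(labels))
    for k in hc_ks:
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(M)
        blocks.append(indicator(labels))
    return np.hstack(blocks)   # m x (total number of clusters across all runs)

# The overall indicator matrix C concatenates the blocks obtained from M_o and M_c:
# C = np.hstack([hybrid_value_clustering(M_o, params_o, ks_o),
#                hybrid_value_clustering(M_c, params_c, ks_c)])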
Parameter selection. In HC clustering, the strategy for choosing K is given in Algorithm 1. Instead of fixing a value, we use a proportion factor ε to decide the maximum number of clusters, as shown in Steps (3-12) of Algorithm 1. We remove tiny clusters containing only one value from the indicator matrix; when the number of removed clusters exceeds k_ε, we stop increasing K, whose initial value is 2. In DBSCAN clustering, for a specific τ(eps, MinPts), the parameters eps and MinPts are selected from the k-distance graph. For a given k, the k-distance function maps each point to the distance to its k-th nearest neighbour. We sort the points of the clustering database in descending order of their k-distance values, set eps to the value at the first "valley" of the sorted k-distance graph, and set MinPts to k. The value of k is the same as the parameter K of HC. This parameter selection follows [26].
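A hedged sketch of the k-distance heuristic for choosing eps: sort each point's distance to its k-th nearest neighbour and take the value at the first pronounced "valley" (knee) of the sorted curve. The knee detection below is a crude stand-in for the visual inspection described in [26].

import numpy as np
from sklearn.neighbors import NearestNeighbors

def kdist_eps(M, k):
    """Return a candidate eps for DBSCAN from the sorted k-distance curve of the rows of M."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(M)    # +1 because the nearest neighbour of a point is itself
    dist, _ = nn.kneighbors(M)
    kdist = np.sort(dist[:, -1])[::-1]                 # descending order, as in the text
    knee = np.argmax(np.abs(np.diff(kdist, 2))) + 1    # point of largest change in slope
    return kdist[knee]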
After clustering, we obtain four cluster indicator matrices representing the clustering results of DBSCAN and HC applied to M_o and M_c. These indicator matrices are concatenated into the overall value cluster indicator matrix C.
Embedding Values by Autoencoder
Deep neural networks (DNNs) are widely used in machine learning because of their ability to extract features. Each intermediate layer of a DNN learns features automatically, without requiring prior knowledge.
After constructing the value cluster indicator matrix C, which contains comprehensive information, we further learn the couplings between the value clusters while building a concise but meaningful value representation. It is natural to use a DNN for learning the value cluster couplings, and we use an Autoencoder to do so in an unsupervised setting. The Encoder and Decoder can be written simply as

Encoder: code = f(x), Decoder: x̂ = g(code).

The Encoder learns a low-dimensional representation, code, of the input X; each layer of the Encoder learns features and feature couplings of X, so code retains the information of X. The Decoder reconstructs X from its input, i.e., code. Training the Autoencoder amounts to minimizing the loss function Loss[x, g(f(x))]. After training, code contains the feature couplings of X and conveys information similar to X.
The Autoencoder makes it possible to capture the heterogeneous value cluster couplings and to obtain a relatively low-dimensional value representation. In our method, we train the Autoencoder using the value cluster indicator matrix C as the input, and then use the Encoder to compute a new value representation matrix V_new of size m × q. The column size q is determined by the total number of value clusters, denoted vc, and by the hidden factor λ, which is discussed in Section 4.5. The new value representation V_new conveys the information of the value clusters C as well as the cluster couplings, and is therefore a concise but meaningful value representation.
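A minimal PyTorch sketch (an assumed framework, not the authors' implementation) of the Autoencoder that compresses the m × vc indicator matrix C into the m × q value representation V_new; the single-layer architecture, activations, MSE loss, and q = round(vc / λ) are assumptions.

import torch
import torch.nn as nn

class ValueAutoencoder(nn.Module):
    def __init__(self, vc, q):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vc, q), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(q, vc), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def embed_values(C, lam=10, epochs=200, lr=1e-3):
    """C: m x vc numpy indicator matrix; returns V_new as an m x q numpy array."""
    C_t = torch.tensor(C, dtype=torch.float32)
    q = max(1, round(C.shape[1] / lam))
    model = ValueAutoencoder(C.shape[1], q)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon, _ = model(C_t)
        loss = loss_fn(recon, C_t)        # Loss[x, g(f(x))]
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, V_new = model(C_t)
    return V_new.numpy()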
The Embedding for Objects
The final step is to build the object embeddings after obtaining the value representation from the Autoencoder. The general form is

e_x = Ω(V_new(v_{f_1}^x), V_new(v_{f_2}^x), ..., V_new(v_{f_d}^x)),   (7)

where the function Ω in Equation (7) can be customized to suit the downstream learning task. Here, we concatenate the new value vectors from V_new to generate the new object embedding.
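A minimal sketch of this concatenation step: for each object, the V_new row of each of its d feature values is looked up and concatenated, so every object is represented by a d × q dimensional vector (value_index is the (feature, value) lookup built in the earlier sketch).

import numpy as np

def embed_objects(X_cat, V_new, value_index):
    """X_cat: n x d categorical matrix; value_index: (feature, value) -> row index of V_new."""
    n, d = X_cat.shape
    q = V_new.shape[1]
    E = np.zeros((n, d * q))
    for x in range(n):
        rows = [V_new[value_index[(f, X_cat[x, f])]] for f in range(d)]
        E[x] = np.concatenate(rows)
    return E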
The main procedure of CDE++ is presented in Algorithm 1. It has three inputs: the data set X, the factor ε for dropping redundant value clusters, and the hidden factor λ of the Autoencoder. The algorithm consists of four steps. The first step calculates M_o and M_c from the occurrence-based and co-occurrence-based value coupling functions. CDE++ then applies the hybrid clustering strategy to cluster the values using M_o and M_c; the parameter ε controls the clustering results and determines when the clustering process terminates. In the third step, the algorithm uses the Autoencoder to learn the couplings of the value clusters and generates the concise but meaningful value embedding; the parameter λ is the ratio between the input and output dimensions of the Encoder, i.e., the degree of dimension compression. Finally, CDE++ embeds the objects in the data set by concatenating the value embeddings. Accordingly, the total complexity of CDE++ is O(nd² + m ln m + m² + m·vc·epoch + nd). In real data sets, the number of values per feature is generally small, so m² is only slightly larger than d² and is not comparable with nd². The total number of value clusters vc is much smaller than m, and epoch is the number of training iterations, which is set manually. Therefore, the time complexity of CDE++ simplifies to approximately O(nd² + m·vc·epoch).
Data Sets
To evaluate the performance of CDE++, fifteen real-world datasets from the UCI machine learning repository (https://archive.ics.uci.edu/ml/datasets.php) are used, as shown in Table 3. These datasets cover multiple areas, e.g., life, physical, game, social, computer, and education. Each data set has a class label used for evaluation and several features described by categorical values. In the unsupervised K-means task, the whole dataset is used for both training and testing. In the supervised SVM task, 75% of each dataset is used for training and the remaining 25% for testing.
The detailed attributes of data sets are presented in Table 3, where {n, d, q, |C|} denotes the number of objects, features, feature values, ground-truth classes in the data set respectively.
Baseline
In this test, CDE++ is compared with IDF encoding (denoted "IDF"), DILCA, our previous work (the coupled data embedding, denoted "CDE"), and the widely used one-hot encoding (denoted "1-HOT"). To make the comparison fair, we also introduce variants of CDE and 1-HOT in which the last step of generating the value embedding is replaced by an Autoencoder; these variants are denoted CDE-AE and 1-HOT-AE, respectively. For CDE and its variant, the parameters are set according to the original paper. The parameters of CDE++ are given in Section 4.5. All Autoencoders use the same parameter settings, as shown in Table 4.
Evaluation Methods
The performance of learning tasks depends significantly on the data representation: the more expressive the representation, the better the performance. To give a convincing evaluation, we feed the obtained representations into both unsupervised and supervised learning tasks. Without loss of generality, we choose K-means as the representative unsupervised task and SVM as the representative supervised task.
In K-means clustering, we set the number of clusters K = |C| for each data set and use the widely used F-score to measure performance: the higher the F-score, the better the clustering and hence the better the object representation. Although the datasets we use are relatively balanced, we report the micro-averaged F-score, computed as

micro-P = Σ_i TP_i / Σ_i (TP_i + FP_i), micro-R = Σ_i TP_i / Σ_i (TP_i + FN_i), micro-F = 2 · micro-P · micro-R / (micro-P + micro-R),

where TP_i, FP_i, and FN_i are the numbers of true positives, false positives, and false negatives for class i.
For SVM classification, we use accuracy as the performance measure. Likewise, the higher the accuracy, the better the object representation.
Since the starting points of the value clustering are random, we run the proposed CDE++ 10 times and feed each obtained representation into the learning tasks; each task is repeated 10 times to obtain a stable result. The reported F-score or accuracy is the average over these 100 runs, which makes the evaluation robust.
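A hedged sketch of this evaluation protocol: the learned embedding E is fed to K-means (micro F-score) and to an SVM (accuracy), averaged over repeated runs. Mapping each cluster to its majority class before computing the F-score is an assumption, since the paper does not specify the cluster-to-class assignment, and y is assumed to be an integer class label vector.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics import f1_score, accuracy_score
from sklearn.model_selection import train_test_split

def evaluate(E, y, n_classes, runs=10):
    f_scores, accs = [], []
    for seed in range(runs):
        labels = KMeans(n_clusters=n_classes, random_state=seed).fit_predict(E)
        mapped = np.empty_like(labels)
        for c in np.unique(labels):
            mapped[labels == c] = np.bincount(y[labels == c]).argmax()   # majority class of each cluster
        f_scores.append(f1_score(y, mapped, average="micro"))

        X_tr, X_te, y_tr, y_te = train_test_split(E, y, test_size=0.25, random_state=seed)
        accs.append(accuracy_score(y_te, SVC().fit(X_tr, y_tr).predict(X_te)))
    return np.mean(f_scores), np.mean(accs)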
Experimental Environment
All experiments are conducted on the same workstation. Table 3 presents the K-means clustering F-scores of the tested methods. On thirteen of the fifteen datasets, CDE++ achieves the best performance, clearly ahead of the other embedding methods. On average, CDE++ obtains improvements of approximately 16.58%, 14.56%, 9.10%, 10.80%, 13.56%, and 12.50% over IDF, DILCA, CDE, CDE-AE, 1-HOT, and 1-HOT-AE, respectively. CDE outperforms the other state-of-the-art representation methods owing to its learning of hierarchical couplings, while CDE++ strengthens the capture of heterogeneous value relationships and achieves the best performance. Table 5 reports the SVM accuracy using the representations output by CDE, CDE-AE, 1-HOT, 1-HOT-AE, and CDE++. CDE++ performs significantly better than the first four methods and somewhat better than 1-HOT and 1-HOT-AE. On average, CDE++ obtains approximately 12.76%, 13.55%, 10.38%, 17.3%, 5.8%, and 5.11% improvement over IDF, DILCA, CDE, CDE-AE, 1-HOT, and 1-HOT-AE, respectively. In the supervised learning task, the enhanced CDE++ therefore also maintains higher performance than the other methods. Based on these results, CDE++ generalizes to both unsupervised and supervised tasks.
Ablation Study
To examine whether all components of CDE++ are necessary, we perform an ablation study; Table 6 shows the settings of the comparison groups. We run the K-means clustering and SVM classification tasks on the resulting object embeddings. In the implementation, (i) and (ii) use only DBSCAN and only HC for value cluster learning, respectively, whereas (iii) uses both. None of (i), (ii), or (iii) learns the value cluster couplings, while (iv) uses all parts of CDE++. Table 6. Ablation Study Settings. Tables 7 and 8 report the K-means clustering and SVM classification performance, respectively. With all parts of CDE++, the two learning tasks obtain the highest F-score and accuracy. Based on this ablation study, no component of CDE++ can be dropped, and the full structure yields better object embeddings.

Figures 4 and 5 present the dimension of the object representation and the clustering performance for different λ values. Likewise, we fix ε = 8 to test the sensitivity with respect to λ. Figure 5 shows that the clustering performance is relatively stable over the tested range of λ, whereas the dimension of the object representation decreases as λ increases, as illustrated in Figure 4. λ adjusts the ratio between the output and input dimensions of the Autoencoder and is inversely proportional to the dimension of the value representation after the Autoencoder. Within this range, although the dimension of the value representation decreases, it conveys similar information thanks to the Autoencoder, so the clustering performance does not fluctuate strongly. Based on these sensitivity results, the performance of CDE++ is not sensitive to ε and λ, and we suggest ε = 8 and λ = 10 as a general parameter pair.
Scalability Test
We split the largest dataset in our work, Chess, into five subsets whose sizes double successively, for the scale-up test with respect to data size; the Chess subsets have six fixed features. Likewise, we synthesize data sets with dimensions varying in [20, 320] for the scalability test with respect to data dimension, with a fixed data size (10,000 objects); the feature values of the synthetic data sets are chosen randomly from {0, 1}. Figure 6 presents the scalability results of the five embedding methods. As Figure 6 illustrates, the execution times increase only modestly as the data set size grows, showing that the execution time of CDE++ is linear in the data size and that its scalability with respect to data set size is good, whereas DILCA is O(n²d² log d).
1-HOT is the most efficient embedding method since it does not consider couplings between feature values and simply translates each feature value into a one-hot vector. The time complexities of CDE++ and CDE are similar before the cluster-coupling learning step; because the Autoencoder is more time-consuming than PCA, the execution time of CDE++ is longer than that of CDE. When the PCA step of CDE is replaced with an Autoencoder, the execution time increases further and becomes even longer than that of CDE++. Figure 7 shows the execution times of the tested methods for different object dimensions. As the object dimension grows, the execution times of all five methods rise sharply. 1-HOT and 1-HOT-AE are much faster because they are simpler, as noted above. CDE++, CDE, and CDE-AE have higher and similar execution times because their complexities are quadratic in the number of features. Specifically, the execution time of CDE++ on a dataset with 10,000 objects and more than 300 features is about 10 minutes, which remains acceptable for embedding high-dimensional datasets.
Conclusions
This paper proposes an enhanced categorical data embedding method (CDE++), which generates an expressive representation of complex categorical data by capturing heterogeneous feature value coupling relationships. We design a hybrid clustering strategy to capture more sophisticated and heterogeneous value clusters at the low level, and use an Autoencoder to learn the complex and heterogeneous value cluster couplings at the high level. Unlike existing representation methods, our approach comprehensively captures the intrinsic data characteristics. Experimental results demonstrate that CDE++ works for both supervised and unsupervised learning tasks, significantly outperforms existing state-of-the-art methods, offers good scalability and efficiency, and is insensitive to its parameters.
Building on CDE++, our future work will consider mixed data (i.e., categorical and continuous data). In addition, CDE++ could be customized for different application requirements to obtain better performance. | 6,822.2 | 2020-03-29T00:00:00.000 | [
"Computer Science"
] |
Patients Are Paying Too Much for Tuberculosis: A Direct Cost-Burden Evaluation in Burkina Faso
Background Paying for health care may exclude poor people. Burkina Faso adopted the DOTS strategy implementing “free care” for Tuberculosis (TB) diagnosis and treatment. This should increase universal health coverage and help to overcome social and economic barriers to health access. Methods Straddling 2007 and 2008, in-depth interviews were conducted over a year among smear-positive pulmonary tuberculosis patients in six rural districts of Burkina Faso. Out-of-pocket expenses (direct costs) associated with TB were collected according to the different stages of their healthcare pathway. Results Median direct cost associated with TB was US$101 (n = 229) (i.e. 2.8 months of household income). Respectively 72% of patients incurred direct costs during the pre-diagnosis stage (i.e. self-medication, travel, traditional healers' services), 95% during the diagnosis process (i.e. user fees, travel costs to various providers, extra sputum smears microscopy and chest radiology), 68% during the intensive treatment (i.e. medical and travel costs) and 50% during the continuation treatment (i.e. medical and travel costs). For the diagnosis stage, median direct costs already amounted to 35% of overall direct costs. Conclusions The patient care pathway analysis in rural Burkina Faso showed substantial direct costs and healthcare system delay within a “free care” policy for TB diagnosis and treatment. Whether in terms of redefining the free TB package or rationalizing the care pathway, serious efforts must be undertaken to make “free” health care more affordable for the patients. Locally relevant for TB, this case-study in Burkina Faso has a real potential to document how health programs' weaknesses can be identified and solved.
Introduction
The direct cost-burden of illness, particularly for a chronic disease such as tuberculosis, can cause delays in care, slow recovery, and exacerbate health problems and drug resistance. Moreover, it may lead to catastrophic health expenditures and impoverishment as a result of the use of health services [1]. Protecting people from this financial risk is a clear priority for policy-makers [2][3][4]. This is why free-of-charge health programs, such as the TB control strategy, have been implemented. And yet, the costs of TB treatment borne by patients have largely been ignored [4].
Despite positive global progress, the international Stop TB Partnership targets of reducing TB prevalence and mortality will not be met in Africa [2]. In most Sub-Saharan African countries, including Burkina Faso, poverty and weak health systems remain a fertile breeding ground for tuberculosis and are likely to remain so in the coming years. In particular, poor case detection and treatment are jeopardizing the impact of National Tuberculosis Control Programmes (NTPs) and generating new challenges such as HIV/AIDS co-infection and the growth of multidrug-resistant tuberculosis. These factors complicate treatment and undermine the efficacy of the program and the achievement of its objectives.
While potential financial barriers have been stated as the rationale for implementing a free-of-charge strategy, the population still faces lingering and underestimated out-of-pocket expenses [5]. Therefore access to TB care is still challenging. The purpose of this study was to estimate direct costs (out-of-pocket expenditures) of TB care and control from the patient perspective and evaluate whether they are prohibitive or not. We aimed at describing direct costs in order to feed a discussion on the possible ways to mitigate financial obstacles and enhance performances of the TB care and control strategy.
Ethics statement
The study was carried out according to the international and national standards and was approved by the National Ethics Committee: ''Comité national d'éthique pour la recherche en santé (CNERS), Ministère de la Santé 03 BP 7009. Ouagadougou 03. Burkina Faso.'' Informed consent was systematically requested. All subjects participating in the study signed a voluntary consent form after being given all the information necessary and sufficient to make an informed decision regarding their participation in this study.
Study Setting
The present study was conducted in six rural health districts of central Burkina Faso (Bousse, Koupela, Ouargaye, Zabre, Ziniare, and Zorgho) covering a population of nearly 1,447,000 inhabitants (www.insd.bf). The national TB control strategy is based on DOTS, implemented nationwide through a network of public CDTs (Centres de Diagnostic et de Traitement) located at health district level. Identification of TB suspects was first performed by nurses during a consultation at the first-line health centres (FLHCs). Suspected tuberculosis patients were then referred from the first-line health centres to the CDTs, where the diagnosis was confirmed and the treatment prescribed. Diagnosis was based on a series of 3 sputum smear microscopies and required at least 4 contacts before initiation of the treatment. The DOTS strategy consists of a two-month intensive treatment (during which the drugs are delivered at the CDT on a daily basis) followed by a four-month continuation treatment (during which the patient comes weekly to the CDT to get the drugs and for clinical control). In January 2008, the national program changed its treatment regimen from eight months to six months, shortening the continuation treatment by two months. Direct supervision of daily drug administration by health workers is compulsory during intensive treatment. During continuation treatment, a stock of drugs (covering up to one month of treatment) is regularly given to each patient for home-based treatment. Drug collection visits during the continuation phase could take place either at the CDT or at the FLHC closest to home. Control examinations of sputum were undertaken during treatment follow-up, and a second-line re-treatment is started if the smears are positive. Under DOTS, TB diagnosis (based on sputum smear microscopy) and the drug regimen are provided free-of-charge.
Study design and participants
A cross-sectional study was performed systematically among smear-positive TB patients enrolled in the NTP between June 2007 and June 2008. Data collection was nested within the European funded FORESA project which supported operational research on tuberculosis and health systems related issues in West Africa between 2006 and 2009 [6], [7].
Questionnaires were available in both French and the local language and were pre-tested. After informed consent, in-depth interviews lasting on average 3 hours were held among 242 sputum smear-positive patients being treated or having completed their treatment in the last 6 months. The trained interviewers worked in pairs to guarantee quality of the data. They were supervised by a field coordinator under close guidance of the research team. By considering the whole TB care pathway, we covered the period from onset of symptoms to completion of the treatment. Data were collected and referred to the following stages: pre-diagnosis (from onset of symptoms to first visit to health facility), diagnosis (from first visit to health facility to diagnosis confirmation), treatment initiation (from diagnosis to beginning of treatment) and, finally intensive and continuation treatment (from start to end of treatment).
The cost items included in the present study related to seeking diagnosis and treatment and covered medical and non-medical expenses (i.e. examinations and laboratory tests, consultation fees, drugs, hospital care, transportation costs to reach health providers, services provided by traditional healers, and food supplements). All kinds of costs (including payments in kind) were systematically explored through the successive stages of the patient care pathway. Out-of-pocket expenses (direct costs) were initially expressed in local currency (FCFA) and then converted into US dollars (US$) using OANDA Rates™: 655.957 CFA BCEAO francs (XOF) = 1 euro (EUR) = 1.459 US$ (mean price over the study period).
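A minimal sketch (with hypothetical column names) of how per-stage out-of-pocket costs recorded in FCFA could be converted to US$ with the study's exchange rate and summarised per patient; it is an illustration of the conversion described above, not the authors' analysis code.

import pandas as pd

XOF_PER_USD = 655.957 / 1.459   # 655.957 XOF = 1 EUR, 1 EUR = 1.459 US$ (study-period average)

stages = ["pre_diagnosis", "diagnosis", "treatment_initiation",
          "intensive_treatment", "continuation_treatment"]   # hypothetical column names

def summarise_costs(df):
    """df: one row per patient, one FCFA cost column per pathway stage."""
    usd = df[stages] / XOF_PER_USD
    usd["total"] = usd.sum(axis=1)
    # median and interquartile range of per-stage and total direct costs in US$
    return usd.describe(percentiles=[0.25, 0.5, 0.75]).loc[["25%", "50%", "75%"]]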
Data processing
Data capture was performed using EPI-2000 (Centers for Disease Control and Prevention, Atlanta, GA, USA). Data management and statistical analysis were performed with the IC/STATA 10 for Windows statistical package. Summary statistics (mean and standard deviation or median and interquartile range for continuous data, and frequency distributions for categorical variables) were used to describe the study sample and the cost-burden factors.
Patients' characteristics
Verification of the quality and consistency of data resulted in the selection of 229 patients (i.e. 95% of the 242 interviewed). The average age of the study population was 41.5 years. It included 153 males (66.8%) and 76 females (33.2%). The demographic and clinical characteristics of the patients are summarized in Table 1 while the socioeconomic factors are presented in Table 2.
Most of the patients (86%) benefitted from the support of close relatives (transportation and/or labour substitution). A majority of patients (58.5%) lived less than 5 kilometres from a FLHC and over a third (36.4%) lived less than 5 kilometres from a CDT.
Patient care pathway analysis
By breaking down direct costs of the successive stages of the TB patient pathway, Figure 1 highlights the gap between the national TB control strategy and the effective patient care pathway. Direct costs were assessed within this framework: most patients accumulated medical and non-medical out-of-pocket expenses at every single stage of their care pathway. We therefore presented the magnitude of direct costs for them. Moreover, it showed three kinds of delays which were respectively patient delay in the prediagnosis period, provider delay related to diagnosis, and treatment delay related to treatment initiation stage.
Pre-diagnosis (patient delay). The median direct cost specific to the pre-diagnosis stage was US$8.5 (8.9-41.2) per patient (n = 157). Among the 229 patients, patient delay ranged from less than a week to more than 3 months, with a majority (57.2%) within 1 month. Before entering the healthcare system, the median number of health encounters in the broadest sense (including the non-conventional sector) was 3 (mean = 4.3). In total, 71.7% of patients (n = 157) were already facing direct costs associated with TB (i.e. self-medication, transportation, traditional healers' services). Half of the latter spent at least 23.7% (12.9-41.4) of their total direct costs before the first visit to a FLHC.
Diagnosis (provider delay). The median direct cost related to diagnosis stage was US$26.7 (13.3-56.9) per patient. For most of the patients (83.8%), provider delay lasted less than 2 weeks. Compared to the 4 visits required by the national strategy, 16.6% of patients consulted public providers even more. But most striking is that 95% of patients (n = 208) faced out-of-pocket expenditures during the diagnosis process (i.e. fees, travel costs paid to reach various providers, sputum smears microscopy and chest radiology).
Among them, the median share of overall direct costs incurred for this stage reached 35.0% (20.4-50.0).
Treatment initiation (treatment delay). For treatment initiation, the median direct cost was US$13.3 (3.8-49.3) per patient (n = 174). A majority (52.0%) of the 229 patients faced up to a 3-week delay before starting their treatment; contrary to the national strategy, they were not put on treatment on the same day as diagnosis. For these patients, the number of additional trips to seek care ranged from 1 to 11. As a consequence, there were extensions and complications of the treatment leading to additional direct costs (mainly travel costs). Between diagnosis confirmation and the start of treatment, 79.5% of patients (n = 174) faced some expenses (i.e. travel costs, traditional healers), and half of them spent 33.3% (3.8%-49.3%) of their total direct costs.
Intensive treatment. During intensive treatment, the median direct cost was US$9.1 (4.4-23.9) per patient. Most patients (98.7%) followed the intensive treatment without interruption, and few (1.3%) discontinued it for financial or family reasons. Even during intensive treatment, when the regimen is delivered free-of-charge, 67.9% of patients (n = 152) faced some medical and/or non-medical costs. Among them, 98.0% (149/152) had to support non-medical costs such as food and transportation, and 28.3% (43/152) had to finance medical costs such as hospitalization, user fees, chest X-rays, extra sputum tests (i.e. BAAR sputum test controls), or complementary medical examinations. The median share of overall direct costs incurred during intensive treatment was 10.4% (4.4-27.3).
Continuation treatment. The median direct cost during continuation treatment was US$11.1 (4.4-20.0) per patient. Most patients (99.1%) followed the continuation treatment without interruption. As previously seen for intensive treatment, a majority of patients (50.2%, n = 110) had to face spending during this stage. Among them, all patients (110/110) faced non-medical expenditures such as food and transportation expenses, and 24.5% (27/110) faced medical expenditures such as drugs or small medical equipment. The median share of overall direct costs incurred during continuation treatment was 9.5% (3.2%-20.9%).
Can poor patients really afford free TB care and control?
Of the 229 patients, almost half (45.9%) had individual incomes under the national poverty line (i.e. US$15.3 per adult per month) [8]. This proportion rose to 86.5% (n = 198) when the minimum salary was used as the threshold (i.e. US$68.3 per adult per month). A male sex bias was found for median individual income (p-value = .003). However, there was no sex bias for median direct costs (p-value = .48), nor for the median cost-burden associated with TB (i.e. percentage of household income) (p-value = .80).
For half of the patients, overall direct costs represented more than 2.8 months of their own household income (1.1-5.2). By extrapolation, based on a linear estimation of household income, this median represented 23% of annual household income. This might be worse for those whose incomes were highly volatile, i.e. 29.2% of households (n = 65).
Discussion
Our findings show that the direct cost-burden of TB borne at household level in rural Burkina Faso is substantial and prohibitive, suggesting that the poorest are unlikely to be able to finance TB care without diverting basic resources. Overall out-of-pocket expenses associated with TB remain considerable despite the exemption policy [9]. Indeed, exemptions for TB fees often only apply to the first three sputum tests and visits, the drug regimen, and clinical control. Other hidden costs nevertheless arise along the TB care pathway, such as additional medical examinations, laboratory tests and X-rays, or travel costs, and these are not covered. Moreover, linked to the poor financial resources of health services, overcharging practices may have occurred for services that were supposed to be free-of-charge [10][11][12], which can partly explain these extra costs. Consequently, most patients, who already live in extreme precariousness, are dangerously at risk of catastrophic TB expenditures, usually leading to deeper impoverishment of the patient's household [13], [14]. Whereas TB may occur for a limited period in a lifetime, even if it is chronic, the socio-economic consequences of its case management tend to be permanent and to entrench poverty [14]. Our finding that the poor spend almost one fifth of their total direct expenses before diagnosis is in line with the assumption of Kemp et al. that the relative economic burden is higher among the poor than among the richer [15]. Although often neglected, it appeared important to draw attention to care-seeking behaviours during the pre-diagnosis period [16] and the diagnosis period [9], especially as a great part of expenses arose because of delayed referral for TB diagnosis. The largest share of overall direct costs occurred before the start of treatment (i.e. up to 35% for diagnosis alone). Supporting this observation, evidence from Nigeria reported much higher direct costs before and during diagnosis (i.e. US$29 and US$31, respectively, versus US$17 for the post-diagnosis period) [9]. Likewise, another study conducted in Burkina Faso attributed low case-detection to the loss of cases at each of the stages leading to diagnosis [6]. Moreover, a former study from Sierra Leone pointed out that many patients sought intermittent help from a wide range of formal and informal care practitioners [12]; indeed, seeking care from practitioners outside TB control led to health system delays [12], [17]. This may explain erratic care pathways, poor use of health services, and the challenge of case-finding. It is therefore crucial to instigate new strategies for early and rapid detection involving a patient-centred approach [18], incorporating better communication with the informal health system [12], and looking for a better organization and social relationship of care [19].
Although the levels of direct costs during intensive or continuation treatment are not the largest, they remain non-zero even though the TB program provides the DOTS regimen free of charge. Expenses for traditional healers or private providers were comparatively modest, whereas expenses related to travel and food supplements tended to be worrisome during treatment. Inappropriate care pathways, including extra laboratory tests and X-rays, consultation fees, drugs, and hospitalisations, contribute to the risk of catastrophic TB expenditures and constitute effective financial barriers for the poor. Among our rural population, the financial pressure is catastrophically high and, for instance, far above the 10% threshold at which a household is considered at risk of catastrophic health expenditure [10], [20], [21].
While Africa is the only region not on track to reach the Millennium Development Goals for TB (Stop TB Partnership, e-alert 17/07/2012), our study has real potential to feed proposals to the Stop TB Partnership initiative and to inform the next phase of the TB care and control program. To enhance the success of TB strategies, countries must move towards affordable access to prevention, diagnostic, and treatment services, and must focus their efforts on alleviating poverty and promoting social protection interventions [22], [23]. There is still a crucial need to better understand patient barriers to care in developing countries [4], [22], [23]. Very few studies have evaluated the costs of TB care among patients in Africa [4], and the economic impact of TB at the household level remains an understudied area in West African countries. Furthermore, there is a lack of field data focusing on vulnerable populations such as patients living in rural areas [4]. Our study helps to fill this triple gap using a comprehensive approach and highlights key areas generating financial barriers.
Study limitations
The study has a number of limitations. First, convenience sampling was used, focusing on a rural area that does not include any of the main cities of Burkina Faso; the sample is thus not fully representative of the entire country. However, this focus on a poor, rural area does not appear to be a limitation, as this population is particularly vulnerable to TB and directly affected by potential financial barriers. Second, bias may have been caused by the parallel development of diseases other than TB that affect the treatment. In the case of HIV/TB co-infection, we compared overall direct costs among TB patients living with HIV+ and HIV- status, whose medians were US$93.8 and US$89.3, respectively; statistical tests did not show a significant difference. Third, despite the close support provided by the FORESA field officers, the careful preparation of the questionnaire (incl. cross-checking questions), and the sound training of the interviewers, some sensitive data (such as the full collection of expenses in kind, the actual payments to traditional healers, and possible bribes requested by health workers) may have been underestimated. However, the field knowledge of the FORESA project was an asset for conducting such a complex survey. Finally, relevant issues such as indirect or intangible costs were not reported in this paper. For instance, the extent of indirect costs (whose median reached 45 workdays lost by both the patient and his guardian, corresponding to a loss of US$22.6 if we refer to the national poverty line [10]) confirms the need to consider them. To provide a comprehensive view of the economic burden of TB, these issues need to be further investigated.
Conclusions
This study reports on the substantial and prohibitive direct cost-burden associated with TB in rural Burkina Faso. It shows that although TB diagnosis and treatment services are free, patients face erratic care pathways (i.e. delays, non-relevant care-seeking behaviours) that needlessly worsen the direct economic burden of TB. This implies that policy-makers should find new ways to improve the operationalization of international health policy. As obstacles to TB diagnosis and treatment may include economic, socio-cultural, and organisational care-supply dimensions, comprehensive interventions must be implemented involving the various care providers (incl. traditional healers), caregivers, and guardians around the patients. Tackling erratic patient pathways through improved patient-centred management should offer a promising area of progress. Whether in terms of redefining the free TB package or rationalizing the care pathway, serious efforts must be undertaken to make "free" health care more affordable for the patients. Locally relevant for TB, this case study in Burkina Faso has real potential to document how program weaknesses can be solved. All TB programs, and more, since other international health policies are concerned, should adopt this type of approach to move towards universal health coverage. | 4,709.6 | 2013-02-25T00:00:00.000 | [
"Economics",
"Medicine"
] |
Evaluation of Grand Ethiopian Renaissance Dam Lake Using Remote Sensing Data and GIS
Ethiopia began constructing the Grand Ethiopian Renaissance Dam (GERD) in 2011 on the Blue Nile near the border with Sudan for electricity production. The dam was constructed as a roller-compacted concrete (RCC) gravity-type dam, comprising two power stations, three spillways, and the Saddle Dam. The main dam is expected to be 145 m high and 1780 m long. After filling of the dam, the estimated volume of Nile water to be impounded is about 74 billion m³. The first filling of the dam reservoir started in July 2020. It is crucial to monitor the newly impounded lake and its size for the water security balance of the Nile countries. We used remote sensing techniques and a geographic information system to analyze different satellite images, including multi-looking Sentinel-2, Landsat-9, and Sentinel-1 (SAR), to monitor the changes in the volume of water from 21 July 2020 to 28 August 2022. The volume of Nile water during and after the first, second, and third filling was estimated for the Grand Ethiopian Renaissance Dam (GERD) Reservoir Lake and compared for future hazards and environmental impacts. The proposed monitoring and early warning system of the Nile Basin lakes is essential to act as a confidence-building measure and provide an opportunity for cooperation between the Nile Basin countries.
Introduction
The construction of massive hydraulic infrastructures, such as big dams, expanded to an unprecedented level around the world in the 20th century. With their influence on social and political relations, they are also shaped by political, social, and cultural conditions [1,2]. The downstream countries in the world's main river systems are generally opposed to upstream dam projects [3,4]. These dam projects cause many concerns in the downstream countries because of their possible social and environmental impacts, including droughts, water salinity, and effects on water flow. In the Euphrates Basin, the downstream countries of Iraq and Syria were affected by four droughts in 2000, 2006, 2008, and 2009, a cascading effect of climate change and of the large number of dams constructed along the Euphrates River under the Southeastern Anatolia (GAP) Project [5,6]. The GAP project includes the construction of 22 dams and 19 hydraulic power plants for irrigation and the generation of electricity on the Euphrates and Tigris rivers and their tributaries [2,5]. The Three Gorges Dam (TGD) was constructed in China on the Yangtze River, affecting the sediment discharge and the regulation of flow in the downstream provinces, which resulted in severe scouring and changes in the hydrogeological regime [7]. Dam projects were established along the Mekong River from 1965 to 2019 in northeastern Thailand, China, Vietnam, Laos, and Cambodia for electricity generation [8]. These dam projects have environmental, economic, river-hydrogeological, biological, and sediment-transfer effects in Myanmar, Laos, Thailand, China, Cambodia, and Vietnam [9].
In April 2011, Ethiopia started the construction of the Grand Ethiopian Renaissance Dam (GERD). Understanding the context of the dam and its position relative to other dams on the Blue Nile is essential. The newly built dam is located downstream of the Tana Lake, a highland lake at an average altitude of 1800 m a.s.l., with a surface area of 3060 km² at an average lake level of 1786 m a.s.l. and a maximum depth of 15 m [10]. Four major tributaries feed the Tana Lake sub-basin: the Gelgel Abay in the south, the Rib and Gomera in the east, and the Megech in the north (Figure 1). The GERD is a gravity roller-compacted concrete dam with a target height of 145 m and a length of 1780 m. The dam's crest is planned at 655 m above sea level, with the potential to impound a lake with a capacity of 74 billion m³ [11]. About 116 km upstream of the GERD, the Rosaries Dam is located in Sudan; it was constructed in 1961 and heightened in 2013, with a current storage capacity of 7.4 billion m³ (Nile Basin Atlas Program) [12]. About 100 km downstream of Rosaries, the Sennar Dam was constructed in 1926 with a capacity of about 390 million m³ (Nile Basin Atlas Program) [12]. Further north in Sudan is the Meroe Dam, with an impoundment capacity of about 12.3 billion m³. Further to the north, in the very south of Egypt, the Aswan High Dam was constructed in 1970 and is considered to be the last dam near the mouth of the Blue Nile. The total capacity of the Aswan High Dam is 164 billion m³, consisting of dead storage of 31.6 billion m³, active storage of 90 billion m³ (BCM), and emergency storage for flood protection of 41 billion m³ (Nile Basin Atlas Program) [12]. The Eastern Nile Basin is affected by a complex history of hydropolitics over the use of the Nile water [14,15]. In the summer of 2020, the first phase of construction of the GERD was finished, and shortly after, the first filling of the GERD Lake started. During this season, the Sudanese dams, especially Rosaries and Sennar, were operated in confusion due to a lack of prior information about the size and timing of the filling (reported by the Sudanese Minister of Irrigation Yasser Abbas on 26 August 2021, Daily News [16]). This may be due to the frozen agreement of the Eastern Nile Basin Initiative (NBI) activities [14,15]. The filling resulted in a shortage of freshwater during June and July, the filling months, in the capital Khartoum and many other cities, after Sudanese water treatment stations went out of service due to low river levels. Later in the same season, after the end of filling, Sudan faced a vast flood as the level of the Nile reached 17.48 m on 27 August 2020 at Khartoum, considered the second-highest level after the 1912 flood according to Prime Minister Abdalla Hamdouk [17] (Guardian Journal, 5 September 2020). Ninety-nine people were killed in this flood, as mentioned in the state of emergency declared in Sudan. Ramadan et al.
[18] referred to the negative impacts on Egypt, including environmental, economic, and social problems, by applying different scenarios of 2, 3, and 6 years for the filling of the GERD under different flow conditions. Omran and Negm [19] considered the different filling scenarios and indicated that Egypt and Sudan would experience severe impacts during the filling phase of the GERD in some scenarios.
Remote sensing has been used to estimate and monitor the volume of lakes worldwide in various case studies. The key parameters controlling the water quantity of small or large lakes are the area and the top water level [20][21][22]. The spatial and temporal changes in the volume of water bodies can be calculated by several methods, depending on the availability of morphometric and areal data. Amitrano et al. [21] used a DEM (9 to 15 m resolution obtained from SAR data) to estimate the depth. They analyzed both Sentinel-1 and COSMO-SkyMed imagery to obtain more accurate results, extracting the basin boundary as the water level increased, reflected by increases in the contour, to estimate the reservoir surface and retained water volume of a reservoir in the Labaa Basin in Ghana. Xiaoqi et al. [23] used the SRTM DEM above the lake level to construct the relationship between elevation and area and estimate the volume of Namsto Lake in China. Pipitone et al. [22] used both optical (Landsat 5 TM, Landsat 8 OLI-TIRS, and ASTER images) and synthetic aperture radar (SAR) images to monitor the water surface and level of the Castello Dam Reservoir. They measured displacement using the global navigation satellite system (GNSS) to detect the relationship between the water level and dam deformation at the Castello Dam on the Magazzolo Reservoir in southern Italy. Ahmed et al. [24] used a time series of Landsat images from 2001, 2011, and 2019 to extract the modified normalized difference water index and combined it with field-observed water level data to calculate the lake volume from 2001 to 2019 in Deeper Beel, situated in the southwestern part of Guwahati, Assam, India. Jiang et al. [25] used the average annual coefficients of the VH backscatter of Sentinel-1A and the normalized difference water index (NDWI) of Sentinel-2 to map small water bodies in a mountain region in China for water-related environmental monitoring and resource management. In the Nile Basin, Hossen et al. [26] built bathymetric and water capacity relationships based on Sentinel-3 optical and radar data for Aswan High Dam Lake, Egypt. Kansara et al. [27] analyzed multi-source satellite imagery and Sentinel-1 SAR imagery to display the number of classified water pixels in the GERD from early June 2017 to September 2020, indicating a contrasting trend in August and September 2020 for all upstream/downstream water bodies using Google Earth Engine (GEE). Their results show that the water surface upstream of the dam rises steeply while it decreases downstream.
In the last 20 years, multispectral remote sensing and Sentinel-1 (SAR) data have been widely used for surface water monitoring to overcome the limitations and scarcity of field observations for monitoring the storage volume of water reservoirs [19,21,23,28]. The dynamic volume change of GERD Lake is essential information for all Blue Nile countries, including Ethiopia, Sudan, and Egypt, for understanding the water security balance in the basin.
Depth Estimations
The depth of the GERD Lake was estimated using Shuttle Radar Topography Mission (SRTM) data, which map the topography of the Earth's surface using radar interferometry. The SRTM is an international project spearheaded by the National Geospatial-Intelligence Agency and NASA, whose objective is to obtain the most complete high-resolution digital topographic database of the Earth. We downloaded the SRTM 1 arc-second data, courtesy of the U.S. Geological Survey, from https://earthexplorer.usgs.gov/ (accessed on 21 June 2020); the data were acquired on 11 March 2000. The elevation differences obtained through interferometry were transformed into a 3D digital elevation model (DEM), which was used as the depth of the GERD Reservoir basin before the filling process.
Satellite Data Processing and Water Level Estimation
In this study, we tracked changes in the water level boundary of the GERD Lake using multi-optical satellite data and Sentinel-1A (SAR). We acquired multi-optical Sentinel-2A and Landsat-8 time series from 21 July 2020 to 3 July 2021, courtesy of the U.S. Geological Survey (https://ers.cr.usgs.gov/, accessed on 21 June 2020), while the Landsat-9 and Sentinel-1 (SAR) time series covered 16 July 2021 to 28 August 2022. The Sentinel-1 (SAR) data were obtained from the Copernicus Open Access Hub (https://scihub.copernicus.eu/, accessed on 29 August 2022). The Sentinel-2 data are characterized by higher spatial and spectral resolution in the near-infrared region. The Sentinel-2 sensor, the EO satellite of the Copernicus program, has 12 bands with spatial resolutions of 10 m (four visible and near-infrared bands), 20 m (six red-edge and shortwave infrared bands), and 60 m (three atmospheric correction bands) [29]. The Landsat-9 satellite was launched recently, on 27 September 2021. It is similar to Landsat-8 and is characterized by four visible spectral bands, one near-infrared band, and three shortwave-infrared bands at 30 m spatial resolution, plus one panchromatic band at 15 m spatial resolution and two thermal bands at 100 m spatial resolution. Dense cloud cover is encountered in some optical satellite imagery, masking the lake in the rainy seasons, especially in June and July of each year. We used a filter to remove cloud pixels, applying a threshold to identify the pixel range corresponding to cloud in ArcGIS 10.8 software [30]. Because the cloud was incompletely filtered out in some multi-optical satellite images, we instead used Sentinel-1 SAR to obtain the water level boundary, especially during the cloudy periods that mask the GERD Lake boundaries. Synthetic aperture radar (Sentinel-1 SAR) data are insensitive to cloud. However, Sentinel-1 SAR data are affected by speckle noise and have some difficulty detecting the surface of water bodies; this can be addressed by applying techniques such as aggregation of the brightness pixels, as proposed by Pipitone et al. [22].
The analysis scheme used to estimate the water volume in the GERD Lake is summarized in Figure 2 for the optical multispectral and SAR data analysis applied in this study. We used ArcMap 10.8 [30] for the multi-optical satellites (i.e., Landsat-8 and -9 and Sentinel-2) to delineate the shape of the GERD Lake using the normalized difference water index (NDWI), which enhances the presence of water bodies, a method introduced by McFeeters [31]. NDWI uses reflected near-infrared radiation and visible green light to enhance the presence of water bodies such as lakes and rivers, and is able to suppress soil and terrestrial vegetation features. The index combines a band with relatively high water reflectance, the green band (band 3), and a band with low or no water reflectance, the near-infrared (NIR) band (band 8 for multispectral Sentinel-2 and band 5 for Landsat-8 and -9), as follows:

NDWI = (Green - NIR) / (Green + NIR). (1)

The preprocessing workflow for Sentinel-1 SAR was carried out in the Sentinel Application Platform (SNAP) [32], an open-source software, version 8.0.9 (http://step.esa.int/main/toolboxes/snap/ accessed on 1 October 2020), as follows: (a) a subset tool was used to delineate the study area. (b) The orbit file was applied, which updates the orbit state vectors for each SAR scene, providing accurate satellite position and velocity information. (c) The thermal noise removal algorithm was used to remove and reduce noise effects in the inter-sub-swath texture and to normalize the backscatter for scenes in multi-swath acquisition modes. (d) A calibration equation was used to convert the image intensity values to sigma nought values, in which the digital pixel values were converted to radiometrically calibrated SAR backscatter with respect to the nominally horizontal plane of Sentinel-1 GRD. (e) Terrain corrections were applied to compensate for distortions related to the side-looking geometry so that the image is closer to the real world. (f) We used coregistration with an average stack of two time-series images per month to obtain a single image; we applied coregistration instead of a speckle filter to remove noise, which may result from temporal decorrelation effects, without degrading the image resolution. The final step was to convert the data to linear units and apply a band-math thresholding equation based on the image histogram, in which values towards -1 refer to land and values towards +1 refer to water bodies, delineating the lake.
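A minimal sketch (assuming rasterio and co-registered single-band GeoTIFFs) of the NDWI water-mask step in Equation (1); the zero threshold is a common default and is an assumption, not a value stated by the authors.

import numpy as np
import rasterio

def ndwi_water_mask(green_path, nir_path, threshold=0.0):
    """Return a boolean lake mask from a green band and a NIR band (e.g. Sentinel-2 bands 3 and 8)."""
    with rasterio.open(green_path) as g, rasterio.open(nir_path) as n:
        green = g.read(1).astype("float32")
        nir = n.read(1).astype("float32")
    ndwi = (green - nir) / (green + nir + 1e-6)   # small constant avoids division by zero
    return ndwi > threshold                        # True for water pixels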
Water Level Validations
The water levels were collected for Nasser, Tana, and GERD lakes in Egypt and Ethiopia, respectively. The in situ water level data was recorded by the gauging station at Nasser Lake and obtained from the Nile Research Institute (NRI) database. The other water level data was collected at virtual stations from a satellite altimetry set obtained from the "Global Reservoirs and Lakes Monitor (G-REALM) project" of the U.S. Foreign Agricultural Service [33] and the level contour was extracted by optical multispectral satellite data in this study. Then, we calculated the average water level uncertainty as shown in Table 1.
Results and Volume Calculation
The parameters required to compute the volume of the GERD Lake were as follows: (a) the input surface (i.e., the 3D depth surface of the lake), which was established from the digital elevation model; and (b) the "Z" value, defined as the plane-surface height of the water-level top boundary, at which the lake polygons were extracted from an optical satellite image or Sentinel-1 SAR. The volume was calculated using the ArcGIS 10.8 surface volume tool, which relies on the empirical formula: volume of the water body = average depth of the lake (d) × area of the lake (A). (2) The computation over the DEM raster surface was evaluated using the extent of the center point of each cell, as opposed to the extent of the entire cell area; according to the ArcGIS 10.8 manual, the resulting analysis therefore decreases the data area of the raster by half a cell relative to the data area displayed for the raster.
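A minimal sketch of this volume calculation, assuming a DEM of the reservoir basin and a water-plane height Z (the file name, cell size, and example level are placeholders); the cell-by-cell sum below is equivalent to average depth × area, and the reported ±1.45 m level uncertainty is propagated by recomputing at Z ± 1.45 m.

```python
import numpy as np
import rasterio

def lake_area_and_volume(dem, cell_area, z):
    """Area (m^2) and stored volume (m^3) below the water plane at height z over a DEM."""
    depth = z - dem
    wet = depth > 0                          # cells below the water plane
    area = wet.sum() * cell_area
    volume = depth[wet].sum() * cell_area    # equivalent to mean depth x area
    return area, volume

# Hypothetical inputs: basin DEM in metres a.s.l. with 30 m x 30 m cells.
with rasterio.open("gerd_dem.tif") as src:
    dem = src.read(1).astype("float32")
cell_area = 30.0 * 30.0

z, dz = 600.0, 1.45                          # water level and its uncertainty (m)
area, vol = lake_area_and_volume(dem, cell_area, z)
_, vol_hi = lake_area_and_volume(dem, cell_area, z + dz)
_, vol_lo = lake_area_and_volume(dem, cell_area, z - dz)
print(f"area = {area / 1e6:.1f} km^2, "
      f"volume = {vol / 1e9:.2f} +/- {(vol_hi - vol_lo) / 2e9:.2f} billion m^3")
```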
The average volume uncertainty (Figures 3 and 4) was propagated from the ±1.45 m water-level uncertainty calculated in the previous section. The lake polygon boundaries were extracted from multispectral optical satellite images and Sentinel-1 SAR to reflect the morphology of the stored water area in the GERD Lake (Figure 3). A chart of the average volume of the GERD Lake, as a monthly time series from 21 July 2020 to 28 August 2022, is shown in Figure 4. The Ethiopian government carried out the first storage in July 2020, while July 2021 and July 2022 represent the second and third storage stages, respectively. During the storage stages and the closing of the GERD Dam gates, the GERD Lake was recharged by rainfall and by Lake Tana, which is considered the major source of the Blue Nile [10]. The volume of the GERD Lake in the first storage reached its maximum, observed in the satellite images taken on 21 July 2020, with an area of 250.16 km² and a volume of 5.75 ± 0.25 billion m³ (Figure 4). The water in the GERD Lake then receded over the following three months (August, September, and October 2020), with an average water volume of 4.2 ± 0.3 billion m³ (Figure 4). From November 2020 to 30 March 2021, a second recession of the water storage in the GERD Lake reached an average volume of 3.75 ± 0.3 billion m³, calculated from the satellite images. On 28 July 2021, the GERD Lake showed an increase in the polygon area extracted from the Sentinel-1 SAR image to 316.54 km², with an average volume of 8.45 ± 0.45 billion m³ (Figures 3 and 4). The average storage volume of the GERD Lake increased in August and September 2021, reaching a maximum of 9.4 ± 0.5 billion m³ during August 2021, after the second storage was carried out. A recession of the lake water was observed during October and November 2021 to an average volume of 7.3 ± 0.45 billion m³ (Figure 4), with a slight increase in December 2021 to 8.0 ± 0.45 billion m³. From January to 29 May 2022, the capacity of the reservoir lake decreased to 5.56 ± 0.45 billion m³. The third filling was then reached by 23 July 2022, with an increase in total capacity to 9.25 ± 0.25 billion m³, and a significantly larger capacity of 17.4 ± 0.45 billion m³ was reached on 28 August 2022 (Figures 3 and 4).
The water level observed from the satellite images during the third filling, on 28 August 2022, was at the lower limit of the saddle dam. This saddle dam, a 5 km long and 50 m high concrete-faced rockfill structure, was built to maintain the required water surface elevation and depth at a relatively flat dam site. It raises the natural saddle elevation from 600 to 646 m a.s.l., increasing the reservoir water level to the design level [34]. An emergency gated spillway, 300 m wide, is located between the main dam and the saddle dam. The spillway, with a crest elevation of 624.9 m, is to be used under extreme flood conditions, releasing water through a gully into the river downstream of the dam.
Discussion
The application of remote sensing and GIS to monitor changes in the volume of the GERD Lake provides critical information about the water level and storage capacity of the GERD Reservoir Lake. This is especially important for the downstream countries in the case of limited or missing information resulting from a stalemate in the negotiations between Ethiopia and the downstream countries Egypt and Sudan. Water security is essential for both upstream and downstream countries. One of the most controversial debates in the GERD negotiations is the number of years allotted to the initial reservoir filling: a shorter filling time requires a greater flow reduction and yields a higher investment return from the dam, whereas a longer filling time requires a smaller flow reduction and yields a lower investment return [34]. The water level derived from satellite data in this study was 600 ± 1.45 m a.s.l. on 28 August 2022, at the lower level of the saddle dam. This level corresponds to 24.3% of the full storage capacity of 74 billion cubic meters. This exceeds the minimum reservoir fill rate, which is beneficial for hydroelectric generation without affecting streamflow into Egypt and Sudan, as stated by Keith et al. [35]. King and Block [36] refer to the 25% filling policy, which can reduce the average downstream flow by more than 10 BCM per year. Hegay et al. [37] proposed numerous actions and mitigation strategies that could secure Egypt's water demands by minimizing the effects of the GERD project. These strategies include the present-day operation of the AHD hydropower plant to mitigate imminent water shortages, in combination with increased groundwater withdrawal as a backstop to sustain the water demand quickly. Water conservation strategies should additionally be integrated, mainly in the agricultural sector, by switching national production to crops that require less water.
Previous studies have investigated the possible future environmental and hazard impacts on the downstream countries. Wheeler et al. [38] described a post-filling period that may include severe multi-year droughts after the filling of the dam, with uncertainty about their exact start and end times, which will require careful coordination to minimize possible harmful impacts on downstream countries. Donia and Negm [39] modeled three scenarios of the storage capacity of the GERD Lake, assuming 18 billion m³ for the initial design storage capacity and 35 and 74 billion m³ for the middle and final storage stages. Their results for scenario 3, the full filling of the GERD Lake in 5 years, show a negative impact on agriculture due to the loss of silt, which results from restricting the water flowing to the Aswan High Dam in Egypt. Abulnaga's study [40] refers to scooping out the mud and silt accumulated as a consequence of the dam construction in Ethiopia through dredging and the construction of onshore sediment ponds, whose deposits can be used for agricultural purposes. From an engineering point of view, El Askary et al. [41] showed a deformation pattern associated with different sections of the GERD Lake and saddle dam (main dam and embankment dam) using 109 descending-mode scenes of Sentinel-1 SAR imagery acquired from December 2016 to July 2021; such deformation could result in a dam-failure flood, which would have harmful impacts in Sudan and Egypt.
In summary, the environmental impacts and other socio-political considerations of the GERD extend across a diverse spectrum of issues, from population growth, economic development, and water rights to sedimentation, changing flood regimes, and the shock of climate change. It is necessary to examine the complex social and environmental values of water resources and the policies governing their use. A water cooperation policy is the best option for the cooperative Nile Basin Initiative to overcome any dispute over the remaining years of filling [42]. Informal diplomacy has been used successfully to manage transboundary waters in a similar case, the Mekong River dams [43]. The waterscape of the Mekong dam issues has extended to security actors within domestic politics who are not water experts. The analysis could therefore be extended to examine in more detail the knowledge channels within multiple tracks of diplomacy and how harms and inequalities are understood, beyond mere metrics of economic impacts and water quantities. This form of informal diplomacy can help unfreeze the stalled negotiations between Ethiopia and Egypt. Understanding water diplomacy thus requires scrutiny of how power, knowledge, and the political economy of river basin development intersect.
Conclusions
The combination of open-source optical and radar satellite images with a DEM provided a robust tool to estimate the water volume in the artificial GERD Lake during the initial phases of filling. The water level measured from satellite data reflects the consequent increase in the stored water volume of the GERD Reservoir Lake. Three stored-water stages of the initial filling were considered for the lake, corresponding to volumes of 5.75 ± 0.25, 9.4 ± 0.5, and 17.4 ± 0.45 billion m³ on 21 July 2020, 28 July 2021, and 28 August 2022, respectively.
Data collected from open sources, combined with technical knowledge, can provide very useful information for monitoring the filling process and can support informal diplomacy with transparent, trustworthy, and independent information that could lead to a future agreement between all Nile basin countries. The authors believe that this work is a milestone in building a scientific initiative to utilize open-source data for the benefit of the community and to build a common agreement on the importance of investing in knowledge for sustaining water resources and their management. Further work is needed to better understand the impact of the current filling process on the ecosystem and to boost the knowledge and data exchange between riparian countries for integrated management plans for the Nile. An integrated database combining ground- and satellite-based observations could apply modern scientific techniques to support the dam's operation and to mitigate the impacts of natural disasters and climate change on sustainable development in the Nile Basin countries. Such an initiative could act as a confidence-building measure between Nile Basin countries and provide leverage for science diplomacy to bridge cooperation and integration in an era of divergence and competition. | 7,462 | 2022-09-27T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Geography"
] |
Perversity equals weight for Painlev\'e spaces
We provide further evidence to the $P=W$ conjecture of de Cataldo, Hausel and Migliorini, by checking it in the Painlev\'e cases. Namely, we compare the perverse Leray filtration induced by the Hitchin map on the cohomology spaces of the Dolbeault moduli space and the weight filtration on the cohomology spaces of the irregular character variety corresponding to each of the Painlev\'e $I-VI$ systems. We find that up to a shift the two filtrations agree.
Introduction and statement of main result
Throughout the paper we let X denote one of the symbols I, II, III(D6), III(D7), III(D8), IV, V_deg, V, VI, so that the index P_X refers to Painlevé X. In [25], irregular Betti moduli spaces (also called wild character varieties following [6]) $M_B^{P_X}$ are defined and shown to be C-analytically isomorphic under the Riemann-Hilbert correspondence to irregular de Rham spaces $M_{dR}^{P_X}$. (At a higher level of generality, moduli spaces of untwisted irregular connections of arbitrary rank on a compact Riemann surface of arbitrary genus were constructed in [5] as algebraic symplectic manifolds, and the irregular Riemann-Hilbert correspondence for the moduli spaces was proven in [7], building on the categorical correspondence of Malgrange [23, Chapitre 4].) It follows from [4, Theorem 1] that for every X in the so-called untwisted cases II, III(D6), IV, V, VI, smooth complex analytic moduli spaces $M_{Dol}^{P_X}$ exist and are diffeomorphic under non-abelian Hodge theory to the corresponding $M_{dR}^{P_X}$. A combination of these results implies that in the untwisted cases $M_B^{P_X}$ and $M_{Dol}^{P_X}$ are diffeomorphic; such a diffeomorphism is expected to exist in the remaining (twisted) cases too. In [18], [19] we gave an explicit description of the spaces $M_{Dol}^{P_X}$ and of their (irregular) Hitchin map in terms of elliptic pencils. In [25], an explicit description of the spaces $M_B^{P_X}$ is provided as affine cubic surfaces. Deligne [14] constructs a weight filtration W on the complex cohomology spaces of an affine algebraic variety; in particular, the cohomology spaces of $M_B^{P_X}$ carry a mixed Hodge structure. On the other hand, the Hitchin map endows the complex cohomology spaces of $M_{Dol}^{P_X}$ with a perverse Leray filtration P [3]. Following [15, page 2], we define the perverse and weight polynomials $PH^{P_X}(q,t)$ and $WH^{P_X}(q,t)$ as the generating polynomials of the dimensions of the graded pieces of the corresponding filtrations on cohomology. Remarkably, in the rank 2 case without (regular or irregular) singularities, equality between these two polynomials for Dolbeault and Betti spaces corresponding to each other under non-abelian Hodge theory and the Riemann-Hilbert correspondence was proven in [12, Theorem 1.0.1], and conjectured to be the case in general (the P = W conjecture). The perverse filtration for some logarithmic Hitchin systems was studied by Z. Zhang [29], who showed multiplicativity of the filtration with respect to the wedge product on Hilbert schemes of smooth projective surfaces fibered over a curve, and thereby computed their perverse polynomials. More generally, W. Chuang, D. Diaconescu, R. Donagi and T. Pantev conjectured a formula for the perverse Hodge polynomial of moduli spaces of meromorphic Higgs bundles with one irregular singularity [11]. On the Betti side, T. Hausel, M. Mereb and M. Wong investigated the weight filtration on the cohomology of character varieties of punctured curves with one irregular singularity, and extended the P = W conjecture to this case [15, Problem 0.1.4]. The purpose of this paper is to give an affirmative answer to this conjecture in the Painlevé cases. Notice that not all the cases we study fall into the class studied in [15], because some of them admit two irregular singularities, some with twisted formal type.
Theorem 1. For every X, the perverse Leray and weight filtrations on the cohomology of the Dolbeault and Betti spaces mapped to each other by non-abelian Hodge theory and the Riemann-Hilbert correspondence agree up to a shift by 1: $PH^{P_X}(q,t) = q^{-1}\, WH^{P_X}(q,t)$. (1) Moreover, in the untwisted cases we will prove that the classes generating the exotic pieces of the P and W filtrations match up under non-abelian Hodge theory. Our proof goes through establishing a conjecture of C. Simpson [26, Conjecture 11.1] in these special cases; it may be considered as "a homotopy version in the highest graded part" of the P = W conjecture. Theorem 2. There exists a smooth compactification $\overline{M}_B^{P_X}$ of $M_B^{P_X}$ by a simple normal crossing divisor D such that the body $|N^{P_X}|$ of the nerve complex $N^{P_X}$ of D is homotopy equivalent to $S^1$. Moreover, for some sufficiently large compact set $K \subset M_B^{P_X}$, there exists a homotopy commutative square in which h denotes the Hitchin map, $D^\times \subset Y$ is a neighbourhood of $\infty$ in the Hitchin base, and the top row is the diffeomorphism coming from non-abelian Hodge theory.
For details, see Section 3. An analogous statement for two 2-dimensional and a 4-dimensional logarithmic Dolbeault moduli spaces has been proven by A. Komyo [22].
Acknowledgements: During the preparation of this document, the author benefited of discussions with P. Boalch, T. Hausel, M-H. Saito, C. Simpson and A. Stipsicz, and was supported by the Lendület Low Dimensional Topology grant of the Hungarian Academy of Sciences and by the grants K120697 and KKP126683 of NKFIH.
Perverse Leray filtration
We first deal with the left-hand side of (1). We study moduli spaces $M_{Dol}^{P_X}$ parameterizing parabolically (semi-)stable Higgs bundles of rank 2 over $\mathbb{CP}^1$ with irregular singularities at at most 4 points of total pole order equal to 4, and having specific local forms near these punctures. We assume that the parabolic weights are general, so that stability is equivalent to semi-stability. Moreover, we fix the degree of the underlying vector bundle to be odd. For the local forms of the Higgs field in the individual cases we refer to [18,19]. However, for the purposes of this paper, we need a certain completion of the Dolbeault moduli spaces studied for instance in [18,19,20]. Namely, in case the residue $\mathrm{Res}_p(\theta)$ of the Higgs field at some logarithmic point p is assumed to have two equal eigenvalues with non-trivial nilpotent part, then we consider the moduli space $M_{Dol}^{P_X}$ of corresponding Higgs bundles completed with all Higgs bundles having the same eigenvalues of the residue but with trivial nilpotent part, equipped with a quasi-parabolic structure of full flag type at these points. In order to simplify notation, we will continue to denote the completed moduli space by the symbol $M_{Dol}^{P_X}$. The reason we consider this completion is that the Hitchin fibers of the non-completed moduli spaces may be non-compact, as endomorphisms with non-trivial nilpotent part may converge to ones with trivial nilpotent part. Importantly for our purposes, we have: Lemma 1. The completed moduli space is a smooth complex manifold, and the irregular Hitchin map $h : M_{Dol}^{P_X} \to Y$ (2) is proper.
Proof. The proof of the first statement follows from [4,Theorem 5.4]. Indeed, let us consider an endomorphism A ∈ gl(E| p ) of the fiber of a given rank 2 smooth vector bundle E at p. Let the decomposition of A into semi-simple and nilpotent part be and assume that A nil = 0 (so necessarily A s is a multiple of identity). Finally, let stand for the parabolic subalgebra containing A nil and π : p → l its Levi quotient. It follows from [4,Theorem 5.4] that the moduli space parameterizing irregular Higgs bundles (E, θ) endowed with a compatible parabolic structure, with fixed underlying smooth vector bundle E and such that π(Res p (θ)) = π(A) is a smooth complex manifold. Now, given that π(A s ) = π(A) we get that the completed Dolbeault moduli space is smooth. Properness follows from [9,24] for the moduli space of Higgs bundles with some poles of total order n over any compact Riemann surface C, without any condition on the polar parts and residues at these points. In the case C = CP 1 and a divisor of total multiplicity 4, the base C 8 of the Hitchin map for this system contains the image of those Higgs bundles having prescribed polar parts and residues as an affine open subspace A ∼ = C. Namely, A is specified by the jet of the characteristic coefficients at the punctures of given order (see [18,19,20], or in greater generality [2,Theorems 5,6]). The preimage h −1 (a) of any a ∈ A is the set of all Higgs bundles having characteristic polynomial corresponding to a. By [19,Lemma 10.1], at any logarithmic singularity p the characteristic polynomial of the residue of the Higgs field is prescribed by a, but its adjoint orbit is not. Conversely, if a sequence of Higgs bundles (E n , θ n ) n≥1 in h −1 (a) converge to some Higgs bundle (E 0 , θ 0 ) then the residue of θ 0 at p has the same characteristic polynomial as the residues of θ n at p. Replacing a finite number of points in the fiber by projective lines (corresponding to choices of a parabolic line ℓ ⊂ E| p ) does not modify properness. This implies properness for the completed moduli problem.
Remark 1. If $(E_0, \theta_0)$ is an irregular parabolic Higgs bundle whose residue at p is A with $A_{nil} \neq 0$, then the compatible quasi-parabolic line $\ell \subset E|_p$ is uniquely determined by the requirement $A_{nil} \in \mathfrak{p}$. On the other hand, if $(E_1, \theta_1)$ is an irregular parabolic Higgs bundle whose residue at p has trivial nilpotent part, then the compatible quasi-parabolic line is unconstrained. The irregular Hitchin map (2) endows $H^*(M_{Dol}^{P_X}, \mathbb{Q})$ with a finite decreasing perverse filtration $P_\bullet$ through the perverse Leray spectral sequence. As usual, we set $\mathrm{Gr}^P_k = P_k/P_{k+1}$. Proposition 1. We have $\dim_{\mathbb{Q}} \mathrm{Gr}^P_{-1} H^0(M_{Dol}^{P_X}, \mathbb{Q}) = 1$, $\dim_{\mathbb{Q}} \mathrm{Gr}^P_{-2} H^2(M_{Dol}^{P_X}, \mathbb{Q}) = d_{P_X}$, and $\dim_{\mathbb{Q}} \mathrm{Gr}^P_{-3} H^2(M_{Dol}^{P_X}, \mathbb{Q}) = 1$, and all the other graded pieces of $H^*$ for P vanish. In particular, we have $b_2(M_{Dol}^{P_X}) = 1 + d_{P_X}$ and $PH^{P_X}(q,t) = q^{-1} + d_{P_X} q^{-2} t^2 + q^{-3} t^2$. Furthermore, we have $d_{P_X} = 10 - \chi(F^{P_X}_\infty)$, where $F^{P_X}_\infty$ is the fiber at infinity of $M_{Dol}^{P_X}$ listed in Table 1. Table 1. Fiber at infinity and perverse Hodge polynomial of $M_{Dol}^{P_X}$.
The specific forms of $PH^{P_X}(q,t)$ can then easily be determined using Proposition 1 and the fibers $F^{P_X}_\infty$, and for convenience they are included in Table 1.
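As a worked consistency check combining Proposition 1, Theorem 1, and the weight polynomial computed in Subsection 3.1 below (stated here as arithmetic derived from those statements, not as a quotation of Table 1, whose entries are not reproduced above):

```latex
% Worked example for X = VI, using PH = q^{-1} WH (Theorem 1) and Proposition 1.
\[
  WH^{P_{VI}}(q,t) = 1 + 4q^{-1}t^{2} + q^{-2}t^{2}
  \quad\Longrightarrow\quad
  PH^{P_{VI}}(q,t) = q^{-1}WH^{P_{VI}}(q,t) = q^{-1} + 4q^{-2}t^{2} + q^{-3}t^{2},
\]
\[
  \text{so}\quad d_{P_{VI}} = 4,\qquad
  \chi\bigl(F^{P_{VI}}_{\infty}\bigr) = 10 - d_{P_{VI}} = 6,\qquad
  b_{2}\bigl(M^{P_{VI}}_{\mathrm{Dol}}\bigr) = 1 + d_{P_{VI}} = 5.
\]
```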
Proof. As M P X Dol is a non-compact oriented 4-manifold, by Poincaré duality we have b 4 (M P X Dol ) = 0. We use the geometric characterization of the perverse filtration provided in [13,Theorem 4.1.1] in terms of the flag filtration F . Namely, let Rh * denote the right derived direct image functor in the derived category of constructible sheaves and R l h * be the l'th right derived direct image sheaf. Let H denote hypercohomology of a complex of sheaves and H stand for cohomology of a single sheaf. Let Y −1 ∈ Y be a generic point and Q M be the constant sheaf with fibers Q on M P X Dol . Here and throughout this section, for ease of notation we drop the subscript Dol and the superscript P X of M P X Dol whenever this latter is in subscript. It is known that there exists a spectral sequence L E k,l r called the (ordinary) Leray spectral sequence degenerating at the second page . We then have the equality It immediately follows that dim Q Gr P −1 H 0 (M P X Dol , Q) = 1 and all other graded pieces of H 0 vanish. Moreover, it also follows that and the non-trivial associated graded pieces of P are Let us turn to the computation of these graded pieces in general. We know from [18, for some non-reduced curve F P X ∞ , moreover there exists an elliptic fibratioñ so that the following diagram commutes In particular, we haveh −1 (∞) = F P X ∞ . The type of the curves F P X ∞ is determined by X and is listed in Table 1. If the residues of the Higgs field at the simple poles are assumed to have distinct eigenvalues then exactly the same results hold in the cases II, IV, V deg , V, V I too. Lemma 2. In cases X = II, IV, V deg , V , assume that there is a simple pole of the Higgs field such that the residue has equal eigenvalues. Then, there exists an embedding M P X Dol ֒→ E(1) of the completed Dolbeault moduli spaces so that for the non-reduced curve F P X ∞ listed in Table 1, and the diagram (6) commutes.
Proof. The proof is similar to the P V I case. We consider the pencil of spectral curves associated to Higgs bundles with poles of the given local forms. According to [28,Theorem 1.1], the moduli space arises as a certain relative compactified Picard-scheme of this pencil. In order to determine the relative compactified Picardscheme, one first needs to blow up the base locus of the pencil of spectral curves; in general this process involves blowing up infinitesimally close points. The common phenomenon in the cases when the residue of the Higgs field at a simple pole p 1 has equal eigenvalues is that one exceptional divisor E of the blow-up process (with self-intersection number equal to (−2)) maps to p 1 under the ruling and becomes a component of one of the fibers X t in the fibration. In the cases X = II, IV this is precisely proven in [19,Lemma 4.5]. The same proof goes along for the other types too, because both the assumptions and the assertion is local at the fiber of the ruling over p 1 . Let us denote by Z t the singular curve in the pencil of spectral curves whose proper transform contains E, so that X t is the proper transform of Z t . It follows from [27, Section 6] that X t is one of the Kodaira types The corresponding spectral curves Z t are listed before [19, Lemma 10.1], except in case X t is of type I 4 . The case I 4 may only occur in cases X = V deg , V , under the assumption that there exists two simple poles p 1 , p 2 of the Higgs field such that for i ∈ {1, 2} Res pi (θ) has two equal eigenvalues. In this case two non-neighbouring components of X t get mapped to p 1 , p 2 respectively under the ruling and Z t consists of two rational curves (sections of the Hirzebruch surface of degree 2) intersecting each other transversely in two points, one on the fiber over p 1 and another one on the fiber over p 2 . Indeed, Z t may have at most two components because it is a 2 : 1 ramified covering of the base curve CP 1 , so two components of X t must be exceptional divisors of the blow-up process; one of these two components must come from blow-ups at p 1 and the other one from blow-ups at p 2 , for otherwise the dual graph could not be a cycle. By [19, Lemma 10.1], Higgs bundles whose residue at p 1 (and p 2 in case IV ) has non-trivial nilpotent part correspond to locally free spectral sheaves over Z t at p 1 (respectively, p 2 ). For such curves Z t , [19, Lemma 10.2] determines the families of locally free spectral sheaves giving rise to parabolically stable Higgs bundles. On the other hand, any torsion-free sheaf on Z t is the direct image of a locally free sheaf on a partial normalization. Let us separate cases according to the type of X t .
(1) If X t is of type I 2 then Z t is a nodal rational curve with a single node on the fiber over p 1 ; there exists a unique torsion-free but not locally free sheaf of given degree on Z t . This gives rise to a unique Higgs bundle whose residue has the required eigenvalue of multiplicity 2 and trivial nilpotent part. This object is irreducible hence stable. On the other hand, the choice of quasi-parabolic structure at p 1 compatible with this unique Higgs bundle is an arbitrary element of CP 1 . This gives us that the Grothendieck class of the Hitchin fiber of the completed moduli space over the point t is As the unique Kodaira fiber in this class is I 2 , we deduce from Lemma 1 that the Hitchin fiber of the completed moduli space over the point t is of this type, i.e. the same type as X t .
(2) If X t is of type I 3 then Z t is composed of two sections of the ruling intersecting each other transversely in two distinct points, one of them lying on the fiber over p 1 . As shown in [19, Lemma 10.2. (2)], Higgs bundles with spectral curve Z t and residue having non-trivial nilpotent part form a family parameterized by a variety in the class As in the previous point, there exists a single torsion-free but not locally free sheaf giving rise to a Higgs bundle with spectral curve Z t such that Res p1 (θ) has trivial nilpotent part. Again, the quasi-parabolic structure at p 1 compatible with this unique Higgs bundle is parameterized by CP 1 , so the class of the Hitchin fiber of the completed moduli space over the corresponding point t is The only Kodaira fiber in this class is I 3 , hence the Hitchin fiber of the completed moduli space over t is I 3 .
(3) For X t is of type I 4 , as we already mentioned, Z t is a union of two sections of the ruling that intersect each other transversely in two points: one on the fiber over each of p 1 , p 2 . Therefore, the analysis is quite similar to the case of I 3 treated above: Higgs bundles with spectral curve Z t such that both Res p1 (θ), Res p2 (θ) have non-trivial nilpotent part are parameterized by a variety in class 2[C × ].
(The class of the point [pt] that appears in the case I 3 is missing here because it corresponds to a torsion-free but non-locally free sheaf at p 2 which would give rise to a Higgs bundle with trivial nilpotent part at p 2 .) Now, there exists a single sheaf of given degree that is locally free at p 1 and torsion-free but non-locally free at p 2 . For the Higgs bundle obtained as the direct image of this sheaf, compatible quasi-parabolic structures at p 1 are parameterized by CP 1 . The same observations clearly apply with p 1 , p 2 interchanged too. Finally, notice that stability excludes that the spectral sheaf be torsion-free but non-locally free at both p 1 , p 2 : this would mean that the spectral sheaf comes from the normalization of Z t , so the corresponding Higgs bundle would be decomposable. In sum, the class of the Hitchin fiber of the completed moduli space over t is As the only Kodaira fiber in this class is I 4 , we infer that the Hitchin fiber of the completed moduli space over t is of type I 4 . (4) If X t is of type III then Z t is a cuspidal rational curve with a single cusp on the fiber over p 1 . Stability of any Higgs bundle with spectral sheaf supported on Z t again follows from irreducibility. By virtue of [19,Lemma 7.2], locally free sheaves of given degree on Z t are parameterized by C × and there exists a single non-locally free torsion free sheaf of given degree on Z t . This latter gives rise to a unique Higgs bundle in the extended moduli space with residue having trivial nilpotent part. Again, compatible quasi-parabolic structures at p 1 provide a further CP 1 of parameters, so that the class of the Hitchin fiber of the completed moduli space over t is As the unique Kodaira fiber in this class is III, we see that the Hitchin fiber of the completed moduli space over t is of type III. As the only Kodaira fiber in this class is IV , the corresponding Hithin fiber is of type IV too.
Remark 2.
In the Lemma we found that the completed Hitchin system has the same type of singular fibers as the associated fibration of spectral curves. A similar statement is shown in [1, Corollary 6.7], based on the analysis [17] of Fourier-Mukai transform for sheaves on various singular elliptic curves. Our result is more general than the one of [1] in that it also treats the ramified Dolbeault moduli spaces and consequently more types of singular fibers enter into the picture, and we also consider the dependence of our result on parabolic weights. As Gl(2, C) is Langlands-selfdual, the above relative self-duality result can be considered as an irregular version of Mirror Symmetry of Hitchin systems [16]. Now, consider the smooth de Rham complex According to de Rham's theorem, the cohomology spaces of this complex of sheaves are isomorphic to the singular cohomology spaces H n (M P X Dol , C). The morphism h defines the following finite decreasing filtration on Ω • M : . This filtration gives rise to the Leray spectral sequence The first page L E 1 of this spectral sequence is defined by taking cohomology of the relative smooth de Rham differentials so we have The morphisms d 1 on these groups are induced by the smooth de Rham differential By definition, these morphisms are called Gauß-Manin connections It is known that ∇ l GM is a smooth integrable connection over the base set Y reg ⊂ Y parameterizing smooth fibers of h. The holomorphic vector bundle induced by the (0, 1)-part of ∇ l GM extends over the finite set Y \ Y reg , and the meromorphic connection induced by the (1, 0)-part of ∇ l GM on the extended holomorphic vector bundle has regular singularities at Y \ Y reg . For l ∈ {0, 2}, the complex vector bundles R l h * C M ⊗ C Ω k Y are of rank 1 over Y , and the monodromy of ∇ l GM is trivial, so that ∇ 0 GM = d = ∇ 2 GM . For l = 1, the vector bundles underlying the local systems To compute L E 2 , we need to take the cohomology groups of the terms of L E 1 with respect to ∇ l GM . For l ∈ {0, 2} the cohomology groups of ∇ l GM = d compute the singular C-valued cohomology spaces of Y = C. For l = 1, we have ) ∨ , and this latter vanishes because there exist no compactly supported invariant sections of the dual integrable connection (∇ 1 GM ) t on the non-compact space Y . We get that L E k,l 2 is of the form k = 2 0 0 0 It is known that the Leray spectral sequence degenerates at this term. In particular, the dimension b 1 (M) of L E 0,1 2 is equal to the first Betti number b 1 (M P X Dol ). Lemma 3. We have b 1 (M P X Dol ) = 0. Proof. Let N denote a tubular neighbourhood of F P X ∞ in E(1) and consider the covering Part of the associated Mayer-Vietoris cohomology long exact sequence reads as We know that H 1 (E(1), Q) vanishes because it has the structure of a CW-complex only admitting even-dimensional cells. On the other hand, M P X Dol ∩ N is homotopy equivalent to an S 1 -bundle π : M → F P X ∞ .
(11) According to Table 1, for each X the Dynkin diagram of F P X ∞ is simply connected. It follows from the fibration homotopy long exact sequence that there exists a morphism
then a generator is given by the dual of the class
[c] ∈ H 1 (M P X Dol ∩ N, Q) of a fiber S 1 of (11). To prove the assertion it is clearly sufficient to show that the connecting morphism δ between cohomology groups is a monomorphism, or dually, that the connecting morphism ∂ : H 2 (E(1), Q) → H 1 (M P X Dol ∩ N, Q) on singular homology is an epimorphism. Let us now recall the definition of ∂. Assume given a singular 2-cycle C in E(1), that decomposes as where A and B are singular 2-chains in M P X Dol and N respectively. (Such a decomposition always exists using barycentric decomposition.) We then let Given this definition, we need to show that there exists a singular 2-cycle C and a decomposition (12) such that ∂(A) = c. Now, the intersection form of E(1) is non-degenerate, specifically it turns H 2 (E(1), Q) into the lattice Let us denote by H the generator of the component (1) determined by the hyperplane class of CP 2 . On the other hand, the intersection form of N is obviously isomorphic to the negative semi-definite lattice associated to an extended Dynkin diagram of type listed in Table 1, with 1-dimensional radical. Therefore, the intersection lattice of F P X ∞ can not be in the orthogonal complement of H (which is negative definite), and we have Let C = H. It then follows that C ∩ N is homologous to a positive multiple B of a disc with boundary c. Up to modifying C by a boundary, we may assume that C ∩ N = B. Setting A = C − B, the singular 2-chain A then clearly lies in M P X Dol . This finishes the proof of the Lemma.
Let us set (where we reintroduced the superscript P X of M in order to emphasize the way in which the right-hand side depends on the specific irregular type that we work with). Let us denote Euler-characteristic of a CW-complex by χ.
Lemma 4. We have d P X = 10 − χ(F P X ∞ ). Proof. We see from degeneration of the Leray spectral sequence at L E 2 that By Lemma 3 and additivity of χ with respect to stratifications we deduce b 0 (M P X Dol ) + b 2 (M P X Dol ) + χ(F P X ∞ ) = χ(E(1)) = 12. The assertion follows because M P X Dol is connected. Lemma 5. The inclusion Y −1 ֒→ Y induces a non-trivial morphism on H 2 : Proof. Dually, we need to show that Y −1 ֒→ Y induces a non-trivial morphism on second singular homology
that a generic fiber of h is not a boundary in $M_{Dol}^{P_X}$. This follows immediately from the known fact that the generic fiber of $\tilde{h}$ is not a boundary in E(1).
This Lemma and (3), (4) coupled with (13) finish the proof of the Proposition.
Remark 3. It would be certainly possible to prove Proposition 1 merely using the definition of the perverse Leray filtration, i.e. by applying appropriate shifts to the sheaves R l h * Q M or, equivalently, applying the Decomposition Theorem [3] to Rh * Q M [2]. This direct proof would be actually quite similar to the argument presented above.
Weight filtration
We now turn our attention to the right hand side of (1). Observe first that according to [25], for all X the space M P X B is a smooth affine cubic surface defined by a polynomial for an affine quadric Q P X . Each of these quadrics depends on some subset (possibly empty) of complex parameters s 0 , s 1 , s 2 , s 3 , α, β. For a generic choice of these parameters, that we will assume from now on, the obtained affine cubic surfaces are smooth. Moreover, in case the cubics do not depend on any parameter, the affine cubic surfaces are always smooth. Denote by the homogenization of f P X as a homogeneous cubic polynomial and consider the projective surface which is a compactification of M P X B . In general, M P X B is not smooth: it has some isolated singularities over x 0 = 0. Let us set where µ stands for the Milnor number of an isolated surface singularity and the summation ranges over all singular points of M P X B . Proposition 2. The non-trivial graded pieces of H * for W are The singularities of M P X B and the weight polynomial of M P X B in the various cases are summarized in Table 2.
Proof. We use the definition given in [14] of the weight filtration on the mixed Hodge structure on the cohomology of an affine variety in terms of a smooth projective compactification. The form (14) of f P X implies that the compactifying divisor is defined by the equation where each L i is a complex projective line such that each two of them L i , L j for i < j intersect each other transversely in a point p ij . Said differently, the nerve complex of D consists of the edges (and vertices) of a triangle A 2 . In particular, the body of this complex is homeomorphic to a circle S 1 . As we will see in Subsections 3.1-3.9, all singularities of M P X B are located at some of the points p ij and are of type A k for k = µ(p ij ). We obtain a smooth compactification M P X consisting of reduced projective lines. We know that D P X contains the proper transform of each component of (18). More precisely, the nerve complex N P X of D P X arises from the graph A (1) 2 of (18) by replacing the edge corresponding to an intersection point p ij by a diagram A µ(pij ) . On the other hand, the generic plane section of M P X B is a cubic curve, therfore the nerve complex N P X must appear on Kodaira's list [21]. From this we see that the nerve complex of D P X is a cycle of length N P X + 3 As customary, we will denote by N P X 0 and N P X 1 the set of 0-and 1-dimensional cells of N P X , respectively.
We are now ready to determine the Betti numbers of M P X B . Lemma 6. We have Proof. The assertion for b 0 is obvious, and then immediately follows by Poincaré duality for b 4 too.
In case N P X = 0, i.e. M P X B is a smooth projective cubic surface, it is known that M P X B is given by a blow-up of CP 2 in six different points, and so carries the structure of a CW-complex with only even-dimensional cells, with 7 two-dimensional cells.
The non-smooth surfaces M P X B clearly belong to the 20-dimensional family of projective cubic surfaces. The points parameterizing smooth cubics form a dense set in C 20 with respect to the analytic topology. We will see in Subsections 3.1-3.9 that the spaces M P X B only admit singularities of type A k . It is known that a smoothing of a projective surface with ADE singularities coincides up to diffeomorphism with a minimal resolution thereof. In our case, a smoothing is a smooth cubic surface.
The smooth case treated in the previous paragraph therefore implies the general statement. Now, [14,Théorème (3.2.5)] implies that there exists a spectral sequence Let us list a few properties (either obvious or directly following from [14, Théorème (3.2.5)]) related to this spectral sequence.
(3) The filtration W N is induced by n ≤ N on the above diagram, and its shifted filtration W [k] defines the mixed Hodge structure on H k (M P X B , C). (4) Up to identifying H 2 (L, C) with H 0 (L, C) via the Lefschetz operator, the map δ is the differential of the simplicial complex N P X . In particular, as the body of N P X is homeomorphic to S 1 , we have Proof. Assuming that δ 4 vanish, the term H 4 M P X B , C in the spectral sequence could not be annihilated at any further page by any other term, so we would get H 4 M P X B , C = 0. This, however, would contradict that M P X B is an oriented, non-compact 4-manifold. It is also easy to derive the result directly using the explicit description of δ 4 as wedge product by the Thom class Φ L of the tubular neighbourhood N L of L in M P X B , as in Lemma 8 below. Indeed, the image of the class of a generator [ω L ] of H 2 (L, C) is then represented in N L by the compactly supported 4-form The normal bundle of L in M P X B is orientable, and the above 4-form is cohomologous to a positive multiple of a volume form of N L . This implies the assertion.
The Lemma and (20) now imply that in the top row of W E 2 the only nonvanishing term will be the upper-left entry, and it is of dimension 1. be the inclusion map. Notice that as Φ L is vertically of compact support, its class can be extended by 0 to define a class The restriction of δ 2 to the component H 0 (L, C) then maps Now, according to Proposition 6.24 [8], we have where P D V stands for Poincaré duality in V and [L] is the cohomology class defined by integration on L. Therefore, for any we have As Poincaré duality is perfect, the assertion is equivalent to showing that the classes [L] for L ∈ N P X 0 are linearly independent in H 2 M P X B , C . For this purpose, we fix a generic line ℓ in the projective plane x 0 = 0, and let CP 2 t , t ∈ CP 1 denote the pencil of projective planes in CP 3 passing through ℓ. We may assume that t = ∞ corresponds to the plane x 0 = 0. For each t ∈ CP 1 , the curve is an elliptic curve. The line ℓ intersects for each k ∈ {1, 2, 3} the line L k in a single point p k , which is (by genericity of ℓ) different from all the intersection points p ij . The elliptic pencil (21) with center B; E is then an elliptic surface over CP 1 , in particular it is diffeomorphic to (5). The exceptional divisors ω −1 (p k ) are sections of Y , in particular they do not belong to the fiber E ∞ over ∞. Let us denote byL the proper transform of L with respect to ω. We may then write for some m L,k ∈ {0, 1}. The quotient We have already determined the type of E ∞ in (19), in particular its intersection form is negative semi-definite, with non-trivial radical. The only possible vanishing linear combination of these classes would then be one in the radical of (19), generated by (1, . . . , 1). However, one sees immediately that the intersection number of and any [ω −1 (p k )] is equal to 1, hence (23) is a non-zero class. Alternatively, the same argument as at the end of the proof of Lemma 3 shows that the class (23) is non-zero: the hyperplane class [H] must intersect it positively because the orthogonal complement of [H] in H 2 (E, C) is negative definite while the lattice of (19) has non-trivial radical.
Taking into account that the sequence degenerates at W E 2 and that the weight with respect to the filtration W [k] defining the mixed Hodge structure on H * (M P X B , C) is defined by from Lemma 7 we derive that the only non-vanishing graded pieces of the weight filtration on cohomology read as: Recalling (19) that N P X is a cycle of length N P X + 3, Lemmas 6 and 8 finish the proof.
Proofs of the theorems
Proof of Theorem 1 in the untwisted cases. As explained in the Introduction, for each X ∈ {II, III(D6), IV, V, V I} the Dolbeault and Betti spaces are known to be diffeomorphic, in particular we have . Propositions 1 and 2 imply that the graded pieces for the respective filtrations P and W have only two non-trivial terms: p ∈ {−2, −3} for P and −k − n ∈ {−2, −4} for W , and that the graded pieces for the −3 and −4 weights are of the same dimension. Then necessarily the graded pieces for the only remaining weights must be of the same dimension too, and we get the assertion.
Let [C] be a generator of Gr W −4 H 2 (M P X B , C). We need to show that the diffeomorphism M P X B → M P X Dol , maps [C] to a generator of Gr P −3 H 2 (M P X Dol , Q). For this purpose, we describe a representative of [C]. Throughout this proof, we work dually with homology classes. We use the notations of Proposition 2 for the dual nerve complex of the compactifying divisor of M P X B . Let us furthermore introduce the notation N P X 1 = {p 1 , . . . , p N P X +3 } for the intersection points of the divisor at infinity. Then, for each 1 ≤ j ≤ N P X +3 we introduce a cycle C j such that The cycle C j is defined as follows: let z 1 , z 2 be local coordinates defining the two divisors crossing at p j , then set for some sufficiently small 0 < ε << 1; topologically, C j (ε) is a 2-torus. We choose ε sufficiently small so that C j (ε) ∩ C j ′ (ε) = ∅ unless j = j ′ . In homology the classes C j (ε) are clearly independent of the choice of ε, hence we may omit to include ε in their notation. It follows that given any compact subset K ⊂ M P X B , the generator [C] can be represented in To match the graded pieces of degree w = −4 and p = −3 of the two filtrations W and P respectively, it is therefore sufficient to show that if a class [C] ∈ H 2 (M P X Dol , Q) can be represented in the complement of any prescribed compact K ⊂ M P X Dol , then [C] is a multiple of the class of the generic Hitchin fiber . Indeed, the class [HF ] is non-zero because it intersects any section of the Hitchin fibration non-trivially, and by (3) it is a generator of Gr P −3 H 2 (M P X Dol , Q) (see also Lemma 5). Let us denote by N P X Dol a closed tubular neighbourhood of F P X ∞ ⊂ E(1); N P X Dol is a plumbed 4-manifold over F P X ∞ . Let us now pick a compact K ⊂ M P X Dol so that M P X Dol \ K ⊂ N P X Dol , and assume that C is a representative of a class Then in particular C represents a non-zero class in the punctured tubular neighbourhood: The same also holds for [HF ]: a Hitchin fiber over a point sufficiently far away from 0 ∈ Y lies in the complement of K, hence gives rise to a class The open smooth 4-manifold M P X Dol ∩N P X Dol has as deformation retract the unit circle bundle ∂N P X Dol of N P X Dol , which is a smooth oriented 3-manifold. By Poincaré-duality, we have H 2 (M P X Dol ∩ N P X Dol , R) ∼ = H 1 (M P X Dol ∩ N P X Dol , R). As the dual graph of every F P X ∞ is a tree, one readily sees that the fundamental group of M P X Dol ∩ N P X Dol is generated by a simple loop ∂D around 0 in a punctured disc fiber D × . Therefore, we have for some q ∈ Q × . This finishes the proof.
Proof of Theorem 1, general case. We merely need to compute N P X and compare the perverse and weight polynomials explicitly for each X. As a consistency check and for sake of completeness, we treat the untwisted cases too. We will determine N P X using the explicit form of the quadratic terms Q P X provided in [25]. Before turning to the study of the various cases, let us address a result that will be needed in some of the cases. Namely, assume that M P X B has a singularity at [0 : 0 : 0 : 1]. Plugging x 3 = 1 into F P X we get with f i homogeneous of order i. If f 2 is a non-degenerate quadratic form, then the Hessian of F P X at the singular point is non-degenerate, and so the singularity is of type A 1 . Up to exchanging x 0 and x 2 , we have the following. The study of the specific cases, based on Lemma 9, is contained in Subsections 3.1-3.9 below. We note that in Subsections 3.1, 3.2, 3.4, 3.5 and 3.8 we rederive the weight polynomials obtained in Section 6 of [15] using different methods.
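For concreteness, the non-degeneracy checks that Lemma 9 calls for in the case analysis below amount to computing the determinant of the symmetric matrix representing $f_2$; a small worked instance, using the form $f_2 = x_1x_2 - s_3x_0^2$ that appears in the case X = V, reads:

```latex
% Non-degeneracy of f_2 = x_1 x_2 - s_3 x_0^2 in the variables (x_0, x_1, x_2).
\[
  f_2 = \begin{pmatrix} x_0 & x_1 & x_2 \end{pmatrix}
        \begin{pmatrix} -s_3 & 0 & 0 \\ 0 & 0 & \tfrac12 \\ 0 & \tfrac12 & 0 \end{pmatrix}
        \begin{pmatrix} x_0 \\ x_1 \\ x_2 \end{pmatrix},
  \qquad
  \det\begin{pmatrix} -s_3 & 0 & 0 \\ 0 & 0 & \tfrac12 \\ 0 & \tfrac12 & 0 \end{pmatrix}
  = \frac{s_3}{4},
\]
\[
  \text{which is non-zero precisely when } s_3 \neq 0,
  \text{ so the corresponding singular point is of type } A_1.
\]
```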
Proof of Theorem 2. The statement that the nerve complex of the boundary is homotopic to S 1 immediately follows from (19). Let us denote the components of the compactifying divisor D P X of M P X of these components. We may assume that for any j and for any pairwise distinct j, j ′ , j ′′ we have Set so that T P X
B
is an open tubular neighbourhood of D P X in M P X B ; T P X B is a plumbed 4-manifold. A simple computation shows that the group π 1 (M P X B ∩ T P X B ) ∼ = Z 2 is generated by two classes: one class [α] coming from the fundamental group of D P X and the class of the boundary of a fiber ∂D of T P X B . Now, let ρ 1 , . . . , ρ N P X +3 be a partition of unity subordinate to the cover (28), and consider the map The image of φ is contained in the standard simplex ∆ N P X +2 of dimension N P X +2 because the family ρ j forms a partition of unity. Moreover, it follows from (27) that Im(φ) ⊂ ∆ N P X +2 1 , the 1-skeleton of ∆ N P X +2 . Let us denote the vertices of ∆ N P X +2 listed in the standard order by P 0 , . . . , P N P X +2 and by P j P j ′ the edge connecting P j and P j ′ . Then, (26) implies that Im(φ) = P 0 P 1 ∪ P 1 P 2 ∪ · · · ∪ P N P X +2 P 0 , which is homotopy equivalent to S 1 . We claim that the loop α in T P X B ∩ M P X B coming from the fundamental group of D P X maps to a generator of the fundamental group of S 1 under φ. To be specific, for any j let us consider a pointp j close to p j ∈ N P X 1 , so thatp j ∈ ∂T j ∩ ∂T j+1 , where j + 1 is understood modulo N P X + 3 and p j ∈ T j ∩ T j+1 . Let α j be any path in ∂T j+1 connectingp j top j+1 . Then φ maps the path α = α 1 · · · α N P X +3 to a generator of the fundamental group of S 1 . Now, it is easy to see that the intersection number of α with the class (24) in the smooth oriented 3-manifold ∂T P X B (given the suitable orientation) is Indeed, for each j the curve α j intersects C j (ε) and C j+1 (ε) positively. Moreover, the intersection number of [C] with the boundary ∂D of a fiber of T P X B is easily seen to vanish, so that [α] is proportional to the Poincaré dual of [C] in ∂T P X B : for some r ∈ Q × . On the other hand, a simple loop β around ∞ ∈ Y in the Hitchin base can be lifted to a pathβ : [0, 1] → M P X Dol . We may choose any path γ connectingβ(1) toβ(0) in h −1 (β(0)). Then,βγ is a loop in M P X Dol ∩ N P X Dol such that in ∂N P X Dol we have [βγ] ∩ [HF ] = 1.
By virtue of (25) it follows from this and Poincaré duality that 3.1. Case X = V I. In this case the quadric is of the form This is the generic quadric, so it is smooth at infinity, and W H P V I (q, t) = 1 + 4q −1 t 2 + q −2 t 2 .
3.2. Case X = V. In this case the quadric $Q^{P_V}$ involves the parameter $s_3$, which is non-zero for a generic choice of parameters. An easy computation gives that the only singular point of $\overline{M}_B^{P_V}$ over $x_0 = 0$ is [0 : 0 : 0 : 1]. We consider the affine chart $x_3 \neq 0$ and normalize $x_3 = 1$. Then we have $f_2 = x_1x_2 - s_3x_0^2$, which is a non-degenerate quadratic form because $s_3 \neq 0$. We infer that this singular point is of type $A_1$; in particular its Milnor number is 1, hence $WH^{P_V}(q,t) = 1 + 3q^{-1}t^2 + q^{-2}t^2$.
3.3. Case X = V_deg. In this case the same analysis as in Subsection 3.2 shows that [0 : 0 : 0 : 1] is the only singular point. This time, however, the quadratic term $f_2$ is degenerate. On the other hand, we have $f_3(1, 0, 0) = 1$, so Lemma 9 shows that the singularity is of type $A_2$.
3.4. Case X = IV. In this case the surface has two singular points over $x_0 = 0$. In the first point, the second-order homogeneous term of $F^{P_{IV}}$ in the affine co-ordinates $(x_0, x_1, x_3)$ is given by $x_1x_3 - s_2^2x_0^2$, which is non-degenerate because $s_2 \neq 0$, so this singular point is of type $A_1$. In the second point, the second-order homogeneous term of $F^{P_{IV}}$ in the affine co-ordinates $(x_0, x_1, x_2)$ is given by $f_2 = x_1x_2 - s_2^2x_0^2$, which shows that this singular point is again of type $A_1$. We infer that $\overline{M}_B^{P_{IV}}$ has two singular points, each of Milnor number 1, and $WH^{P_{IV}}(q,t) = 1 + 2q^{-1}t^2 + q^{-2}t^2$.
3.5. Case X = III(D6). This time we have $f_3 = x_0x_1^2 + x_0x_2^2 + (1+\alpha\beta)x_0^2x_1 + (\alpha+\beta)x_0^2x_2 + \alpha\beta x_0^3$. Now, we again see that $f_3(1, 0, 0) = \alpha\beta \neq 0$, so Lemma 9 implies that we have an $A_2$-singularity, thus $WH^{P_{III(D6)}}(q,t) = 1 + 2q^{-1}t^2 + q^{-2}t^2$.
3.6. Case X = III(D7). This case is obtained from degeneration of Subsection 3.5 by setting the parameter β of Q P III(D6) (corresponding to the eigenvalue of the formal monodromy at one of the the irregular singular points) equal to 0. In this case (up to exchanging the variables x 1 , x 2 ) the quadric is of the form Q P III(D7) = x 2 1 + x 2 2 + αx 1 + x 2 with α ∈ C × . The only singular point of M P III(D7) B is [0 : 0 : 0 : 1], with homogeneous terms . This time f 2 is degenerate and we have f 3 (1, 0, 0) = 0, so the singularity is neither of type A 1 nor of type A 2 . Plugging x 1 = 0 in f 3 gives f 3 (x 0 , 0, x 2 ) = x 0 x 2 2 + x 2 0 x 2 . As this form has non-trivial linear term in x 2 at x 0 = 1, we get that k 1 = 1.
3.7. Case X = III(D8). This case is obtained from further degeneration of Subsection 3.6 by setting the parameter α of Q P III(D7) (corresponding to the eigenvalue of the formal monodromy at the only remaining unramified irregular singularity) equal to 0 too. We find the quadric 1 Q P III(D8) = x 2 1 + x 2 2 + x 2 .
1 Notice that this differs from the result x 1 x 2 x 3 +x 2 1 −x 2 2 −1 obtained in [25, 3.6]. We are grateful to Masa-Hiko Saito for pointing out that in this case the monodromy data has the extra symmetry x i → −x i for i ∈ {1, 2}. Indeed, the two-fold Weyl group S 2 × S 2 acts on the monodromy data by passing to opposite Borel subgroups at the two irregular singular points, and only the diagonal S 2 leaves invariant the constraints on the parameters. Now, introducing the invariant co-ordinates y 1 = x 2 1 , y 2 = x 2 2 , y 3 = x 1 x 2 and eliminating y 2 we are led to the formula y 1 y 3 x 3 + y 2 1 − y 2 3 − y 1 , which in turn transforms into (29) after some obvious changes of co-ordinates.
3.8. Case X = II. In this case the quadric is of the form Q_{P_II} = −x_1 − αx_2 − x_3 + α + 1 with α ∈ C^×. We have …. As in the corresponding affine co-ordinates the degree-two terms are respectively given by …, x_1 x_3 − αx_0^2, x_1 x_2 − x_0^2, and α ≠ 0, we see that all these points are of type A_1. As a conclusion, we get WH_{P_II}(q, t) = 1 + q^{-1}t^2 + q^{-2}t^2.
3.9. Case X = I. In this case the quadric is of the form Q_{P_I} = x_1 + x_2 + 1.
There are three singular points of M^B_{P_I}. At the first two of these, the degree-two terms in the corresponding affine co-ordinates respectively read as …, x_1 x_3 + x_0^2, so these singularities are of type A_1. At [0 : 0 : 0 : 1], however, we have …. As f_3(1, 0, 0) = 1 ≠ 0, by virtue of Lemma 9 this singularity is of type A_2. In total we have three singular points, with Milnor numbers 1, 1, 2 respectively, therefore WH_{P_I}(q, t) = 1 + q^{-2}t^2. | 12,486.6 | 2018-02-11T00:00:00.000 | [
"Mathematics"
] |
Borehole inventory, groundwater potential and water quality studies in Ayede Ekiti, Southwestern Nigeria
This study aims at determining the state of government-provided boreholes and evaluating groundwater potential and quality within the Ayede Ekiti community. Twelve Vertical Electrical Soundings (VES) were conducted using the Schlumberger array in order to determine geoelectric layers and fracture attributes. In addition, 12 water samples were collected from the study area to evaluate the physicochemical characteristics of the groundwater. The study revealed average values of total borehole depth, static water level and water column of 18.77 m, 6.77 m and 11.99 m respectively. 70% of the boreholes are either abandoned, damaged or show evidence of corrosion and encrustation. Geophysical investigation revealed a weathered layer thickness ranging from 1.3 to 34.7 m, with two fracture regimes at 40–50 and 75–80 m. The frequency of curve types obtained is 16.67%, 33.33%, 25%, 8.33%, 8.33% and 8.33% for AK, HA, KH, AA, QH and HK respectively, while the weathered and fractured basement are identified as the two types of aquifer unit. Results of the water analysis revealed that the dominant cations are in the order Ca2+ > Na+ > K+ > Mg2+ while anions are in the order HCO3− > Cl− > SO42−. The three hydrochemical facies present are CaHCO3 > NaHCO3 > CaCl at 66.67%, 25% and 8.33% respectively. The Wilcox plot suggests the suitability of the groundwater samples for irrigation purposes when compared with World Health Organization standards. Despite the groundwater potential and the good quality of the analysed samples, the water problem in this community is traceable to inadequate exploration and shallow boreholes, with consequent seasonal water availability.
Introduction
Increases in human population and agricultural activities in rural and urban areas have led to a corresponding over-withdrawal of groundwater to meet the desired purposes [1]. Global water resources can be classified into two main bodies, salt and fresh, with 97.2% and 2.8% of the volume respectively [1]. Hence, borehole water has gained increased importance as it represents the largest portion of the water supplies used for different applications [2]. However, the decision to drill a privately owned borehole is now accompanied by considerable apprehension, owing to the risk of failure and to the fact that owning a borehole remains a symbol of luxury. In addition, the most relevant borehole-related risks in the area include drying up of wells and deterioration of water quality as a result of unprofessional completion. Despite the Federal government policy on provision of water in adequate quantity and quality, as contained in Sustainable Development Goal 6 (SDG 6) by 2020, inadequate information about basic aquifer parameters is inimical to practical implementation of the policy.
Ayede town is a typical basement complex area with a proven shortage of water for agricultural and industrial development. The need for this study was further supported by a suspected cholera outbreak around the town in 2011 [3]. Efforts have been made at several levels of government, and by institutions and individuals, to solve the water shortage and/or improve water quality. These include the Omikurudu Dam in Ayede, the Asijire Dam in Ikun, and political constituency water projects. While the dams are abandoned despite the huge volume of water present, the constituency borehole water projects have all failed due to inadequacies in the exploration and exploitation programmes. Consequently, residents are subjected to unnecessary stress, trekking for kilometres in search of potable water for domestic use. For mapping shallow subsurface geological structures, aquifer units, and the types and thicknesses of aquifers, the resistivity method provides good resolution and excellent sensitivity to changes in the measured parameters [4–8]. The implications of the presence of mineral ions, dissolved from soil particles and rock grains, for the quality of groundwater and its suitability for drinking and/or irrigation purposes have been well discussed [9,10]. Their findings revealed anthropogenic and geogenic sources as major contributors to groundwater pollution. There is a paucity of information on water potential and quality in the area investigated. However, [11] reported that the groundwater of the study area has a low concentration of dissolved minerals due to limited rock–water interaction. Reports of these investigations showed that the water was suitable for both drinking and irrigation purposes when compared with international quality standards. [12] carried out detailed geological mapping and water quality assessment in the Ido/Isin area of Ekiti State, with a view to identifying and classifying the rock types and their relationship with the water body. Three major water facies were distinguished and it was concluded that the overall quality of the groundwater is principally controlled by the local geology. Christopher and Olatunji [13] classified the Ogbese River on the basis of its quality using the Water Quality Index (WQI). It was reported that virtually all the parameters tested were within the maximum permissible limits of national and international standards for both major ions and heavy metals. Bayowa et al. [14] worked on the integration of hydrogeophysical and remote sensing data in the assessment of the groundwater potential of the basement complex terrain of Ekiti State, southwestern Nigeria. A local groundwater potential map was generated showing five distinct groundwater potential zones, namely very low, low, moderate, high and very high; it was, however, concluded that the groundwater potential of the area is generally of very low to moderate class.
Thus, an integrated approach was used in the study in order to evaluate the state of government provided boreholes (from constituency projects), determine the potential for groundwater exploitation to alleviate the communal suffering and assess the quality of groundwater within the town.
Location of the study area
The study area is Ayede town, 8 km north of the present Oye Local Government Headquarters, which serves as a link between Ekiti and Kwara States in Nigeria. It is bounded by latitudes 7°53′ N and 7°54′ N and longitudes 5°19′ E and 5°20′ E (Fig. 1). It covers an area of about 20 km² and is easily accessible by road throughout the year [11]. The drainage is generally dendritic, with hummocky and undulating topography [11]. Annual rainfall is about 1300 mm and its distribution is bimodal within the hydrologic year [11]. The first peak of the rains occurs between June and July while the second peak is between September and October [10]. Groundwater occurrence in the area is also influenced by the density of fractures [17]. In addition, the undulating topography of the area can facilitate and increase run-off into shallow wells and surface water bodies, thereby encouraging degradation of water quality.
Borehole evaluation
Static water level, water column and total depth recovered from drilling were measured using a meter tape with a calibrated end sensor, as shown in Table 1. The tape was quietly lowered until the probe touched the water level, generating a signal that indicates the depth to the static water level. The probe was then lowered further until the final depth was reached. The well type, locations, pump types and depth of installation were also examined for proper documentation and interpretation (Table 2). The column of water in each borehole was estimated as the difference between the static water level and the total depth. All the boreholes were fitted with pumps of either 0.5 or 0.7 horsepower capacity.
Vertical electrical sounding (VES)
A total of 12 VES with a maximum current electrode spacing (AB) of 300 m were conducted using the Schlumberger array. All the VES points were deliberately located at the existing government-provided boreholes in order to examine the true potential of those points for groundwater production. A Petrozenith (04) Terrameter was used to acquire the VES data, following the principle of Ohm's law for current penetration in porous media. The field data were modelled using the WinResist computer-based software. The results of the interpretation were represented in the form of resistivity values that can be used for preparing geoelectric sections, which reflect vertical variations in resistivity.
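For illustration of the Schlumberger measurement principle mentioned above, the sketch below computes apparent resistivity from the standard geometric factor; the electrode spacings and readings used in the example are hypothetical, not values from this survey.

```python
import math

def schlumberger_apparent_resistivity(ab_half, mn, delta_v, current):
    """Apparent resistivity (Ohm*m) for a Schlumberger array.

    ab_half : half current-electrode spacing AB/2 (m)
    mn      : potential-electrode spacing MN (m)
    delta_v : measured potential difference (V)
    current : injected current (A)
    """
    # Geometric factor K = pi * ((AB/2)^2 - (MN/2)^2) / MN
    k = math.pi * (ab_half**2 - (mn / 2.0)**2) / mn
    return k * delta_v / current

# Hypothetical single reading: AB/2 = 100 m, MN = 10 m, dV = 2.5 mV, I = 50 mA
print(round(schlumberger_apparent_resistivity(100.0, 10.0, 2.5e-3, 50e-3), 1))
```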
Groundwater quality
Twelve water samples were collected from the 12 boreholes to evaluate the physicochemical characteristics of the groundwater. Some of the wells were functional while some had been abandoned (Table 2). The water samples were collected within the water column (at depths ranging from 5 to 40 m) of the boreholes. The samples were collected into new 2 L plastic bottles rinsed three to four times with the water sample before being filled to capacity, then labelled accordingly and stored at room temperature before laboratory analysis. For the collection, preservation and analysis of the samples, the standard method [19] was followed. The pH, electrical conductivity (µS/cm at 25 °C) and Total Dissolved Solids (TDS) were estimated in the field using a well-calibrated Hanna HI9813-6 portable pH/EC/TDS/temperature meter. Subsequently, the samples were analysed in the laboratory for their chemical constituents. Hydrochemical evaluation was carried out using relevant tools such as the Gibbs diagram, Stiff diagram, sodium hazard diagram and Wilcox diagram. RockWare/2006 6.12 and Microsoft Excel were used to plot the diagrams.
Borehole inventory
All information on the borehole inventory is compiled in Table 1. Some of these parameters were plotted against the boreholes' locations on histograms in order to appreciate the behaviour of the aquifer. A total of 12 boreholes were evaluated, of which four were functional while eight were damaged. The total depth of the boreholes varies from 5.27 m (at OSC) to 44.38 m (AHCI) (Fig. 3a), the static water level ranges from 2.41 m (at AHC1) to 19.69 m (77E) (Fig. 3b), while the water column ranges from 0.91 m (OFS) to 41.64 m (77E) (Fig. 3c). Well attribute assessments showed that 70% of the wells were non-functional at the time of the study (Table 2). A close observation of the parameters revealed that 60% of the wells were terminated within the overburden (less than 20 m). Consequently, the wells are not only seasonal in production due to the effect of evaporation but also unfit for any purpose in all seasons due to the high degree of turbidity observed during the field study. Also, there is no clear or consistent relationship between any two of the parameters evaluated. Lack of maintenance has also caused a high level of encrustation and backfilling, as evident in some of the failed holes (Fig. 4). The average total depth, static water level and water column were 18.77 m, 6.77 m and 11.99 m respectively. Many of the abandoned boreholes in the area were traceable to inadequate knowledge of the underground geomaterial, improper well development, improper installation, lack of maintenance and excessive withdrawal of water beyond the boreholes' sustainable yield. Encrustation and backfilling were evident in some of the failed holes; in boreholes where these processes were prevalent, the geophysical results confirmed that some boreholes were terminated within the overburden thickness (< 20 m).
Potential for groundwater
The VES positions were arranged such that three major alignments (N–S and NE–SW) could be assumed (Fig. 5). The generated 1-D electrostratigraphic units revealed four geoelectric layers consisting of topsoil, weathered layer, partly weathered/fractured basement and the presumably fresh bedrock. The topsoil is relatively thin (0.3 to 1.3 m) while its resistivity values range from 22.3 to 154.7 Ωm. This thin vertical extent indicates low water-holding capacity, especially in the dry season, due to increased radiation and evaporation. This layer spreads through the entire community and serves as the water-bearing unit for hand-dug wells, especially where it is thickest and in the wet season. The lateritic clay component of this unit, which marks the contact with the weathered layer, has good water-retention capacity but limited flow potential due to the poor interconnectivity of the pores. This layer is directly underlain by the weathered layer, whose thickness ranges from 1.3 to 34.7 m and whose resistivity values range from 13 to 791.9 Ωm. This layer is common in basement complex areas and is characterized by moderately to highly weathered lithology. It serves as a pathway or conduit for percolating water. In the study area, hand-dug wells that penetrate this layer provide water all year round (personal communication), possibly due to the relatively higher interconnectivity of pores and the distance from the effect of radiation. The observed relatively high resistivity values compared with the top layer may be due to incomplete weathering and the consequent presence of large crystals, grains or pebbles. Also, the wide variation in the thickness of this layer may influence the variability in water availability in the boreholes, since most boreholes actually depend on recharge from this unit. The third layer is the partially weathered/fractured zone (Table 3), characterized by an uneven thickness range (1.3–95.7 m) and resistivity range (20.9–4385.1 Ωm), which might also explain the hydrogeological diversity in the community. This is the most targeted zone for groundwater exploitation in typical basement complex areas [14]. The last unit is the unfractured/fresh basement rock, with undetermined thickness and resistivity values ≥ 1500 Ωm. This lithologic unit is known to be generally impervious and hence non-water-bearing. Therefore, the aquifer setting is generally heterogeneous and anisotropic in the community, and groundwater flow and accumulation are controlled by topography, weathered mineral types, degree of tectonic deformation and thickness of the overburden.
The curve types obtained in the area are AK (16.67%), HA (33.33%), KH (25%), AA (8.33%), QH (8.33%) and HK (8.33%) (Fig. 5), while two types of aquifers (weathered and fractured basement) were delineated. The HA curve type is a four-geoelectric-layer curve (ρ1 > ρ2 < ρ3 < ρ4). It is a combination of a minimum and an ascending segment. The elevated first apparent resistivity value (ρ1) may be associated with the effect of dehydration from solar radiation (especially in the dry season). The relatively low resistivity values in the second and third geoelectric layers, commonly due to the presence of pore fluid (water), may have been facilitated by the weathered nature of the basement rock. Thus, these layers are the major hydrogeological interest of this curve type. The fourth geoelectric layer has continuously elevated apparent resistivity values with limited water-bearing potential. The AK curve type is also a four-geoelectric-layer curve (ρ1 < ρ2 < ρ3 > ρ4). This curve type has consistently elevated apparent resistivity from the first layer until the fourth layer, which has a low apparent resistivity value. The reduced resistivity in the fourth layer signals fracturing (porosity) within the fresh basement rock. The AK curve indicates closeness of the basement rock to the surface, and its hydrogeological significance depends on the presence of water in the fractured fourth layer. The KH curve type is another four-geoelectric-layer curve (ρ1 < ρ2 > ρ3 < ρ4). This curve type has two regimes of elevated apparent resistivity values (the second and fourth layers). The hydrogeological significance of a KH curve is determined by the low apparent resistivity value, the degree of weathering and the mineral types. The AA curve type is not desired in groundwater exploration. It has a consistent rise in apparent resistivity (ρ1 < ρ2 < ρ3 < ρ4) and a consequent limitation for groundwater accumulation or flow. The QH curve type is the opposite of the AA curve type. It is characterized by a consistent reduction in apparent resistivity (ρ1 > ρ2 > ρ3 > ρ4), each successive geoelectric layer being less resistive. This type of curve is typical of thick overburden material and is desirable in groundwater exploration; however, it requires a high level of technicality during drilling to prevent collapse of the less resistive lithologies. The HK curve type is a four-geoelectric-layer curve (ρ1 > ρ2 < ρ3 > ρ4). It has two regimes of low apparent resistivity values (the second and fourth layers), which indicate two aquifer units, especially where the fracture in the fourth layer is water-bearing. This type of curve is one of the most desired four-geoelectric-layer curves in basement complex groundwater exploration. Two fracture regimes were observed, between depths of 40–50 m and 70–80 m. The fractures between 70–80 m appear more pronounced; hence, a total drilling depth of > 90 m should be considered for siting a prolific borehole, considering the maximum AB/2 spread of 150 m and the two-thirds rule of thumb for the actual depth of probing. Generally, the area is slightly weathered and moderately fractured; hence, groundwater potential will depend largely on secondary porosity.
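As an illustration of how the curve-type labels above follow from the layer resistivities, the sketch below assumes the standard convention in which each adjacent triple of resistivities is labelled H, A, K or Q and the two letters are concatenated for a four-layer curve; the resistivity values in the example are hypothetical.

```python
def three_layer_type(r1, r2, r3):
    """Classify a triple of layer resistivities into the standard letter code."""
    if r1 > r2 < r3:
        return "H"
    if r1 < r2 < r3:
        return "A"
    if r1 < r2 > r3:
        return "K"
    return "Q"  # r1 > r2 > r3

def four_layer_curve_type(r1, r2, r3, r4):
    """Four-layer curve type = code of (r1,r2,r3) followed by code of (r2,r3,r4)."""
    return three_layer_type(r1, r2, r3) + three_layer_type(r2, r3, r4)

# Hypothetical sounding: rho = (180, 40, 300, 2500) Ohm*m -> "HA"
print(four_layer_curve_type(180.0, 40.0, 300.0, 2500.0))
```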
Geo-electric sections over the study area
Three major profiles were identified from the arrangement of the 1-D VES position over the study area. These were used to generate the geo-electric section presented in Fig. 6.
Profile A-A′
The subsurface topography is uneven. Topsoil is virtually absent at some VES points but thicker than 3 m in some places. The apparent resistivity of the topsoil ranges from 63 to 285 Ωm while that of the weathered unit ranges from 39.4 to 1039 Ωm. A highly fractured unit is absent at VES 1 and 3 but present at VES 4 and 5, with apparent resistivities of 31.4 and 106 Ωm respectively; these are expected to be water-prolific.
Profile B-B′
The subsurface topography is relatively consistent and uniform except at VES 6, where the lithologies have different morphological characteristics. The thickness of the topsoil is generally less than 5 m. The topsoil resistivity ranges from 26.2 to 264 Ωm. Only VES 6 has a fractured unit, with an apparent resistivity of 68.8 Ωm. The fresh basement rock, whose thickness is undetermined, has apparent resistivities ranging from 329 to 27,527 Ωm.
Profile C-C′
The subsurface topography is undulating with relatively thin topsoil except at VES 9. The rock head slopes in the NE–SW direction. The apparent resistivity of the topsoil ranges from 52.9 to 3505 Ωm while the weathered layer has apparent resistivities ranging from 13.9 to 1695 Ωm. The fractured unit has apparent resistivity values ranging from 22.9 to 223 Ωm and is expected to be water-prolific. The apparent resistivity of the fresh, unfractured basement rock is > 1000 Ωm.
Physico-chemical parameters
Electrical conductivity (EC) is a measure of the ability of a material to conduct electric current [20]. Water enriched with HCO3− and CO32− has low conductivity. The value of EC is an indication of the degree of salinity. Low EC is associated with areas of low water–aquifer interaction due to high runoff and low infiltration, while high EC is linked with discharge areas with long residence time in contact with aquifer materials, high infiltration and the impact of anthropogenic activities. The EC is in the range 40–460 µS/cm. According to [21], the groundwater samples are classified as type I, indicating low enrichment of salts. Total Dissolved Solids (TDS) indicate the total concentration of ions dissolved from salts and rocks in water. The amount and classification of dissolved solids depend on the solubility and type of rock in which the water is resident [20]. Recharge areas at high topographic levels have low TDS while discharge areas at low topographic levels (with anthropogenic contributions) have high TDS. In the study area, TDS values range from 50 to 310 mg/L, which indicates a generally fresh type of water [22]. EC and TDS have similar triggering factors and are thus directly related. Sodium in water depends largely on the presence and state of weathering of plagioclase feldspars, nepheline, glaucophane and aegirine [21], especially in basement complex terrain. However, brines, seawater, industrial effluent, municipal waste water and ion-exchange processes can also elevate sodium concentrations in water. The values range from 0.03 to 20.33 mg/L. Although the values are generally low and within the recommended standard, there is an observed spatial variation in concentration which signals an increasing anthropogenic contribution over the geogenic origin. Potassium in water can be sourced from orthoclase feldspar, nepheline, biotite, leucite and chemical fertilizers [21]. The potassium values range from 0.04 to 21.05 mg/L. These values are generally below the ≤ 150 mg/L limit of the World Health Organization (Table 4), probably due to the high adsorption of K+ on clay minerals.
Calcium ions in groundwater are commonly sourced from plagioclase, pyroxene, amphiboles, limestone, dolomite, gypsum, sandstone and shale. The concentration of calcium ranges from 6.4 to 19.45 mg/L, which is very low and normal when compared with the standards. Differential impacts of geogenic activities on the groundwater body may be responsible for the spatial variation. Magnesium enrichment in groundwater is principally from minerals such as plagioclase, pyroxene and amphiboles, rocks such as limestone, dolomite, anhydrite, gypsum and sandstone, ion exchange, and the presence of CO2 in the soil zone [23]. Magnesium in the study area ranges from 3.43 to 19.45 mg/L, which is within the permissible limit (Table 4). Chloride (Cl−) is dissolved from rocks and soils but is most prevalent in sewage, brines, seawater and septic tanks. Groundwater receives elevated concentrations of chloride when the primary source makes contact with the groundwater reservoir. Chloride concentrations range from 3.98 to 45.85 mg/L. The values are within the permissible limit, suggesting limited anthropogenic contribution. Bicarbonate (HCO3−) and carbonate (CO32−) are principally introduced into the groundwater system through carbon dioxide dissolved during precipitation, apart from decomposed organic matter. HCO3− varies in concentration from 19.52 to 204.98 mg/L, which suggests soil carbon dioxide (CO2) as the source of enrichment. The variability of concentration within the community may be due to the prevailing alkalinity conditions.
Sulphate (SO42−) enrichment in the water system derives largely from rocks containing gypsum, iron sulphides, mine water and industrial wastes. In the study area, the concentrations range from 2.00 to 15.00 mg/L, which is generally within the permissible limits of drinking-water standards [24,25]. The observed variation in concentration can be interpreted as a gradual increase in anthropogenic activity.
Water quality assessment
The chemistry of groundwater is greatly influenced by the interaction of water with rocks and by its travel history. During the movement of water through porous media, it carries soluble components of the solid rock in their ionic form, thereby altering the chemical composition of the natural water. Broadly, two major sources of groundwater pollution have been identified: anthropogenic (surface activities such as landfill, industrial waste, agricultural waste, drilling chemicals, etc.) and rock–water interaction, governed by residence time.
The hydrochemical facies classification describes the origin and distribution of groundwater types along the flow paths. It also provides information on progressive ion enrichment during the residence of water in the subsurface and on the extent of rock–water interaction. In order to evaluate the quality of the water and its suitability, the trilinear diagram proposed by Piper [26] was adopted. It consists of two triangular fields on either side of a central diamond-shaped polygon; the left triangle represents the cations while the right triangle represents the anions, and the central diamond represents the overall geochemical attributes of the water body. The percentages of the major chemical variables (Table 5) are plotted on the triangles and projected into the diamond in order to determine the dominant ion types. Genetically, 66.67% of the water samples fall under the carbonate-hardness or fresh-water type (CaHCO3), characterized by dominance of Ca2+ and Mg2+ with HCO3− and CO32− over Na+ and K+ with Cl− and SO42−. A further 25% of the groundwater samples are characterized by Na+ with HCO3− (NaHCO3), which indicates non-carbonate alkali water (Table 4 compares the findings with the recommended standards for drinking water [24,25]). This water class has Na+ with dominant HCO3−, suggesting fresh groundwater with enriched Na+, possibly from weathered orthoclase feldspar. The remaining 8.33% of the water samples are characterized by Ca2+ with Cl− (CaCl), which suggests that the groundwater quality is becoming brackish under the influence of anthropogenic activities, such as the direct introduction of table salt into wells to kill germs (personal interview), as the strong acid (Cl−) exceeds the concentration of the weak acid (HCO3−) (Fig. 7). Pearson's correlation matrix was employed to show the similarity between pairs of chemical variables and how one variable can be used to predict another. The analyses show different levels of relationship between cations and anions (Table 6). Positive correlation coefficients > 0.7 were considered strong while those between 0.5 and 0.7 were considered moderate [27]. On the other hand, a high negative correlation indicates that the chemical variables are well correlated but in opposite directions. In this study, strong correlations were established between seven pairs of chemical variables (Table 6), such as Na+ with Cl− (0.94), K+ with Cl− (0.94) and Mg2+ with HCO3− (0.71), while very weak correlation was established between Na+ with HCO3− (0.71) and K+ with SO42− (0.01).
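A minimal sketch of the correlation screening described above, assuming the hydrochemical results are arranged as one column per ion; the concentrations below are illustrative placeholders, not the values of Table 5.

```python
import pandas as pd

# Hypothetical concentrations (mg/L) for a few samples.
data = pd.DataFrame({
    "Na":   [0.5, 8.2, 20.3, 3.1],
    "K":    [0.3, 6.9, 21.0, 2.4],
    "Ca":   [6.4, 12.1, 19.4, 9.8],
    "Mg":   [3.4, 10.2, 19.4, 7.5],
    "Cl":   [4.0, 20.5, 45.8, 9.9],
    "HCO3": [19.5, 110.0, 205.0, 80.0],
    "SO4":  [2.0, 9.5, 15.0, 5.0],
})

corr = data.corr(method="pearson")

# Flag strong (> 0.7) and moderate (0.5-0.7) positive correlations, as in [27].
for a in corr.columns:
    for b in corr.columns:
        if a < b:
            r = corr.loc[a, b]
            if r > 0.7:
                print(f"strong:   {a}-{b}  r = {r:.2f}")
            elif r > 0.5:
                print(f"moderate: {a}-{b}  r = {r:.2f}")
```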
Mechanism controlling groundwater quality
The origin of dissolved ions in groundwater can be understood from the Gibbs diagram [28]. Gibbs proposed two diagrams, one based on the cation ratio (Na+ + K+) : (Na+ + K+ + Ca2+) and the other on the anion ratio Cl− : (Cl− + HCO3−), each plotted against TDS, in order to understand the mechanisms that control groundwater chemistry with respect to rainfall (precipitation), rock–water interaction and evaporation (Fig. 8). 33.33% of the samples (MAS, OPS, IST and EGS) have TDS less than 100 mg/L with dominant Ca2+ and HCO3− over Na+ and Cl−; they therefore fall within the precipitation domain, indicating a meteoric origin. With a progressive increase in Na+ and Cl− over Ca2+ and HCO3−, possibly due to the anthropogenic addition of table salt or long-term rock–water interaction, the TDS value increases above 100 mg/L. Thus, 66.66% of the samples fall within the rock-dominance field. This means that the original quality of fresh groundwater developed by geogenic activity has subsequently been modified towards brackish by the interference of anthropogenic sources.
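A small sketch of the Gibbs-ratio computation described above; it assumes concentrations have already been converted to meq/L, and the sample values are hypothetical.

```python
def gibbs_ratios(na, k, ca, cl, hco3):
    """Gibbs ratios; all concentrations are assumed to be in meq/L."""
    cation_ratio = (na + k) / (na + k + ca)   # (Na+ + K+) / (Na+ + K+ + Ca2+)
    anion_ratio = cl / (cl + hco3)            # Cl- / (Cl- + HCO3-)
    return cation_ratio, anion_ratio

# Hypothetical sample dominated by Ca2+/HCO3- with low TDS: such a point plots
# toward the precipitation (meteoric) field of the Gibbs diagram, as for MAS,
# OPS, IST and EGS in the text.
tds = 80.0  # mg/L
cat_r, an_r = gibbs_ratios(na=0.10, k=0.02, ca=0.60, cl=0.15, hco3=1.20)
print(f"cation ratio = {cat_r:.2f}, anion ratio = {an_r:.2f}, TDS = {tds} mg/L")
```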
Irrigation purpose
Elevated concentrations of dissolved ions in irrigation water affect plants and agricultural soil physically and chemically by lowering the osmotic pressure in the plant cells [21]. This action prevents water from reaching the branches and leaves, thus reducing productivity. Salinity hazard, sodium hazard, percent sodium, residual sodium carbonate, magnesium ratio, Wilcox plots, Kelly's ratio and permeability index are some of the evaluating tools; however, the Wilcox diagram and magnesium ratio have been used in this study. Wilcox [29] proposed a diagram based on a combination of electrical conductivity and sodium percent. This combination classifies the diagram into five zones, ranging from excellent to unsuitable, with increasing salinity hazard for irrigation. The EC varies from 40 to 460 µS/cm and %Na varies from 0.14 to 22.14%. All the samples fall within the excellent zone (Fig. 9).
The magnesium ratio (MR) was proposed by Szabolcs and Darab [30]. Excess magnesium in soil damages the soil and renders it alkaline, with adverse effects on crop yield. It is computed (with concentrations in meq/L) as MR = Mg2+ × 100 / (Ca2+ + Mg2+). MR < 50 indicates suitability of the water for irrigation while MR > 50 suggests non-suitability. The samples have MR ranging from 12.65 (AHC1) to 54.39 (77E), with 83.33% having values less than 50, while 16.67% of the samples fall in the unsuitable field for irrigation (Table 7).
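A minimal sketch of the two irrigation indices used above, assuming the usual convention that both are computed from concentrations in meq/L (with the percent-sodium variant that includes K+ in the numerator); the sample values are hypothetical.

```python
def sodium_percent(na, k, ca, mg):
    """Percent sodium used with EC in the Wilcox classification (meq/L)."""
    return 100.0 * (na + k) / (na + k + ca + mg)

def magnesium_ratio(ca, mg):
    """Magnesium ratio of Szabolcs and Darab (meq/L); MR > 50 is unsuitable."""
    return 100.0 * mg / (ca + mg)

# Hypothetical sample in meq/L, roughly within the ranges reported above.
na, k, ca, mg = 0.30, 0.05, 0.80, 0.50
print(f"%Na = {sodium_percent(na, k, ca, mg):.1f}")   # pairs with EC on the Wilcox plot
print(f"MR  = {magnesium_ratio(ca, mg):.1f}")         # < 50 -> suitable for irrigation
```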
Stiff pattern
The Stiff pattern is a polygonal shape created from four parallel horizontal axes extending on either side of a vertical zero axis. According to Stiff [31], the larger the area of the shape, the greater the concentration of the various ions and the greater the degree of pollution. Figure 10a–c shows the dominance of HCO3− in all the samples and of Ca2+ in 66.66% of the samples. The increased enrichment of these ions supports rock dominance as the major source of ions. Water obtained from the APH borehole appears to be the most polluted, while the least polluted borehole within the community is EGS. The spatial variation in the degree of pollution can be explained by several factors such as geology or mineral type, depth of borehole, residence time, degree of encrustation and access to run-off.
Conclusion
The condition of the existing boreholes in the study area, the potential for groundwater, and the quality of the available samples have been evaluated and discussed in this paper. The borehole inventory revealed that the average total depth and static water level are 18.77 m and 6.77 m respectively, while the average column of water in the wells is 11.99 m. The investigation showed that many of the abandoned boreholes are linked to inadequate knowledge of the subsurface geology before drilling, improper well development and installation, lack of maintenance, and drought due to the shallow status of the wells. Geophysical evaluation of the area revealed four geoelectric layers consisting of topsoil, weathered layer, partly weathered/fractured basement and the presumably fresh bedrock. The frequencies of the curve types are 16.67%, 33.33%, 25%, 8.33%, 8.33% and 8.33% for AK, HA, KH, AA, QH and HK respectively, and two types of aquifers, the weathered and fractured basement aquifers, were delineated. Two fracture regimes were observed, between depths of 40–50 m and 70–80 m. The fractures between 70–80 m appear more pronounced; hence, a total drilling depth of > 90 m should be considered for siting a prolific borehole. Generally, the area is deeply weathered and moderately fractured; hence, groundwater potential will depend largely on secondary porosity and greater depth. The results of the water analysis showed that the dominant cations are in the order Ca2+ > Na+ > K+ > Mg2+ and the anions are in the order HCO3− > Cl− > SO42−. The three hydrochemical facies present are CaHCO3 > NaHCO3 > CaCl at 66.67%, 25% and 8.33% respectively, with the chemical evolution of the water governed by rock weathering. The Wilcox plot and magnesium ratios suggest that the groundwater samples are suitable for irrigation.
However, it is recommended that an estimation of borehole yield be carried out to complement the current information on aquifer performance. Also, the search for groundwater should be extended to AB/2 spreads longer than 150 m (the limit of this work) in order to access other possible aquifer units within the subsurface. Seasonal studies of groundwater chemistry, including information on bacteriological status, should also be encouraged. | 7,014 | 2021-01-14T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
A novel algorithm for finding optimal driver nodes to target control complex networks and its applications for drug targets identification
Background Advances in the target control of complex networks not only offer new insights into the general control dynamics of complex systems, but are also useful for practical applications in systems biology, such as discovering new therapeutic targets for disease intervention. In many cases, e.g. drug target identification in biological networks, we usually require target control of a subset of nodes (i.e., disease-associated genes) with minimum cost, and we further expect that more of the driver nodes coincide with certain well-selected network nodes (i.e., prior-known drug-target genes). Results Motivated by this fact, we pose and address a new and practical problem, called the target control problem with objectives-guided optimization (TCO): how can we control the variables (or targets) of interest in a system by choosing driver nodes that minimize the total number of drivers while maximizing the number of constrained nodes among them. Here, we design an efficient algorithm (TCOA) to find the optimal driver nodes for controlling targets in complex networks. We apply TCOA to several real-world networks, and the results show that TCOA can identify more precise driver nodes than existing control-focus approaches. Furthermore, we have applied TCOA to two biomolecular expert-curated networks. Source code for TCOA is freely available from http://sysbio.sibcb.ac.cn/cb/chenlab/software.htm or https://github.com/WilfongGuo/guoweifeng. Conclusions Previous theoretical research on full control has observed that driver nodes tend to be low-degree nodes. However, for target control of biological networks, we find, interestingly, that the driver nodes tend to be high-degree nodes, which is more consistent with biological experimental observations. Furthermore, our results supply novel insights into how we can efficiently target-control a complex system, and in particular provide evidence of the practical utility of TCOA in incorporating prior drug information into potential drug-target forecasts. Our method thus paves a novel and efficient way to identify drug targets for driving phenotype transitions of the underlying biological networks. Electronic supplementary material The online version of this article (doi: 10.1186/s12864-017-4332-z) contains supplementary material, which is available to authorized users.
Background
A final goal of our efforts is to control the complex systems in our daily life. In the past decades, much attention [1–5] has been paid to the study of the structure and dynamics of complex networked systems, especially biological systems. A frontier area of research in network science and engineering is the control of complex networks, such as biological molecular networks. Since the governing laws of some systems are hidden, it is difficult to study the controllability of nonlinear systems directly, especially for large-scale biological systems. However, it is possible to obtain local analytical results for the controllability of nonlinear systems by developing control schemes for linear dynamic systems. Decades of effort on the controllability of linear dynamic networks have not only provided a sufficient condition for "local controllability" of a nonlinear system about a trim point, but have also resulted in tremendous advances in our understanding of the problem of controlling complex networked dynamical systems [5–9]. In a recent breakthrough, an efficient algorithm with low polynomial time was provided for computing the minimal number of input nodes needed to control any given large-scale directed network [6]. However, it was also shown that in the case of sparse inhomogeneous networks, such as most networks emerging from biochemical and biomedical applications, controlling the entire system is expensive. On the other hand, in terms of practical applications, in many cases it is enough to control only a certain well-selected portion of the network's nodes, such as the set of essential genes, in order to impose a certain overall behaviour on the system. Thus, an interesting question, known as the target control problem of complex networks, is posed: how can we choose the driver variables from the system to control a subset of the nodes (or a subsystem) about a trim point [8]?
However, the traditional framework of network control is applicable only to simple networks, and it cannot address the target control problem for large-scale networks. To solve this problem, Wu et al. proposed a method for the target control problem by constructing a weighted bipartite network [10]. But this method may fail when a perfect matching does not exist, as in most cases. Meanwhile, Gao et al. proposed another method which offers an approximation of the minimum set of input nodes for target-controlling networks [8]. However, the above studies only focus on controlling the system through any minimum driver-node set and ignore the existence of multiple candidate driver-node sets for controlling a targeted subset of the network. When we actually expect to control the system with objectives optimization, the different driver-node sets may not participate in target control equally. This consideration prompts us to study how to find the desired solution for target-controlling complex networks with objectives optimization. A practical instance of this consideration comes from our aim of combinatorial drug-target identification: we not only consider how to control the disease-associated genes with the minimum number of driver nodes, but also expect that more of the driver nodes are consistent with the set of well-known drug-target genes. Here, we pose a new objectives-guided target control problem of finding the optimal driver nodes that minimize the total number of drivers and maximize the number of constrained nodes within the drivers.
In this paper, we develop a novel algorithm (TCOA) to identify the drivers for efficiently controlling targets in complex networks. Our algorithm consists of three steps. We first construct the target control tree of the network by finding the maximum matching in the constructed iterated bipartite graph, or "linking and dynamic graph", and identify the controllable targets of each node by obtaining its reachable target nodes in the control tree. Then we find the set of optimal driver nodes by using integer linear programming to optimize a regulation factor, which is introduced to balance the number of driver nodes against the number of driver nodes within the set of constrained (or pre-selected) nodes. Finally, we define the maximum matchings of the constructed iterated bipartite graph, or "linking and dynamic graph", as a Markov chain and use a Markov chain Monte Carlo (MCMC) approach to sample from the set of all possible maximum matchings. We have evaluated TCOA on several real-world networks, and the experimental results show that TCOA outperforms existing control-focus approaches. In particular, we have also applied the TCOA algorithm to analyse the PPI signal transduction network in pancreatic cancer and the inflammatory bowel disease network from KEGG. The results further illustrate that TCOA can efficiently identify driver nodes with the desired optimality while guaranteeing that the system is target controllable, compared with several control-focus approaches. In addition, the experimental results on the two biological cases also supply an efficient bioinformatics tool to identify drug targets for driving phenotype transitions of the underlying biological networks.
Problem formulation
Since the governing laws of some network dynamics, such as biological networks, are hidden, it is difficult to study the controllability of nonlinear networks directly. Most complex systems are characterized by nonlinear interactions between components, and usually only local properties can be verified [11,12]. Thus, it is possible to obtain local analytical results for the controllability of nonlinear systems [13,14]. Here, we review a sufficient condition for "local controllability" of a nonlinear system about a trim point. A system is "locally controllable" if there exists a neighbourhood in the state space such that all initial conditions in that neighbourhood are controllable to all other elements in the neighbourhood with locally bounded trajectories [15]. Consider a dynamic system governed by a set of ordinary differential equations, dx(t)/dt = f(x(t)) + B u(t), y(t) = C x(t) (1), where the function f(x) denotes the system's dynamics.
Here x ∈ R^N and y ∈ R^NO represent the node states and the target (output) node states; the element B_ij of B ∈ R^(N×NC) indicates whether node v_i among V = {v_1, v_2, …, v_N} receives the j-th input signal [16–18]; and C ∈ R^(NO×N) is the output matrix.
We are interested in how to find a proper matrix B to guarantee that system (1) is locally target (or output) controllable through the input u = [u(1), u(2), …, u(NC)]. Let x_0 be a trim point with f(x_0) = 0, which provides the system's steady state; A(x_0) represents the system's local (linear) dynamics around the trim point x_0, and G(x_0) denotes the corresponding target (output) controllability matrix. The dynamics in (1) are locally target controllable around x_0 if rank(G(x_0)) = NO [13,14]. The local target controllability analysis of (1) about a trim point therefore reduces to the linear target controllability analysis of (2). The dynamics in Eq. (1) are deemed "locally structurally target controllable" if the linearized dynamics in Eq. (2) are structurally target controllable, and the linearized dynamics in Eq. (2) are structurally target controllable if max rank [CB, CAB, CA^2 B, …, CA^(N−1) B] = NO when we can choose the non-zero values in A and B. In a given directed network with nodes V = {v_1, v_2, …, v_N}, let O and D be the set of target nodes and the set of driver nodes, respectively, and assume that we expect more of the driver nodes to fall within a constrained node set. For the purposes of this work, the adjacency matrix of the network is used to obtain the structure of A in Eq. (2). Given the constrained node set Q, we focus on how to find a suitable driver node set D such that f_1 = min ‖D‖ and f_2 = max ‖Q ∩ D‖ (4), where ‖D‖ denotes the number of nodes in the identified driver node set D and ‖Q ∩ D‖ represents the number of drivers in the constrained (or pre-selected) node set Q; the objective functions f_1 and f_2 aim to find the optimal drivers with the minimum number of drivers D and the maximum number of drivers in the pre-selected node set Q, respectively. However, no existing methods efficiently solve this problem. For example, in Fig. 1, for simple network 1 we want to control the target nodes O = {v_3, v_4, v_6, v_7} with the minimum number of driver nodes and also expect to maximize the number of identified driver nodes within the constrained nodes Q = {v_2, v_4}. But Liu's approach [6] and Gao's approach [8] fail to find the optimal driver node set for target-controlling O = {v_3, v_4, v_6, v_7} (see Fig. 1). To overcome the limitations of the existing approaches to the target control problem in complex networks, we develop a novel objective-optimization algorithm (TCOA). The key ideas of TCOA for solving problem (4) are: 1) to find the controllable targets of each network node without destroying the target controllability of the whole system; and 2) to extract the optimal driver nodes of the network by using objectives-guided optimization with integer linear programming and Markov chain Monte Carlo (MCMC) sampling. The former guarantees that the identified subset satisfies the constrained condition, and the latter guarantees that the subset is optimal for the two objectives.
The framework of our TCOA. TCOA is an algorithm for detecting driver nodes that can best control a network. We adopt the paradigm of local (linear) controllability. Different from related works, our goal is not only to minimize the number of drivers, but also to maximize the number of drivers within a given pre-selected subset. The distinguishing feature of TCOA is the addition of different and more efficient strategies for finding the optimal driver nodes of the graph. The TCOA algorithm consists of three steps: i) identifying the controllable subsystem by constructing the target control tree; ii) finding the optimal drivers with integer linear programming (ILP); iii) further optimizing the driver set by using MCMC sampling. The details of TCOA are introduced below.
Step 1: Identifying the target-controllable subspace by constructing the target control tree. Definition 1: The target control tree is defined as a multi-layer network in which the nodes in the bottom layer are the target nodes and the nodes in the other layers consist of the upstream control nodes.
Note that in our previous research [19] we defined the upstream control nodes as the nodes which can control the target nodes in the network G(V,E). To efficiently obtain the target control tree for controlling targets in a complex network, a greedy algorithm is designed to construct the target control tree, as listed in Table 1. Note that in our greedy algorithm, we use the "linking and dynamic graph" to represent the iterated bipartite graph. The relationship between maximum matching in the "linking and dynamic graph" and the target control tree can be explained as follows: for each iterated bipartite graph in the "linking and dynamic graph", we obtain a maximum matching, which results in a sub-graph; in this sub-graph, the maximum matching determines which paired nodes are connected in the target control tree. The maximum matching of a given general bipartite graph can be efficiently obtained using the Hopcroft–Karp algorithm [20,21]. Since each matching can be computed in this way, the construction requires r maximum-matching computations, where r is the number of iterations at which the updated bipartite graphs are obtained.
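A minimal sketch of the iterated-matching construction described above, using the Hopcroft–Karp implementation in networkx; the bookkeeping of Table 1 is simplified, and the toy network below is illustrative rather than the one in Fig. 1.

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

def target_control_tree(G, targets):
    """Greedy construction of a target control tree (sketch).

    G       : directed networkx graph (edge u->v means u can influence v)
    targets : iterable of target nodes (bottom layer)
    Returns a list of (parent, child) pairs selected layer by layer.
    """
    tree_edges = []
    right = set(targets)                         # current layer to be matched
    for _ in range(len(G)):                      # at most |V| iterations
        # Build the bipartite graph: left = predecessors, right = current layer.
        B = nx.Graph()
        left = set()
        for v in right:
            for u in G.predecessors(v):
                left.add(("L", u))
                B.add_edge(("L", u), ("R", v))
        if not left:
            break
        B.add_nodes_from(("R", v) for v in right)
        matching = hopcroft_karp_matching(B, top_nodes=left)
        matched_left = set()
        for key, partner in matching.items():
            if key[0] == "L":                    # key = ("L", u), partner = ("R", v)
                tree_edges.append((key[1], partner[1]))
                matched_left.add(key[1])
        if not matched_left:
            break
        right = matched_left                     # matched left nodes form the next layer
    return tree_edges

# Small illustrative network (not the network of Fig. 1).
G = nx.DiGraph([(1, 3), (2, 3), (2, 6), (4, 6), (5, 7), (4, 4)])
print(target_control_tree(G, targets=[3, 6, 7]))
```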
Theorem (Target-controllable subsystem identification theorem): In the target control tree, a target node v_j in the bottom layer can be controlled by a node v_i in an upper layer if v_j is accessible from v_i.
This result of our theorem (the details of the proof are given in supplementary note 1 of Additional file 1) allows us to find the controllable targets of node v_i, denoted by TCS(v_i). The term "control" means that when we apply control signals to node v_i, the states of the nodes in TCS(v_i) can be changed to any stable or unstable target state from any initial state in finite time. Following our theorem, we can apply the Breadth First Search (BFS) algorithm [22,23] to the target control tree to obtain the controllable targets TCS(v_i) of each node v_i. The procedure of BFS is shown in Table 2. Obviously, the maximum complexity of the BFS algorithm for identifying the controllable targets of all nodes in the target control tree TCT ≡ (V_TCT, E_TCT), where V_TCT are the nodes and E_TCT are the edges of TCT, is of the order O(‖V_TCT‖·‖E_TCT‖) [22,23]. Fig. 1 caption: Demonstration of the limitations of the existing methods for the target control problem with objectives-guided optimization. a Two simple networks. In the two networks, the target sets are {v_3, v_4, v_6, v_7} and {v_3, v_4, v_6} respectively (highlighted in green) and the constrained node sets are {v_2, v_4} and {v_1} respectively (shown as hexagons). Here we want to minimize the number of driver nodes needed to control the target node sets {v_3, v_4, v_6, v_7} and {v_3, v_4, v_6} (i.e., disease-associated genes) and to maximize the number of identified driver nodes within the constrained nodes (i.e., practical constraints such as prior-known drug targets). b By applying the full control of Liu's method to the two networks, we identify the unmatched nodes {v_1, v_3, v_5} and {v_1, v_2} (nodes within the blue circle) on the right side of the bipartite graph transferred from the directed network as the driver nodes (more details in Ref. [6]). c Using the target control of Gao's method, one first obtains the updated bipartite graph by choosing the nodes on the left side of the previous matching (highlighted in grape) as the nodes on the right side of the current matching (highlighted in green), and then calculates a maximum matching in the updated bipartite graph. Finally, the unmatched nodes (within the blue circle) on the right side of the updated bipartite graph are added to the set of driver nodes (more details in Ref. [8]), which identifies the driver node sets of the two networks as {v_1, v_3, v_4} and {v_1, v_2}. In simple example 1, according to the k-walk theory in Ref. [8], node v_2 can control v_3 and v_6 and node v_4 can control v_4 and v_6. For simple example 2, this follows from the fact that removing a link does not decrease the number of driver nodes: for example, when we remove the link from node v_4 to node v_3 in Fig. 2, according to the k-walk theory in Ref. [8], node v_1 can control v_3, v_4 and v_6. Therefore, the nodes {v_2, v_4} and {v_1}, which are the optimal driver nodes for the two networks, are missed by the existing methods.
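A minimal sketch of the BFS step described above, operating on the (parent, child) edges of a target control tree; the toy edges in the example are illustrative.

```python
from collections import deque, defaultdict

def controllable_targets(tree_edges, targets):
    """BFS on the target control tree: TCS(v) = target nodes reachable from v.

    tree_edges : iterable of (parent, child) pairs from the target control tree
    targets    : set of target (bottom-layer) nodes
    """
    children = defaultdict(list)
    nodes = set()
    for u, v in tree_edges:
        children[u].append(v)
        nodes.update((u, v))
    targets = set(targets)

    tcs = {}
    for start in nodes:
        reached, visited, queue = set(), {start}, deque([start])
        while queue:
            u = queue.popleft()
            if u in targets:
                reached.add(u)
            for w in children[u]:
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
        tcs[start] = reached
    return tcs

# Example reusing the kind of tree edges produced by the previous sketch.
print(controllable_targets([(2, 3), (2, 6), (5, 7)], targets={3, 6, 7}))
```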
On the other hand, Liu et al. determined the controllable subsystem of any node in a network via linear programming [24], while Wang et al. proposed a concept called control range to identify the controllable subsystem [25]. However, these two existing methods are still not efficient for identifying the target-controllable subsystem. In Figs. 2 and 3 we give an intuitive illustration of how we find the controllable targets of each network node. As shown in Fig. 2, we want to control the state transition of {v_3, v_4, v_6}. Using Liu's approach [24] and Wang's approach [25], both identify the controllable targets of v_1 as {v_3, v_4}. However, using our method, the identified target-controllable subsystem of node v_1 is {v_3, v_4, v_6} (see Figs. 2 and 3).
Step 2: Identifying the optimal driver nodes by integer linear programming. We first introduce an outlier measurement on a set of driver nodes that quantifies the number of driver nodes outside the pre-selected node set, where x_i = 1 when node v_i belongs to the driver set and Σx_i denotes the number of driver nodes, and y_j = 1 when node v_j is a driver belonging to the pre-selected set Q and Σy_j denotes the number of driver nodes in the pre-selected node set.
To take into account both the number of driver nodes and the number of driver nodes in the pre-selected set, we define the weight W(M). Note that W(M) is only one candidate measurement of the trade-off between the number of driver nodes and the number of driver nodes in the pre-selected set. After obtaining the controllable targets of each network node and the weight of the driver node set, the optimal driver nodes guaranteeing that the constrained targets are controllable can be approximately determined by an integer programming model, problem (5). In problem (5), the objective (5a) is to obtain the optimal driver node set with the minimum number of driver nodes and the maximum number of driver nodes in the pre-selected node set. Constraint (5b) requires that at least one driver node can control each target node. Constraint (5c) states that the value of y_j is equal to that of x_j if node v_j belongs to the pre-selected set Q. Table 1: Our greedy algorithm for constructing the target control tree for network G(V, E). Input: network G(V, E), target nodes O. Initialize: B_0 ← (R_0, L_0) // the right side R_0 consists of the target nodes O, and the left side L_0 consists of the nodes from which the targets are reachable. m_0 ← Matching(B_0) // find a maximum matching. While L_n ≠ ∅ (n ≥ 1) do: R_n ← L_(n−1) // let the set of nodes in L_(n−1) be the new R_n; B_n ← (R_n, L_n) // obtain a new bipartite graph B_n; m_n ← Matching(B_n) // calculate a maximum matching m_n in B_n; V_n = V_n−. In fact, problem (5) can be efficiently solved for thousands of variables with an LP-based classic branch-and-bound method [26,27].
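A minimal sketch of the driver-selection step as a binary program, using SciPy's milp; since the exact form of the objective (5a) and of W(M) is not fully reproduced in the text, a simple weighted trade-off lam is assumed here, and the TCS sets in the example are illustrative.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def select_drivers(tcs, targets, constrained, lam=0.5):
    """Pick drivers: minimize their number while rewarding drivers in `constrained`.

    tcs         : dict node -> set of targets it can control (from the control tree)
    targets     : list of target nodes (each must be covered by >= 1 driver)
    constrained : pre-selected node set Q
    lam         : assumed trade-off weight between the two objectives
    """
    nodes = sorted(tcs)
    idx = {v: i for i, v in enumerate(nodes)}

    # Objective (minimized): sum_i x_i - lam * sum_{i in Q} x_i
    c = np.array([1.0 - (lam if v in constrained else 0.0) for v in nodes])

    # Coverage: for each target t, the drivers that can control t must sum to >= 1.
    A = np.zeros((len(targets), len(nodes)))
    for r, t in enumerate(targets):
        for v in nodes:
            if t in tcs[v]:
                A[r, idx[v]] = 1.0
    cons = LinearConstraint(A, lb=np.ones(len(targets)), ub=np.inf)

    res = milp(c=c, constraints=[cons], integrality=np.ones(len(nodes)),
               bounds=Bounds(0, 1))
    return [v for v in nodes if round(res.x[idx[v]]) == 1]

# Illustrative TCS sets with Q = {2, 4}: the solver prefers drivers inside Q.
tcs = {1: {3, 4}, 2: {3, 6}, 4: {4, 6}, 5: {7}}
print(select_drivers(tcs, targets=[3, 4, 6, 7], constrained={2, 4}))  # -> [2, 4, 5]
```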
Step 3: Optimizing the driver nodes by MCMC sampling. Here we define the maximum matchings of the "linking and dynamic graph" as a Markov chain. The state space of the Markov chain is the set of all possible maximum matchings of the "linking and dynamic graph". In our greedy algorithm, for a given node different sets of controllable targets can be found when different maximum matchings are obtained (e.g. the red edges in Fig. 3). That is, for different Markov chain states, different sets of driver nodes with different weights W(M) can be obtained. The optimal states need to be sampled from the state space, and a Markov chain Monte Carlo (MCMC) method [28] is therefore used. The MCMC approach samples sets of maximum matchings, with the probability of sampling a set M proportional to the weight W(M) of the set. Thus, the frequencies of the maximum matching sets in the MCMC method provide a ranking of maximum matching sets, ordered by decreasing sampling frequency. This will prove useful in the analysis of real network data below.
The basic idea of MCMC, applied to our objective optimization, is to build a Markov chain whose states are the collections of k adjoining paths connecting to the target nodes in the "linking and dynamic graph" and to define transitions between states that differ by one target node. With the Metropolis–Hastings algorithm [29], we sample sets of maximum matchings M_G of k adjoining paths in the iterated bipartite graph with a stationary distribution proportional to exp(c·W(M)) for some c > 0, which gives the desired stationary distribution on the state space. This will prove useful in the analysis of real data below. The MCMC method proceeds as follows. Initialization: using the greedy algorithm, obtain the initial Markov chain state M_0.
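A minimal Metropolis–Hastings sketch of the sampling step described above; it assumes a user-supplied symmetric proposal between neighbouring maximum matchings and a weight function W(M), both of which are placeholders here.

```python
import math
import random

def mcmc_sample_matchings(initial, propose, weight, c=1.0, steps=10000, seed=0):
    """Metropolis-Hastings sampling of maximum matchings (sketch).

    initial : an initial maximum matching M0 (e.g. from the greedy algorithm),
              represented as an iterable of hashable edges
    propose : function M -> M' returning a neighbouring maximum matching
              (assumed to be a symmetric proposal)
    weight  : function M -> W(M), the score to be favoured
    Samples M with stationary distribution proportional to exp(c * W(M)).
    """
    rng = random.Random(seed)
    current, w_cur = initial, weight(initial)
    counts = {}
    for _ in range(steps):
        candidate = propose(current)
        w_new = weight(candidate)
        # Accept with probability min(1, exp(c * (W(M') - W(M)))).
        if math.log(rng.random() + 1e-300) < c * (w_new - w_cur):
            current, w_cur = candidate, w_new
        key = frozenset(current)
        counts[key] = counts.get(key, 0) + 1
    # Rank matchings by sampling frequency (most frequently visited first).
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```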
The complexity analysis of TCOA
The TCOA method contains three parts. (i) For constructing the target control tree of the network G(V,E), we apply the developed greedy algorithm to find the maximum matchings of the "linking and dynamic graph". The greedy algorithm runs in the order of O(r·√‖V‖·‖E‖), where r denotes the number of iterations over the iterated bipartite graph.
(ii) In the phase of finding the controllable targets of all network nodes, we apply the BFS algorithm to the constructed target control tree. The maximum complexity of the BFS algorithm for finding the controllable targets of all nodes is therefore of the order O(‖V_TCT‖·‖E_TCT‖), where V_TCT denotes the nodes and E_TCT the edges of the target control tree TCT (V_TCT = V, ‖E_TCT‖ ≤ ‖E‖). (iii) In the phase of finding the optimal driver node set, the optimal driver nodes are obtained by integer linear programming. Specifically, we used a branch-and-bound algorithm, an automatic method with a greedy O(log(‖V‖)) for solving discrete programming problems, as implemented by the function intlinprog of the MATLAB programming language, to solve our binary integer-programming problem [30]. To find more optimal solutions, the MCMC sampling method is adopted, so that the overall complexity of our TCOA approach can be approximated as O(m·‖V‖·‖E_TCT‖), where m is the number of samples and E_TCT is the edge set of the target control tree.
Experiment results of real-world networks
To evaluate target control efficiency on an arbitrary network, we first introduce two factors α and β, which represent the ratio of target nodes and of constrained (or pre-selected) nodes to the whole set of network nodes, respectively. To target-control the target nodes O(α), our TCOA identifies the optimal node set D(α, β) with the minimum number of driver nodes and the maximum number of driver nodes contained in a given constrained set Q(β). ‖D(α, β)‖/‖O(α)‖ denotes the ratio of the number of identified drivers to the number of targets, and ‖D(α, β) ∩ Q‖/‖D(α, β)‖ denotes the ratio of the number of identified drivers among the constrained (or pre-selected) nodes to the number of all identified drivers. We then introduce two target controllability parameters. One, E_1, is the average ratio of drivers to targets, which reflects the cost of controlling targets in the complex network. The other, E_2, is the average ratio of constrained (or pre-selected) nodes among the drivers to all the drivers, which reflects the verifiability of the identified driver nodes in target-controlling the network. Note that when α = 1 and β = 1, both E_1 and E_2 reduce to the fraction of drivers needed to control the full network. When 0 < α < 1 and β = 1, E_1 reduces to the fraction of driver nodes needed to control the target nodes, and E_2 = 1. In this paper, however, we focus on the problem of identifying, when 0 < α < 1 and 0 < β < 1, the optimal driver nodes that minimize the measure E_1 and maximize the measure E_2. In fact, we selected α = 0.1, 0.2, …, 1 and β = 0.1, 0.2, …, 1, and applied our TCOA to calculate the two target controllability parameters E_1 and E_2. We obtained the data for the real-world networks from [7,8]; for convenience, we provide the data description in Additional file 2: Table S1. The results on these real networks are listed in Table 3.
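A minimal sketch of the two controllability measures for a single run (the paper averages them over repeated runs and subset choices); the sets in the example are illustrative.

```python
def target_control_efficiency(drivers, targets, constrained):
    """E1 = |D| / |O| (control cost), E2 = |D ∩ Q| / |D| (verifiability)."""
    D, O, Q = set(drivers), set(targets), set(constrained)
    e1 = len(D) / len(O) if O else 0.0
    e2 = len(D & Q) / len(D) if D else 0.0
    return e1, e2

# Example with the toy solution from the ILP sketch: D = {2, 4, 5}, O = {3, 4, 6, 7}, Q = {2, 4}.
print(target_control_efficiency({2, 4, 5}, {3, 4, 6, 7}, {2, 4}))  # -> (0.75, 0.666...)
```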
We can conclude that, compared to the existing method, TCOA efficiently identifies driver nodes with the optional property while guaranteeing that the system remains target controllable. Moreover, with increasing size of Q, our TCOA receives more and more guidance and is expected to outperform Gao's method, which does not take a constrained set as input. From Table 3, we find that our TCOA not only finds more driver nodes contained in the constrained node set Q but also identifies fewer driver nodes overall.
The novelty of our TCOA lies in the proposed analysis framework, consisting of the target control tree, the ILP model, and MCMC sampling for improving efficiency. In addition to the algorithm comparison between TCOA and other existing methods, we have also carried out further comparisons to investigate the contributions of ILP and MCMC within TCOA. To evaluate the advantage of ILP and MCMC sampling, we list the result of our TCOA without MCMC sampling (with ILP only) and the result of our full TCOA. From Table 3, we find that our TCOA performs better than TCOA with ILP only and without MCMC sampling, strongly supporting the efficiency of the MCMC sampling. We also find that our TCOA achieves better results than Gao's method even without MCMC sampling.
Case studies on PPI signaling transduction networks in pancreatic cancer
As further evidence of the applicability of TCOA, we have applied TCOA to PPI signaling transduction networks in pancreatic cancer. The main cause of cancer is genetic and epigenetic alterations, which allow normal cells to over-proliferate as tumor cells [31]. To comprehensively understand the specificity of signaling networks, we have to understand how distinct pathways communicate with each other and how proteins of one pathway interact with related signaling components. Here, to understand the various information-processing abilities employed during the molecular alteration of cancerous cells [32], we obtained a directed PPI network of 1569 interactions among 991 nodes in pancreatic cancer. The directed PPI cancer data are taken from the SIGNOR (SIGnaling Network Open Resource) database [33], which outputs binary matrix representations for user-provided protein lists and allows us to create directed graphs between signaling entities. The networks are available in the Network Controllability Project [34] or in (Additional file 3: Table S2). In our paper, a total of 1507 proteins (or genes) approved by the Food and Drug Administration (FDA), which have a prior-known molecular mechanism, have been selected as the constrained (or pre-selected) nodes in the directed PPI network; see (Additional file 4: Table S3). As is well known, among the hundreds of genomic alterations in various biological pathways [35,36], only a subset of altered proteins, called essential proteins, drives disease initiation and progression. In Ref. [34], researchers collected essential gene data for all cancers from the COLT-Cancer database [37]. In particular, they considered the HPAF-II cell line for pancreatic cancer and followed the GARP (Gene Activity Rank Profile) score and GARP P-value of the corresponding proteins in the database. Since previous studies showed that proteins with a lower GARP score are more essential and directly associated with oncogenesis [38], they selected only those essential proteins whose GARP value is in the negative range and whose GARP P-value is less than 0.05 (p ≤ 0.05). Following the above criteria, 168 essential proteins available in the SIGNOR PPI network for pancreatic cancer were identified and selected as the targets to be controlled by the input signals. The essential protein data can be seen in (Additional file 5: Table S4).
Our TCOA focuses on how to identify the optional driver proteins with the minimum quantity of drivers and the maximum number of constrained FDA-approved proteins, in order to control the essential target proteins. We have also applied Liu's method [6] to control the full network, and applied Gao's method [8] and our method to obtain the driver nodes for target controlling the network. The results, shown in Table 4, indicate that TCOA identifies fewer driver nodes than Liu's method [6] and Gao's method [8]. Furthermore, among the driver nodes, we also obtain more drug-targetable nodes.
Furthermore, in Supplementary note 5 of Additional file 1, we also give the capacity [39] and the corresponding clinical information of the driver proteins identified by our algorithm TCOA. (Table 3 legend: we list the network type, network name, quantity of nodes in the network (N), quantity of edges in the network (L), the average degree of the network <k>, the average ratio of drivers to targets using Gao's method (E1^1), our TCOA without MCMC sampling (E1^2), and our TCOA (E1^3), respectively, and the average ratio of drivers within the set of constrained (or pre-selected) nodes to all identified drivers using Gao's method (E2^1), our TCOA without MCMC sampling (E2^2), and our TCOA (E2^3), respectively. More detailed descriptions of the real-world networks, including the network types, names and references, quantity of nodes and edges, brief descriptions, and download websites, are given in Additional file 2: Table S1.) In Supplementary note 5 of Additional file 1, we have analyzed the drug-targetable proteins identified by TCOA as part of the strategies to control the cancer essential proteins, and we found that most of the top-20 proteins could be direct targets in cancer therapy. We also looked for anti-cancer drugs for the drug-target proteins identified by TCOA; the results are likewise listed in Supplementary note 5 of Additional file 1. We find that in some cases they have been used in current cancer-type-specific drugs and drug therapies. Among the 42 identified driver genes, 34 have not been previously reported as drug targets. This suggests that our TCOA will be very useful in identifying potential drug targets.
Case studies on inflammatory bowel disease network from KEGG
The causes of the common forms of idiopathic inflammatory bowel disease (IBD) remain unclear despite considerable progress [40]. Here, we utilize the network from KEGG [41], as listed in (Additional file 6: Table S5). The network consists of 4798 nodes and 105,606 directed and undirected edges (or bi-directed edges). To identify drug targets, a total of 702 proteins (or genes) approved by the Food and Drug Administration (FDA) from the DrugBank database [42] have been selected in the network as the constrained node set in our TCOA; see (Additional file 7: Table S6).
In this study, we consider the genes in the inflammatory bowel disease (IBD) pathways, listed in (Additional file 6: Table S5), as the target nodes in our TCOA. We apply both Gao's algorithm [8] and our algorithm to analyze the target controllability of the IBD-related network. The results are shown in Table 4. We find that the number of driver nodes needed to control the whole network with Liu's algorithm [6] exceeds the number of target nodes, so controlling the full network is unnecessary. Furthermore, according to our analysis, the number of driver nodes needed to control the target genes is much smaller with our method than with Gao's method [8]. Our TCOA also found sets of driver nodes containing more drug-targetable nodes, whereas Liu's method [6] and Gao's method [8] cannot detect drug-targetable nodes as drivers, which indicates the applicability of TCOA. In addition, we calculate the frequency with which each network node acts as a driver during MCMC sampling (the control capacity [39]), as shown in Fig. 4. As can be seen, STAT3, IL22, MAF, and TLR5 have higher probabilities of being potential drivers able to change the states of disease-related genes. Existing research has reported that IL-22 and TLR5 can be therapy targets for IBD [43]. These results suggest that STAT3 and MAF can be future drug targets for IBD therapy.
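A small Python sketch of how such per-node driver frequencies (control capacities) could be tallied from MCMC samples is given below; the sample data and function name are hypothetical.

```python
from collections import Counter

def control_capacity(driver_sets):
    """Fraction of MCMC samples in which each node appears as a driver."""
    counts = Counter()
    for drivers in driver_sets:
        counts.update(drivers)
    n = len(driver_sets)
    return {node: c / n for node, c in counts.items()}

# Hypothetical samples of driver sets drawn during MCMC
samples = [{"STAT3", "IL22"}, {"STAT3", "TLR5"}, {"STAT3", "MAF", "IL22"}]
print(control_capacity(samples))  # e.g. STAT3 -> 1.0, IL22 -> 0.67, ...
```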
General topological properties of driver nodes in the pancreatic cancer network and the inflammatory bowel disease network
We have analyzed several topological properties of the drug-target proteins included by TCOA in the set of driver nodes. We calculated the average degree and the average betweenness centrality of these drug-target proteins under our target control scheme, and compared them with Liu's full control scheme [6] and with the average values over the entire networks. We find that, in the disease networks, the drug-target driver nodes have a higher average degree than the average over the entire networks, as shown in Fig. 5a. This shows that, for target control of the networks, the driver nodes tend to be high-degree nodes. In addition to the summarized results on the two biological networks, we illustrate the network information of the validated results in Supplementary note 6 of Additional file 1: Figure S5. From Additional file 1: Figure S5, we also find that nodes with higher capacity have higher degrees in the network, which supports the statistical results in Fig. 5. (Table 4 legend: the columns represent, per disease network, the different methods, the fraction of drivers relative to the target nodes (f1), and the fraction of driver nodes within FDA drug-target nodes relative to all driver nodes (f2). Fig. 4 caption: the frequency (fd) of the identified driver nodes within the constrained node set over MCMC samplings in the IBD disease network.) Under the full control scheme of previous theoretical research [6], the observation and conclusion is that driver nodes tend to be low-degree nodes, as shown in Fig. 5a. However, for target control of the biological networks, we interestingly find that driver nodes tend to be high-degree nodes (Fig. 5a), which is more consistent with biological experimental results. The drug-target driver nodes also tend to have a higher average betweenness centrality, as shown in Fig. 5b, indicating that driver nodes act as highly traversed bridges in the networks. By contrast, driver nodes display different weights of closeness centrality in particular disease networks, as shown in Fig. 5c, which may mean that the modularity around driver nodes undergoes significant rewiring under different conditions [32].
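As an illustration of how such topological comparisons could be computed, the following NetworkX-based sketch contrasts the average degree and betweenness centrality of a driver-node set with the network-wide averages; the toy graph and driver set are assumptions, not the paper's data.

```python
import networkx as nx

def driver_topology_stats(G, driver_nodes):
    """Compare average degree and betweenness of drivers vs. the whole network."""
    bc = nx.betweenness_centrality(G)
    degrees = dict(G.degree())
    def avg(d, nodes):
        return sum(d[n] for n in nodes) / len(nodes)
    return {
        "driver_avg_degree": avg(degrees, driver_nodes),
        "network_avg_degree": avg(degrees, G.nodes()),
        "driver_avg_betweenness": avg(bc, driver_nodes),
        "network_avg_betweenness": avg(bc, G.nodes()),
    }

# Toy directed network and a hypothetical driver set
G = nx.gnp_random_graph(50, 0.08, seed=1, directed=True)
print(driver_topology_stats(G, driver_nodes=[0, 3, 7]))
```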
Discussions
In fact, in our previous research we studied another target control problem, the constrained target control (CTC) problem [19], which focuses on how to choose minimal drivers only within the set of constrained control nodes to change the states of the maximal set of targets. Different from CTC, we do not require that all selected driver nodes be in the constrained node set, and our TCOA formulation involves a double optimization: minimizing the total number of driver nodes (on which a subsequent intervention is needed) and maximizing the percentage of constrained nodes among them (for which the findings are consistent with prior knowledge). Our results supply novel insights into how to efficiently target control a complex system, and in particular much evidence for the practical utility of TCOA in incorporating prior drug information into potential drug-target forecasts. However, this study is limited to state transitions of linear networks. Target control of systems with nonlinear dynamics is a more practical and necessary direction for future work. | 8,689.8 | 2018-01-01T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Calibration Method of an Evaluation System for the Body Conduction Sound Sensor
In recent years, various household medical devices have been developed. Body conduction sound sensors that detect body sounds, such as heart sounds, breath sounds, and various other sounds in the body, are attracting attention because they are easy to measure. In this research, we aim to build an inexpensive evaluation system for body conduction sound sensors and to evaluate such a sensor with it. An evaluation speaker is made for the system. In order to calibrate the evaluation speaker, it is necessary to measure the amplitude of the vibration output from the speaker. A laser Doppler vibrometer is used for this measurement, and we calculate the frequency characteristic of the body conduction sound sensor using the measured results.
Introduction
In recent years, various household medical devices have been developed. Body conduction sound sensors that detect body sounds, such as heart sounds, breath sounds, and various other sounds in the body, are attracting attention because they are easy to measure (1)(2)(3)(4)(5)(6)(7). The solitary death of elderly people is recognized as a social problem in Japan. It is difficult to implement a system that monitors them using cameras because of the lack of privacy. There is a real need for a new monitoring system that is conscious of privacy. Such monitoring becomes possible by using a body conduction sound sensor.
Evaluation of a prototype body conduction sound sensor is necessary for its development. To measure the frequency characteristic of the body conduction sound sensor, an evaluation method using a vibrator has been reported (8).
However, the vibrator is very expensive. In this research, we therefore aim to build an inexpensive evaluation system for evaluating body conduction sound sensors and to evaluate the sensor with it.
An evaluation speaker is made for the system. In order to calibrate the evaluation speaker, it is necessary to measure the amplitude of the vibration output from the speaker. A laser Doppler vibrometer is used for this measurement, and we calculate the frequency characteristic of the body conduction sound sensor using the measured results.
The Body Conduction Sound Sensor
The body conduction sound sensor is made from an improved small electret condenser microphone (Primo Co., EM189T). A microphone whose case has a hole drilled in it so that the diaphragm is exposed is used. Figure 1 shows the container of the sensor, which is made of polycarbonate. Figure 2 is a picture of the sensor. The microphone is fixed in the center of the container, which is then filled with polyurethane resin so that the sensor can detect sounds inside the body.
The Evaluation System
Figure 3 shows the setup of the experiment to measure the frequency characteristic of the body conduction sound sensor. In the system, an evaluation speaker is used to evaluate the body conduction sound sensor. When the body conduction sound is measured on an actual human body, it is difficult to obtain the same waveform each time, and the reproducibility is low. By using the evaluation speaker, it is possible to output the same waveform and measure with high reproducibility. The structure of the evaluation speaker is shown in Figure 4. A speaker (Tokyo Cone Paper MFG. Co., Ltd., F77G98-6) is installed in the center of a transparent glass container, which is filled with the same polyurethane resin used for the body conduction sound sensor. Figure 5 is a photograph of the produced evaluation speaker. There are two kinds of polyurethane resin, one transparent and one white. In Fig. 5, the transparent one is used to make the structure easy to understand. In the experiment, we used the white one, because measurement with the laser Doppler vibrometer was impossible with the transparent resin. (The laser Doppler vibrometer is explained in the next section.) In the measurement, we place the body conduction sound sensor in close contact with the evaluation speaker. A sine wave signal is output from the computer, and the vibration generated by the evaluation speaker is measured by the body conduction sound sensor. We measure the signal input to the evaluation speaker and the signal obtained from the body conduction sound sensor at the same time and measure the amplitude and the phase difference of the two signals. By changing the frequency of the sinusoidal wave generated by the speaker, we obtain the frequency amplitude characteristic and the frequency phase characteristic of the body conduction sensor.
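One simple way to extract the amplitude and phase difference at each driving frequency is to correlate both recorded signals with a complex exponential at that frequency (a single-bin DFT). The sketch below is an illustrative Python implementation under that assumption, not the authors' analysis code; the function names and toy signals are hypothetical.

```python
import numpy as np

def amp_phase_at(signal, f, fs):
    """Complex amplitude of `signal` at frequency f (Hz), sampled at fs (Hz)."""
    t = np.arange(len(signal)) / fs
    # Single-bin DFT: project the signal onto exp(-j*2*pi*f*t)
    return 2.0 * np.mean(signal * np.exp(-2j * np.pi * f * t))

def frequency_response(x_in, y_out, f, fs):
    """Amplitude ratio and phase difference (rad) of y_out relative to x_in at f."""
    h = amp_phase_at(y_out, f, fs) / amp_phase_at(x_in, f, fs)
    return np.abs(h), np.angle(h)

# Toy example: 1 kHz drive, 50 kHz sampling, simulated sensor with gain 0.5 and 30 deg lag
fs, f = 50_000, 1_000
t = np.arange(131_072) / fs
drive = np.sin(2 * np.pi * f * t)
sensor = 0.5 * np.sin(2 * np.pi * f * t - np.deg2rad(30))
print(frequency_response(drive, sensor, f, fs))  # approx. (0.5, -0.524)
```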
Calibration of evaluation speaker
For accurate measurement, calibration of the evaluation speaker is indispensable. In this study, a laser Doppler vibrometer is used to measure the vibration generated by the evaluation speaker. Figure 6 shows the setup of the experiment to calibrate the evaluation speaker using the laser Doppler vibrometer. Figure 7 is a photograph of the laser Doppler vibrometer used in this study. We measure the signal input to the evaluation speaker and the signal obtained from the laser Doppler vibrometer at the same time and measure the amplitude and the phase difference of the two signals. By changing the frequency of the sinusoidal wave generated by the speaker, we obtain the frequency amplitude characteristic and the frequency phase characteristic of the evaluation speaker. By correcting the measurement result of the body conduction sound sensor using the measured frequency characteristic of the evaluation speaker, we can obtain the correct frequency characteristic of the sensor.
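The correction step amounts to dividing the sensor's measured complex response by the speaker's response at each test frequency. A minimal sketch under that assumption follows; the arrays and function name are hypothetical.

```python
import numpy as np

def calibrate(sensor_amp, sensor_phase, speaker_amp, speaker_phase):
    """Remove the evaluation speaker's own response from the sensor measurement.

    All inputs are arrays over the same set of test frequencies:
    amplitudes (linear) and phases (rad) measured relative to the drive signal.
    """
    h_sensor = sensor_amp * np.exp(1j * sensor_phase)     # sensor as seen through the speaker
    h_speaker = speaker_amp * np.exp(1j * speaker_phase)  # speaker alone (laser Doppler vibrometer)
    h_corrected = h_sensor / h_speaker
    return np.abs(h_corrected), np.angle(h_corrected)

# Toy values at three frequencies
amp, phase = calibrate(np.array([0.4, 0.5, 0.3]), np.array([-0.2, -0.5, -0.9]),
                       np.array([0.8, 0.9, 0.6]), np.array([-0.1, -0.3, -0.6]))
print(amp, phase)
```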
Result
We measured the frequency characteristic of the evaluation speaker using the laser Doppler vibrometer. The frequency of the sine wave output from the computer is 100 to 10,000 Hz, the sampling frequency is 50,000 Hz, and the number of data points is 131,072. The measurement result is shown in Figure 8: (a) shows the frequency amplitude characteristic, and (b) shows the frequency phase characteristic. It is found that the amplitude characteristic of the evaluation speaker peaks around 200 Hz, and the amplitude becomes smaller as the frequency becomes higher. Regarding the phase characteristic, it was confirmed that the phase lag increases as the frequency increases.
The frequency characteristic of the body conduction sound sensor is measured using the evaluation speaker. The measurement conditions are the same as for the measurement of the evaluation speaker. Figure 9 shows the amplitude characteristic and the phase characteristic of the body conduction sound sensor. We can calibrate this measurement result using the measured frequency characteristic of the evaluation speaker. The calibration results are shown in Figure 10. For the amplitude characteristic of the body conduction sound sensor, it is confirmed that the amplitude increases with increasing frequency up to 10,000 Hz.
Conclusions
In this paper, we made a body conduction sound sensor and an evaluation speaker. We built an evaluation system to evaluate the body conduction sound sensor and measured its frequency characteristic. To calibrate the evaluation speaker, a laser Doppler vibrometer was used, and we calculated the frequency characteristic of the body conduction sound sensor using the measured results.
From the measurement results, it is confirmed that the amplitude of the body conduction sound sensor increases with increasing frequency up to 10,000 Hz and that the phase is delayed as the frequency becomes higher.
In this study, we established an inexpensive system that can evaluate the body conduction sound sensor.
Fig. 1. Structure of the container of the sensor.
Fig. 3. The setup to measure the frequency characteristic of the body conduction sound sensor.
Fig. 4. The structure of the evaluation speaker.
Fig. 6. The setup to calibrate the evaluation speaker using a laser Doppler vibrometer.
Fig. 8. The frequency characteristic of the evaluation speaker: (a) amplitude characteristic, (b) phase characteristic.
Fig. 9. The frequency characteristic of the body conduction sound sensor: (a) amplitude characteristic, (b) phase characteristic.
Fig. 10. The frequency characteristic of the body conduction sound sensor after calibration: (a) amplitude characteristic, (b) phase characteristic.
"Engineering",
"Medicine"
] |
Sustainable Nanomaterials for Biomedical Applications
Significant progress in nanotechnology has enormously contributed to the design and development of innovative products that have transformed societal challenges related to energy, information technology, the environment, and health. A large portion of the nanomaterials developed for such applications is currently highly dependent on energy-intensive manufacturing processes and non-renewable resources. In addition, there is a considerable lag between the rapid growth in the innovation/discovery of such unsustainable nanomaterials and their effects on the environment, human health, and climate in the long term. Therefore, there is an urgent need to design nanomaterials sustainably using renewable and natural resources with minimal impact on society. Integrating sustainability with nanotechnology can support the manufacturing of sustainable nanomaterials with optimized performance. This short review discusses challenges and a framework for designing high-performance sustainable nanomaterials. We briefly summarize the recent advances in producing sustainable nanomaterials from sustainable and natural resources and their use for various biomedical applications such as biosensing, bioimaging, drug delivery, and tissue engineering. Additionally, we provide future perspectives into the design guidelines for fabricating high-performance sustainable nanomaterials for medical applications.
Introduction
The world population has rapidly increased from 7 billion to 8 billion in the last decade. This has placed tremendous pressure and a socio-economic burden on healthcare systems to provide better and more affordable care that protects people from infectious and life-threatening diseases. Additionally, the world is facing grave environmental, climate, and energy challenges. In 2015, the United Nations (UN) established 17 sustainable development goals (SDGs) to address such challenges. These SDGs aim to eradicate poverty, provide better healthcare to various communities, and tackle societal challenges using renewable and sustainable materials. The UN has recognized the role of nanotechnology in achieving 13 out of 17 SDGs by 2030. Nanotechnology has emerged as a game-changing technology for fabricating nanoscale materials. A large surface area to volume ratio and the unique size-, shape-, and composition-dependent characteristics of nanomaterials make them suitable for various practical applications from biomedical to renewable energy and the environment. In the biomedical field, the scientific revolution of nanotechnology has witnessed the discovery of mRNA vaccines for COVID-19 using lipid nanoparticles [1], the development of wearable medical devices/sensors [2], and wireless bandages stimulating wound healing for people living in both urban and rural regions [3].
Nanotechnology holds great promise for developing the next generation of medical devices and sensors [4,5], implants [6], nanovaccines [7], diagnostics [8,9], and therapeutic technologies (Figure 1) [10,11]. In the last two decades, intensive research attempts have been made to develop manufacturing strategies involving top-down (lithography and …) and bottom-up approaches. Sustainable nanotechnology encompasses the safe manufacturing of sustainable nanomaterials during the design phase of product development while minimizing nanomaterials' impact on society and the environment. In order to set sustainable manufacturing practices, it is imperative to consider the entire life cycle of the product, from the raw materials extraction (primary precursors, surfactants, and non-hazardous solvents from natural and sustainable resources), to manufacturing methods, and recycling and reuse at the product's end of life. In recent years, several review articles have been published on synthesizing functional nanomaterials and their biomedical applications, from bioimaging to biosensing, diagnosis, and therapeutics [14][15][16][17]. However, reports on the design of safe and sustainable nanomaterials are scarce in the literature. Moreover, the rapidly evolving field of sustainable nanotechnology requires design guidelines for fabricating sustainable nanomaterials for biomedical applications. Therefore, this review aims to provide an overview of a design framework (i.e., universal design criteria) for synthesizing sustainable nanomaterials based on data extracted from original research articles reported in the literature. Later, we discuss how these design guidelines have been applied to produce various sustainable nanomaterials (nanocellulose, carbon, and bioceramic) from renewable resources and their biomedical applications. Finally, we summarize key factors (raw materials, manufacturing process, characterization, and recycling of waste with minimal disposal) that should be taken into consideration for manufacturing sustainable nanomaterials with optimal physicochemical characteristics. By considering these factors, a roadmap for sustainable nanomaterials can be developed with minimal impact on the environment and humans.
Design Framework for Sustainable Nanomaterials
Advances in nanotechnological fabrication approaches have accelerated the discovery of new functional nanomaterials in tunable sizes, morphologies, and composition from non-renewable resources [18][19][20][21]. Such nanomaterials show superior physical and chemical properties compared to their bulk counterpart, making them beneficial to solve societal problems from health to renewable energy, information technology, and the environment [22][23][24][25][26]. However, the long-term commercial viability of these nanomaterials is posing a risk due to their reliance on non-renewable resources and highly energy-intensive manufacturing processes. In addition, there is too little attention paid to investigating the impact of non-sustainable materials on human health, climate change, and the environment. To achieve the SDGs set by the UN, intensive research efforts are needed to design sustainable nanomaterials from renewable resources while optimizing the performance, manufacturing process, and production costs of nanomaterials. Integrating sustainable resources with nanotechnology is critical to building a sustainable society in the 21st century. Sustainability encourages the use of natural resources and lower energy manufacturing methods for designing nanomaterials while maintaining the natural resources for current and future generations. Additionally, the sustainable process prevents waste production (zero waste) or harnesses recycled waste for fabricating nanomaterials to ensure minimal waste disposal, leading towards a circular economy for sustainable development.
When designing sustainable nanomaterials for biomedical applications, natural and renewable resources should be used as precursor materials and surfactants. Nature offers a wide range of renewable and sustainable materials with rapid renewability, such as biomass and biodegradable natural materials (cellulose, chitosan, lignin) [27,28]. Renewable materials obtained from the photosynthetic process in plants are considered less toxic than synthetic materials. The second most abundant natural material on the planet, known as lignin, is an environmentally friendly material with excellent antioxidant and antimicrobial properties [29]. Cellulose, alginate, chitin, and pectin are other natural polysaccharide molecules that can be derived from microbes, animals, and plants [30]. Nanomaterials derived from natural polysaccharides offer various advantages, such as low processing cost, biodegradability, non-toxicity, and tunable surface functionalities (hydroxyl, amine, and carboxyl groups). Furthermore, nanomaterials can be designed from biomacromolecules (polypeptides, proteins, or nucleic acids) abundant in nature [30]. In addition to natural renewable resources, the waste from chemical industries (e.g., carbon dioxide), agriculture (e.g., rice husk containing ~75-90% organic molecules, such as cellulose and lignin), and the environment (e.g., plastic waste from oceans) can also be considered alternative non-natural renewable resources. This recycled waste from various resources can be used for synthesizing nanomaterials, for example, the use of waste extracted from petrochemicals for fabricating polymeric nanomaterials [31,32].
In bottom-up approaches, surfactants and reducing agents are commonly used to produce nanomaterials via the reduction of precursor materials and to stabilize the synthesized nanomaterials in the dispersion medium. Natural renewable resources, such as plants, biopolymers, proteins, sugars, bacteria, algae, and fungi, are a few examples that can act as surfactants and reducing agents [33][34][35]. Among various natural resources, biopolymers are the most common reducing agents used in the chemical reactions involved in the bottom-up manufacturing of nanomaterials. Examples of biopolymers that can act as reducing agents include dextran, chitosan, and cellulose extracted from sugarcane, the exoskeleton of crustaceans, and plants. Vitamin C isolated from fruits and vegetables is also a natural reducing agent that can reduce various metal ions in an aqueous solution. However, vitamin C and other natural biomolecules are more expensive than biopolymers. Therefore, these reducing agents are unsuitable for large-scale manufacturing of nanomaterials. The choice of solvent can also be a concern for synthesizing sustainable nanomaterials. Traditional approaches use toxic organic solvents that offer better control over the size, shape, and chemical composition of nanomaterials. The use of such toxic organic solvents should be prohibited or minimized. Water is the most commonly accessible and low-cost solvent for synthesizing sustainable nanomaterials. Alternatively, supercritical fluid technology has received considerable attention in fabricating nanomaterials with control over size, morphology, and composition with minimal environmental impact. This technology uses supercritical fluids, such as water and carbon dioxide (CO2), instead of toxic organic solvents [36]. Exploring the use of such alternative strategies will pave the way for the design of sustainable nanomaterials.
Top-down and bottom-up strategies have been developed to fabricate a variety of nanomaterials [37][38][39][40]. However, these energy-intensive approaches use toxic and corrosive precursor and surfactant materials that produce hazardous chemicals or gases. Therefore, the choice of a manufacturing process requiring the least energy is essential in fabricating sustainable nanomaterials. The hydrothermal approach is the most popular strategy for synthesizing sustainable nanomaterials [41]. In this approach, an aqueous solution of precursor materials is heated to a high temperature using various heat sources, such as microwave energy and focused sunlight [42,43]. However, these heating mechanisms are not suitable for the large-scale manufacturing of nanomaterials due to insufficient and non-uniform heating of solvents in a large reactor. Recently, cost-effective flow-chemistry-based strategies have been investigated for synthesizing nanomaterials. This method offers better control of heat transfer and reaction mixing time. Another environment-friendly and economically viable approach based on green chemistry has emerged as an alternative strategy to fabricate size- and shape-controlled metallic nanomaterials by using microorganisms (bacteria, fungi, algae, viruses) and plant extracts (proteins, polysaccharides, polyphenols) [44,45]. The advantages and disadvantages of physical, chemical, and green synthesis are discussed in Table 1. Therefore, it is essential to set design guidelines for developing sustainable nanomaterials that can serve as a viable alternative to conventional methods based on nanomaterials fabricated from non-renewable sources. Furthermore, the functional performance of sustainable nanomaterials should be optimized in various aspects, for example, renewable material type (biopolymer, carbonaceous, or composite), physical characteristics (mechanical, thermal, and conductivity), biocompatibility, materials hazard testing, and environmental impact.
Sustainable Polymeric Nanomaterials
Polymeric materials have been used for various biomedical applications, including device packaging, tissue engineering, drug delivery, surgical implants, wound dressing, and ophthalmology. In 2021, the global market size for medical-grade polymers was estimated to be USD 18.4 billion, which is expected to increase at an annual growth of 8.9% from 2022 to 2030. Polymers can be classified as synthetic polymers derived from petro-materials and biopolymers derived from biologically renewable resources. The sustainability of the plastics industry is facing rising challenges, such as fossil fuel depletion, the increasing cost of petroleum products, and the long-term catastrophic environmental impact. The fabrication of nanomaterials using renewable biopolymeric materials derived from natural resources is necessary to reduce dependence on petroleum-based plastics. Furthermore, biopolymers derived from natural resources are more biocompatible than polymeric materials derived from petrochemicals.
Among various biopolymers, nanocellulose is a renewable and sustainable nanomaterial derived from native cellulose, the main component of the plant cell wall [46]. Cellulose is the most abundant biopolymer on the earth, with an annual production of over 7.5 × 10^10 metric tons. It consists of D-anhydroglucopyranose units connected by β-glycosidic bonds, in which the repetitive unit is called cellobiose [47,48]. Cellulose can be extracted from multiple sources, such as plants (e.g., cotton, wood), bacteria, or animals [49,50]. The extraction cost of cellulose from natural resources depends on various factors, such as the source of cellulose, the extraction method, and the scale of production. Plants capture CO2 to produce cellulose through photosynthesis and release CO2 through degradation, thus closing the carbon cycle. The highly crystallized structure of cellulose provides superior mechanical strength and rigidity to plants. Nanocellulose is produced by the mechanical or chemical breaking of cellulose fibers into their individual nanoscale components. Excellent mechanical strength, light weight, biodegradability, and unique physical properties make nanocellulose useful in fields such as biomedicine, packaging, and materials science.
Nanocellulose can be classified into three distinct types based on its morphology and structural characteristics: cellulose nanocrystals (CNCs), cellulose nanofibers (CNFs), and bacterial nanocellulose (BNC). Various top-down approaches have been investigated for fabricating CNCs and CNFs from cellulose. The most common top-down approach for CNCs involves the acidic or enzymatic hydrolysis of natural cellulose derived from wood. This cleaves the amorphous region of the cellulose fibers and facilitates the formation of highly crystalline and rigid nanostructures (Figure 2) [51]. Mechanical decomposition processes, such as ultrasonic fiber delamination, ball-milling, and high-pressure homogenization of biomass, have been explored to synthesize CNFs (Figure 3A). These top-down approaches produce gel-like materials of entangled flexible and short CNF networks [52,53]. Bottom-up approaches have been developed for synthesizing BNC via the metabolization of glucose molecules by Gram-negative Acetobacter strains (Figure 3B) [54]. BNC has high chemical purity compared to CNCs and CNFs, which contain plant impurities such as lignin and hemicellulose. Therefore, an additional purification step is required to remove the contaminants [55]. Despite the different strategies used for fabricating CNCs, CNFs, and BNC, all of these nanocellulose materials possess molecular structures identical to cellulose, but they have a larger surface area, stiffness, and aspect ratio than cellulose. Moreover, nanocellulose possesses superior mechanical strength, biocompatibility, biodegradability, chemical stability, and sustainable abundance on earth, thus making these nanomaterials suitable for biomedical applications [56]. Table 2 summarizes cellulose materials (CNCs, BNCs, and CNFs) obtained from various natural resources and their biomedical applications. BNC has a unique nanofibril network with a large surface area that mimics an extracellular matrix.
It has an exceptional capability to absorb exudates from wounds, strong water retention ability, excellent conformability, and wet strength [51]. Therefore, BNC can be used for wound dressing. A few BNC-based products, such as XCell® and Biofill®, have been commercialized for wound healing [55]. Compared to traditional cotton-based hemostatic wound dressings, nanocellulose can be functionalized to promote wound healing and prevent secondary infection. Antimicrobial agents such as benzalkonium chloride, silver, and copper nanoparticles can be impregnated into the porous network of BNC's membrane, which acts as a physical barrier between the wound and the surrounding environment and facilitates the steady release of preloaded antimicrobials [59][60][61][62][63]. Furthermore, BNC can be integrated with other naturally derived materials to design next-generation innovative sustainable nanomaterials. For example, integrating other natural biopolymers (chitosan and hyaluronan (HA)) into BNC can provide additional functionalities during wound healing because chitosan possesses excellent antimicrobial properties [64], and HA facilitates rapid wound healing and reduces scar tissue formation [65]. Tissue engineering holds great promise to utilize biocompatible materials (cells and biomolecules) to repair and restore damaged tissues. Therefore, it is vital to design materials/scaffolds mimicking the native tissue-like environment to promote cellular activities (adhesion and growth) and tissue growth. Such platforms can be developed by incorporating nanofibrils of BNC into hydrogels with superior mechanical strength, biocompatibility, and biodegradability [79,82]. Nimeskern et al. developed BNC-hydrogel platforms that have mechanical moduli similar to human cartilage [79]. They showed that these composite materials can be transformed into complex patient-specific shapes and geometries. Apelgren et al. developed an aqueous counter-collision method to disassemble BNC to create a bioink for cartilage-specific 3D bioprinting. In vivo studies demonstrated that the cell-laden BNC structures had good structural and tissue integrity and excellent chondrocyte proliferation, making BNC suitable for cartilage regeneration [80]. Cañas-Gutiérrez et al. fabricated 3D-printed BNC scaffolds with controlled microporosity between 50 and 350 µm and demonstrated the adherence and proliferation of osteoblasts on a 3D-printed scaffold for bone regeneration [81]. In the work of Backdahl et al., it was found that BNC had a stress-strain response similar to the carotid arteries, thus supporting human smooth muscle cell attachment and proliferation after 14 days of culture [79]. BNC can be transformed into various structures due to its excellent moldability. Therefore, such materials can also be used for designing small-caliber artificial blood vessels. However, modification with anticoagulant agents (heparin) and chimeric proteins (cellulose-binding module and cell adhesion peptides) is needed to improve the properties of BNC in vascular grafts [81,83]. CNFs exhibit non-toxicity to humans and the environment [84][85][86]. Since CNFs also possess a large surface area and surface density, composite materials with antimicrobial characteristics have been designed by incorporating metallic nanoparticles (NPs), such as Ag NPs [72], ZnO NPs [73], and TiO2 NPs [74]. CNFs have been used as cell culturing scaffolds for tissue engineering applications due to their excellent mechanical strength, ultralow density, and biodegradability.
Carlström et al. blended and cross-linked gelatin with wood-derived CNF scaffolds to adjust the degradation time and loaded human bone marrow mesenchymal stem cells into the scaffold [75]. They showed the applicability of CNF scaffolds for supporting cell attachment, spreading, and osteogenic differentiation. In another work, Mathew et al. used a CNF scaffold for ligament and tendon regeneration [29]. CNFs have also been used as an electrochemically controlled ion exchange material due to their large surface area and high mechanical strength [87]. Mihranyan et al. coated a thin layer of polypyrrole (PPy) onto CNF paper to fabricate PPy-CNF composites for DNA extraction and hemodialysis membranes [76]. The use of CNFs as a reinforcing agent for improving the mechanical properties of the hydrogel system has been demonstrated by Maharjan and co-workers. They incorporated CNFs into the chitosan hydrogels to fabricate a CNFs/Chitosan scaffold with a small pore size of uniform distribution. The hybrid system showed increased compressive strength (30.19 kPa) compared to the pure chitosan scaffold (11.21 kPa) and enhanced bioactivity of the scaffold [77]. Furthermore, CNF exhibited strong molecular interaction between poorly soluble drugs and drug-encapsulated nanoparticles with tailored drug release kinetics, making them a promising candidate for drug delivery applications [78].
Like BNC and CNFs, CNCs also possess excellent biocompatibility and mechanical properties. Inorganic NPs (Au, Ag, and Pd) can be incorporated into CNCs to add new functionalities, such as antimicrobial properties, bioimaging, and biosensing capabilities [66]. Ganguly et al. developed an eco-friendly diagnostic tool for detecting unamplified pathogenic DNA using TEMPO-oxidized CNC-capped gold nanoparticles. A dramatic color shift from red to blue was observed in the presence of target DNA molecules [68]. Rueda et al. designed mechanically strong and ductile polyurethane/CNC nanocomposites via in situ polymerization to support the proliferation of fibroblasts [69]. CNCs can also be used as carriers to deliver bioactive drug molecules to the target site because the hydrophilic surface can inhibit the formation of a protein corona on CNCs, thus prolonging the CNC half-life in the bloodstream [70]. Seo et al. developed multilayer CNCs for anticancer drug delivery by coating the negatively charged CNCs with cationic doxorubicin (DOX) and anionic hyaluronic acid (HA) polymer as the tumor-targeting ligand. The nanocomposite system showed excellent tumor penetration, cellular uptake, and cancer-killing ability through intravenous injection [71]. To design a bioimaging probe, fluorescein isothiocyanate (FITC) can be conjugated onto the CNC surface. The developed FITC-CNC probe showed high fluorescence intensity and cellular uptake, no cellular toxicity to mouse osteoblasts, and enhanced dispersity in the biopolymer matrix [67].
Despite the great promise of nanocellulose, the economic development of nanocellulose-based biopolymers poses a significant challenge. A balance between the use of crops for food, environmental protection, and the production of raw cellulose materials is required to achieve sustainability in nanocellulose production for future generations. Research efforts have commenced to genetically modify plants with the ability to produce biopolymers in plant stems. Concurrently, the energy needed to produce and modify the nanocellulose from its original state must be considered to curb the total amount of carbon emissions.
Carbonaceous Sustainable Nanomaterials
Carbon is one of the most abundant non-metal elements and appears naturally in various forms, such as coal, diamond, and graphite [88]. Carbonaceous nanomaterials (CNMs) are carbon-based materials designed to reduce environmental impact while retaining their desirable properties. Such materials can be classified based on the number of dimensions. Carbon dots (CDs) and graphene quantum dots (GQDs) are classified as zero-dimensional CNMs. Carbon nanotubes (CNTs), graphene, and diamonds are examples of one-, two-, and three-dimensional CNMs, respectively. CDs, GQDs, CNTs, and graphene are the most common CNMs with exceptional properties, which make them suitable for various biomedical applications, from biosensing to diagnosis, targeted drug delivery, bioimaging, and tissue engineering. Compared to other nanomaterials such as metal or organic nanoparticles, CNMs with large surface area to volume ratios offer several advantages, such as higher drug-loading capacity, higher biocompatibility, straightforward surface functionalization, easier immobilization of macromolecules, and excellent physical properties (electrical, optical, and thermal conductivity) [89][90][91]. Even though conventional strategies have been developed to scale up the manufacturing of CNMs using traditional methods, these strategies are expensive due to the use of energy-intensive processes. In recent years, the focus of research in the field of CNMs has shifted to fabricating CNMs using less energy-intensive methods from sustainable and renewable materials. Biomass and waste residue (derived from industrial effluents, plastics, and agriculture) are good candidates for the sustainable production of CNMs because of their rich geographic availability on earth (Figure 4). The extraction cost of carbonaceous materials from sustainable resources can vary from a few hundred to several thousand USD per year depending on the type of sustainable resource, synthetic methodology, and purification process. Table 3 summarizes carbonaceous nanomaterials obtained from natural and sustainable resources and their biomedical applications.
Carbon Dots and Graphene Quantum Dots
Carbon dots (CDs) are quasi-spherical carbon nanoparticles composed of C, H, N, and O with sizes smaller than 10 nm [92]. The macroscopic carbon has low water solubility, but the presence of hydrophilic carboxylic and amine groups on the CDs' surface makes them well dispersed in water. Graphene quantum dots (GQDs) are crystallized graphene disks usually composed of less than 10 graphene layers with a size between 2 and 20 nm [93]. Both CDs and GQDs show strong photoluminescence emission within the visible light spectrum, high photostability, and resistance to photobleaching [94]. The excellent biocompatibility and ease of surface modification due to the presence of hydrophilic groups make them suitable for biomedical applications.
The traditional approaches for CD synthesis, such as laser ablation, electrochemical methods, pyrolysis, and exfoliation, require expensive non-renewable precursor materials, longer reaction times, and harsh reaction environments [95]. In recent years, sustainable approaches have been developed for synthesizing CDs and GQDs using sustainable and renewable sources, such as organic pollutants available in the environment [96,97] and various biomass (vegetables, fruits, wool, cotton) [98]. In order to make the manufacturing process less energy-intensive, Xu et al. modified the traditional laser method by adopting a femtosecond pulsed laser of low power density for synthesizing CDs at room temperature [99]. Menezes et al. developed a green electrochemical method to produce GQDs using a platinum wire as the cathode and a graphite rod as the anode without using toxic chemicals (oxidant salts and acids) [100]. A sustainable and simple one-step hydrothermal carbonization method has been developed to synthesize CDs from oyster mushrooms for the selective sensing of Pb2+ ions; these CDs show specific electrostatic binding to DNA molecules, antibacterial activity, and anticancer activity against MDA-MB-231 breast cancer cells [101]. In the work of Sangam et al., a catalyst-free and scalable hydrothermal method has been used for synthesizing GQDs from agro-industrial waste sugarcane molasses [102].
(Table 3 fragment: carbon nanotubes (CNTs), obtained from plants (leaves, seeds, roots, and stems) and waste cooking oil, with applications including electromechanical actuators [114], biosensors (miRNA) [115], cellular bioimaging [116], drug delivery [117], photothermal cancer therapy [118], and nano-tweezers [119,120]; graphene applications include drug delivery [111], transdermal drug delivery [112], microelectrodes [113], biosensing, and tissue engineering.)
CDs and GQDs have been used extensively to develop inexpensive biosensors for rapidly determining analyte concentrations with picomolar detection sensitivity in biological samples. CDs have been used for the selective sensing of DNA [103][104][105], proteins, and other blood serum components such as cholesterol, glucose, and alcohol [106]. Similarly, GQDs have been used for the sensitive detection of different disease biomarkers such as enzymes, antigens, proteins, DNA, and other biomolecules [109]. In addition to biosensing, CDs and GQDs have been developed for bioimaging applications due to their excellent biocompatibility, high photostability, and high fluorescence intensity [121]. Saranti et al. incorporated CDs into bioactive scaffolds to monitor the bone regeneration process through bioimaging [107]. In another work, Gong et al. synthesized N and Br co-doped GQDs using a rapid and large-scale approach. The results showed high cellular uptake and high-quality fluorescence labeling ability, suggesting their great promise for cellular bioimaging [110]. Since these nanomaterials have hydrophilic surface groups (carboxylic and amine), covalent or non-covalent surface modification strategies can be used to attach targeting and drug molecules to the surface of CDs and GQDs for designing targeted drug delivery platforms for the treatment of cancerous diseases [108].
Graphene
Graphene is a two-dimensional CNM that consists of a single layer of densely packed sp 2 carbon atoms arranged in a hexagonal structure [122]. Graphene exhibits unique physical properties compared to the bulk structure, including large surface area, high electrical and thermal conductivity, high elasticity and mechanical strength, and tunable optical properties [123]. Graphene oxide (GOX) is chemically functionalized graphene with a surface modified with oxygen-containing groups (carboxy, carbonyl, and hydroxyl). The presence of oxygen-containing groups on the surface enhances the hydrophilicity of the graphene, which contributes to the aqueous solution stability of graphene due to stronger repulsive interactions, such as electrostatic and hydrogen bonding with other molecules besides π − π interaction [124]. Therefore, graphene, especially GOX, has been widely investigated for various biomedical applications. Conventional strategies, such as mechanical or chemical exfoliation of graphite, have been developed for graphene production. However, these methods require toxic and expensive chemical reagents to purify graphene, restricting their ability to scale up production [125]. Alternative strategies have been researched to fabricate graphene from sustainable and renewable materials, including industrial waste, food waste, plants, agriculture waste, and natural carbonaceous wastes (e.g., timber, bagasse, animal bones, and newspapers) through graphitization [126]. Wei et al. developed an aerogel composite using bacterial cellulose and caffeic acid-reduced graphene oxide for designing a bio-pressure sensing-based wearable device [127]. In the work of Somanathan et al., an environmentally friendly strategy has been established for synthesizing GOX via single-step reforming of sugarcane bagasse waste under atmospheric conditions [128].
The one-atom-thick carbon layer provides a high surface area for binding drug molecules and targeting ligands to both sides of graphene [129]. Therefore, graphene-based nanomaterials show enormous potential for drug delivery with higher drug-loading efficiency. However, graphene can aggregate in tissues, which may generate oxidative stress and cause toxic effects in humans [122]. The aggregation issue can be overcome by employing surface modification strategies. Prabakaran et al. functionalized GOX with ovalbumin protein and polymethyl methacrylate through a simple chemical reaction, resulting in the fabrication of a stable and biocompatible composite system [111]. In another work, the incorporation of GOX into a polymeric microneedle-based transdermal drug delivery system has also been demonstrated for delivering anti-melanoma chemotherapeutic HA15 molecules. The results showed significant enhancement in the mechanical strength and moisture resistance of the device fabricated by incorporating GOX into polymeric materials, providing anti-inflammatory properties [112]. Lu et al. developed a flexible cortical microelectrode array using porous graphene. Superior durability, mechanical strength, impedance, and charge injection properties make these graphene microelectrode arrays suitable for deep brain signal sensing and stimulation [113]. The potential of functionalized graphene has also been explored for biosensing [130,131] and tissue engineering applications [132,133].
Carbon Nanotubes
Carbon nanotubes (CNTs) are made of graphene layers rolled into cylindrical structures with dimensions in the nanometer range [119]. Depending on the number of concentric graphene cylinders, CNTs can be classified as single-walled carbon nanotubes (SWCNTs) or multi-walled carbon nanotubes (MWCNTs). CNTs show unique physical properties, such as superior thermal conductivity, high electrical conductivity, and high mechanical strength. While significant progress has been made in the scalable manufacturing of CNTs, traditional methods such as chemical vapor deposition, electric arc discharge, and spray pyrolysis rely on high temperatures, low pressures, non-renewable materials, and toxic solvents. These drawbacks can have negative environmental and health impacts [95,134]. Therefore, alternative strategies are needed for fabricating CNTs from renewable resources. Catalytic chemical vapor deposition is one of the most common strategies to produce CNTs.
Using bioderived precursor materials can make the chemical vapor deposition (CVD) process less energy-intensive (i.e., reducing the reaction time and temperature) [135,136]. Green synthesis methods have been developed to produce CNTs using renewable resources such as plants (leaves, seeds, roots, and stems) [134]. Duarte et al. synthesized CNTs from waste cooking oil using the CVD method [137]. Microwave heating can be considered an environment-friendly approach that utilizes electromagnetic energy to heat precursor materials [131]. This less energy-intensive approach has enabled the rapid production of CNTs from various carbon sources, catalysts, and substrates [138].
CNTs can be used as electromechanical actuators when a potential is applied in an electrolyte. Ru et al. used a nanoporous CNT film as the electrode for ionic electroactive polymer actuators. The electrode showed superior conductivity and improved electromechanical and electrochemical properties with enhanced durability under various voltages and frequency ranges, making it suitable for artificial muscle applications [114]. CNTs have been used to design optical or electronic biosensors for the detection of biomolecules such as DNA, glucose, and proteins. Li et al. developed a field-effect transistor biosensor using polymer-sorted semiconducting CNT films to detect exosomal miRNA with high sensitivity for breast cancer detection [115]. Because of their photoluminescence, Raman scattering, photoacoustic, and echogenic properties, CNTs have been investigated for tracking and bioimaging in biological environments [119]. Singh et al. fabricated MWCNTs by the pyrolysis of a chickpea peel precursor; these MWCNTs showed blue fluorescence signals in human prostate carcinoma cells without cytotoxicity [116]. CNTs have also been used to deliver drugs of low solubility and low bioavailability to the target site via enhanced cellular permeation [117]. Suo et al. functionalized MWCNTs with a layer of phospholipid-poly(ethylene glycol) and anti-Pgp antibodies to improve biocompatibility, blood circulation, and the ability to target cancer cells. In this work, they showed the effectiveness of the phospholipid-PEG-coated MWCNTs functionalized with anti-Pgp antibodies in generating phototoxicity in cancer cells under photoirradiation without damaging normal cells [118]. CNTs can also serve as flexible nanotweezers for manipulating nanoscale objects in biomedical analytical studies, such as nucleic acid-based spectroscopy [119,120].
Sustainable Bioceramics
Bioceramics are biocompatible ceramic materials with excellent bioactivity, chemical stability, thermal resistance, and tissue-like mechanical characteristics. Bioceramics have been used as nanoporous scaffold materials to repair and reconstruct damaged tissues, fill bone defects, and deliver drug molecules [139][140][141][142]. Bioceramics, such as calcium phosphate and hydroxyapatite, can be extracted from natural waste materials (eggshell, bovine bone, fish bone, and seashells), which are widely available geographically at low cost. Since these natural raw materials are derived from sustainable biological systems, they do not possess the inherent toxicity or potential side effects on exposure often shown by synthetic materials. For example, eggshells are an abundant biowaste from the food processing industries, produced at a rate of approximately 250,000 tons per year and ranked as the 15th most common pollutant by the Environmental Protection Agency [143]. Given the increasing egg consumption worldwide and their enrichment with minerals (calcium carbonate, calcium phosphate, 1% magnesium), eggshell biowaste can be used for synthesizing bioceramics, including calcium phosphate and hydroxyapatite. Furthermore, eggshells contain trace amounts of biologically relevant ions (Mg, Na, Si, and Sr), which facilitate the mineralization process and promote bone growth.
Calcium phosphate (CaP) is the most widely researched synthetic biomaterial used in bone regeneration because of its natural presence in human bone, superior bioactivity, biocompatibility, and biodegradability. CaP can be classified as α- or β-tricalcium phosphate (α- or β-TCP), biphasic calcium phosphate (BCP), and calcium hydroxyapatite. Various synthesis methods, such as sol-gel, hydrothermal, solid-state reaction, ultrasonic, and microwave approaches, have been investigated to fabricate CaP biomaterials from eggshell biowaste. For example, a wet chemical method followed by heating at high temperatures and ball milling has been explored to synthesize crystalline β-TCP biomaterials derived from eggshells [144,145]. Hydroxyapatite materials have been produced by the solid-state decomposition of eggshells at elevated temperature (1050 °C) [146]. Among the various synthetic strategies, microwave synthesis is one of the most environment-friendly approaches for manufacturing bioceramic nanoparticles with a narrow size distribution at high throughput. The size, shape, and crystallinity of CaP bioceramics can be tuned by varying the microwave parameters, such as microwave power and exposure time [147,148]. Bioceramics derived from eggshells have been shown to exhibit superior biological performance compared to synthetic bioceramics. This is because eggshells are formed from a complex matrix of calcium carbonate, calcium phosphate, and organic proteins, which contributes to their unique mechanical and biological characteristics. Kumar et al. developed an efficient protein delivery system using natural and sustainable materials; the in vitro results showed improved encapsulation efficiency of the protein cargo and enhanced protein delivery compared to synthetic TCP with a similar Ca/P ratio [149]. Hydroxyapatite and β-TCP bioceramics have been investigated as bone graft substitutes due to their excellent biocompatibility, bioactivity, and osteoconductivity. Sangjin et al. demonstrated that scaffolds made from eggshell-derived hydroxyapatite and β-TCP were effective in promoting bone formation in rabbits [150]. The superior biological performance of eggshell-derived materials results from the presence of biologically relevant ions in the eggshells. Therefore, sustainable bioceramics hold great potential to address the biomedical challenges posed by synthetic bioceramics. However, there is a need for scalable and continuous manufacturing of bioceramics extracted from eggshells, which will require policies for the collection of eggshell biowaste and its supply to industrial manufacturers and research organizations.
Conclusions and Future Opportunities
The concept of sustainability has emerged to address societal challenges in the energy, environmental, economic, and biomedical fields, supporting the UN's SDGs for present and future generations. This review summarizes the main design principles for fabricating sustainable nanomaterials using renewable resources and environment-friendly, scalable manufacturing methods with minimal hazardous waste generation (zero waste). The use of recycled waste as a renewable resource for fabricating sustainable nanomaterials is a promising strategy towards achieving a zero-waste manufacturing process; doing so supports the circular economy model and responsible waste management practices contributing to sustainability. We also discussed sustainable nanomaterial design principles that have been applied to fabricate nanocellulose, carbon nanomaterials, and bioceramics from renewable resources, such as plants, woods, fruits, biopolymers, bacteria, eggshell biowaste, and recycled waste. Sustainable nanomaterials derived from natural resources offer several benefits over nanomaterials produced from non-renewable resources, such as versatile surface functionalities, biocompatibility, and biodegradability.
Renewable resources are abundant and continually replenished. However, the performance of sustainable nanomaterials produced from renewable resources is still often inferior to that of nanomaterials synthesized from non-renewable resources. In addition, questions concerning manufacturing process optimization (cost and geographic availability of natural resources) and the actual impact of these materials on human health, climate, and the environment are yet to be addressed. There are lessons to be learned from the rapid innovation in nanomaterials production from non-renewable materials, where the high performance of functional materials was prioritized without considering the future availability of raw materials, energy resources, or the impact of toxic chemicals on the environment and human health. Moreover, the entire life cycle of nanomaterials, from production to disposal, has not been considered. Therefore, there is a need to develop responsible nanomaterials manufacturing practices with appropriate selection of renewable materials based on inherent functionality, geographic availability, and cost.
There are several areas where future research in sustainable nanomaterials could be directed. First, the development of environmentally friendly strategies based on green chemistry for synthesizing sustainable nanomaterials is an essential step towards creating an eco-friendly, sustainable future. The use of natural resources with rapid renewability should be prioritized. Second, the performance of sustainable nanomaterials should be optimized with respect to appropriate renewable resources (resource type, extraction cost, and geographic availability) and manufacturing methods (energy requirement, waste production, and scalability). The selection of renewable materials should be justified based on the energy requirement for manufacturing, potential waste production, material source (renewable or non-renewable), biocompatibility, biodegradability, and recyclability. Third, the performance of sustainable materials should be optimized in the intended target environment (i.e., optimization of physicochemical properties in a biological environment to establish the relationship between nanomaterial design and properties). Fourth, a comprehensive assessment of the environmental and human health impact of nanomaterials throughout their entire life cycle, from production to disposal, should be conducted. Fifth, new sustainable methods for the recycling and disposal of nanomaterials should be encouraged. Finally, new regulations and standards should be established for producing and using nanomaterials that prioritize sustainability and minimize harm to human health and the environment.
Furthermore, a sustainable nanomaterial performance matrix analogous to Ashby's materials selection approach should be established [151]. This performance matrix can include various parameters, such as the behavior of nanomaterials in biological fluids (because the presence of biomolecules or proteins can alter the physicochemical properties), the optimization of nanomaterials' performance against the intended biomedical applications, and the long-term toxicity of nanomaterials. The environmental risk associated with the developed nanomaterials can also be included in the performance matrix, because releasing nanomaterials into the environment can contaminate land and groundwater (i.e., decreasing crop productivity and water quality). Developing a performance matrix of sustainable nanomaterials will generate an extensive dataset that can be analyzed using machine learning. In the future, machine-learning-based approaches will facilitate the informed design of sustainable nanomaterials with optimal performance by analyzing large datasets [152]. This sustainable materials design framework can bridge the gap between research and commercialization. We expect that natural and sustainable resources offer an opportunity for fabricating next-generation sustainable nanomaterials using less energy-intensive, eco-friendly methods without causing harm to human health and the environment. New nanomaterials design practices will also contribute to the SDGs of the UN and the circular economy model. | 9,378.6 | 2023-03-01T00:00:00.000 | [
"Materials Science"
] |
Grid Search Tuning of Hyperparameters in Random Forest Classifier for Customer Feedback Sentiment Prediction
Text classification is a common task in machine learning. Random Forest, a supervised classification algorithm, is widely used for this task. The Random Forest classifier has a group of hyperparameters that need to be tuned; if proper tuning is performed on these hyperparameters, the classifier gives better results. This paper proposes a hybrid approach combining the Random Forest classifier with the Grid Search method for customer feedback data analysis. The Grid Search tuning approach is applied to the hyperparameters of the Random Forest classifier. The Random Forest classifier is used for customer feedback data analysis, and its result is compared with the result obtained after applying the Grid Search method. The proposed approach provided a promising result in customer feedback data analysis. The experiments in this work show that the accuracy of the proposed model in predicting sentiment on customer feedback data is greater than the accuracy obtained by the model without parameter tuning. Keywords—Classification; grid search; hyperparameters; parameter tuning; random forest classifier; sentiment analysis
I. INTRODUCTION
Classification is a text mining task in which the class of a particular input is identified using a given set of labelled data. Both supervised and unsupervised methods are used for classification. In the supervised method, learning is done from predefined labelled data: a set of labelled input documents is given to the model by the end-user. The two main categories of supervised learning are parametric and non-parametric classification. Parametric classification assumes a known form for the probability distribution of each class; if the density function is unknown, it is better to use non-parametric classification. Recently, supervised classification in particular has been used to develop many interesting platforms for business. Sentiment analysis is one of the most attractive applications that makes use of the advantages of supervised classification methods. Sentiment can be described as a person's feeling about a particular thing. Sentiment analysis includes the task of binary classification, in which documents are classified into two different classes, positive sentiment or negative sentiment. Due to the fast-growing popularity of social networks [1], people use them to share their views, opinions, and ideas. Social networks provide a platform for people to create a virtual civilization [2]. Sentiment analysis is a mining process based on user-generated comments to identify positive or negative feelings. Opinions are always important to a business, and many business decisions are made based on customers' reviews. The analysis of customer or product reviews involves the extraction of sentiment from product documents [3]. Business organizations are keen to know whether customers like their product or service, what customers feel about the product, which type of product or service customers like or dislike, and so on. Sentiment analysis is usually applied to text input to identify the sentiment in a particular document and is thus considered a core part of text mining. Compared with generic text classification, it requires more knowledge of the language: machine learning algorithms generally consider only the occurrence of words in a document, so it is difficult to recognize the overall attitude expressed in that specific document. Sentiment analysis is therefore the process of identifying the polarity present in a given text or document, i.e., positive or negative.
A number of supervised machine learning algorithms are used for sentiment analysis, and the performance of these classification algorithms depends on the specific domain [4]. The Random Forest classifier is widely used for this purpose. It is considered an ensemble method [5] that generates many classifiers and finally aggregates their results for prediction, creating a number of decision trees in the training phase [6]. With a single tree in the classifier, the risk from noise and outliers is high and will reduce the quality of the output. Due to its randomness property, the Random Forest classifier is highly robust to outliers and noise, and it can also handle missing values.
One good approach to improving the outcome of any classifier is to tune its hyperparameters [7]. Hyperparameters are the parameters that are set by the data analyst before the training process and are independent of it. For example, in a random forest, a hyperparameter would be how many trees are included in the forest or how many nodes each tree can have. Optimizing these hyperparameters is the key to accurate prediction on unlabeled data, and this is achieved through trial and error: different values of the hyperparameters are used, their results are compared, and the best combination is selected. The tuning process of hyperparameters therefore depends mainly on experimental rather than theoretical results.
In this work, the Grid Search approach is applied to tune the Random Forest classifier and to identify the best hyperparameters. The implementation of Grid Search is simple [8]. A set of hyperparameters and their candidate values is fed to it first; an exhaustive search is then run over all possible combinations of the given values, training a model for each set of values. The Grid Search algorithm then compares the score of each model it trains and keeps the best one. A common extension of Grid Search is to use cross-validation, i.e., training the model on several different folds for each hyperparameter combination to obtain more reliable results.
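As an illustration, a minimal sketch of this exhaustive search with cross-validation is given below, assuming scikit-learn is available; the parameter grid shown here is only an example, not the exact grid used in this work.

```python
from itertools import product

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def exhaustive_grid_search(X, y, grid, cv=5):
    """Train one model per hyperparameter combination and keep the best CV score."""
    best_score, best_params = -1.0, None
    names = list(grid)
    for values in product(*(grid[name] for name in names)):
        params = dict(zip(names, values))
        model = RandomForestClassifier(random_state=42, **params)
        score = cross_val_score(model, X, y, cv=cv).mean()  # cross-validated accuracy
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score


# Example grid over two Random Forest hyperparameters (illustrative values)
example_grid = {"n_estimators": [100, 200, 400], "max_features": ["sqrt", "log2"]}
```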
The rest of the paper is organized as follows. In Section II, previous work in these research topics are discussed. Section III explains the proposed system model and architecture. The experimental results are discussed in Section IV and it is followed by a conclusion in Section V.
II. RELATED WORK

Rafael G. Mantovani et al. [9] investigated random search and grid search methods, aiming to tune the hyperparameters of the Support Vector Machine (SVM) classifier. Their experiments were performed on a large collection of datasets, and they compared the performance of Random Search with four methods: Particle Swarm Optimization, Genetic Algorithm, the Grid Search method, and the Estimation of Distribution Algorithm. The results of this work reveal that the predictive power of the SVM classifier with Random Search is the same as with the other four techniques, and the advantage of this combination was the lowest computational cost of the model. Xingzhi Zhang et al. [10] proposed a novel optimized Random Forest classifier for credit score analysis. To optimize the Random Forest classifier, the authors developed a system called NCSM, which uses grid search and feature selection. The developed model is able to overcome the problem of irrelevant and redundant features and achieved good accuracy; it uses information entropy to select the optimal features. Two datasets from the UCI database were selected as input to examine the performance of the developed model. Their experiments show that the proposed system outperforms several other methods.
A hybrid approach based on Random Forest and Support Vector Machine was proposed by Yassine Al Ambrani et al. in 2018 [11] for classifying Amazon product reviews. Cross-validation with a fold value of 10 was used in this work. Both the Support Vector Machine and the Random Forest classifier were used by the authors to classify the product reviews, and the classification results of both classifiers were compared with the hybrid method. The results show that the hybrid method of Random Forest and SVM outperforms the individual methods.
An ensemble-based customer review sentiment analysis was carried out in 2019 [12] by Ahlam Alrehili and Kholood Albalawi. The proposed method used a voting system that combines five classifiers: Random Forest, Naive Bayes, SVM, bagging, and boosting. Six different scenarios were run by the authors to measure the result of the proposed model against the five classifiers, using unigrams (with and without stop-word removal), bigrams (with and without stop-word removal), and trigrams with stop-word removal. Among these, the highest accuracy of 89.87% was given by the Random Forest classifier. Sentiment analysis on blogs was carried out by Prem Melville et al. in 2009 [13], combining text classification with lexical knowledge. The authors proposed a unified framework that uses lexical information to filter information for a specific domain. The combination of training examples with background knowledge using linear pooling performed well, with an accuracy of 91.21%.
Neural networks have many hyperparameters that have to be set by hand. Nauria Rodriguez-Barroso et al. worked on these neural network parameters in 2019 [14], using the SHADE evolutionary algorithm to optimize different deep learning hyperparameters for Twitter sentiment analysis. Spanish tweets were selected as the dataset for their work. The findings reveal that the hyperparameters selected by the SHADE algorithm help to improve the proposed model's performance.
Sentiment analysis of airline data was performed by Bahrawi in 2019 [15]. Tweet data for six airlines from Kaggle was used for this study, and a Random Forest classifier was used for sentiment prediction. The classifier predicted 63% of tweets as negative, 21% as neutral, and 16% as positive, and the accuracy achieved by the Random Forest algorithm was only 75%. The author suggested building models using other machine learning algorithms to get a better result.
A new credit scoring model called NCSM was proposed by Xingzhi et al. [16] in 2018. The grid search method and feature selection were applied in this model in order to optimize the Random Forest classifier's performance. The proposed model achieved high prediction accuracy compared with some other commonly used methods.
III. PROPOSED MODEL
The architectural diagram of the proposed model is depicted in Fig. 1. The collected customer feedback data go through several processing stages, and feature extraction is performed. After extracting the necessary features, they are given as input to the Random Forest classifier. Finally, parameter tuning by the Grid Search method is applied to increase the classifier's performance. This section gives a detailed description of the proposed model.
A. Pre-Processing of Data
As an initial step, the original input data is examined in the pre-processing stage to make the raw data suitable for use in the classification process. It is the first and a crucial step in creating a model. While creating a machine learning model, it is not always possible to get clean and formatted data; real-world data may be in an unusable format and contain missing values, noise, etc., and such data cannot be used directly in a machine learning model. Data pre-processing is therefore an important task that cleans the original data for the machine learning model and thereby increases model accuracy and efficiency. The following steps are used for pre-processing: Tokenization - dividing the customer feedback input into a number of individual words called tokens.
Removal of special characters, numbers, stop words, and punctuation, since these do not carry any sentiment.
Stemming - normalizing the input data, for example reducing words like loves, loving, and lovable to their root word, love, since they are often used in the same context.
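A minimal sketch of these pre-processing steps is shown below, assuming NLTK and its 'punkt' and 'stopwords' resources are installed; the exact cleaning rules used by the authors are not specified, so the details here are illustrative.

```python
import re

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()


def preprocess(review):
    # Drop numbers, punctuation, and special characters; keep letters only
    text = re.sub(r"[^a-zA-Z\s]", " ", review.lower())
    tokens = word_tokenize(text)                          # tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]   # stop-word removal
    return [STEMMER.stem(t) for t in tokens]              # stemming: loves -> love


print(preprocess("I love this phone, its battery life is amazing!"))
```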
B. Feature Extraction
In this step, new features are extracted from the existing dataset, thereby reducing the number of features used for the processing task. The new, reduced feature set is able to represent the majority of the information in the initial feature set. This text feature extraction directly influences the accuracy of the classification. The two techniques used in this work for feature extraction are the following. Count Vectorizer: text data needs special preparation before it can be used for predictive modelling. The number of occurrences of every word in a given document can be identified using the Count Vectorizer, which provides a vector with the frequency of each token in the given document. Term Frequency-Inverse Document Frequency: TF represents the number of occurrences of a word in a particular document divided by the total number of words present in that document. IDF is used to find the weight of rare words across all documents in the corpus. Multiplying TF by IDF gives the TF-IDF score.
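The two techniques map directly onto scikit-learn's text vectorizers; the sketch below is illustrative, with made-up example reviews.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

reviews = [
    "battery life is great",
    "screen quality is poor",
    "great phone great price",
]

# Count Vectorizer: raw term-frequency vector per document
count_vec = CountVectorizer()
X_counts = count_vec.fit_transform(reviews)

# TF-IDF: term frequency weighted by inverse document frequency
tfidf_vec = TfidfVectorizer()
X_tfidf = tfidf_vec.fit_transform(reviews)

print(count_vec.get_feature_names_out())  # vocabulary learned from the corpus
print(X_tfidf.shape)                      # (n_documents, n_features)
```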
C. Algorithms Applied
1) Sentiment classification - Random Forest classifier:
The Random Forest classifier is a flexible supervised algorithm that can be used for text classification. The working of this algorithm is based on a collection of trees in which every tree depends on different random variables [17], following a divide-and-conquer approach: the forest represents a collection of many trees, and from random subsets of the input data the algorithm generates several small decision trees. Consider a random vector of dimension n, A = (A_1, A_2, ..., A_n)^T, whose components are real-valued input variables, and a random variable B representing the real-valued response; we assume an unknown joint distribution P_AB(A, B). The goal of this algorithm is to find a prediction function f(A) for predicting B. A loss function L(B, f(A)) is used to find the prediction function, which should minimize the expected value of the loss:
E_AB[ L(B, f(A)) ] .    (1)

The procedure for the Random Forest classifier is given as follows:
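The procedure box labelled RandomForestProc( ) in the original did not survive text extraction. The following is a generic bagging sketch of what such a procedure typically looks like — bootstrap samples, a random feature subset per split, and a majority vote — and is not the authors' exact algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def random_forest_proc(X, y, n_trees=100, seed=0):
    """Grow n_trees decision trees, each on a bootstrap sample of the NumPy arrays (X, y)."""
    rng = np.random.default_rng(seed)
    trees = []
    n_samples = len(X)
    for _ in range(n_trees):
        idx = rng.integers(0, n_samples, size=n_samples)    # bootstrap (with replacement)
        tree = DecisionTreeClassifier(max_features="sqrt")  # random feature subset per split
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees


def forest_predict(trees, X):
    """Aggregate the trees' predictions by majority vote (binary labels 0/1)."""
    votes = np.stack([tree.predict(X) for tree in trees])   # shape: (n_trees, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```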
2) Hyperparameter tuning - grid search method: Machine learning models have many parameters [18] to tune, and by tweaking these parameters the performance of the model can be improved. Hyperparameter tuning is the best way to execute a number of different parameter combinations and assess a classifier's performance. Assessing a classifier using its training data causes a fundamental machine learning problem called overfitting: the situation in which a model performs highly on training data but poorly on test data. Therefore, cross-validation is used together with the grid search method for hyperparameter optimization.
The grid search method is an approach used to identify the optimum parameters of a classifier so that the model can accurately predict unlabeled data. The Grid Search method is used to tune hyperparameters which cannot be learned directly from the training process. A classification model has many hyperparameters, and finding the best combination of these parameters is a challenging process; one of the best methods used for this purpose is the Grid Search method. Suppose a machine learning model X has hyperparameters h1, h2, and h3. The Grid Search method defines a range of values for each hyperparameter h1, h2, and h3, and constructs many versions of X with all possible combinations of h1, h2, and h3. This range of hyperparameter values is known as a grid. The hyperparameter tuning architecture is depicted in Fig. 2. The input data is divided into a training set, a testing set, and a validation set. The tuning process is executed by separating the dataset into n different portions. Then, the Random Forest classifier is trained on n-2 portions for each candidate solution selected by the tuning technique. The validation set is used to validate the developed model, and the last portion is used to test the model. The test accuracy and validation accuracy are evaluated using the model. The model is then instantiated with the training set and the hyperparameter values determined by the tuning technique. These steps are repeated N times. To guide the search process, the average validation accuracy is used as the fitness value. Finally, the individual with the highest accuracy is returned, and the performance of the method is the average test accuracy of that individual. The procedure for the proposed hybrid model of the Random Forest classifier and the Grid Search method for sentiment prediction is as follows.
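The original listing of this hybrid procedure is not present in the extracted text. A sketch of the step using scikit-learn's GridSearchCV is given below; it uses the 7-fold cross-validation and the max_features options (sqrt, log2) reported in Section IV, while the n_estimators values are assumptions, since the full range tried by the authors is not listed.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "max_features": ["sqrt", "log2"],           # options reported in Table II
    "n_estimators": [100, 200, 300, 400, 500],  # assumed candidate values
}

grid = GridSearchCV(
    estimator=RandomForestClassifier(random_state=42),
    param_grid=param_grid,
    cv=7,                    # 7-fold cross-validation, as used in this work
    scoring="accuracy",
)

# grid.fit(X_features, labels)   # X_features: count / TF-IDF vectors from Section III-B
# print(grid.best_params_, grid.best_score_)
```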
IV. EXPERIMENTS AND RESULTS
Labelled customer feedback data on electronic items, collected from the UCI database, is used as input for this work. It includes 1500 reviews (750 positive and 750 negative). This work aims to classify these customer feedbacks into two categories: positive feedback and negative feedback. 7-fold cross-validation is used for calculating the model's accuracy. First, customer feedback data analysis is performed using the Random Forest classifier with default hyperparameters, achieving an accuracy of 84.53%. Table I gives the result of customer feedback analysis using the Random Forest classifier.
From Table I, it is clear that 1268 of the 1500 customer reviews are classified correctly and 232 are wrongly classified by the model. To increase the accuracy of the classifier, parameter tuning using the Grid Search method is used in this work. The Random Forest classifier has several parameters which can be adjusted to get optimal performance; two of those parameters are the number of trees constructed for classifying new data and the maximum number of variables used in individual trees. The GridSearchCV class available in Scikit-Learn is used for this study. GridSearchCV evaluates all possible combinations of parameter values, and finally the best parameter combination is retained. This work mainly concentrates on two parameters of the Random Forest classifier.
GridSearchCV uses max_features to denote the maximum number of variables used in individual trees and n_estimators to denote the total number of trees to be constructed in the forest. Table II provides the result of parameter tuning of the Random Forest classifier on customer feedback data. The score in Table II represents the accuracy of the classifier using the 7-fold cross-validation method; sqrt and log2 are the two options tried for max_features.
According to Table II, the highest accuracy of the Random Forest Classifier is 90.02% at the parameters 'max_features'='sqrt' and 'n_estimators'=400.
Table III shows the result of the proposed method (with the best parameters), which uses the Grid Search approach for hyperparameter tuning. Using the proposed method, 1353 of the 1500 customer reviews are classified correctly and 147 are not. The accuracy comparison of the two methods is given in Table IV, and a detailed comparison of the Random Forest classifier and the proposed system using Grid Search parameter tuning is depicted in Table V.

V. CONCLUSION

Sentiment analysis is essential for a business organization to perform decision making. It can be used for different tasks, such as calculating or expressing sentiment on any product or service. In this work, the best parameters of the Random Forest classifier are tuned by the Grid Search method. Experimental results on customer feedback data show that Random Forest with default parameters achieves an accuracy of 84.53%, but by tuning the number of trees in the forest and the maximum number of features per tree, the accuracy of the developed model increases to 90.02%. The result shows that parameter tuning helps to generate the best model for classifying new data. At the same time, the Random Forest classifier takes more execution time when the number of trees in the forest is increased. In future work, the proposed model can be used for multi-class sentiment prediction, since this work concentrated on binary classification only. | 4,379.2 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Correlation functions of scalar field theories from homotopy algebras
We present expressions for correlation functions of scalar field theories in perturbation theory using quantum $A_\infty$ algebras. Our expressions are highly explicit and can be used for theories both in Euclidean space and in Minkowski space including quantum mechanics. Correlation functions at a given order of perturbation theory can be calculated algebraically without using canonical quantization or the path integral, and we demonstrate it explicitly for $\varphi^3$ theory. We show that the Schwinger-Dyson equations are satisfied as an immediate consequence of the form of the expressions based on quantum $A_\infty$ algebras.
Introduction
Homotopy algebras such as A ∞ algebras [1,2,3,4,5,6] and L ∞ algebras [7,8] have been playing a significant role in the construction of string field theory, which can be seen most magnificently in the construction of closed string field theory by Zwiebach [7]. When we consider projections onto subspaces of the Hilbert space of the string, homotopy algebras have turned out to provide useful tools. The projection onto on-shell states describes on-shell scattering amplitudes [9], the projection onto the physical sector leads to mapping between covariant and light-cone string field theories [10], and the projection onto the massless sector is relevant for the low-energy effective action [11,12,13,14,15].
We can also describe quantum field theories using homotopy algebras [16,17,18,19,20,21,22]. For scalar field theories the description in terms of homotopy algebras is rather trivial, which reflects the fact that there are no gauge symmetries in scalar field theories. However, the relation between the action and on-shell scattering amplitudes is universal, and the description of on-shell scattering amplitudes in terms of homotopy algebras is nontrivial for scalar field theories and provides new perspectives.
In quantum field theory, we also consider correlation functions. Since there does not seem to be any immediate relation between correlation functions and projections in homotopy algebras, we may have an impression that homotopy algebras will not be useful in describing correlation functions. On the other hand, the description of on-shell scattering amplitudes in terms of homotopy algebras is based on the fact that Feynman diagrams are algebraically generated in this approach [9,23,24], and we expect that there is a way to generate Feynman diagrams for correlation functions using homotopy algebras as well. Furthermore, the Batalin-Vilkovisky formalism [25,26,27] can be thought of as being dual to the homotopy algebra, and correlation functions have been discussed in the framework of the Batalin-Vilkovisky formalism [28,29]. Therefore, we again expect that there is a way to describe correlation functions using homotopy algebras. In this paper we demonstrate that it is indeed the case that we can describe correlation functions in terms of homotopy algebras, and we present highly explicit expressions for correlation functions of scalar field theories in perturbation theory using quantum A ∞ algebras.
The rest of the paper is organized as follows. In section 2 we explain the description of scalar field theories in terms of quantum A ∞ algebras and we present our formula for correlation functions. In section 3 we calculate correlation functions of the free theory and confirm that our formula reproduces Wick's theorem. In section 4 we consider ϕ 3 theory and we calculate correlation functions in perturbation theory explicitly. We then show that the Schwinger-Dyson equations are satisfied as an immediate consequence of the form of the expressions based on quantum A ∞ algebras in section 5. In section 6 we consider scalar field theories in Minkowski space. Section 7 is devoted to conclusions and discussion.
Correlation functions from quantum A ∞ algebras
We explain the description of scalar field theories in terms of A ∞ algebras in subsection 2.1. We then explain the coalgebra representation of A ∞ algebras in subsection 2.2, and we consider projections onto subspaces in subsection 2.3. Finally, in subsection 2.4 we present our formula for correlation functions in terms of quantum A ∞ algebras.
Scalar field theories in terms of A ∞ algebras
Let us first consider scalar field theories in Euclidean space. Scalar field theories in Minkowski space will be discussed later in section 6. The action of the free theory is given by

S_free = ∫ d^d x ( 1/2 ∂_µ ϕ(x) ∂_µ ϕ(x) + 1/2 m² ϕ(x)² ) ,   (2.1)

where ϕ(x) is a real scalar field in d dimensions and m is a real parameter.
To describe this action in terms of an A ∞ algebra, we introduce two copies of the vector space of functions of x. We denote them by H 1 and H 2 , and we define H by

H = H 1 ⊕ H 2 .   (2.2)

The vector space H is graded with respect to degree. Any element in H 1 is degree even and any element in H 2 is degree odd, but signs from anticommuting degree-odd objects never appear in the calculations to be presented in this paper.
We then introduce a symplectic form and denote it by ω ( ϕ 1 (x), ϕ 2 (x) ) for ϕ 1 (x) and ϕ 2 (x) in H.The symplectic form for ϕ 1 (x) in H 1 and ϕ 2 (x) in H 2 is defined by ω ( ϕ 1 (x), ϕ 2 (x) ) = d d x ϕ 1 (x) ϕ 2 (x) for ϕ 1 (x) ∈ H 1 and ϕ 2 (x) ∈ H 2 . (2. 3) The symplectic form is graded antisymmetric, and ω ( ϕ The symplectic form vanishes for other cases: The last ingredient for the action of the free theory is Q, which is a linear operator on H.The action of Q on ϕ(x) in H 1 is defined by and Q ϕ(x) is in H 2 .On the other hand, the operator Q annihilates any element in H 2 : Let us summarize the nonvanishing part of Q as follows: Since an element in H 1 is degree even and an element in H 2 is degree odd, we say that Q is degree odd.Note that the operator Q has the following cyclic property: where deg(ϕ) = 0 mod 2 when ϕ(x) is degree even and deg(ϕ) = 1 mod 2 when ϕ(x) is degree odd.We also use this notation for operators and maps.For example, we write deg(Q) = 1 mod 2. In the current case the symplectic form (2.9) can be nonvanishing only when both ϕ 1 (x) and ϕ 2 (x) are in H 1 , so the sign factor (−1) deg(ϕ 1 ) in (2.9) is trivial, but it can be nontrivial for more general A ∞ algebras.Using the symplectic form ω and the operator Q, the action of the free theory can be written for ϕ(x) in H 1 as follows: (2.10) Note that the operator Q is nilpotent: This relation is trivially satisfied in the current case as the action of Q can be nonvanishing only when it acts on an element in H 1 but the resulting element is in H 2 and is annihilated by the following action of Q.For more general cases, this property of Q is related to the gauge invariance of the free theory.Let us next consider interactions.The classical action of ϕ 3 theory in Euclidean space S (0) is given by where g is the coupling constant.To describe the cubic interaction in terms of an A ∞ algebra we introduce a linear map from H ⊗ H to H and denote it by b 2 .The action of b 2 on ϕ ) The action of b 2 for other cases vanishes: We can thus summarize the nonvanishing part of b 2 as follows: It follows from this that b 2 is degree odd.Note that the operator b 2 has the following cyclic property: In the current case the symplectic form can be nonvanishing only when all of ϕ 1 (x), ϕ 2 (x), and ϕ 3 (x) are in H 1 , so the sign factor (−1) deg(ϕ 1 ) is trivial, but it can be nontrivial for more general A ∞ algebras.Using ω, Q, and b 2 , the action S (0) can be written for ϕ(x) in H 1 as follows: (2.17) Here we should comment on how we treat the commutative nature of the cubic interaction.While the integral is totally symmetric, we only use the cyclic property (2.16) when we describe the theory in terms of an A ∞ algebra.The symmetric property (2.19) is a consequence of the fact that the product (2.13) is commutative, but this is not always the case for theories described by A ∞ algebras.Some of the calculations in this paper simplify if we use this commutative property of the product, but we never use it in this paper because our primary motivation is to generalize the analysis to open string field theory where the product is not commutative and to evaluate correlation functions in the 1/N expansion.As long as we distinguish b 2 ( ϕ 1 (x) ⊗ ϕ 2 (x) ) and b 2 ( ϕ 2 (x) ⊗ ϕ 1 (x) ), we can unambiguously determine the topology of non-planar diagrams, which can be seen explicitly by generalizing ϕ(x) 3 to ϕ ij (x) ϕ jk (x) ϕ ki (x) for a matrix field ϕ ij (x) and writing Feynman diagrams using the double-line notation, and 
this is exactly what we do when we perform the 1/N expansion.
For more general interactions, we introduce m n which is a degree-odd linear map from H ⊗n to H in order to describe terms of O(ϕ n+1 ) in the action, where we denoted the tensor product of n copies of H by H ⊗n : When we consider quantum corrections to Q, we use m 1 which is a degree-odd linear map from H to H.As in the case of b 2 , m n ( ϕ 1 (x) ⊗ . . .⊗ ϕ n (x) ) can be nonvanishing only when all of ϕ 1 (x), ϕ 2 (x), . . ., and ϕ n (x) are in H 1 , and in this case We summarize this property as follows: We also consider terms linear in ϕ in the action.To describe such linear terms we introduce a one-dimensional vector space given by multiplying a single basis vector 1 by complex numbers and denote the vector space by H ⊗0 .The vector 1 is degree even and satisfies for any ϕ(x) in H.We then introduce m 0 which is a degree-odd linear map from H ⊗0 to H and m 0 1 is in H 2 .We require the following cyclic property for m n : Again in the current case the symplectic form can be nonvanishing only when all of ϕ 1 (x), ϕ 2 (x), . . ., and ϕ n+1 (x) are in H 1 , so the sign factor (−1) deg(ϕ 1 ) is trivial, but it can be nontrivial for more general A ∞ algebras.
We consider an action of the form for ϕ(x) in H 1 .The action is written in terms of Q and the set of maps { m n }, and it is invariant under the gauge transformation which is also written in terms of Q and { m n } when a set of relations called A ∞ relations are satisfied among Q and { m n }.The relation Q 2 = 0 we mentioned before is one of the A ∞ relations when m 0 and m 1 vanish.Two more examples of A ∞ relations when m 0 and m 1 vanish are given by where I is the identity operator on H.In the current case these relations are trivially satisfied because m n ( ϕ 1 (x) ⊗ . . .⊗ ϕ n (x) ) can be nonvanishing only when all of ϕ 1 (x), ϕ 2 (x), . . ., and
The coalgebra representation
To describe the A ∞ relations to all orders, it is convenient to consider linear operators acting on the vector space T H defined by We denote the projection operator onto H ⊗n by π n .For a map c n from H ⊗n to H, we define an associated operator c n acting on T H as follows.The action on the sector H ⊗m vanishes when m < n: c n π m = 0 for m < n . (2.29) The action on the sector H ⊗n is given by c n : (2.30) The action on the sector H ⊗n+1 is given by The action on the sector H ⊗m for m > n + 1 is given by where An operator acting on T H of this form is called a coderivation. 2It will be helpful to explain the action of c 0 in more detail.For example, the action of c 0 on H is given by but this should be understood as follows.Since ϕ(x) in H can be written as or as the action of c 0 on ϕ(x) should be understood as where m n is the coderivation associated with m n , and the A ∞ relations can be written compactly as where Q is the coderivation associated with Q.When m 0 and m 1 vanish, the nilpotency of Q in (2.11) is reproduced by the condition π 1 ( Q + m ) 2 π 1 = 0 and the relations (2.26) and (2.27) are reproduced by the conditions π 1 ( Q + m ) 2 π 2 = 0 and π 1 ( Q + m ) 2 π 3 = 0, respectively.When a coderivation m is given, we can uniquely determine m n by the decomposition Therefore, the construction of an action with an A ∞ structure amounts to the construction of a degree-odd coderivation m which satisfies (2.39).
Projections
As we wrote in the introduction, homotopy algebras are useful when we consider projections onto subspaces of H.We consider projections which commute with Q, and we denote the projection operator by P .It satisfies the following relations: An important ingredient is h which is a degree-odd linear operator on H and satisfies the following relations: We then promote P and h to the linear operators P and h on T H, respectively.The operator P is defined by where
.44)
The operator h is defined as follows.Its action on H ⊗0 vanishes: The action on H is given by h: The action on H ⊗ H is given by (2.47) The action on H ⊗n for n > 2 is given by (2.48) Unlike the case of coderivations, the projection operator P appears in the definition of h.Note also that the appearance of P is asymmetric and the operator P always appears to the right of h.This property of h will play an important role later.The relations in (2.41) are promoted to the following relations for P and Q: The relations in (2.42) are promoted to the following relations involving h: where I is the identity operator on T H.
When the classical action is described by the coderivation given by the action of the theory projected onto a subspace of H is described by where the inverse of I + h m (0) is defined by (2.54) The general construction of (2.52) from Q + m (0) is known as the homological perturbation lemma, and it is described in detail including the case where m (0) 0 is nonvanishing in [13].The action of the theory projected onto a subspace of H can be constructed from (2.52) via the decomposition analogous to (2.40).
When we consider on-shell scattering amplitudes, we use the projection onto on-shell states.In that case P Q P vanishes, and on-shell scattering amplitudes at the tree level can be calculated from P m (0) f (0) P . (2.55) When the action including counterterms is described by the coderivation given by on-shell scattering amplitudes including loop diagrams can be calculated from where the inverse of I + h m + i h U is defined by The operator U consists of maps from H ⊗n to H ⊗(n+2) .When the vector space H is given by H 1 ⊕H 2 , the operator U incorporates a pair of basis vectors of H 1 and H 2 .We denote the basis vector of H 1 by e α , where α is the label of the basis vectors.For H 2 we denote the basis vector by e α , and repeated indices are implicitly summed over.These basis vectors are normalized as follows: ω ( e α , e β ) e β = e α , e α ω ( e α , e β ) = e β . (2.60) In this paper, we choose e α and e α which appear in T H as The action of U on H ⊗0 is given by and the action of U on H is given by (2.63) The expressions of U 1 and U ϕ(x) are graded symmetric, but this is not generically the case when U acts on H ⊗n with n ≥ 2. The action of U on H ⊗2 is given by
.64)
In this paper the operator U only appears in the combination h U and we will later present a precise form of h U when it acts on a space which is relevant to the analysis in this paper.
Formula for correlation functions
In the case of scalar field theories in Euclidean space, the equation of motion of the free theory is given by The solution is unique and is given by ϕ(x) = 0 . (2.66) The projection onto the cohomology of Q defines a minimal model and plays an important role in homotopy algebras.In the current case, the projection onto the cohomology of Q corresponds to the projection operator P given by P = 0 , (2.67) and the associated operator P corresponds to the projection onto H ⊗0 : The operator P m f P vanishes, and we may consider that the theory is trivial.However, the operator f is nonvanishing and this operator plays a central role in generating Feynman diagrams.In the case of the theory in Euclidean space, 3 the appropriate definition of f is where the inverse of What does the projection with P = 0 mean?If we recall that the projection onto the massless sector discussed in [11,12,13,14,15] corresponds to integrating out massive fields, the projection with P = 0 should correspond to carrying out the path integral completely.This may result in a trivial theory for the classical case, but it can be nontrivial for the quantum case and in fact it is exactly what we do when we calculate correlation functions.We claim that information on correlation functions is encoded in f 1 associated with the case where P = 0.More explicitly, correlation functions are given by 3 Unlike the Minkowski case we do not write explicitly for the Euclidean case because theories in Euclidean space can also be regarded as canonical ensembles of classical statistical mechanics and we can consider them in broader contexts.If we prefer, we can replace h U with h U or with β −1 h U. where The formula may look complicated, but it states that π n f 1 gives the n-point function by simply replacing x with x i in the i-th sector in H ⊗n .For example, when π 3 f 1 takes the form the three-point function is given by (2.74) This can be summarized as the following replacement rule: (2.75) We need to construct h for the case P = 0.The first step is the construction of h satisfying (2.42).As P vanishes, the conditions for h are given by It is easy to construct h satisfying these equations.The action of h on ϕ(x) in H 2 is given by and h ϕ(x) is in H 1 .On the other hand, the operator h annihilates any element in H 1 .Thus the nonvanishing part of h can be described as follows:
78)
The operator h on T H in the case of P = 0 is then given by (2.79)
The free theory
Let us first demonstrate that correlation functions of the free theory are correctly reproduced.We denote correlation functions of the free theory by ϕ(x 1 ) ϕ(x 2 ) . . .ϕ(x n ) (0) .In this case the coderivation m vanishes and f 1 is given by Let us examine the action of the operator h U.It is useful to consider the tensor product of n copies of H 1 and denote it by H ⊗n 1 : We also define the vector space T H 1 by When U acts on H ⊗n 1 , e α is the only ingredient in H 2 for the resulting element in H ⊗(n+2) .The following action of h can be nonvanishing only when h in h acts on e α .Therefore, the action of h U on T H 1 is given by Note that the resulting element is also in Then it immediately follows that π n f 1 vanishes when n is odd.We thus find when n is odd.
The two-point function can be calculated from π 2 f 1.We find (3.7) Following the replacement rule (2.75), the two-point function is given by (3.8) The four-point function can be calculated from π 4 f 1.It follows from the decomposition of h U in (3.5) that Using the action of h U in (3.4) we find The explicit form of the first term on the right-hand side is given by and the contribution to the four-point function is as follows: The second and third terms on the right-hand side of (3.10) can be calculated similarly, and the four-point function is given by We have thus reproduced Wick's theorem for four-point functions.It is not difficult to extend the analysis to six-point functions and further, where Wick's theorem follows from the structure of h U in (3.4).
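As a cross-check that is not part of the paper, the combinatorics of Wick's theorem reproduced here — odd free correlators vanish and the 2k-point function is a sum over the (2k−1)!! complete pairings of two-point functions — can be enumerated with a few lines of code; the sketch below only counts and lists the pairings and does not evaluate the propagators.

```python
def wick_pairings(points):
    """Yield every complete pairing of the labels in `points` (nothing for odd length)."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in wick_pairings(remaining):
            yield [(first, partner)] + tail


print(list(wick_pairings([1, 2, 3, 4])))
# [[(1, 2), (3, 4)], [(1, 3), (2, 4)], [(1, 4), (2, 3)]]  -> three products of two-point functions
print(len(list(wick_pairings([1, 2, 3, 4, 5, 6]))))  # 15 = 5!!
print(list(wick_pairings([1, 2, 3])))                # [] : odd correlators vanish
```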
ϕ 3 theory
Let us next consider ϕ 3 theory and calculate correlation functions in perturbation theory. 4The classical action of ϕ 3 theory in Euclidean space is given by5 and in subsection 2.1 we wrote it in the following form: We consider quantum theory, and we need to add counterterms to the classical action.The action of ϕ 3 theory including counterterms is given by where Y , Z ϕ , Z m , and Z g are constants.The operators m 0 , m 1 , and m 2 for this action are defined by for ϕ(x), ϕ 1 (x), and ϕ 2 (x) in H 1 .The coderivations corresponding to m 0 , m 1 , and m 2 are denoted by m 0 , m 1 , and m 2 , and we define m by The whole action is described by the coderivation Q + m, and correlation functions can be calculated from f 1 given by Let us examine the action of the operator h m, which can be divided into h m 0 , h m 1 , and h m 2 .When m 0 acts on H ⊗n 1 , m 0 1 is the only ingredient in H 2 for the resulting element in H ⊗(n+1) .The following action of h can be nonvanishing only when h in h acts on m 0 1.Therefore, the action of h m 0 on T H 1 is given by Similarly, the actions of h m 1 and h m 2 on T H 1 are given by Note that the resulting element is also in T H 1 for each of the actions of h m 0 , h m 1 and h m 2 on T H 1 .When we expand f 1 in powers of h m and h U, each term in the expansion therefore belongs to T H 1 .For this expansion, it is convenient to decompose h m 0 , h m 1 , and h m 2 as follows: Let us calculate correlation functions in perturbation theory with respect to g.We expand Y , Z ϕ , Z m , and Z g in g as follows: Correspondingly, we expand m 0 , m 1 , and m 2 in g as where m ).We also expand m in g as where The coderivation m (0) describes the interaction of the classical action and is given by where b 2 is the coderivation associated with b 2 .The coderivation m (1) describes counterterms at one loop, and m (1) 0 , m 1 , and m 2 are given by m (1) m (1) for ϕ(x), ϕ 1 (x), and ϕ 2 (x) in H 1 .
One-point function
The one-point function can be calculated from π 1 f 1.Let us expand π 1 f 1 in g.Since h m is of O(g), we find (4.24) It follows from the decomposition of h U in (3.5) that Since π 1 1 vanishes, we obtain We further use the decomposition of h U in (3.5) and the decompositions of h m 0 , h m 1 , and h m 2 in (4.12) to find We then expand m 2 and m 0 in g to obtain (4.28)The explicit form of the terms of O(g) is given by and the one-point function is given by (4.30) We have reproduced the contribution from the one-loop tadpole diagram.See figure 1.Note that the correct symmetry factor appeared. 6The integral over p is divergent in six dimensions, so we need to regularize the integral by introducing a cutoff Λ. 7 We then choose the constant Y (1) to depend on Λ so that the one-point function at O(g) is finite in the limit Λ → ∞.While we can make the one-point function vanish at O(g) by choosing Y (1) to cancel the contribution from the one-loop tadpole diagram, we leave it finite and keep track of the appearance of one-loop tadpoles.
It is convenient to define n We can then write π 1 f 1 as follows: Let us denote the sum n 0 by Γ 0 .It is given by (1) .(4.33) The operator Γ (1) 0 describes the linear term at one loop in the one-particle irreducible (1PI) effective action [31].We write the one-point function as where
Two-point function
The two-point function can be calculated from π 2 f 1.Let us expand π 2 f 1 in g as follows: (4.36) It follows from the decomposition of h U in (3.5) that When we substitute (4.37) into (4.36),π 2 and π 2 h U π 0 on the right-hand side of (4.37) act on 1 or h m.Since π 2 1 and π 0 h m vanish, we obtain We use the decomposition of h U in (3.5) and the decompositions of h m 0 , h m 1 , and h m 2 in (4.12) to find We then expand m 2 , m 1 , and m 0 in g to obtain In addition to the term π 2 h U 1 of the free theory, there are seven terms of O(g 2 ).Let us first calculate two of them which do not involve counterterms.One of them is Then the sum of the two terms is written as The remaining five terms involving counterterms are given by 1 h e α , (4.45) We define Γ (1) 1 by Γ (1) 1 . (4.50) Then π 2 f 1 can be written in terms of Γ (1) 1 and Γ (1) 0 as follows: 1 .The bottom left diagram contains the one-loop tadpole.The disconnected diagrams on the right side correspond to ϕ(x 1 ) (1) ϕ(x 2 ) (1) .
1 on e ikx in H 1 is given by
Three-point function
The three-point function can be calculated from π 3 f 1.Let us expand π 3 f 1 in g as follows: (4.55) It follows from the decomposition of h U in (3.5) that Since π 3 1 and π 1 1 vanish, we obtain We use the decomposition of h U in (3.5) and the decompositions of h m 0 , h m 1 , and h m 2 in (4.12) to find We then expand m 2 and m 0 in g to obtain (4.59)Among four terms of O(g), there are two terms which do not involve counterterms.One of them is C , and the right diagram represents a factorized contribution such as ϕ(x 1 ) ϕ(x 2 ) (0) ϕ(x 3 ) (1) .
The sum of the two terms is given by (4.62). The remaining two terms involve counterterms. We therefore find that the three-point function is given in terms of the connected part at the tree level ⟨ϕ(x 1 ) ϕ(x 2 ) ϕ(x 3 )⟩ together with the remaining contributions. See figure 3.
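For comparison, the connected tree-level three-point function referred to above has the standard position-space form; this is a sketch assuming the Euclidean propagator G(x − y) = ∫ d 6 p/(2π) 6 e ip(x−y) /(p 2 + m 2 ), with the overall sign depending on conventions:
\[
\langle \phi(x_1)\, \phi(x_2)\, \phi(x_3) \rangle^{(0)}_{\mathrm{conn}} \;=\; -\,g \int d^6 y\; G(x_1 - y)\, G(x_2 - y)\, G(x_3 - y).
\]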
The Schwinger-Dyson equations
We have demonstrated that our formula presented in subsection 2.4 correctly reproduces correlation functions of ϕ 3 theory. In this section we show that the Schwinger-Dyson equations are satisfied.
In the framework of the path integral, correlation functions are defined by the functional integral, and from it we obtain the Schwinger-Dyson equations. Let us show that correlation functions described in terms of quantum A ∞ algebras satisfy the Schwinger-Dyson equations. Starting from the relation (5.7), whose left-hand side consists of three terms, the first term gives the following (n + 1)-point function, where we have chosen the last argument to be y instead of x n+1 . Let us next consider the contribution from the third term on the left-hand side of (5.7). It follows from the decomposition of h U in (3.5) that this holds for n ≥ 1. The action of h U in (3.4) gives factors involving p 2 + m 2 , so that we find (5.12), an expression involving p 2 + m 2 and the correlation functions ⟨ϕ(x 1 ) . . . ϕ(x i−1 ) ϕ(x i+1 ) . . . ϕ(x n )⟩. The second term on the left-hand side of (5.7) is expanded as in (5.13). In what follows, the notation m k ( ϕ(z) ⊗ . . . ⊗ ϕ(z) ) with k = 0 represents the function of z obtained by replacing x in m 0 1 with z. Comparing these expansions, we conclude (5.18). Using (5.8), (5.12), and (5.18), the relation (5.7) implies an equation for ⟨ϕ(x 1 ) . . . ϕ(x n ) ϕ(y)⟩, given in (5.19). We then act with the operator −∂ 2 y + m 2 to obtain (5.20). We have thus shown that the Schwinger-Dyson equations are satisfied.
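Written out explicitly for a scalar theory in d Euclidean dimensions, the Schwinger-Dyson equations established here take the familiar form below; this is the textbook statement, with S int collecting the interaction and counterterm parts of the action and the hat denoting omission of an argument, and its normalization is not necessarily that of (5.19) and (5.20):
\[
\big( -\partial_y^2 + m^2 \big)\, \langle \phi(x_1) \cdots \phi(x_n)\, \phi(y) \rangle
\;+\; \Big\langle \phi(x_1) \cdots \phi(x_n)\, \frac{\delta S_{\mathrm{int}}}{\delta \phi(y)} \Big\rangle
\;=\; \sum_{i=1}^{n} \delta^{(d)}(y - x_i)\, \langle \phi(x_1) \cdots \widehat{\phi(x_i)} \cdots \phi(x_n) \rangle .
\]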
Scalar field theories in Minkowski space
So far we have considered scalar field theories in Euclidean space. In this section we consider scalar field theories in Minkowski space. The action of the free theory can be written in the same form as before, with the understanding that ∂ 2 in the definition of Q is changed from g µν ∂ µ ∂ ν with the Euclidean metric g µν to η µν ∂ µ ∂ ν with the Minkowski metric η µν of signature (−, +, • • • , +). The equation of motion of the free theory holds for ϕ(x) in H 1 . Unlike the Euclidean case, where the solution is unique, solutions to the equation of motion consist of general superpositions of propagating waves. When we consider correlation functions, however, we claim that we should consider the projection onto ϕ(x) = 0, so that the corresponding projection operator P vanishes: P = 0. As we wrote in subsection 2.4, this should correspond to carrying out the path integral completely, and this should also be the case for the theory in Minkowski space. The conditions for h are again given as before. To define the path integral of the free theory in Minkowski space, we use the iǫ prescription, and as a result we obtain the Feynman propagator. Since we define correlation functions in Minkowski space as vacuum expectation values associated with the unique vacuum in the quantum theory, we use the Feynman propagator to define the operator h. The action of h on ϕ(x) in H 2 is defined accordingly, and the operator h annihilates any element in H 1 .
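Concretely, defining h through the Feynman propagator means that on plane waves in H 2 it would act by multiplication; the expression below is a sketch assuming the mostly-plus signature stated above, the standard iǫ prescription, and that h acts in H 2 analogously to the Euclidean case, up to convention-dependent signs and factors of i:
\[
h\!\left( e^{ikx} \right) \;=\; \frac{e^{ikx}}{k^2 + m^2 - i\epsilon} \quad \text{for } e^{ikx} \in H_2 ,
\qquad
h\big( J(x) \big) \;=\; \int d^d y \; G_F(x-y)\, J(y),
\qquad
G_F(x) \;=\; \int \frac{d^d k}{(2\pi)^d}\, \frac{e^{ikx}}{k^2 + m^2 - i\epsilon} .
\]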
We consider an interacting action of the same form as in the Euclidean case; note that the overall sign has been changed from the Euclidean case. We claim that correlation functions are given by the same formula as before. Let us show that the Schwinger-Dyson equations are satisfied. In the framework of the path integral, correlation functions in Minkowski space are defined by the functional integral with the weight e iS . Let us show that correlation functions described in terms of quantum A ∞ algebras satisfy the Schwinger-Dyson equations. Using (5.8), (5.12), and (5.18) with p 2 + m 2 replaced by p 2 + m 2 − iǫ, we obtain an equation for ⟨ϕ(x 1 ) . . . ϕ(x n ) ϕ(y)⟩, given in (6.17). We then act with the operator − ∂ 2 y + m 2 and find that the Schwinger-Dyson equations are satisfied.
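For completeness, the Minkowski-space Schwinger-Dyson equations obtained from the weight e iS can be written compactly; the following is the textbook form, and the factor of i reflects the Lorentzian path-integral weight rather than any convention specific to this paper:
\[
\Big\langle \frac{\delta S}{\delta \phi(y)}\; \phi(x_1) \cdots \phi(x_n) \Big\rangle
\;=\; i \sum_{i=1}^{n} \delta^{(d)}(y - x_i)\, \langle \phi(x_1) \cdots \widehat{\phi(x_i)} \cdots \phi(x_n) \rangle ,
\]
which follows from ∫ Dϕ δ/δϕ(y) [ ϕ(x 1 ) · · · ϕ(x n ) e iS ] = 0 and is related to the Euclidean form above by the replacement p 2 + m 2 → p 2 + m 2 − iǫ in the propagators.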
Let us consider the two-point function of the free theory. In this case the coderivation m vanishes, and f reduces to its free-theory form. The two-point function can be calculated from π 2 f 1. More examples of calculations for scalar field theories in Minkowski space will be presented in [31]. Quantum mechanics corresponds to the case d = 1. We write the action of a harmonic oscillator in terms of parameters m and ω, which are real and positive. We take H 1 and H 2 to be the vector space of functions of t, and we define Q for q(t) in H 1 and define h accordingly. The two-point function is given by
\[
\langle q(t_1)\, q(t_2) \rangle \;=\; \frac{1}{2 m \omega}\, e^{-i\omega (t_1 - t_2)} \quad \text{for } t_1 > t_2 , \qquad (6.29)
\]
and
\[
\langle q(t_1)\, q(t_2) \rangle \;=\; \frac{1}{2 m \omega}\, e^{i\omega (t_1 - t_2)} \quad \text{for } t_1 < t_2 . \qquad (6.30)
\]
As is well known, this reproduces the vacuum expectation value of the time-ordered product ⟨0| T q(t 1 ) q(t 2 ) |0⟩ of the harmonic oscillator.
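As a cross-check on (6.29) and (6.30), the same result follows from elementary operator quantum mechanics with ℏ = 1, independently of the homotopy-algebra construction; the conventions below (mass m, frequency ω, standard creation and annihilation operators) are the usual ones and are assumed rather than taken from this paper:
\[
q(t) \;=\; \frac{1}{\sqrt{2 m \omega}} \left( a\, e^{-i\omega t} + a^{\dagger}\, e^{i\omega t} \right), \qquad [a, a^{\dagger}] = 1, \qquad a\,|0\rangle = 0,
\]
\[
\langle 0 |\, T\, q(t_1)\, q(t_2)\, | 0 \rangle \;=\; \frac{1}{2 m \omega}\, e^{-i\omega |t_1 - t_2|},
\]
which matches (6.29) for t 1 > t 2 and (6.30) for t 1 < t 2 .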
Conclusions and discussion
In this paper we proposed the formula (2.71) for correlation functions of scalar field theories in perturbation theory using quantum A ∞ algebras. We then proved that correlation functions from our formula satisfy the Schwinger-Dyson equations as an immediate consequence of the structure in (5.5) for the Euclidean case and in (6.14) for the Minkowski case. Since descriptions in terms of homotopy algebras or the Batalin-Vilkovisky formalism tend to be elusive and formal, we have presented completely explicit calculations for ϕ 3 theory, which involve renormalization at one loop. We hope that the demonstration in this paper helps convince the reader that any calculation of this kind in the path integral or in the operator formalism can be carried out in the framework of quantum A ∞ algebras as well.
The important ingredient f is associated with a quasi-isomorphism from the A ∞ algebra after the projection to the A ∞ algebra before the projection. While π 1 f describes the quasi-isomorphism and we are usually interested in this part of f , we found that the part f π 0 is relevant for correlation functions. Incidentally, the sector H ⊗0 is often omitted in the discussion of homotopy algebras, but it plays an important role in our approach. For any A ∞ algebra described in terms of a coderivation Q + m, the minimal model theorem [9] states the existence of a quasi-isomorphism from an A ∞ algebra on the cohomology of Q to the A ∞ algebra described in terms of Q + m. Such an A ∞ algebra on the cohomology of Q is called a minimal model of the A ∞ algebra described in terms of Q + m, and the minimal model is known to be unique up to isomorphisms. While we used the perturbative expression of f based on the homological perturbation lemma, we hope that the characterization in terms of f leads to a nonperturbative definition of correlation functions. In particular, it would be interesting to address the question of how the definition of the path integral based on Lefschetz thimbles [34] can be incorporated into the framework of homotopy algebras.
As we mentioned in the introduction, correlation functions were discussed in the framework of the Batalin-Vilkovisky formalism [28,29]. Quantum L ∞ algebras, discussed for example in [23], involve symmetrization procedures and are more naturally related to the Batalin-Vilkovisky formalism. We chose quantum A ∞ algebras, and what was surprising was that correlation functions which are symmetric under the exchange of scalar fields are obtained without any symmetrization procedures. First, the construction of the vector space T H does not involve symmetrization procedures, unlike the corresponding vector space for L ∞ algebras, and elements in H ⊗n are generically not graded symmetric. However, the action of U symmetrizes the resulting element when it acts on a symmetrized element, so that elements of the form U n 1, for example, are graded symmetric. Our formula for correlation functions uses U n 1 as building blocks, and this is part of the reason why our formula reproduces symmetric correlation functions, but our formula also involves m and h, which obscure the symmetric nature at intermediate steps. As can be seen from the definition (2.32), coderivations in A ∞ algebras are in accord with cyclic properties but do not involve symmetrization procedures, so we do not expect that the coderivation m in our formula preserves the symmetric property of the elements generated by actions of U from 1. Furthermore, the definition of h is asymmetric, as we commented below (2.48). Since P = 0 in our formula, only the last term on the right-hand side of (2.48) survives, and the rightmost sector of H ⊗n plays a distinguished role. This is reflected in (3.4) for h U and in (4.9), (4.10), and (4.11) for h m 0 , h m 1 , and h m 2 , respectively. Nevertheless, it turned out that π n f 1 is totally symmetric and gives correlation functions which are symmetric under the exchange of scalar fields. This remarkable property has made our formula simpler, and it would be technically useful in the generalization to open string field theory. Since correlation functions based on our formula satisfy the Schwinger-Dyson equations, they must be symmetric under the exchange of scalar fields, but currently we only have this indirect understanding, and it would be important to unveil the hidden structure of f . While the expressions for correlation functions in terms of homotopy algebras are universal, our expressions are restricted to the case where H consists of only two sectors H 1 and H 2 . The property that the operator h annihilates any element in H 1 simplified the calculations, but this is not the case for general A ∞ algebras. It would be important to extend our analysis to more general cases.
Our ultimate goal is to provide a framework to prove the AdS/CFT correspondence using open string field theory with source terms for gauge-invariant operators, following the scenario outlined in [35]. The quantum treatment of open string field theory must be crucial for this program, and we hope that quantum A ∞ algebras will provide us with powerful tools in this endeavor.
Figure 1: One-loop tadpole diagram. The symmetry factor of this diagram is 2, which is correctly reproduced in the calculation of ⟨ϕ(x 1 )⟩.