Dataset columns: text (string, length 1.23k to 293k) | tokens (float64, 290 to 66.5k) | created (stringdate, 1-01-01 00:00:00 to 2024-12-01 00:00:00) | fields (list, 1 to 6 entries)
Strong instability of standing waves for nonlinear Schr\"{o}dinger equations with a partial confinement We study the instability of standing wave solutions for nonlinear Schr\"{o}dinger equations with a one-dimensional harmonic potential in dimension $N\ge 2$. We prove that if the nonlinearity is $L^2$-critical or supercritical in dimension $N-1$, then any ground states are strongly unstable by blowup. Introduction In this paper, we study the instability of standing wave solutions e iωt φ ω (x) for the nonlinear Schrödinger equation with a one-dimensional harmonic potential 1) where N ≥ 2, x N is the N-th component of x = (x 1 , ..., x N ) ∈ R N , ∆ is the Laplacian in x, and 1 < p < 1 + 4/(N − 2). Here, 1 + 4/(N − 2) stands for ∞ if N = 2. The Cauchy problem for (1.1) is locally well-posed in the energy space X (see [6,Theorem 9.2.6]). Here, the energy space X for (1.1) is defined by Proposition 1. Let 1 < p < 1 + 4/(N − 2). For any u 0 ∈ X there exist T max = T max (u 0 ) ∈ (0, ∞] and a unique maximal solution u ∈ C([0, T max ), X)∩ C 1 ([0, T max ), X * ) of (1.1) with initial condition u(0) = u 0 . The solution u(t) is maximal in the sense that if T max < ∞, then u(t) X → ∞ as t ր T max . Moreover, the solution u(t) satisfies the conservation laws for all t ∈ [0, T max ), where the energy E is defined by Next, we consider the stationary problem where ω ∈ R. Note that if φ(x) solves (1.3), then e iωt φ(x) is a solution of (1.1). Moreover, (1.3) can be written as is the action. The set of all ground states for (1.3) is defined by Then, we have the following result on the existence of ground states for (1.3). Proposition 2. Let 1 < p < 1 + 4/(N − 2) and ω ∈ (−1, ∞). Then, the set G ω is not empty, and it is characterized by is the Nehari functional, and Although Proposition 2 can be proved by the standard concentration compactness argument, for the sake of completeness, we give the proof of Proposition 2 in Section 3. Here, we remark that by Heisenberg's inequality for any ω ∈ (−1, ∞) there exist positive constants C 1 (ω) and C 2 (ω) such that for all v ∈ X. Now we state our main result in this paper. Notice that Theorem 1 covers the physically relevant case N = 3 and p = 3 as a borderline case. Finally, we consider the nonlinear Schrödinger equations with a partial confinement of the form The typical case is that N = 3 and d = 2. Recently, Bellazzini, Boussaïd, Jeanjean and Visciglia [2] constructed orbitally stable standing wave solutions of (1.11) for the case (see Theorem 1 and Remark 1.9 of [2]). It should be remarked that the bottom of the spectrum of −∆ + (x 2 1 + · · · + x 2 d ) is not an eigenvalue, so that unlike (1.10) with a complete confinement, the existence of stable standing wave solutions for (1.11) is highly nontrivial in the L 2 -supercritical case p > 1 + 4/N. Although it is not clear whether the standing wave solutions constructed by [2] are ground states in the sense of (1.4) (see Definition 1.1 and Remark 1.10 of [2]), it would be safe to conclude from our Theorem 1 that the upper bound on p in (1.12) is optimal for the existence of stable standing wave solutions of (1.11). The rest of the paper is organized as follows. In Section 2, we give the proof of Theorem 1. The proof is based on a virial type identity (2.1) associated with the scaling (2.2), the characterization of ground states (1.5) by the minimization problem on the Nehari manifold, and Lemma 1 below. We remark that the classical method by Berestycki and Cazenave [3] is not applicable to (1.1) directly. 
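For reference, the displayed formulas that the surrounding text points to, equation (1.1), the energy space X, the conservation laws (1.2), the energy E, the stationary problem (1.3), the action S_ω and the Nehari functional K_ω, did not survive extraction. The following is a plausible reconstruction in LaTeX notation, consistent with the verbal description; the normalization of the harmonic potential and of the constants is an assumption.

% Equation (1.1): NLS with a one-dimensional harmonic potential (normalization assumed)
i \partial_t u = -\Delta u + x_N^2 u - |u|^{p-1} u, \qquad (t,x) \in [0,\infty) \times \mathbb{R}^N .

% Energy space X and its norm
X = \{ v \in H^1(\mathbb{R}^N) : x_N v \in L^2(\mathbb{R}^N) \}, \qquad
\|v\|_X^2 = \|\nabla v\|_{L^2}^2 + \|v\|_{L^2}^2 + \|x_N v\|_{L^2}^2 .

% Conservation laws (1.2) and the energy E
\|u(t)\|_{L^2} = \|u_0\|_{L^2}, \quad E(u(t)) = E(u_0), \qquad
E(v) = \tfrac12 \|\nabla v\|_{L^2}^2 + \tfrac12 \|x_N v\|_{L^2}^2 - \tfrac{1}{p+1} \|v\|_{L^{p+1}}^{p+1} .

% Stationary problem (1.3) and the action S_\omega
-\Delta \varphi + x_N^2 \varphi + \omega \varphi = |\varphi|^{p-1} \varphi, \qquad
S_\omega(v) = E(v) + \tfrac{\omega}{2} \|v\|_{L^2}^2, \qquad S_\omega'(\varphi) = 0 .

% Nehari functional and the minimization (1.6) characterizing the ground states (1.5)
K_\omega(v) = \partial_\lambda S_\omega(\lambda v) \big|_{\lambda=1}
            = \|\nabla v\|_{L^2}^2 + \|x_N v\|_{L^2}^2 + \omega \|v\|_{L^2}^2 - \|v\|_{L^{p+1}}^{p+1}, \qquad
d(\omega) = \inf \{ S_\omega(v) : v \in X \setminus \{0\}, \ K_\omega(v) = 0 \} .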
Instead, we use and modify the ideas of Zhang [21] and Le Coz [14], which give an alternative approach to the strong instability (see also [17,18,19] for recent developments). In Section 3, we give the proof of Proposition 2. The proof is based on the standard concentration compactness argument. Proof of Theorem 1 We define First, we derive a virial type identity. Proof. We state formal calculations for the identity (2.1) only. These formal calculations can be justified by the classical regularization argument as in [6, Proposition 6.5.1] (see also [16]). Let u(t, x) be a smooth solution of (1.1). Then, we have Moreover, we have Here, we consider the scaling and Thus, we have As stated above, these formal calculations can be justified by the regularization argument. Once we have obtained Lemma 1, the rest of the proof is the same as in the classical argument of Berestycki and Cazenave [3]. is invariant under the flow of (1.1). That is, if u 0 ∈ B ω , then the solution u(t) of (1.1) with u(0) = u 0 satisfies u(t) ∈ B ω for all t ∈ [0, T max ). Proof. This follows from the conservation laws (1.2), Lemma 1, and the continuity of the function t → P (u(t)). Proof. Let u 0 ∈ B ω ∩ Σ and let u(t) be the solution of (1.1) with u(0) = u 0 . Then, it follows from Lemma 2 and Proposition 3 that Moreover, by the virial identity (2.1), the conservation laws (1.2) and Lemma 1, we have 1 16 Finally, we give the proof of Theorem 1. We define Note that by (1.8), there exists a positive constant C 0 depending only on ω and p such that We also remark that by (3.1) and (1.6), we have Proof. Let v ∈ X satisfy K ω (v) = 0 and v = 0. Then, by K ω (v) = 0, the Sobolev inequality and (3.2), there exist positive constants C 1 and C 2 depending only on N, p and ω such that Since v = 0, we have J ω (v) > 0 and J ω (v) (p−1)/2 ≥ 1/C 2 . Thus, by (3.3), we have This completes the proof. The following lemma is a variant of the classical result of Lieb [15] (see also [2,Lemma 3.4]). Lemma 5. Assume that a sequence (u n ) n∈N is bounded in X, and satisfies lim sup n→∞ u n p+1 L p+1 > 0. Then, there exist a sequence (y n ) n∈N in R N −1 and u ∈ X \ {0} such that (τ y n u n ) n∈N has a subsequence which converges to u weakly in X. Then, by the definition of C 3 , we see that for any n ∈ N, there exists y n ∈ Z N −1 such that Here, we define v n = τ −y n u n . Then, we have for all n ∈ N. In particular, v n p+1 L p+1 (Q 0 ) > 0 for all n ∈ N. Moreover, by the Sobolev inequality, we have for all n ∈ N, where C 4 is a positive constant depending only on N and p. Thus, we have Since (v n ) n∈N is bounded in X, there exist a subsequence (v n ′ ) of (v n ) and u ∈ X such that (v n ′ ) converges to u weakly in X. We define the set of all minimizers for (1.6) by Lemma 6. The set M ω is not empty. Proof. Let (u n ) be a sequence in X such that K ω (u n ) = 0, u n = 0 for all n ∈ N, and S ω (u n ) → d(ω). Then, by (3.2) and J ω (u n ) = S ω (u n ) → d(ω), we see that the sequence (u n ) n∈N is bounded in X. Moreover, it follows from K ω (u n ) = 0 and Lemma 3 that Thus, by Lemma 5, there exist a sequence (y n ) in R N −1 , a subsequence of (τ y n u n ), which is denoted by (v n ), and v ∈ X \ {0} such that (v n ) converges to v weakly in X. By the weakly lower semicontinuity of J ω , we have (3.6) Moreover, by the Brezis-Lieb Lemma (see [4]), we have On the other hand, by v = 0 and (3.2), we have J ω (v) > 0. This is a contradiction. Thus, we obtain K ω (v) ≤ 0. Finally, we give the proof of Proposition 2.
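Similarly, the scaling (2.2) and the virial-type identity (2.1) invoked in the proof sketch above are missing from the extracted text. A sketch of the standard choice when the confinement acts only in the x_N direction is given below; the exact numerical constant in the virial identity is an assumption.

% Partial scaling (2.2): dilate only the unconfined variables \bar{x} = (x_1, ..., x_{N-1})
v^\lambda(x) = \lambda^{(N-1)/2} \, v(\lambda \bar{x}, x_N), \qquad \lambda > 0,
\qquad \|v^\lambda\|_{L^2} = \|v\|_{L^2}, \quad \|x_N v^\lambda\|_{L^2} = \|x_N v\|_{L^2} .

% Virial functional: derivative of the action along this scaling
P(v) = \frac{d}{d\lambda} S_\omega(v^\lambda) \Big|_{\lambda=1}
     = \|\nabla_{\bar{x}} v\|_{L^2}^2 - \frac{(N-1)(p-1)}{2(p+1)} \|v\|_{L^{p+1}}^{p+1} .

% Virial-type identity (2.1) for the partial variance (numerical constant assumed)
\frac{d^2}{dt^2} \int_{\mathbb{R}^N} |\bar{x}|^2 \, |u(t,x)|^2 \, dx = 8 \, P(u(t)) .

Under the condition p >= 1 + 4/(N-1) stated in the abstract, negativity of P along the flow typically forces the partial variance to become concave, which is how blowup is usually concluded in arguments of this type.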
tokens: 2,079 | created: 2017-06-07 | fields: [ "Mathematics", "Physics" ]
Label-free identification of individual bacteria using Fourier transform light scattering Rapid identification of bacterial species is crucial in medicine and food hygiene. In order to achieve rapid and label-free identification of bacterial species at the single bacterium level, we propose and experimentally demonstrate an optical method based on Fourier transform light scattering (FTLS) measurements and statistical classification. For individual rod-shaped bacteria belonging to four bacterial species (Listeria monocytogenes, Escherichia coli, Lactobacillus casei, and Bacillus subtilis), two-dimensional angle-resolved light scattering maps are precisely measured using FTLS technique. The scattering maps are then systematically analyzed, employing statistical classification in order to extract the unique fingerprint patterns for each species, so that a new unidentified bacterium can be identified by a single light scattering measurement. The single-bacterial and label-free nature of our method suggests wide applicability for rapid point-of-care bacterial diagnosis. Introduction Rapid and accurate identification of bacterial species is crucial for diagnosing infectious diseases or screening for poisoning sources in food. Bacterial infection causes a number of severe diseases and syndromes, e.g. sepsis, meningitis, pneumonia, and gastroenteritis, which all require immediate and appropriate treatment based on the detection and identification of the bacterial species [1][2][3]. However, conventional techniques for bacterial identification have significant limitations, mainly due to the long procedural time required for identification. The present gold standard, bacterial blood culture followed by susceptibility testing for drug resistance, requires days for preliminary results and is often negative for fulminant cases. Recently-developed methods based on real-time quantitative polymerase chain reaction (qPCR) followed by sequencing are faster and more robust; however, they still require hours and are often too expensive for resource-limited environments [4][5][6]. Consequently, the treatment of many important diseases and syndromes (such as sepsis, of which mortality rates increase by approximately 9% per hour before appropriate antibiotic therapy [7]) typically employs wide-spectrum antibiotic therapy based on clinical experience. This strategy is inefficient compared to accurately species-targeted therapy and carries the potential dangers of severe side effects or the emergence of antibiotic-resistant microorganisms [8,9]. Various non-invasive optical approaches have been studied to address this issue, as the root cause limiting the speed of the aforementioned biochemical techniques is lengthy and intensive sample processing. Among spectroscopic methods, Fourier transform infrared spectroscopy (FTIR) of bacterial colonies [10][11][12] and Raman scattering (and also surfaced-enhanced Raman scattering) of individual bacteria [13][14][15][16] were extensively investigated. The FTIR approach simplifies the sample processing compared to biochemical methods, though its speed is still limited, as the colonies should be cultured. The Raman scattering method presents excellent sensitivity of the single bacterium level that enables bypassing the culturing step; however, the signals are too weak for high-throughput investigation. Employing the surface enhancement effect compensates for this shortcoming at the cost of labeling with exogenous agents such as metallic nanostructures [17]. 
In addition to the spectroscopic methods, angle-resolved light scattering (ALS) based on spatial light information has been investigated as well at three different levels: bacterial suspensions, bacterial colonies, and individual bacteria. ALS of bacterial suspensions was pioneered by Wyatt and colleagues in the 1960's [18][19][20] and has been intensively studied in recent years [21][22][23][24][25][26]. This ALS approach possess a similar advantage and disadvantage with those of FTIR: simplified but still lengthy sample preparation. ALS of individual bacteria has received relatively little research attention [27][28][29][30][31], probably because certain characteristics of single-bacterial ALS measurement (extremely small scattering cross section, wide scattering angle range, and significantly high dynamic range) make this method technically challenging [32]. Despite these difficulties, ALS of individual bacteria promises a powerful advantage: label-free identification at the single bacterium level. Here, we present a novel ALS-based identification of individual bacteria using Fourier transform light scattering (FTLS). Quantitative phase imaging (QPI) of individual bacteria in combination with FTLS technique enables single-shot measurement of single-bacterial two-dimensional (2D) ALS maps covering a broad angular range with unprecedented precision, as recently demonstrated by our group [32]. The measured ALS maps, which essentially reflect cellular and subcellular structures and compositions [33], are then analyzed for statistical classification; the ensembles of ALS maps from multiple bacterial species are systematically analyzed to extract the unique fingerprint patterns for each species, so that a new unidentified bacterium can be identified by a single light scattering measurement. Our experimental results demonstrate that detectable and significant patterns in the 2D light scattering spectra can distinguish individual bacteria belonging to four rod-shaped species (Listeria monocytogenes, Escherichia coli, Lactobacillus casei, and Bacillus subtilis), which are indistinguishable with only cellular shapes, achieving cross-validation accuracy higher than 94%. The single-bacterial and label-free nature of our method suggest wide applicability for rapid point-of-care bacterial diagnosis. Model problem: Four rod-shaped bacterial species In order to demonstrate the capability of the present approach, we set and solve a virtual clinical case for the proof-of-concept study: the objective is to identify the species of an unidentified bacterial pathogen, i.e. a single specific bacterium isolated from the sample that is known to belong to one of the four rodshaped bacterial species, Listeria monocytogenes. Escherichia coli, Lactobacillus casei, and Bacillus subtilis. These four species, including both Gram-positive and Gram-negative ones, were chosen for this study as the rod-shaped species are clinically important in general, while they exhibit similar shapes and sizes that make them indistinguishable via simple observation of cellular morphology using conventional bright-field microscopy or phase contrast microcopy. Although this model problem requires a priori knowledge that the target bacterium belongs to one of the four species, the method described throughout this report can be readily extended to a more general approach (see Discussion and Conclusions). 
In order to demonstrate that the simple morphological information is not sufficient to identify the bacterial species, we illustrate the size distribution of the individual bacteria (characterized by the lengths of major and minor axes of the rod-shaped bacterial cells) in unsynchronized growth states belonging to the four species in Fig. 1 (see the following sections for the imaging method). The size and aspect ratio of the cells in these species are similar to one another, and this result clearly implies that it is impractical to distinguish these bacterial species using morphological parameters, which is the typical accessible information using conventional bright-field microscopy or phase contrast microscopy. It is impractical to identify the species from only cellular shapes, which are observable using conventional optical microscopy. Preparation of individual bacteria samples Four bacterial species, L. monocytogenes, E. coli, L. casei, and B. subtilis, were prepared according to the following procedures.  L. monocytogenes strain (10403S) was grown on Brain-Heart Infusion agar plates without antibiotics.  E. coli strain (ER 2738) with tetracycline-resistance was grown on Luria-Bertani agar plates with tetracycline (50 mg/mL).  L. casei strain (KCTC 2180) was grown on MRS agar plates without antibiotics.  B. subtilis strain (KCTC 1023) was grown on nutrient agar plates without antibiotics. After overnight culturing in a 37°C incubator, a few tips of solid-cultured bacterial colonies were taken and then put into the respective Eppendorf-tubes. A Dulbecco's Phosphate Buffered Saline solution (LB 001-12, Welgene, Republic of Korea) was added in each tube to prepare a bacterial solution. After slight vortex mixing, a small volume (10 μL) of the bacterial solution was sandwiched between standard microscopic cover glasses (C024501, Matsunami Glass Ind., Japan) with a spacer made of double-sided tape with a thickness of 20-30 μm. Imaging was performed after the bacterial cells settled down to the bottom, but no later than 15-20 minutes after preparation. The aforementioned dilution step was performed until the bacterial cells were spread into a single layer when being imaged. Quantitative phase imaging of individual bacteria In order to precisely measure the 2D light scattering maps from individual bacteria, we employed QPI and FTLS techniques [32,[34][35][36][37]. For the first step, the optical field maps (both amplitude and phase maps of the transmitting light field) of isolated individual bacteria are precisely measured using QPI [34,35]: through the sample, at each lateral position. At room temperature, we imaged 67, 69, 78, and 55 individual bacterial cells for L. monocytogenes, E.coli, L. casei, and B. subtilis, respectively, which were prepared as described in the previous section as unsynchronized growth states. Here we employed diffraction phase microscopy (DPM), one of the QPI techniques, for optical field imaging [38,39]. DPM is a common-path interferometry with off-axis spatial modulation, and it provides single-shot full-field imaging capability and high sensitivity for phase measurement. Briefly, a diodepumped solid state laser (λ = 532.1 nm, Cobolt Samba, Sweden) was used as the illumination source. An inverted microscope (IX71, Olympus American Inc., USA), equipped with a high numerical aperture (NA) objective lens (UPLFLN 60X, 1.42 NA, oil-immersion, Olympus American Inc.), was modified with additional optics to be used for a DPM setup. 
With the additional relay optics, the overall magnification of the system was ×200. A CMOS camera (Neo sCMOS, Andor, UK) was used to record the holograms of the samples. The optical field images were reconstructed from the recorded holograms using a custombuilt field retrieval algorithm implemented in MatLab [40,41]. Each bacterial cell in a field-of-view can be isolated by generating image windows based on phase values for subsequent single-bacterial analyses. More details on the setup and principles of DPM can be found elsewhere [38,39]. The representative amplitude ( ) [32]. The importance of simultaneous measurements of both maps are described in the following sections. The pseudo-FTLS maps by numerical propagation (d) without amplitude information, and (e) without phase information. The results indicate that phase information, a unique feature of QPI, is central to single-bacterial characterization rather than easily accessible amplitude information. Fourier transform light scattering of individual bacteria The measured optical field maps of individual bacteria ( ) E r  are utilized to calculate the single-bacterial ALS maps using FTLS technique [36,37]. FTLS is based on the numerical propagation of the optical field in the sample plane by simply Fourier transforming it to generate the far-field light scattering map: where u  is the lateral spatial frequency vector. A single QPI measurement enables us to attain the corresponding 2D light scattering map at a broad angular range [36,37]. FTLS is the spatial analogy of FTIR, so it provides an unprecedented signal-tonoise ratio due to Fellgett's advantage [42]. The upper limit of detectable light scattering angle is determined by the NA of the objective lens; our system effectively covers the scattering angle ranging from -70° to 70° [32]. It is also worth noting that the measured optical field maps of isolated individual bacteria are numerically centered, rotated, and zero-padded prior to Fourier transformation; this process ensures species identification based on light-cell interaction, independent of the orientation of cells. The scale of zero padding should be appropriately chosen for sufficient but not excessive (a large number of pixels is computationally expensive for the following image analyses) angular resolution of the FTLS maps. [34][35][36][37][43][44][45]. To re-emphasize that the FTLS maps are not merely determined by the cellular morphology, which can also be obtained with conventional bright-field microscopy or phase contrast microscopy, we calculated the pseudo-FTLS maps of the individual bacterium, as shown in Figs , as illustrated in Figs. 2(d) and 2(e), respectively. This result clearly demonstrates that the optical phase information, which is the unique advantage of QPI, is crucial for obtaining the correct ALS spectra. and the amplitude only information only provide limited information. Statistical classification of FTLS maps: Overall procedure The measured FTLS maps were then systematically analyzed in order to extract the unique fingerprint patterns for each bacterial species so that a new unidentified bacterium can be identified by a single FTLS measurement. This training-and-identification strategy is called statistical classification, or supervised machine learning, which is one of the most rapidly evolving fields in computer science with diverse applications including bioinformatics [46]. 
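As a concrete illustration of the measurement-side computations described above (retrieving the complex field from the recorded off-axis hologram, then obtaining the FTLS map by Fourier transformation of the centered, zero-padded field), a minimal Python sketch follows. It is a generic stand-in, not the authors' MATLAB implementation; the sideband location, crop radius and zero-padding factor are assumptions that depend on the actual optical setup.

import numpy as np
from skimage.restoration import unwrap_phase

def retrieve_field(hologram, sideband_center, crop_radius):
    # Reconstruct the complex field from an off-axis hologram: isolate the
    # +1-order sideband in the Fourier plane, shift it to the origin
    # (demodulation) and transform back to real space.
    rows, cols = hologram.shape
    F = np.fft.fftshift(np.fft.fft2(hologram))
    r, c = np.ogrid[:rows, :cols]
    mask = (r - sideband_center[0]) ** 2 + (c - sideband_center[1]) ** 2 <= crop_radius ** 2
    sideband = np.roll(F * mask,
                       (rows // 2 - sideband_center[0], cols // 2 - sideband_center[1]),
                       axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    phase = unwrap_phase(np.angle(field))      # remove 2*pi phase wraps
    return np.abs(field) * np.exp(1j * phase)

def ftls_map(field, pad_factor=4):
    # Far-field (angle-resolved) scattering map: center the field of one
    # isolated bacterium in a zero-padded array and take its 2D Fourier
    # transform; the padding factor only sets the angular sampling.
    n = pad_factor * max(field.shape)
    padded = np.zeros((n, n), dtype=complex)
    r0, c0 = (n - field.shape[0]) // 2, (n - field.shape[1]) // 2
    padded[r0:r0 + field.shape[0], c0:c0 + field.shape[1]] = field
    far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(padded)))
    return np.abs(far_field) ** 2              # scattered intensity vs. spatial frequency

The detectable spatial frequencies, and hence the accessible scattering angles, are bounded by the objective NA, as noted in the text.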
Statistical classification has been previously exploited in various forms in most of the optical methods for bacterial identification briefly reviewed in the Introduction [10-16, 21-23, 26, 28-31]. The overall procedure for the proposed identification scheme is summarized in Fig. 3. After the FTLS measurement, as described in previous sections, the training step is performed as follows. First, the variables describing each FTLS map, which are called features, are extracted by principal component analysis (PCA). The extracted features are then selected and optimized to construct a statistical classification model (classifier) based on linear discriminant analysis (LDA). This PCA-LDA method has been widely utilized in several statistical classification problems [47]. Here, this PCA-LDA process is conducted for species classification; the unique fingerprint patterns from light scattering spectra are efficiently extracted by selecting the species-dependent features and excluding the features with high relevance to the cell-to-cell shape variations within each species that arise from cell growth and division. After completion of the training step, a new unidentified bacterium can be instantly identified by a single FTLS measurement. We assess the identification accuracy via cross-validation. The following sections describe each step of this process in detail. Fig. 3. Overall procedure for single-bacterial identification. 2D ALS maps of isolated individual bacteria are measured using FTLS. The measured FTLS maps are then systematically analyzed in order to extract the unique fingerprint patterns for each species (classifiers) so that a new unidentified bacterium can be identified by a single light scattering measurement. Feature extraction from light scattering spectra In order to effectively extract the features from light scattering spectra, the region of interest (ROI) containing the bandwidth limited information (i.e. scattered light collected by the objective lens with specific NA) in the measured 2-D FTLS maps is selected. As illustrated in Fig. 2(c), the ROI is defined as a circle whose diameter corresponds to the maximum spatial frequency, which corresponds to NA/λ where λ is the wavelength of the illumination. For the QPI system used in this study, a typical ROI is composed of approximately 10 4 pixels. This high dimensionality of the ROI is computationally expensive, and the analysis would suffer from the 'curse of dimensionality', meaning that the predictive power of classifiers decreases as the dimensionality increases [48]. Thus, the number of variables describing the ROI must be adequately reduced in the following feature extraction procedure. In order to represent a FTLS map as a linear combination of a small but sufficient number of uncorrelated principal patterns, we employed PCA [49]. In principle, the number of principal patterns used to express the scattering spectra is the same as the pixel numbers in the ROI, because PCA is essentially an orthogonal basis transformation. However, because most information or variance of the original pattern is contained in the first few principal patterns after the PCA, we can significantly reduce the dimensionality to manageable numbers for statistical classification. For this study, we performed PCA on the single-bacterial FTLS maps composed of 5,525 pixels each to generate 250 principal patterns and the corresponding principal pattern coefficients, preserving 99.9% of the original variance or information. 
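A minimal sketch of this feature-extraction step is given below, using scikit-learn as a stand-in for the authors' implementation (the library choice and variable names are assumptions); the matrix shape and component count follow the numbers quoted above.

import numpy as np
from sklearn.decomposition import PCA

def extract_features(X, n_components=250):
    # X: one row per bacterium, one column per ROI pixel of its FTLS map,
    # e.g. shape (269, 5525) for the 67+69+78+55 cells and ~5,525-pixel ROI.
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                   # principal-pattern coefficients
    retained = pca.explained_variance_ratio_.sum()  # ~0.999 of the variance per the text
    return scores, pca.components_, retained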
The first 16 principal patterns are presented in Fig. 4(a), and the full spectra of the principal pattern coefficients for all investigated individual bacteria are illustrated in Fig. 4(b). Well-established mathematical details of PCA and its applications for biological image analysis can be found elsewhere [49][50][51]. Training classifiers with optimized features Our final goal is to construct the classification model (classifier) using the extracted features from the FTLS maps. Here we employed LDA, one of the most traditional and robust linear classifiers [52,53]. LDA draws linear discriminant hyperplanes as optimal boundaries between the bacterial species when each individual bacteria is expressed as a point in the feature space. The features exploited for LDA should be carefully selected and optimized to maximize the accuracy of identification as described below. Since LDA is a linear classifier, the computation is sufficiently fast to enable the iterative construction of the classification models with various subsets of features and samples. After constructing a classification model, we can assess the accuracy of single-bacterial identification via leave-one-out cross-validation [54]. We construct the classifier using the data from all bacterial cells except one and then apply the classifier to check if it identifies the excluded bacterium correctly. Through repeating this process for all individual bacteria, we can measure the identification accuracy. It is mathematically proven that leave-one-out cross-validation is a precise estimation of the real accuracy to identify independently measured data [46]. For the first step for feature selection, we rank the principal patterns according to their usefulness for statistical classification. Because principal patterns are linearly uncorrelated, we can independently investigate each principal pattern, as shown in each column of Fig. 4(b). We employed the analysis of variances (ANOVA), which is a generalization of the Student's t-test into more than two groups, to measure the species-distinguishing capability of each principal pattern [55]. Since the values in each column of Fig. 4(b) belong to one of the four species, we can calculate the p-value between the species for each column that estimates the discriminating power of each principal pattern. In Fig. 5(a), we sorted the principal patterns in the order of ascending p-values. The principal patterns with p-values close to zero are more informative for classification, whereas the principal patterns with p-values close to one are noisy (i.e. these patterns may result in overtraining that lowers accuracy [46]), as illustrated in Figs. 5(b) and (c), respectively. This result establishes a heuristic strategy for feature selection [46]: if the principal patterns are added in a consecutive or accumulative manner, from a low principal pattern index to a high principal pattern index (sorted by p-values), as features for statistical classification, the accuracy of the identification based on each classification model will be maximized at a certain optimal point in the middle. This expected tendency was observed in our data, as shown in Fig. 5(d). The coarse-grained cross-validation accuracy consists of eight detailed parameters (two parameters per species) of identification accuracy: sensitivity (true positive results over all positive inputs) and specificity (true negative results over all negative inputs) for each species. 
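The feature ranking, classifier training and cross-validation described above can be sketched as follows; scikit-learn and SciPy are again used here as stand-ins, and the number of retained features is a free parameter that would be scanned as in Fig. 5(d).

import numpy as np
from scipy.stats import f_oneway
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def rank_and_validate(scores, labels, n_features=40):
    # scores : (n_cells, n_components) principal-pattern coefficients from PCA
    # labels : (n_cells,) species label of each bacterium
    # One-way ANOVA per component: small p-value = strong species separation.
    pvals = np.array([
        f_oneway(*[scores[labels == s, j] for s in np.unique(labels)]).pvalue
        for j in range(scores.shape[1])
    ])
    order = np.argsort(pvals)                  # ascending p-values, as in Fig. 5(a)

    # LDA on the top-ranked components, assessed by leave-one-out cross-validation
    X = scores[:, order[:n_features]]
    accuracy = cross_val_score(LinearDiscriminantAnalysis(), X, labels,
                               cv=LeaveOneOut()).mean()
    return order, accuracy

Per-species sensitivity and specificity would then be read off the confusion matrix of the cross-validated predictions, as reported in Fig. 6.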
Selecting a classification model with high values of some parameters may give low values for other parameters, and vice versa. Thus, the final selection of the classifier (or feature optimization for classifier training) should consider the specific purpose of application. For example, we might be interested simply in whether a given bacterium is pathogenic L. monocytogenes or not (see Discussions and Conclusions). The identification results are illustrated in Fig. 6. Here, we selected a classification model with homogeneously high values of the accuracy parameters. The overall cross-validation accuracy was 94.05%, with sensitivities of 95.52%, 95.65%, 88.46%, and 98.18% and specificities of 99.51%, 96.50%, 97.91%, and 98.13% for L. monocytogenes, E. coli, L. casei, and B. subtilis, respectively. Fig. 6. Single-bacterial identification with the optimized classifier. The proportion of each output species for a certain input species, where leave-one-out cross-validation is utilized to precisely mimic the independently measured data, is plotted. The overall accuracy was 94.05%, with sensitivities of 95.52%, 95.65%, 88.46%, and 98.18% and specificities of 99.51%, 96.50%, 97.91%, and 98.13% for L. monocytogenes, E. coli, L. casei, and B. subtilis, respectively. Discussions and conclusions We have presented a novel optical method for label-free species identification of individual bacteria using FTLS and statistical classification. Four rod-shaped bacterial species (L. monocytogenes, E. coli, L. casei, and B. subtilis) were indistinguishable using only their cellular shapes due to the limited information in intensity images and the cell-to-cell variations arising from cell growth and division. To address this difficulty in species classification, we have exploited the single-bacterial 2D ALS maps measured by FTLS to systematically extract the unique fingerprint patterns for each species, or the optimized LDA classifier. Then, a new unidentified bacterium can be identified by a single light scattering measurement followed by application of the classifier with accuracy higher than 94% as assessed by cross-validation. The single-bacterial and label-free nature of our method implies wide applicability for rapid point-ofcare bacterial diagnosis. As briefly reviewed in the Introduction, the ALS of individual bacteria has significant advantages for rapid and label-free identification of bacteria, as compared to other existing optical or biochemical approaches. Though several groups have demonstrated similar identification schemes based on single-bacterial ALS measurement and statistical classification [28][29][30][31], the present approach provides ALS measurements in a wide angular range with unprecedented precision [32]. The importance of broad angular range (especially high scattering angle) for bacterial species characterization lies in the capability of accessing the light interacting with small subcellular organelles, which is analogous to side light scattering in flow cytometry analysis. In addition, the present method is based on a singleshot measurement; after the statistical classification for the target species, only a single-shot FTLS measurement per bacterium is required for species identification. Simply attaching a compact QPI unit to a conventional high-speed imaging flow cytometry will enable high-throughput analysis for immediate and efficient therapy [56]. This system can be even miniaturized for point-of-care diagnosis [57]. 
In particular, the present single bacterium approach can be exploited for identifying unculturable bacteria and avoiding inter-species competition while culturing heterogeneous populations. Furthermore, our concepts can be readily extended to other morphologies such as spherical and spiral bacterial species and will hopefully lead to the addition of other types of microbes or general pathogens to the optical identification regime. Moreover, the non-invasive nature of the proposed method enables parallel investigations for more detailed single bacterium profiling. For instance, the use of various QPI modalities, such as spectral [58][59][60], polarimetric [61], synthetic [62], tomographic [63][64][65], and reflection [66] QPI, will further improve the accuracy and applicability of the proposed method. For more efficient therapy, our method might be co-administered with more robust genotype-oriented qPCR methods [67] or speciesindependent bacterial filtering therapy [68]. There are several limitations of the present method to be addressed in future works. As is common for all single cell approaches, the purification of samples is required to avoid interference by non-bacterial or abiotic components. We expect that droplet microfluidics on a chip might be a solution compatible with the aforementioned point-of-care scheme [69]. In addition, while differentiating viable bacterial cells from dead ones is important for clinical setting, both optical and biochemical methods have limitations. In addition to the technical issues, there are several points to be improved for constructing more a robust bacterial ALS fingerprint database for the practical implementation of our method. Here, we demonstrated the systematic extraction of species-dependent information through overcoming the cell-tocell shape variations, which are 'noise' to identification. However, the following noises also exist and must also be overcome: cell strains within species and environmental factors that alter phenotypes or gene expression including temperature [70], pH [71], osmolality [72], and host-pathogen interactions [73]. Experiments with diverse cell strains under various physiological conditions should be performed in order to suppress these noises and to construct a robust fingerprint database prior to practical implementation of the present method. Since many noise sources contribute simultaneously, employing nonlinear classifiers, such as artificial neural networks [74] or nonlinear support vector machines [75], can be employed to improve the accuracy of identification. Furthermore, the current requirement of a priori knowledge of the species range also needs to be overcome. One strategy is to focus on several pathogenic species that cause major diseases. Another strategy is tree-structured phenotype classification: constructing a classifier for bacteria or non-bacteria, a classifier for morphological categories, a classifier for Gram-positive or Gramnegative bacteria, a classifier for genus, and so on. Tree-structured classification strategy is known to be effective for multiclass classification problems [47]. All of these routine tasks can be conducted in a specialized lab and can be uploaded on the web for wide usage of light scattering spectra for bacterial identification.
tokens: 5,474 | created: 2015-02-18 | fields: [ "Biology" ]
GENERATION OF A SUITABLE SURFACE OR VOLUME MESH FROM SCAN DATA A procedure to generate a suitable surface and volume meshes from image sequences or other scan data is presented. The methodology gives preference to readily available free software for the final goal of using the generated mesh for computational fluid dynamics or finite element simulations. The steps involve the extraction of a surface mesh from the image sequence, segmentation, trouble-shooting, treatment, refinement/coarsening, smoothing, translation into a volume mesh and post-processing. The user controls the detail of the final mesh through a series of algorithms included in the suggested software. The methodology is illustrated with a computer microtomography data of a carbonate rock. Finally, the mesh is imported in ANSYS ® Fluent to demonstrate that the resulting mesh can be used in simulations. Introduction Reconstructing a suitable volume mesh from image sequences taken from microcomputed tomography (micro-CT) or magnetic resonance imaging (MRI) is a recurrent challenge for fields that deal with objects that have complex geometry or internal or structure, e.g.porous media, internal organs, microstructured devices.These simulations and models are meant for geological applications, developing new chemical products, surgery planning, diagnostics, additive manufacturing (e.g.3D printing) and more. Many commercial software products are able to perform all or several steps described in the procedure, e.g.Amira ® [5], Simpleware ® [32], Avizo ® [5], 3D Doctor ® [1], MATLAB ® [19].Their license costs are, however, prohibitively expensive for occasional projects or small research groups so one of the major concerns in this study is to suggest some free alternatives to complete this task.These free software products usually perform just one or two steps of the procedure, however, they are open-source, which gives the opportunity for developers to tailor the code to their specific needs. The steps involve a series of manipulations that may lower the resolution of the original dataset in order to create a manageable working mesh for the computer fluid dynamics (CFD) simulator, finite element analysis (FEA) or 3D printing.These operations are not mandatory but they are advisable depending on the hardware, time constraints and application.In addition, only relevant features that were of great use in the present work have been described.The information is not exhaustive and the pertinent documentation should be consulted to look for the specifics of each algorithm. 
Some of the programs mentioned in this work do not offer a directly executable file so the user has to manually compile the code from the source code files in order to produce a .exefile in Windows (e.g.Tetgen [35]).This can be done by making a project for a console application in Codeblocks [7] after downloading and installing MinGW [23].Note that the flags static libgcc, static libstdc++ and static linking should be checked in the compiler settings before building the project.If the interface of the program is directly in the Windows command prompt (e.g.Tetgen [35]), execute cmd to call the prompt, write cd to change the directory to the .exefile to implement the commands.Generates constrained Delaunay tetrahedralizations, Voronoi partitions and reports the details of the output mesh.In addition, it has mesh manipulation tools for both refining and coarsening.If used as a standalone program, its output can be visualized in Tetview, also available from the same developer. ImageJ [14] 1.51j8 with Java 1.8.0_112 Open-source, Java-based software specialized in image processing tools.Its versions and user-written plugins extend its already impressive capabilities by a large amount.Moreover, it can import, read and write many file formats, turning it into an excellent conversion tool.Furthermore, this software is able to remove a selected portion of the image geometrically or using intensity values, alter the intensity values in the image, manipulate the order of images in a stack and more.iso2mesh [16,29] 2017 MATLAB [19] 2015b Versatile commercial software that excels in matrix operations and numerical computing.This software has interface with many other programming languages such as C, C++, Python, Java and Fortran.Uses a high-level programming language and is extensible by a plethora of toolboxes that may be user-written and free (e.g.iso2mesh, stlTools [34]) or developed by Mathworks and sold separately, e.g.Image Processing Toolbox. Meshlab [21] 64bit v1.3.4BETAOpen-source software for the processing, manipulating and fixing of unstructured 3D meshes, offering a plethora of filters and rendering options.Also displays the quality of the mesh according to a criterion (e.g.area/max side). Meshmixer [22] v3.4.35 Free software with powerful mesh generation and manipulation tools.Supports 3D printing and offers a very practical tool for inspecting the mesh for errors and correcting them accordingly, which is useful when correcting bad surface meshes. Extract information from the sequence of images: generate a surface mesh The first step is to segment the representation of the image sequence into something meaningful or easier to analyze.This step determines the boundary of the object, which separates what is of interest in the image sequence.For example, the greyscale scan data is processed by selecting appropriate upper and lower thresholds to binarize the image sequence, isolating the part that ultimately will be converted into the surface or volume mesh.Threshold values serve to differ the shades of gray and can be in many scales, e.g.Hounsfield units (a quantitative scale for describing radiodensity) or pixel values (a number between 0 and 255), where each pixel is be represented by 8 bits. 
This step is mostly visual, as the segmented part normally appears on the screen of the software and the user can choose a threshold value via trial and error.The thresholding can be done via ImageJ [14], along with other features such as enhance contrast and other image processing tools.There are also variations of ImageJ (e.g.Fiji , BoneJ, Bio7) and the official site offers several plugins that extend this software capabilities.The resulting image can be exported as a Raw Binary file (.raw) to make it readable by ITK-SNAP [17].ImageJ also provide means to convert the image sequence into other formats, like .tif or .tiff,creating an image stack.Depending on format of the original files (e.g.DICOM .dcm), the segmentation procedure can be performed directly in ITK-SNAP, which has a more user-friendly interface. The next step in segmentation is carried out in ITK-SNAP, importing the Raw Binary file exported by ImageJ.ITK-SNAP will request the quantity of pixels in each direction of the imported image file, which is available in ImageJ on the top left side of the screen.ITK-SNAP is equipped with a segmentation feature called Active Contour (a.k.a.Snake) [26], which is divided into 3 sub-steps. The first substep is the presegmentation, which selects of the upper and lower threshold values.ITK-SNAP also allows the user to include the intensity of the neighboring voxels (i.e.3D pixels) as a criterion in the classification. For the second sub-step, called bubble growth, the user has to place bubbles in any of the three planes that will grow into the region of interest (ROI).Depending on the complexity of the ROI, adding enough bubbles to capture a good representation of the whole object can be a time-consuming task.In addition, the ITK-SNAP allows a stepwise region of interest segmentation so the user can also circumvent the problem of adding undesired parts into the final ROI, which may happen due to parts that are too similar in terms of pixel intensity.Thus, it may be helpful to add small regions of interest systematically and update the ROI instead of completing the entire segmentation at once. The third sub-step is the evolution of the bubbles until they hit the boundaries of the object.The user can accept the representation after inspecting if the algorithm could capture a good approximation of its actual shape.If there are regions that the bubbles are taking too much time to reach, the user can go back one sub-step and add more bubbles in the problematic region.Once done, ITK-SNAP can export the obtained ROI as a surface mesh in Stereolithography (.stl) format.This semi-automated routine will also generate a suitable surface mesh that is much easier to process later on than the surface mesh directly extracted from the image sequence, an option that also exists in ImageJ. Mesh processing and repairing The segmentation and surface mesh extraction process explained in ITK-SNAP produces a better surface mesh because this eliminates most of undesired details of the object.The resulting surface mesh, however, must be processed to reduce the number of elements and fix possible issues (e.g.duplicated or non-manifold elements, zero area faces, intersecting faces, holes).The aim is to obtain a workable, manifold mesh without defects and has just enough detail (elements) for the final task. 
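Before turning to mesh repair, the thresholding and surface-extraction step just described can also be sketched in a fully scripted form with Python and scikit-image. This is only an illustrative alternative to the ImageJ/ITK-SNAP workflow used in this work; the automatic Otsu threshold stands in for the visual threshold selection, and the voxel spacing is a scanner-dependent assumption.

import numpy as np
from skimage import io, filters, measure

def surface_from_stack(tiff_path, voxel_size=(1.0, 1.0, 1.0)):
    # Binarize a micro-CT image stack and extract a triangulated isosurface.
    volume = io.imread(tiff_path)                 # multi-page TIFF -> 3D array
    threshold = filters.threshold_otsu(volume)    # automatic stand-in for visual thresholding
    binary = volume > threshold

    # Marching cubes returns vertices and triangular faces of the isosurface
    verts, faces, normals, _ = measure.marching_cubes(
        binary.astype(np.float32), level=0.5, spacing=voxel_size)
    return verts, faces

As with the direct extraction in ImageJ, a surface produced this way is usually denser and noisier than the ITK-SNAP result, so the repair and decimation steps below remain necessary.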
Any corrections must be applied iteratively during each mesh reconstruction (e.g.mesh reducing, subdividing, smoothing).Mesh simplifying makes editing faster and decreases the size of the file at the cost of the quality of the surface, but also may create new problems during the operation.Thus, it is paramount to look for mesh errors at least after every major operation.Furthermore, any filters may slightly deform the topology of the mesh so it is necessary to judge if the change in geometry is acceptable. Meshlab [21] and Meshmixer [22] are free tools to for editing, cleaning, healing, inspecting and converting meshes (e.g.tri to quad).The surface mesh should be imported in both of these programs for remeshing and repairing any issues that may arise, complementing the shortcomings of each software product.Note that mesh reduction can be smarter than simply choosing a triangle budget or percentage reduction if the user opts for an algorithm like "max deviation" or "adaptive", which allows the algorithm to reduce more faces where detail is less important (e.g.flat surfaces) instead of reducing the quality of the surface uniformly.On the contrary, if the mesh is too coarse, algorithms like Butterfly subdivision in Meshlab may be convenient.In addition, enabling the wireframe view facilities troubleshooting and aids the decision-making process. Meshmixer is a powerhouse in terms of mesh manipulation tools.It can import and export surface meshes in formats other than .stl(e.g..obj,.3mf,.plyand .vrml),some of which can preserve color data.It can also scale the surface mesh in the desired units (e.g.milimeters), as .stlfiles do not store this information.Another feature includes a managing tools to delete separate shells (e.g.closed surfaces), e.g. to eliminate pieces of the object that are floating in midair.Furthermore, it is trivial to inspect the mesh for errors (e.g.holes) and correct them accordingly, if the problems are not critical.If they are, the user can try to remesh the whole model according to another criteria (e.g.mesh density and regularity) or use the reduce feature for decimating the triangles and inspect the mesh again for errors.Lastly, another robust option that can correct badly formed .stlfiles is the "make solid" function using a low offset distance, what also leads to a compromise with the quality of the mesh, especially around sharp edges. An important way to reduce the number of elements in Meshlab is the Quadric Edge Collapse Decimation, where the user can choose the output number of faces along with options to preserve topology of the mesh.Moreover, the user can check the mesh volume, area and if it is watertight by computing geometric measures.Meshlab also provides visual information on the triangle quality, i.e. using the violet-side of the spectrum to color well-shaped triangles (nearly equilateral) and using red to stress triangles with very small or large angles (distorted faces).If the mesh has too badly shaped triangles, smoothing algorithms such as the MLS Projection, Laplacian Smoothing and Taubin Smoothing may mitigate the problem.It is important to notice that Meshlab does not have an "undo" feature so the user must save the file periodically and use the preview option, when offered.If there are issues that could not be solved either in Meshmixer or Meshlab, the user may try the Netfabb Cloud [24] to repair the file. At this point, the surface mesh would be ready for 3D printing using, for example, a slicer like Simplify 3D [33] or Meshmixer itself. 
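The interactive repairing, decimation and smoothing steps above can likewise be scripted. The sketch below uses Open3D as one possible free alternative to Meshlab/Meshmixer (not a tool used in this work); the target face count and smoothing iterations are assumptions to be tuned per model.

import open3d as o3d

def clean_and_decimate(stl_in, stl_out, target_faces=100_000):
    mesh = o3d.io.read_triangle_mesh(stl_in)

    # Basic repairs, analogous to an inspector/cleaning pass
    mesh.remove_duplicated_vertices()
    mesh.remove_degenerate_triangles()
    mesh.remove_non_manifold_edges()

    # Quadric edge-collapse decimation, then Taubin (shrink-free) smoothing
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=target_faces)
    mesh = mesh.filter_smooth_taubin(number_of_iterations=10)
    mesh.compute_vertex_normals()

    print("watertight:", mesh.is_watertight())
    o3d.io.write_triangle_mesh(stl_out, mesh)
    return mesh

Whichever route is taken, the result should still be inspected visually, since automated decimation and smoothing can distort sharp features.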
2.3 Conversion from surface mesh to NURB If the intended use of the mesh is finite element analysis, one can import the surface mesh as a .stlfile into the commercial software Rhinoceros ® [30] (e.g.Rhino 6) for a reconstruction process triggered by the command MeshToNurb in Rhino. Mesh and NURB (Non-Uniform Rational B-Splines) are different ways to represent 3D objects [20], as a surface mesh is a series of facets, i.e. the surface is discretized into smaller elements.Contrastingly, NURBS are mathematical representations of the surfaces, so they can generate complex free form surfaces that are smooth per se. The command MeshToNurb will translate a faceted mesh to a degree 1 NURBS structure by creating one NURBS surface for each mesh face in the original object and assemble the result to form a polysurface [20].NURBS files are larger and have some caveats regarding their usefulness but it allows saving the resulting solid in any of the available 3D solid formats (e.g.sat) to export it to ANSYS Multiphysics [4].A complete example of a 3D reconstruction of a liver and FEA using NURBS can be found in [18]. 2.4 Conversion from surface mesh to volume mesh The process of generating an acceptable surface mesh is just a step towards creating a suitable mesh for CFD, as the surface mesh only carries information about the boundaries of the object and the CFD software cannot discretize space in terms of finite volumes, where the discretized form of important equations may be applied (e.g.Navier-Stokes, Darcy).Thus, a CFD simulation often needs a volume mesh. The most practical tools to obtain a volume mesh from a surface mesh are Tetgen and the MATLAB ® [19]/GNU Octave [11] toolbox iso2mesh [16].Using iso2mesh the user can specify the detail of the final unstructured tetrahedral mesh via the command s2m (a.k.a.surf2mesh, short for surface to mesh) by choosing the maximum volume of each tetrahedral, the algorithm used for the conversion and the ratio of elements that the user desires to keep in the final mesh.The toolbox iso2mesh also offers some commands to fix degeneracies and mesh problems, e.g.meshcheckrepair, which is capable of eliminating duplicated elements, holes and self-intersecting faces. Other advanced commands include all options Tetgen or GCAL offer, e.g.ISO2MESH_TETGENOPT='-A -q1.0a5 -C', which refines the mesh to improve quality and checks the final mesh for consistency.The user can verify the quality of the final mesh by the command meshquality.Finally, the volume mesh is saved in ABAQUS format (saveabaqus), which can be imported into the commercial ANSYS ICEM ® 17.2 and export the mesh (.mesh file) to ANSYS ® Fluent [2].Another option is saving the mesh as DXF, which can be imported by Rhino3D (savedxf). An additional toolbox, e.g.stlTools [34], is needed for reading the .stlfile and extract its information.This toolbox is available in the MATLAB Central File Exhange.Toolboxes like stlTools and iso2mesh can be installed in MATLAB ® permanently using the command pathtool or temporaly by the command addpath. It is important to notice that iso2mesh and ANSYS ® Fluent use different metrics to evaluate the quality of the final mesh.This means further processing my be required to correct eventual mesh problems, e.g.near degenerated elements.ANSYS ® ICEM 17.2 [3] offer many possibilities to smooth the mesh acording to a user-defined quality parameter and and repair the volume mesh globally. 
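As a generic illustration of what a meshquality-style check computes, the sketch below evaluates a simple normalized quality for each tetrahedron (equal to 1 for a regular tetrahedron and approaching 0 for degenerate elements). Note that iso2mesh and ANSYS Fluent each use their own metrics, so this is only an example of the idea, not their definitions.

import numpy as np
from itertools import combinations

def tet_quality(nodes, elems):
    # nodes : (n_nodes, 3) vertex coordinates
    # elems : (n_tets, 4) vertex indices per tetrahedron
    # Quality = 6*sqrt(2)*volume / (rms edge length)^3, one value per element.
    p = nodes[elems]                                       # (n_tets, 4, 3)
    vol = np.abs(np.linalg.det(p[:, 1:] - p[:, :1])) / 6.0

    edge_sq = np.zeros(len(elems))
    for i, j in combinations(range(4), 2):                 # the 6 edges of each tet
        edge_sq += np.sum((p[:, i] - p[:, j]) ** 2, axis=1)
    l_rms = np.sqrt(edge_sq / 6.0)

    return 6.0 * np.sqrt(2.0) * vol / l_rms ** 3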
Results and discussion The procedure explained in the previous sections was applied to a micro-CT scan performed by LAMIR-UFPR (Laboratório de Minerais e Rochas) of a carbonate rock mainly made of calcite, extracted from the region of Pamukkale (Turkey).The composition of the rock simplified the selection an appropriate threshold for presegmentation in ITK-SNAP, after saving the 10 initial layers of the scan as a Raw Binary File in ImageJ.Due to its complex shape, more than a hundred bubbles were needed during the Active Contour algorithm but it produced a good representation of the rock using 4.5 million faces.Direct extraction of the surface mesh using ImageJ would lead to 6.5 million faces.The first layer of the scan is shown in Figure 1. The resulting mesh was imported into Meshmixer for reducing, remeshing and repairing and eliminating small shells (e.g.<< 1% of total faces), producing a mesh with only one hundred thousand faces.The mesh was then exported to Meshlab, where the Laplacian Smooth and Taubin Smooth were applied.The mesh was verified for its quality using the area/max side as a metric, resulting in the mesh displayed in Figure 2. Next, the mesh was read by stlTools and iso2mesh was used to convert the surface mesh into a tetrahedral volume mesh using all the commands aforementioned.Finally, the volume mesh was post-processed by ANSYS ® ICEM to smooth elements according to Quality, Min Angle and Skew, resulting in the mesh shown in Figure 3, where the minimum orthogonal quality is 0.2, according to ANSYS Fluent.The bottom part of the mesh was labeled as a different region so as to facilitate the application of contour conditions.Lastly, the mesh was exported to ANSYS ® Fluent for a simple simulation of heat transfer, shown in Figure 4.The initial condition for the bottom part was 500 K whereas upper part has an initial temperature of 300 K. Additionally, the upper part is exposed to a convection coefficient of 10 W m -2 K -1 .The energy under-relaxation factor was set to 0.5 to improve the robustness of the solution. Conclusions A detailed procedure for converting scan data into a feasible volume mesh for CFD simulations was presented.The intermediate steps show a clear connection with technologies as additive manufacturing and finite element analysis.The CFD simulation of the calcite rock included only heat transfer but it was completed without any warnings, indicating that the final mesh is indeed suitable for simulations using finite volumes. Instead of following the present methodology, the licenses for commercial software would imply in major costs for a small businesses.Information regarding this topic is difficult to retrieve and compile into a comprehensive guide that could suit all needs, so some assertiveness is recommended to judge the advantages and the drawbacks of each filter in terms of, for instance, preserving the topology or sharp edges in the model. Figure 1 Figure 1 First layer of the micro-CT scan after thresholding. Figure 2 Figure 2 Snapshot from Meshlab showing the quality (area/max side) of each face. Figure 3 Figure 3 Snapshot from ANSYS ICEM of the repaired volume mesh. Figure 4 Figure 4 Snapshot from ANSYS Fluent of the heat transfer simulation. Table 1 List of programs and tools to generate a volume mesh from scan data
tokens: 4,253.2 | created: 2018-03-13 | fields: [ "Computer Science" ]
Ion and Molecule Transport in Membrane Systems 3.0 and 4.0 This book is a collection of papers published in the 3rd and 4th Special Issues of the International Journal of Molecular Sciences under the standard title, "Ion and Molecule Transport in Membrane Systems" [...]. This book is a collection of papers published in the 3rd and 4th Special Issues of the International Journal of Molecular Sciences under the standard title, "Ion and Molecule Transport in Membrane Systems". The book extends the series that began with the 1st [1] and 2nd [2] Special Issues. The primary focus of this Special Issue is to bring together papers describing ion and molecule transportation in biological or artificial membrane systems. Biological Membrane Systems The paper by N.-P. Foo et al. [3] investigates how midazolam affects the K + current through cell membranes. The study demonstrated that midazolam suppressed the amplitude of delayed-rectifier K + current. The results considered that midazolam could affect lymphocyte immune functions. Langó et al. [4] developed an effective experimental method for accurately and comprehensively characterizing individual cells' surface proteins. Such proteins play a crucial role in several critical cellular processes, and identifying surface-associated protein segments has broad applications for molecular biology. Efimova et al. studied the influence of chromone-containing allylmorpholines on ion channels formed by pore-forming antibiotics in lipid membranes [5]. This effect was correlated with allylmorpholine's ability to affect membrane boundary potentials and lipid-packing stress. Ion and molecule transport in membranes studied by NMR The self-organization of fullerene derivatives in solutions and biological cells was studied by Avilova et al. [6] using pulsed gradient nuclear magnetic resonance (NMR). This investigation contributes to a more comprehensive understanding of the mechanisms behind the aggregation of fullerene-derived molecules; it provides the size and stability of associates. This study's focus is enhanced by the fact that fullerene derivatives are known for their pronounced anticancer and antiviral effects, as well as their antibacterial properties. The following paper [7] is a review prepared by Dr. V. Volkov and colleagues on the longterm results obtained in the field of NMR. This review contains comprehensive information on ion and molecular transport in ion-exchange resins and membranes, such as Nafion, MF-4SK, and MK-40. Self-diffusion coefficients of protons and Li + , Na + , and Cs + cations are reported along with the ionic conductivity data. The transport channel morphology, ionic hydration, and charge site formation are also discussed. The applications of various NMR modes, high-resolution NMR, solid-state NMR, NMR relaxation, and pulsed field gradient NMR techniques are explored. Artificial Membranes: Gas Molecule Transport The separation of pairs of gas molecules (He/N 2 and O 2 /N 2 ) and a methanol(MeOH)cyclohexane(CH) mixtures using a Ultem ® polyetherimide (PEI) membrane with an addition of the perovskite oxide La 0.85 Yb 0.15 AlO 3 (LYA) was studied by Pulyalina et al. [8]. They found that the selective separation of the gas pairs increased with the growth of LYA content in the membrane. The separation of the MeOH-CH mixture was effective due to the high sorption of MeOH in the PEI/LYA membrane. The paper by Petriev et al. 
[9] is devoted to enhancing the performance of composite Nb-based membranes for hydrogen purification via diffusion. Gas-diffusion PdCu-Nb-PdCu membranes modified with a nanostructured crystalline coating were obtained. It was found that the flux of pure hydrogen through the modified membranes was 1.73 times higher than through the non-modified composite membranes at 300 °C. The mechanisms of the hydrogen flux enhancement due to the modification are discussed. Such high fluxes can be obtained at relatively low temperatures, which, along with cost-efficient niobium-based membranes, provides a promising approach for the economical production of pure hydrogen.

Artificial Membranes: Ion and Molecule Separation

The dehydration of ethanol-water mixtures over a broad concentration range by pervaporation was studied by Burts et al. [10]. The authors applied thin-film composite (TFC) membranes with a polyvinyl alcohol (PVA) selective layer. The effect of modifying the PVA layer with aluminosilicate (Al2O3·SiO2) nanoparticles, yielding PVA-Al2O3·SiO2/polyacrylonitrile (PAN) thin-film nanocomposite (TFN) membranes, was investigated. The new PVA-Al2O3·SiO2/PAN TFN membranes were found to be more stable in the ethanol dehydration process compared to the reference membrane. The simultaneous recovery and concentration of the 1-ethyl-3-methylimidazolium chloride ([Emim]Cl) ionic liquid were performed using electrodialysis by Babilas et al. [11]. Heterogeneous ion-exchange membranes were applied. The effects of varying the [Emim]Cl concentration, applied voltage, linear flow velocity, and dilute-to-concentrate volume ratio were investigated, leading to optimized operational parameters. Another electrodialysis process, using a microfluidic system with ion-exchange membranes, was studied by Tichý and Slouka [12], where the separation performance was tested by desalting a model KCl solution spiked with fluorescein to directly observe desalination. It was demonstrated, both visually and by measuring the output solution conductivity, that the system can work in three operation modes: continuous desalination, desalination by accumulation, and unsuccessful desalination. The possibility of independently controlling different parameters, together with direct visualization, encourages considering the proposed system as a versatile platform for investigating the electrodialysis process. The separation of hydrochloric acid and Zn2+, Ni2+, Cr3+, and Fe2+ salts was studied by Merkel et al. [13] using a spiral-wound diffusion dialysis module. It was established that this process recovers 68% of the free HCl from the spent pickling solution contaminated with heavy-metal-ion salts. The effect of different input parameters was investigated. It was shown that diffusion dialysis is an effective and economical method for the treatment of spent acids. The separation of another industrial effluent, fermentation broths, was studied by Tomczak and Gryta [14]. The authors applied reverse osmosis (RO), a membrane process more suitable for this purpose. They established that the retention of carboxylic acids increases with increasing molecular weight; the following order was found: succinic acid > lactic acid > acetic acid > formic acid. Pismenskaya et al. [15] looked for ways to enhance the separation performance of weak polybasic acid salts using electrodialysis.
The authors discovered that electroconvection makes a significant contribution to the increase in the mass transfer rate, although this contribution is less significant than in the case of electrodialysis of strong electrolytes. This is because of the higher generation of H+ ions during the dissociation of singly charged acid anions at the membrane surface.

Experiment and Mathematical Modeling of Ion and Molecule Transport Processes

Filippov and Shkirskaya theoretically and experimentally studied osmotic and electroosmotic water transport [16]. The theoretical study is based on the well-known cell model [17] of a charged membrane and the principles of irreversible thermodynamics. Exact and approximate analytical formulas for calculating the membrane osmotic and electroosmotic permeability are presented. The theoretical results are verified using experimental data for a cation-exchange membrane. Skolotneva et al. [18] studied the transport of ammonium cations through anion-exchange membranes, comparing it with the transport of the K+ cation. Even though the mobilities of the two cations in solution are close, the diffusion flux of ammonium through an anion-exchange membrane is significantly higher than that of potassium. The reason is the additional transport of NH3 molecules along with the ammonium cations, since a part of the cations in the membrane are NH4+ ions, which are transformed into NH3 molecules. The authors developed a mathematical model describing the transport of ammonium complicated by a parallel proton-exchange reaction. The comparison of the simulation with the experiment shows satisfactory agreement. The following three papers [19][20][21] also presented a mathematical description of ion transport in membrane systems. Gorobchenko et al. [19] developed a new mathematical model to describe the competitive transport of cations through a bilayer membrane. The substrate layer was a cation exchanger, and a thin surface layer was an anion exchanger. The anion-exchange layer has a higher resistance for cation transport than for anion transport; moreover, the resistance for divalent cations is much higher than for monovalent ones. This explains why such bilayer membranes have selective permeability for monovalent cations. The model predicts that the dependence of the permselectivity coefficient on current density has an upper limit. This behavior is explained by a change in the membrane layer that controls the rate of ion transport. Another paper, reporting the results of a mathematical simulation of ion transport in an anion-exchange membrane modified with a perfluorosulfonated ionomer, was published by Kozmai et al. [20]. The idea of the modification was to "clog" the macropores of a commercial membrane and thereby block the transport of co-ions through the membrane. After such a modification, only the highly selective microporous gel phase of the membrane was available for ion transport. The modification reduced the co-ion transport number from 0.11 to 0.02. A new version of the known microheterogeneous model [21] was developed to describe the changes in membrane characteristics caused by the modification. The transport phenomena occurring in reactive electrochemical membranes during the anodic oxidation of organic compounds are investigated by Mareev et al. [22].
The mathematical model developed by the authors examines how gas bubbles generated by electrochemical reactions inside the membrane pores affect the performance of the anodic oxidation of organic compounds such as paracetamol. The interesting phenomenon of autowaves in a magnetic fluid was studied experimentally and theoretically by Chekanov and Kovalenko [23]. Magnetic fluids are colloidal systems, similar to a liquid membrane, in which ferromagnetic nanoparticles are suspended in a carrier fluid (the dispersion medium, for example, kerosene). The system of Nernst-Planck-Poisson and Navier-Stokes equations was applied to describe the dependence of the frequency of concentration fluctuations on the steady-state voltage applied between two electrodes. A review of ion and water transport in ion-exchange membranes applied in power generation systems was published by Mareev et al. [24]. The review provides guidelines for modeling ion and water transport, describes the main structural elements of such membranes and their influence on transport, and considers the features determined by the application area. Significant attention is paid to the models that have had the greatest impact and are most frequently used in the literature.

In Conclusion

The original articles and reviews overviewed in this editorial provide novel insights into ion and molecule transport mechanisms in membranes. The readers can also find applied aspects and understand the practical problems that can be solved with the help of membranes. The guest editors would like to thank all authors for their excellent contributions. Collecting the papers published in the Special Issues mentioned above, this book will be an asset to the community of researchers and engineers working in the field of biological and artificial membranes.
2,501.8
2023-05-01T00:00:00.000
[ "Biology" ]
Synthesis of New Organoselenium Compounds Containing Nucleosides as Antioxidants

LAILA

The synthesis of selenium-containing nucleosides derived from heterocyclic moieties such as pyridineselenol and pyridazineselenol is described herein. The ribosylated selenol compounds were prepared in good yield by silylation of the selenol derivatives, coupling with 1-O-acetyl-2,3,5-tri-O-benzoyl-D-ribofuranose, and debenzoylation to afford the corresponding free N-nucleosides: the β- and α-anomers of 1-(2,3,5-trihydroxy-D-ribofuranosyl)-2-seleno-4,6-dimethylpyridine-3-carbonitrile (6a, 7a) and of 1-(2,3,5-trihydroxy-D-ribofuranosyl)-3-seleno-5,6-diphenylpyridazine-4-carbonitrile (6b, 7b). The newly synthesized compounds were characterized using well-known spectroscopic tools (IR, 1H NMR, 13C NMR, and mass spectrometry). The antioxidant activity of six selenonucleoside compounds (1a, 6a, 7a, 1b, 6b, and 7b) was evaluated in an animal assay model using experimental mice. The resulting data revealed that compounds 6a and 7b were more active as antioxidants, with better scavenging ability than the other compounds.

Stimulated by our recent work on the synthesis of selenium-containing nucleoside analogues [2], sulfa drugs [3], and selenium-containing amino acid analogues [4], we decided to expand our interest to the introduction of organoselenium moieties into the nucleoside framework and to screen their biological activity as antioxidants. The chemical structures of the nucleoside derivatives 4a, 4b, 5a, 5b, 6a, 6b, 7a, and 7b were established and confirmed on the basis of their elemental analyses and spectral data (IR, 1H and 13C NMR) (see the Experimental section). The IR spectra of compounds 4a, 5a, 4b, and 5b showed a band at 2210 cm⁻¹ due to the CN group, and the stretching vibration frequencies of the benzoyl carbonyl (C=O) groups appeared at ν 1740, 1730, 1727, and 1724 cm⁻¹ for compounds 4a, 5a, 4b, and 5b, respectively. In addition, signals appeared at 1625 and 1620 cm⁻¹ for the C=N group of compounds 4a and 5a, and at 1630 cm⁻¹ for compounds 4b and 5b. In the IR spectra of compounds 6a, 7a, 6b, and 7b, the most important peaks were observed at ν 3400-3450 cm⁻¹ (OH group) for compounds 6a and 6b, and at 3380 cm⁻¹ (OH group) for compounds 7a and 7b.

Biological activity

Toxicity studies

Toxicity parameters, including LD50 and GPT and LDH activities, were determined and ranged within normal limits compared to the untreated group at concentrations up to 1000 mg kg⁻¹ b.wt.

Fig. 1: Effects of the synthesized compounds on the activities of the GPT and LDH enzymes.

GPT is an enzyme that allows assessment of liver function as an indicator of liver cell damage, and LDH is often used as a marker of tissue breakdown (Butt et al., 2002) [6].
Antioxidant activity evaluation

Hepatic GSH-Rd levels and serum activities of SOD and GSH-S-transferase were measured as indicators of antioxidant activity, and the results are presented in Table 1. SOD and GSH-S-transferase are antioxidant enzymes that protect cells from the oxidative stress induced by highly reactive free radicals generated in living cells. The results indicated a significant increase (p < 0.05) in SOD and GST activities in the treated groups at doses of 100 and 200 mg/kg compared to the untreated control group. No significant difference was found between the two doses (100 and 200 mg/kg) for any of the tested compounds. The highest SOD and GST activities, rather than GSH-Rd levels, were monitored in animals treated with compounds 6a and 7b (see Table 1, Fig. 1).

General

Melting points were determined using a Kofler melting point apparatus and were uncorrected. IR (KBr, cm⁻¹) spectra were recorded on a Pye-Unicam SP3-100 instrument at Taif University. 1H NMR spectra were obtained on a Varian (400 MHz) EM 390 USA instrument at King Abdel-Aziz University using TMS as internal reference. 13C NMR spectra were recorded on a JNM-LA spectrometer (100 MHz) at King Abdel-Aziz University, Saudi Arabia. Elemental analyses were obtained on an Elementar Vario EL 1150C analyzer. Mass spectra were recorded on a JEOL-JMS-AX 500 at the Cairo National Research Center, Cairo, Egypt. The purity of the compounds was checked by thin layer chromatography (TLC) using silica gel plates.

General Procedure. A mixture of 2-seleno-4,6-dimethylpyridine-3-carbonitrile or 3-seleno-5,6-diphenylpyridazine-4-carbonitrile (1a,b) (0.02 mol) and hexamethyldisilazane (20 ml) was heated under reflux for 24 h with a catalytic amount of ammonium sulfate (0.01 g). After that, the clear solution was cooled and evaporated to dryness to give the silylated derivative (2a,b), which was directly dissolved in 20 ml of dry 1,2-dichloroethane, and then 1-O-acetyl-2,3,5-tri-O-benzoyl-D-ribofuranose (3) (5.05 g, 0.01 mol) was added. The mixture was added dropwise to a mixture of trimethylsilyl trifluoromethanesulfonate (triflate, 10 ml) in dry 1,2-dichloroethane (50 ml). The whole mixture was stirred at room temperature for 24 h, then washed with a saturated aqueous sodium bicarbonate solution (3 × 50 ml), washed with water (3 × 50 ml), and dried over anhydrous sodium sulfate. The solvent was removed in vacuo, and the residue was chromatographed on silica gel with chloroform:ethyl acetate (9:1) as eluent to afford white crystals of the pure β anomers and colorless crystals of the α anomers (4a,b and 5a,b, respectively).

Deprotection of 4a,b and 5a,b: synthesis of nucleosides 6a,b and 7a,b, respectively. General Procedure. A mixture of each protected nucleoside 4a,b or 5a,b (0.001 mol each), absolute methanol (20 ml), and sodium methoxide (0.055 g, 0.001 mol) was stirred at room temperature for 48 h. The solvent was evaporated under vacuum to give a colorless solid, which was dissolved in hot water and neutralized with acetic acid. The precipitated compound was chromatographed on silica gel with chloroform:ethyl acetate (9:1) as eluent to afford colorless and white crystals of the corresponding nucleosides 6a,b and 7a,b, respectively.

Biological experiments.

Chemicals

Dimethyl sulfoxide (DMSO) and vitamin E were obtained from Sigma Chemical Co. (St. Louis, MO, USA). All enzymatic kits were purchased from Bioassay Systems, USA.
Experimental animals

Male albino mice (20 ± 2 g) were obtained from the Department of Animal Science, Cairo University. The animals were handled under standard laboratory conditions in a controlled room with a 12-h light/dark cycle, a temperature of 25 °C, and a relative humidity of 55 ± 5%. The basal diet used in these studies was certified feed for research laboratory animals. Food and water were available ad libitum. The Cairo University animal care and use committee approved all protocols for the animal studies.

Toxicity experiment

Male albino mice (6 animals per group, weighing 25 ± 5 g) were administered, after overnight fasting, graded intraperitoneal doses (100-1000 mg kg⁻¹ b.wt.) of each individual synthesized compound suspended in DMSO. The toxicological effects were observed after 72 h of treatment in terms of mortality and expressed as LD50 (Ghosh, 1984) [8]. Other biochemical parameters were determined after 14 days of administration according to the methods of Reitman and Frankel (1957) [9] for GPT activity and Bergmeyer (1974) [10] for LDH activity.

Bioassay model design

The animals were randomly divided into fourteen groups of 6 mice each. The first group served as the untreated normal control. Starting on the 7th day, animals in Groups 2 to 13 were pre-treated with the individual synthesized compounds at 100 and 200 mg/kg b.wt. per day p.o., respectively, for 7 days. In Group 14, animals were pre-treated with the standard drug vitamin E (100 mg/kg b.wt. per day p.o.) for 7 days (Rai et al., 2006) [11].

Twenty-four hours after the last administration, the mice were sacrificed. Blood samples were collected and centrifuged at 4000×g at 4 °C for 10 min for serum preparation. The liver was removed rapidly, washed, and homogenized in ice-cold physiological saline to prepare a 10% (w/v) homogenate. Then, the homogenate was centrifuged at 4000×g at 4 °C for 10 min to remove cellular debris, and the supernatant was collected for biochemical analysis.

The biochemical assays.

Measurement of glutathione-S-transferase (GST) activity

GST activity was determined as described by Habig et al. (1974) [12]. The reaction mixture contained 50 mM phosphate buffer (pH 7.5), 1 mM 1-chloro-2,4-dinitrobenzene (CDNB), and an appropriate volume of the compound solution. The reaction was initiated by the addition of reduced glutathione (GSH), and the formation of S-(2,4-dinitrophenyl)glutathione (DNP-GS) was monitored as an increase in absorbance at 334 nm. The result was expressed as µmol of CDNB conjugate formed/mg protein/min.

Measurement of superoxide dismutase (SOD) activity

SOD activity was measured through the inhibition of hydroxylamine oxidation by the superoxide radicals generated in the xanthine-xanthine oxidase system (Kakkar et al., 1972) [13]. The results were expressed in units/mg protein.

Measurement of reduced glutathione (GSH-Rd) levels

GSH in liver and kidney tissues was determined according to the Ellman method (Ellman, 1959) [14], which measures the reduction of 5,5′-dithio-bis(2-nitrobenzoic acid) (DTNB, Ellman's reagent) by sulfhydryl groups to 2-nitro-5-mercaptobenzoic acid, which has an intense yellow color. The results were expressed in mg per g protein (mg/g protein).
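As an illustration of how specific activities such as the GST result above are obtained from raw absorbance readings, the following is a minimal sketch in Python. The function name, the extinction coefficient, and the example numbers are placeholders for illustration, not values taken from this study:

```python
def gst_specific_activity(delta_a_per_min, epsilon_mM_cm, path_cm,
                          reaction_vol_ml, protein_mg):
    """Specific GST activity in umol of conjugate / mg protein / min.

    delta_a_per_min : slope of the absorbance-vs-time trace at 334 nm
    epsilon_mM_cm   : extinction coefficient of the DNP-GS conjugate (mM^-1 cm^-1)
    path_cm         : cuvette path length (cm)
    reaction_vol_ml : total reaction volume (ml)
    protein_mg      : amount of protein in the assay (mg)
    """
    # Beer-Lambert: concentration change rate (mM/min) = (dA/dt) / (epsilon * path)
    conc_change_mM_per_min = delta_a_per_min / (epsilon_mM_cm * path_cm)
    # 1 mM = 1 umol/ml, so mM/min * ml gives umol of product formed per minute
    umol_per_min = conc_change_mM_per_min * reaction_vol_ml
    return umol_per_min / protein_mg

# Hypothetical example: dA/dt = 0.05 per min, epsilon = 9.6 mM^-1 cm^-1,
# 1 cm path, 1 ml reaction volume, 0.1 mg protein in the cuvette
print(gst_specific_activity(0.05, 9.6, 1.0, 1.0, 0.1))
```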
Measurement of protein content

Protein levels were determined spectrophotometrically at 595 nm, using Coomassie Blue G-250 as a protein-binding dye (Bradford, 1976) [15]. Bovine serum albumin (BSA) was used as the protein standard.

Compounds 6a and 7b showed to be more active as antioxidants, with better scavenging ability than the other compounds. The SOD activity of these molecules was compared with that of a standard antioxidant (vitamin E). Selenium-containing compounds are associated with the active sites of a large number of selenium-dependent enzymes, such as antioxidant enzymes (Spallholz, 1994) [7]. The configuration of compounds 6a and 7b may be more suitable for the active centers of the SOD and GST enzymes, so these compounds induce the activity of the antioxidant enzymes.
2,125.2
2014-12-31T00:00:00.000
[ "Chemistry" ]
Constraints on a Very Light Sbottom We investigate the phenomenological viability of a very light bottom squark, with a mass less than half of the Z boson mass. The decays of the Z and Higgs bosons to light sbottom pairs are, in a fairly model independent manner, strongly constrained by the precision electroweak data and Higgs signal strength measurements, respectively. These constraints are complementary to direct collider searches, which depend in detail on assumptions regarding the superpartner spectrum and decays of the sbottom. In particular, if the lightest sbottom has a mass below about 15 GeV, compatibility with these measurements is possible only in a special region of parameter space in which the couplings of the lightest sbottom to the Z and Higgs are suppressed. In this region, the second sbottom is predicted to be lighter than about 300 GeV and can also be searched for directly at the LHC. We also survey relevant collider searches for canonical scenarios with a bino, gravitino, or singlino LSP in the compressed and stealth kinematic regimes and provide suggestions to cover remaining open regions of parameter space. I. INTRODUCTION With the discovery of the Higgs boson [1,2] and a vibrant experimental program to measure its couplings and quantum numbers, significant progress is being made in our understanding of the physics of electroweak symmetry breaking. However, the existence of a Higgs particle with Standard Model (SM)-like properties accentuates the hierarchy problem. Weak scale supersymmetry (SUSY) provides one of the most compelling solutions to the hierarchy problem, and it is intriguing that the Higgs boson mass, m h 0 ∼ 126 GeV, lies within the range predicted by the Minimal Supersymmetric Standard Model (MSSM) [3][4][5][6][7][8][9][10][11][12][13][14]. However, with the completion of the 8 TeV run at the LHC, new strong limits have been placed on a variety of SUSY scenarios and spectra. For example, searches based on several jets in association with large missing transverse energy have lead to bounds on squarks and gluinos in the TeV range [15,16]. While we continue pursuing even heavier superpartners in canonical SUSY models, it is critical that we do not overlook possible loopholes in the experimental searches that may be present in non-standard scenarios. One interesting example in this regard is a very light bottom squark with a mass well below the kinematic LEP bound of 100 GeV. Such a light sbottom has been considered on numerous occasions in the past, including studies of its effects on the precision electroweak data [17], the Tevatron measurement of the bottom quark cross section [18], new Higgs boson decay channels [19], and myriad related phenomenology [20][21][22][23][24][25][26][27][28][29]. More recently, it was observed that a light sbottom can mediate large spin-independent dark matter scattering cross sections which may be relevant for some of the anomalies in the direct detections experiments [30][31][32][33]. Regardless of any particular phenomenological motivation, it is of general interest to understand the constraints on this scenario and the allowed regions of parameter space. The focus of the present work is an investigation of the constraints on the sbottom parameter space implied by the precision electroweak and Higgs signal strength data. As we will discuss in detail below, these constraints only weakly depend on the assumed decay mode of the lightest sbottom. 
Therefore, conservative, robust statements can be made with regard to the allowed parameter space. This is in contrast with limits from direct searches at colliders, which strongly depend on the assumptions regarding the superpartner spectum and sbottom decay channels. We find that the combination of electroweak and Higgs data restrict the parameter space to a special region in which the sbottom couplings to the Z and Higgs boson are suppressed. Furthermore, these constraints imply that the second sbottom should be lighter than about 300 GeV if the lightest sbottom has a mass of 15 GeV or less. Concerning the precision electroweak data, we stress the importance of the measurement of the total hadronic cross section at the Z pole, σ 0 had , which, independent of any particular model, places a strong constraint on any new decay modes of the Z boson. In light of these model independent constraints, we provide a survey of the existing collider constraints on a canonical scenario in which the LSP is a neutral fermion, such as a bino, gravitino, or singlino. We particularly emphasize the importance of the direct searches for the second sbottom. Searches are suggested to cover the remaining holes in the sbottom parameter space. The outline of the paper is as follows: We begin in Sec. II with a description of the sbottom sector and the radiative corrections to the bottom Yukawa coupling. In Sec. III and Sec. IV we investigate the constraints from the precision electroweak and the Higgs signal strength data, respectively, on the sbottom parameter space. In Sec. V we discuss the effects of the stops on the ρ parameter and Higgs mass. We then combine these model independent constraints in Sec. VI. In Sec. VII we discuss the collider constraints on light sbottoms. Finally, in Sec. VIII we present our conclusions. II. SBOTTOM SECTOR We begin by describing our conventions for the sbottom sector. In terms of the gauge eigenstates (b L ,b R ), the sbottom mass matrix is given at tree level by where m 2 Q 3 , m 2 D 3 are the left-and right-handed squark soft mass parameters, A b is the soft trilinear coupling, µ is the supersymmetric Higgs mass parameter, tan β is the ratio of up and down type Higgs vacuum expectation values, and D L = m 2 Z cos 2β − 1 2 + 1 3 s 2 W , D R = m 2 Z cos 2β − 1 3 s 2 W . For simplicity, we will assume all parameters are real. The physical sbottom mass eigenstates are related to the gauge eigenstates through the orthogonal transformation: where the mixing angle θ b satisfies and lies in the range [−π/2, π/2]. A. Radiative corrections to y b The bottom Yukawa coupling y b can receive substantial corrections at one loop [34][35][36][37], which are important for the light sbottom regime considered in this work. The correction can be written in terms of the parameter ∆ b , such that This corrects the sbottom mass matrix in Eq. (1). The dominant effect occurs in the offdiagonal term, which becomes m b (A b − µ tan β)/(1 + ∆ b ). Rather than compute ∆ b directly, we will instead find it useful to define the effective parameters, In this way, one can absorb the radiative corrections to the sbottom masses into a redefinition of A b and tan β and use the tree-level equations for the masses and mixing angles in the sbottom sector. We note that a complete removal of the ∆ b dependence is not possible, since it appears also in the diagonal mass entries in (1). 
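The tree-level mass matrix referred to as Eq. (1) above is not reproduced in this excerpt; in terms of the parameters just defined it takes the standard MSSM form. The following is a reconstruction (not quoted from the paper), written in the mixing convention in which θ_b = 0 corresponds to a purely right-handed b̃₁, as used later in the text:

```latex
M^2_{\tilde b} =
\begin{pmatrix}
 m^2_{Q_3} + m_b^2 + D_L & m_b\,(A_b - \mu\tan\beta) \\[2pt]
 m_b\,(A_b - \mu\tan\beta) & m^2_{D_3} + m_b^2 + D_R
\end{pmatrix},
\qquad
\begin{aligned}
 D_L &= m_Z^2\cos 2\beta\left(-\tfrac{1}{2} + \tfrac{1}{3}\,s_W^2\right),\\
 D_R &= m_Z^2\cos 2\beta\left(-\tfrac{1}{3}\,s_W^2\right).
\end{aligned}
```

The off-diagonal entry m_b(A_b − μ tan β) is the combination that, as described above, is rescaled by 1/(1 + Δ_b) once the radiative corrections to the bottom Yukawa coupling are included.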
However, the term proportional to m 2 b is numerically small and the D-term contribution is proportional to cos 2β ≈ −1 + 2/ tan 2 β which is approximately constant for tan β 1, so that we can safely use Eq. (5) in the regime of interest. One may further worry that a mismatch can occur in the couplings of the sbottom to other fields. For our purposes, the couplings of the Higgs to the sbottoms will be important when we examine the constraints imposed by the Higgs signal strength data. In particular, the coupling h 0 −b 1 −b * 1 , which is given in the Appendix in Eq. (A3) contains the same factor m b (A b − µ tan β), that appears in the off-diagonal entry of the sbottom mass matrix, and thus the ∆ b correction can be absorbed in this coupling using Eq. (5). Furthermore, the stop mass matrix and couplings to the Higgs are independent of tan β in the large tan β limit. Therefore, we can safely use the effective parameters to absorb the radiative corrections to the bottom Yukawa coupling. For the remainder of this paper we will therefore use the tree level formulae for the sbottom masses, mixing angles, and couplings, substituting the effective parameters defined above in (5) and neglecting small corrections to this approximation which are of order y 2 b and 1/ tan β. We will also drop everywhere the "eff" subscript, although the use of the effective parameters (5) is implied. As we will see, this approach allows for an analytical comprehension of the predictions and constraints on the parameters of the model, which is more difficult to obtain with a parameter space scan. III. PRECISION ELECTROWEAK DATA In this section we study the impact of sbottoms lighter than m Z /2 on precision electroweak measurements. Such sbottoms can be produced in the decays of Z bosons via Z →b 1b * 1 and through the continuum reaction e + e − → γ * , Z * →b 1b * 1 and will therefore enter into the predictions for a variety of precision observables. However, in order to make sharp predictions, it is necessary to understand how the sbottom decay products are "counted" -that is, whether the sbottom events populate the signal regions defined in the various analyses underlying the measurements of the precision observables. This is a difficult question on two fronts. First, the reconstruction of the sbottom is model dependent and requires assumptions about the decay channel of the sbottom and masses of the sbottom and its decay products. Second, the LEP experiments employed a complex and evolving set of strategies to perform these measurements. For instance, for the measurements of R b and A b F B at ALEPH, several algorithms were developed to effectively identify b quarks, including the presence of a high momentum lepton (presumed to originate from a B-hadron) [38,39], a lifetime tag based on longevity of the B-hadron [40], and multivariate analysis of event shape variables [41]. It is likely that certain sbottom decays would populate these signal regions, but to properly address this issue would require a detailed simulation of these algorithms that goes beyond the scope of this paper. To simplify the analysis, we will make a reasonable assumption about how the sbottom is "counted" by the experiments at LEP and SLC. We can conceive of the following possibilities: 3. Sbottom counted as a b quark. This will happen, for example, in the stealth kinematic regime, mb 1 m b mχ0 and the sbottom decays viab 1 → b +χ 0 . In the next subsection, Sec. 
III A, we will describe the predictions for the precision electroweak observables for the three sbottom reconstruction scenarios listed above. Following this we will present a quantitative analysis of the constraints obtained through a global fit to the precision data. In particular, we will see that for very light sbottoms with masses below about 15 GeV, the lightest sbottom must be largely decoupled from the Z boson in order to provide a good description of the data. This conclusion is conservative and robust, i.e., it does not depend on which hypothesis is made regarding the sbottom reconstruction. A. Sbottom contributions to precision observables "Invisible" sbottom If the lightest sbottom is "invisible", i.e., it does not populate any of the visible search channels that enter in the precision measurements, it will still affect the prediction for the total Z boson width, which is exquisitely measured from the Z lineshape [43]. A light sbottom leads to the new decay channel, which is kinematically allowed provided mb 1 < m Z /2. The partial decay width for the process (6), Γbb * ≡ Γ(Z →b 1b * 1 ), is given by where the coupling g Zb 1b * 1 is defined as Note that this coupling vanishes for a mixing angle θ b ≈ ±0.4. Thus, for mixing angles near this decoupling region, the new decay mode (6) is suppressed and the predictions for the precision observables are close to their SM values. Away from the decoupling region, however, there will be dramatic departures from the SM predictions due to the new decay mode (6). As an example, a light purely right-handed sbottom (mb yields a contribution to the total width of Γbb * ≈ 5 MeV. For comparison, the measured value is (Γ Z ) exp = 2.4952 ± 0.0023 GeV [43], while the SM prediction from the Gfitter group (Γ Z ) SM = 2.4954 ± 0.0014 GeV [44]. Thus, a pureb R is marginally allowed by this measurement alone. In fact, a more constraining measurement in this scenario is the total hadronic cross section at the Z pole, σ 0 had . The prediction for this observable is also modified by the presence of light sbottoms, since the cross section on the resonance depends on the total Z boson width. The prediction for this observable is given by where Γ e + e − ≡ Γ(Z → e + e − ), Γ had is the total hadronic width of the Z, and in the last step we have used Γbb * (Γ Z ) SM . The measured value and SM prediction for this observable are [43], [44]: which disagree at the 1.5σ level. One observes from Eq. (9) that the presence of the light sbottom only serves to increase the tension as it strictly decreases the theory prediction. For example, a 15 GeV purely right-handed sbottom leads to a prediction (σ 0 had ) = 41.313 nb, which is at odds with the measured value of the 5 − 6σ level and is therefore disfavored. We note that this observable was not considered in the recent paper [33] invoking a light sbottom to mediate a large neutralino direct detection cross section, and strongly constrains their scenario. We pause here to emphasize that, independent of any particular model, that the total hadronic cross section σ 0 had yields a stronger constraint on a new contribution to the Z boson invisible width. If we take the data and SM theory predictions at face value, then demanding that these observables agree to within 2σ we find that δΓ Z,inv < 0.5 (5.6) MeV from σ 0 had (Γ Z ). Alternatively, one might be skeptical about the 1.5 sigma discrepancy in σ 0 had . 
Even in this case, if we demand that the new physics contribution to these observables is less than 2 times the combined experimental and theoretical uncertainty, then we obtain δΓ Z,inv < 2.4 (5.4) MeV. Hadronic sbottom If the sbottom decays to hadrons then one expects that it will contaminate the hadronic width of the Z boson, Γ had . The hadronic width enters into a number of precision observables, including Γ Z and σ 0 had already discussed above, as well as R ≡ Γ had /Γ , with Γ the leptonic width of the Z, R b ≡ Γ b /Γ had and R c (defined analogously). As in the case of "invisible" sbottom decays discussed above, the total peak hadronic cross section measurement is particularly constraining. The prediction for σ 0 had in this case is where in the second line we have assumed Γbb * (Γ had ) SM < (Γ Z ) SM and have used the SM predictions for the hadronic and total widths. Note that the hadronic sbottom leads to a lower value of σ 0 had than in the SM, but the suppression is not as severe as in the case of the invisible sbottom (see Eq. (9)). For example, a 15 GeV purely right-handed sbottom leads to a prediction (σ 0 had ) = 41.432 nb, which is lower than the experimental value by about 3σ. 3.b 1 is counted as b Finally, it may happen in some scenarios that the sbottom mimics a b quark through its decay. In addition to the effects described in the hadronic sbottom case above, there are a few additional observables which are affected in the case that a sbottom is reconstructed as a b quark, as we now discuss. The first observable to consider is the forward-backward asymmetry of the bottom quark In general the forward-backward asymmetry is defined as with σ F,B = ± ±1 0 dcos θ(dσ/dcos θ). If the sbottom is counted as a b quark, then the reaction e + e − →b 1b * 1 will contribute to this asymmetry. Since the sbottom is a scalar, the forward and backward cross sections for sbottom pair production are identical, and thus sbottom pair production will only contribute to the total cross section for bb production. In other words, the effect of the sbottom is to strictly increase the denominator in Eq. (12), thus lowering the prediction for A b F B with respect to the SM prediction. This is quite intriguing given the longstanding 2.4σ discrepancy in the measured and predicted values of A b F B , which are respectively given by [43], [44]: Since the SM prediction (14) is larger than the measured value (13), a small sbottom pair production cross section will improve the agreement between theory and data. It follows from Eq. (12) that the prediction for Here, σ bb (σbb * ) are the total bottom (sbottom) production cross sections on the Z-pole, given by for the fermion f , and g Zb 1b * 1 is the coupling of the Z boson to the lightest sbottom mass eigenstate given in Eq. (8). As numerical examples, we find that for mb 1 = 5.5 GeV, a purely right-handed sbottom (θ b = 0) eases the tension in A b F B to the 1.5σ level, while for a mixing angle θ ∼ 0.7, the discrepancy disappears completely. Unfortunately, in both cases there are other observables which are in tension with the measured values due to the effects of the sbottoms, as we will see below. Another observable that is affected in this scenario is R b , the ratio of the Z → bb partial width to the total hadronic width. The prediction is given by where Γbb * is given in Eq. 7. As of September 2013, the predictions and measured values for which are consistent at the 1.2σ level. From Eq. 
(18) above it is clear that the sbottom contribution can only lead to a larger prediction for R b than the SM, which if small, will improve the agreement even further. B. Global fit to the precision data We now investigate quantitatively the constraints from the precision electroweak data on a light sbottom under each of the possible sbottom reconstruction hypothesis presented at the beginning of Sec. III. Our results are based on a global fit to the precision electroweak data, which closely follows the fit of the Gfitter group [44]. There are 19 observables entering into the fit and 6 fit parameters (five SM parameters plus the sbottom mixing angle θ b ) leading for 13 degrees-of-freedom (d.o.f.). We will investigate four different sbottom masses mb 1 = (5.5, 15, 25, 35) GeV. A detailed description of the experimental observables and SM theory predictions that enter into the fit can be found in Ref. [46]. For the SM theory predictions for Γ Z and σ 0 had we employ the recent numerical parameterizations presented in Ref. [47] which include the complete two-loop electroweak corrections. An important issue is the treatment of α s (m Z ) in the fit. The standard procedure, which is followed by the LEP Electroweak Working Group [43], the Particle Data Group (PDG) [48], and Gfitter [44] is to allow α s to float in the fit, thereby providing an independent determination of this parameter. However, in the light sbottom scenario there is typically a strong preference for low values of α s , which raises the SM prediction for σ 0 had towards the measured value. Due to this strong pull towards low values, we constrain α s in our fit as we now discuss. Besides the precision electroweak data, there are a number of other measurements of α s , including determinations from tau decays, lattice QCD, deep inelastic scattering (DIS), heavy quarkonia, and hadronic event shapes; see the PDG [48] and the recent review [49] for more details. The most recent world average, which includes a subset of the various α s measurements, is α s = 0.1184 ± 0.0007 [49]. However, the central value of α s must take into account the light sbottom contribution to its running, which is relevant in the extraploation of the dominant low energy determinations of α s to the scale m Z . Running from the scale Q = (5.5, 15, 25, 35) GeV to Q = m Z , we find that the sbottom induces an upward shift ∆α s ∼ (0.0009, 0.0008, 0.007, 0.0006). In our fits we add this shift to the central value, and constrain α s in our fit with the quoted uncertainty 0.0007. Results In Figs First, as a reference point we give the results for the fit to the SM. The SM fit (with α s determined by the fit) has 18 observables and 5 fit parameters for 13 d.o.f., and yields χ 2 SM = 17.8, which corresponds to a p-value of 0.17, in good agreement with the results of Gfitter [44]. The SM therefore provides an acceptable description of the precision data. The SM value χ 2 SM is displayed in Figs. 1,2,3 by the solid black line. Next we examine the fit results for a light sbottom assuming that it is "invisible", i.e., not counted in visible channels, which are displayed in Fig. 1. Especially for very light sbottoms, we observe that consistency with the data requires a mixing angle that is close to the decoupling value, θ b ∼ 0.4. Away from this value, e.g., for mostly right-handed sbottoms (θ b ∼ 0) or for highly mixed and left-handed sbottoms, the total peak hadronic cross section σ 0 had is much too small compared to the measured value, yielding a poor global fit. 
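The decoupling value θ_b ≈ 0.4 invoked here can be checked numerically. Assuming the standard Z-sfermion coupling, proportional to (1/3) s_W² − (1/2) sin²θ_b in the convention where θ_b = 0 denotes a purely right-handed b̃₁ (this proportionality is inferred from that convention, not quoted from the paper), the coupling vanishes at:

```python
import math

sin2_theta_w = 0.231  # approximate effective weak mixing angle (assumed input)

# Assumed coupling: g ~ (1/3)*sin^2(theta_W) - (1/2)*sin^2(theta_b),
# which vanishes when sin^2(theta_b) = (2/3)*sin^2(theta_W).
theta_b_decoupling = math.asin(math.sqrt(2.0 * sin2_theta_w / 3.0))
print(round(theta_b_decoupling, 2))  # -> 0.4, matching the decoupling value in the text
```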
Another general feature that is obseved in Fig. 1 is that the allowed parameter space opens up for larger sbottom masses, which can easily be understood as a consequence of the phase space suppression in the Z →b 1b * 1 width in Eq. (7). For the case of the hadronic sbottom, the results of the fit are presented in Fig. 2. As in the case of the "invisible" sbottom just discussed, there is a strong preference for a mixing angle near the decoupling value θ b ∼ 0.4. Large mixing angles θ b 0.7 are disfavored regardless of the sbottom mass, while for light sbottoms below about 25 GeV, purely righthanded sbottoms θ b ∼ 0 are also disfavored. Again, the main culprit is σ 0 had , which becomes much smaller than the experimental value for mixing angles away from the decoupling value. Note that in comparision to the "invisible" sbottom there is a broader region around the decoupling value that provides a good description of the data. This is because the increase in Γ had as θ b becomes smaller raises the prediction for R , bringing it in better agreement with the measured value. This partly compensates the increased tension in σ 0 had and R b as θ b is decreased. Lastly, we consider the case in which the sbottom is counted as a b quark. The results for this case are presented in Fig. 3. Again, for very light sbottoms below about 15 GeV, there is a window around the decoupling value θ b ∼ 0.4 that provides a good description of the data. Interestingly, if the sbottom is counted as a b quark, the global fit can actually be improved with respect to the SM, as the predictions for R b and A b F B are in better agreement with the measured values for mixing angles θ b ∼ 0.2 − 0.3 and θ b ∼ 0.5. This is indicated by the two local minima in the total χ 2 (solid grey) in Fig. 3. We also observe that the tension in A b F B can be moderately reduced (e.g., the disagreement is at the 1.8σ level for mb 1 = 5.5 GeV, θ b = 0.15) although the discrepancy cannot be fully explained by this hypothesis without spoiling the agreement in σ 0 had . For heavier sbottoms, about about 25 GeV, the region near θ b ∼ 0 also is allowed. We stress, however, that it is likely challenging to construct a scenario in which such a heavyb 1 can mimic a b-quark, since the emitted b quarks may be softer or more acoplanar than if directly produced. On the other hand, it is possible for a lightb 1 to mimic a b quark, and we will discuss such a scenario in Sec. VII. C. Summary We have seen that the precision electroweak data impose strong constraints on the possible existence of a very light sbottom. Regardless of how the sbottom is counted, light sbottoms with mass below 15 GeV must have a mixing angle near the decoupling value θ b ∼ 0.4 in order to provide an acceptable global description of the data. In particular, purely righthanded sbottoms θ b ∼ 0 and highly mixed or left handed sbottoms θ b 0.7 are generically in tension with the precision data for mb 1 15 GeV. Purely right handed sbottoms can be consistent for larger masses due the phase space suppression in the Z →b 1b * 1 width. The main observable responsible for the constraints is the total hadronic cross section at the Z-peak, σ 0 had . This SM prediction for σ 0 had is smaller than the experimental value by about 1.5σ, and a light sbottom can only lower the prediction and thus increase the tension. A summary of our results are presented in Table I, where we display the 95% C.L. 
allowed range for the mixing angle θ b , the p-value for the best-fit mixing angle, and for comparison the p-value for a purely right-handed sbottom θ b = 0. The results are displayed for mb 1 = 5.5, 15, 25, 35 GeV for each sbottom reconstruction hypothesis. instance, if the sbottom mimics the b quark through its decay, then there will also be an apparent increase in the branching ratio in the bb channel. To quantify the impact of the sbottoms on the Higgs signal strength data, we perform a fit using the results of Ref. [50]. In that work, from the raw ATLAS, CMS, and Tevatron data the authors derive a χ 2 function for eight channels, one for each combination of two combined production modes and four final states. The combined production modes are 1) gluon-gluon fusion and t − t − h (ggF+ttH), and 2) vector boson fusion and associated production (VBF+VH), while the final states considered are γγ, V V , bb, and ττ . With the current level of precision in the bb signal strength measurement, we expect that the constraints will not be very sensitive to how the sbottom is counted. To confirm this, we have investigated in a model independent fashion the allowed size of a new contribution to the h → bb partial width δΓ(h → bb) as well as the invisible width Γ(h → invisible), using two fits to the Higgs signal strength data. In the first fit, we add a new contribution to the h → bb partial width and determine δΓ(h → bb) < 1.6 MeV at 95% C.L. In the second fit, we consider a new invisible width, obtaining Γ(h → invisible) < 1.7 MeV at 95% C.L. The fact that these limits are so similar confirms our expectation that the principal effect of the new decay modes is to dilute the γγ, ZZ, and W W signal strengths. For simplicity, in the numerical results below we have assumed that the new decay modes of the Higgs to sbottoms are invisible. While the most important constraint comes from the the new decay mode h 0 →b 1b * 1 , another effect that can be numerically important is the modification of the the h → gg partial width (and thus the gluon fusion cross section) and to a lesser degree the h → γγ partial width from new one loop diagrams involving sbottom and stop exchange. In the next two subsections we will describe in detail the new contributions to the Higgs decay modes and the h → gg, h → γγ partial widths. A. New Higgs decay modes Since the mass of the lightest sbottom under consideration is well below m h 0 /2, there is a new decay mode h 0 →b 1b * 1 . The partial width for this decay is given by In the decoupling limit (m A 0 m Z ), the coupling of the Higgs boson to the lightest sbottom mass eigenstate is given by (see also the Appendix (A3)) where v = 174 GeV, s b ≡ sin θ b , etc. This new decay mode is already strongly constrained by the Higgs signal strength data. In Fig. 4 we display the allowed parameter space in the θ b −mb 2 plane, for the inputs µ = 200 GeV, tan β = 10, mb 1 = 5.5 GeV, m U 3 = 2.5 TeV, A t = 2 TeV. The region in yellow is excluded by considering only the effect of the decay h 0 →b 1b * 1 on the signal strength data. The thin white strip corresponds to the region where the coupling λ h 0b 1b * 1 in Eq. (22) is small and the decay is suppressed. Using the tree level relation between Lagrangian and physical parameters in the sbottom sector, we observe that the coupling approximately vanishes along the curve This relation (24) must be obeyed to a good approximation in order to evade the constraints from the Higgs data. 
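The partial width for h⁰ → b̃₁b̃₁* referred to above is not reproduced in this excerpt. For a scalar decaying into a pair of distinct scalars through a trilinear coupling λ_{h⁰b̃₁b̃₁*} of mass dimension one, the generic tree-level expression is the following (a reconstruction under that assumption, with λ denoting the dimensionful vertex factor, rather than the paper's exact equation):

```latex
\Gamma\!\left(h^0 \to \tilde b_1 \tilde b_1^*\right)
 = \frac{\bigl|\lambda_{h^0\tilde b_1\tilde b_1^*}\bigr|^2}{16\pi\, m_{h^0}}
   \sqrt{1 - \frac{4\, m_{\tilde b_1}^2}{m_{h^0}^2}}\, .
```

The square root is the usual two-body phase-space factor, which is close to unity for the very light sbottom masses considered here.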
If the second sbottom is light enough, mb 2 < m h 0 − mb 1 , then the Higgs can also decay via h 0 →b 1b * 2 ,b * 1b 2 . The partial decay widths are where λ(a, b, c) = a 2 + b 2 + c 2 − 2ab − 2ac − 2bc, and the coupling λ h 0b 1b * 2 is given by (in the decoupling limit) where v = 174 GeV. The green regions in Fig. (4) are excluded by considering only the effect of the decays h 0 →b 1b * 2 ,b * 1b 2 on the signal strength predictions. The first thing to observe is that the constraints vanish for second sbottom masses heavier than about 120 GeV, corresponding to the value of m 0 h − mb 1 in this example. One can understand the allowed regions by again focusing on where the coupling (26) is small. Using the relation (23), we find that the coupling λ h 0b 1b * 2 (26) is proportional to sin 2θ b and thus vanishes as θ b → 0, explaining why there are no constraints in this region. Furthermore, there is a second region where the coupling is small, centered around the curve which is situated between the two green regions in Fig. (4). As one can observe from Fig. (4), the new contributions to the gluon fusion cross section can shift the allowed parameter space as the enhancement in this cross section can compensate to a certain degree the dilution in the branching ratios to the γγ, W W, ZZ channels. We discuss these effects next. For the parameters chosen in this plot, the stop contribution to r g is negligible. In fact, as we will discuss in the next section, this is motivated by requirement of a small ∆ρ parameter, which typically coincides with a small coupling of the Higgs to the lightest stop. The dominant modifications to the gluon fusion rate originate from the diagrams containing sbottoms. As is clear from Fig. (4) there are two distinct behaviors of the contours depending on whether mb 2 is larger or smaller than m h 0 . In the regime mb lightest sbottomb 1 gives the largest correction, which can be written as In the second line above we have kept only the leading terms from the general formulae for r g (A1), the sbottom-Higgs coupling λ h 0b 1b * 1 (A3), and the scalar loop function, and employed the relation (23) to write the last formula above in terms of the physical sbottom mass mb 2 and mixing angle θ b . This approximation is valid for moderate mixing angles, i.e., to the right of the white band in Fig. (4). We can see that for a fixed value r g traces out the curve mb Finally, there are also modifications to the h → γγ rate, although in comparison with h → gg, the effects are very small. Over the range of viable parameter space where the new decays of the Higgs to sbottoms are suppresed, we find that there is a small 5 − 10% suppression in the partial width. C. Summary Putting together the effects of the new Higgs decay modes and the one loop modifications to the h → gg, γγ couplings, we obtain in Fig. (4) the blue hatched region, which represents the parameters allowed by the Higgs signal strength data set at 95% C.L. (or equivalently, the parameters that yield a p-value greater than 0.05 in the global fit). When the second sbottom is heavy, mb 2 m h 0 , the allowed region is governed entirely by the requirement that the decay h 0 →b 1b * 1 is suppressed, as discussed in Sec. IV A. 
Near masses mb 2 ∼ 100 GeV, the presence of the new decay modes h 0 →b 1b * 2 , b * 1b 2 causes a dilution of the branching ratios of the Higgs decays to SM particles, although the corresponding production modes are compensated to a certain degree by an enhancement in the gluon fusion cross section 1 . 1 We note that if the second sbottom is very light, mb 2 < m Z − mb 1 , there will be further precision electroweak constraints coming from the new decay modes of the Z boson, Z →b 1b * There is also a "hole" that is excluded near mb 2 ∼ 90 GeV, θ b ∼ 0.4 because the gluon fusion rate is too large, and the new decays of the Higgs to sbottoms are suppressed. In the example above, we have chosen parameters in the stop sector such that the coupling of the lightest stops to the Higgs is small. This is motivated by the phenomenological constraint of the ∆ρ parameter, which affects the precision observables. In the next section, we well explore this requirement in more detail, examining how a a non-zero ∆ρ parameter affects the fits to the precision electroweak data. V. EFFECTS OF THE STOPS In the previous two sections we have examined in detail the effects of light sbottoms on the precision electroweak and Higgs signal strength datasets. As alluded to in the previous section, it is also important to investigate the effects of the stops for several reasons. First, depending on parameters in the stop sector, there can be a large custodial symmetry breaking and thus a ∆ρ parameter which will alter the predictions for the precision observables. Furthermore, if we restrict to the MSSM, then in order to obtain the observed value of the Higgs mass, the second stop should be fairly heavy, and there should be a tight correlation between the soft parameters m U 3 and A t . Finally, the stops can also contribute to the one loop decays h → gg, γγ, as discussed in the previous section. A. ∆ρ Here we consider the implications of a nonzero contribution to ∆ρ to the precision electroweak observables. The predictions for these observables in the presence of ∆T ≡ ∆ρ/α can be obtained from Ref. [51], which we have incorporated into our global fit. We also include the contributions of the light sbottom, as detailed in Sec. III. In Figure. 5 we illustrate the effects of a non-zero ∆ρ parameter on the fit. Here we have taken a lightest sbottom mass of mb 1 = 5.5 GeV, and assumed it is counted as a b-quark. This is the most optimistic scenario, while the cases of "invisible" and hadronic sbottoms are more constrained; see Sec. III for a discussion. In these fits, we have treated ∆T as a free parameter and have shown the results for four values ∆T = 0, 0.05, 0.1, 0.15. A small ∆ρ parameter gives a comparable fit to the case of ∆ρ = 0. For example, in the case of ∆T = 0.05 the fit is slightly improved as the W boson mass prediction is in better agreement with the measured value. However, one clearly observes that as ∆T becomes larger than about 0.1, there is increasing tension in the fit, which is driven mainly by the observables had . Next, we turn to the contributions of the stop -sbottom sector. The one loop contribution to ∆ρ can be found in Ref. [52], and is given by where the function F is defined as In Fig. 6 we display in the m U 3 − A t plane isocontrours of ∆T , for the inputs µ = 200, tan β = 10, mb 1 = 5.5 GeV, mb 2 = 200 GeV, θ b = 0.15. We observe that the T parameter is minimized as the second stop mass, mt 2 ∼ m U 3 becomes larger, and for values of A t ∼ m U 3 . 
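The loop function F entering the stop-sbottom contribution to ∆ρ is not reproduced in this excerpt; the standard definition used in such expressions (given here for reference, not quoted from Ref. [52]) is

```latex
F(x, y) = x + y - \frac{2\,x\,y}{x - y}\,\ln\frac{x}{y},
\qquad F(x, x) = 0, \qquad F(x, 0) = x ,
```

so that ∆ρ vanishes for degenerate (custodially symmetric) squark masses, which is the behavior invoked in the following paragraph.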
The reason for the latter is related to custodial symmetry breaking: In the limit, , while the second sbottom is mostly left-handed with mass mb 2 m Q 3 . Therefore, the custodial symmetry breaking, and hence ∆ρ is minimized for parameters A t ≈ m U 3 . B. Higgs mass In the MSSM, large contributions from the stops are required to raise the Higgs mass from the tree level value, which is less than m Z , to the observed value of 126 GeV. To estimate the range of parameter space that gives the correct Higgs mass, we employ the following two loop approximate formula appropriate for a hierarchical stop spectrum: where m t is the running top quark mass, and we have defined This equation contains the dominant leading log effects from the stops. We are neglecting subdominant effects from two-loop thresholds, sbottoms loops, and electroweak corrections. We have performed numerical checks with FeynHiggs [53] and CPsuperH [53,54] and find agreement to within about 3 GeV. Note that in the limit mt 1 → mt 2 reproduces the analogous approximate formula for the case of degenerate stops from Ref. [8]. In Fig. 6 we represent in orange the region of parameter space that yields 122 GeV < m 0 h < 128 GeV (accounting for the O(few GeV) uncertainty in Eq. (32) noted above). We observe that the parameter region A t ∼ m U 3 yields the correct Higgs mass, while simultaneously minimizing the ∆ρ parameter. C. Stop contributions to Higgs couplings In the region of stop parameter space consistent with the observed Higgs mass and a small ∆ρ parameter, the coupling of the Higgs to the lightest stop is suppressed. Taking the limit mt 2 X t mt 1 ∼ m t m Z in the coupling λ h 0t 1t * 1 in Eq. (A3) becomes where X t ≡ A t − µ cot β. As discussed above, the ρ parameter is minimized in the regime X t ∼ mt 2 , in which case the coupling λ h 0t 1t * 1 is suppressed. Thus, we do not expect large corrections to the loop induced Higgs couplings h → gg, γγ from stop loops. VI. CONSTRAINTS ON THE SBOTTOM PARAMETER SPACE In this section we combine the constraints from precision electroweak data and the Higgs signal strength measurements, identifying the allowed parameter space for light sbottoms. Our aim here is to present conservative constraints, and as such we will assume that the lightest sbottomb 1 is counted as a b quark in the analyses concerning the precision electroweak observables. As can be seen from comparing Figs. 1,2,3, the electroweak constraints are the weakest in this case. If instead the sbottom is counted as hadrons or is "invisible", the constraints will be considerably stronger than those presented in this section. In Fig. 7, we display the constraints in the θ b − mb 2 plane for four cases of the lightest sbottom mass, mb 1 = 5.5, 15, 25, 35 GeV. In these plots, we have also fixed µ = 200 GeV, tan β = 10, m U 3 = 2.5 TeV, and A t = 2 TeV. The allowed region from the combined constraints from the Higgs signal strength data and the precision electroweak data is shown in white. In order to understand the shape of this region, we have also displayed the regions allowed by consideration of only a subset of these effects (see the caption of Fig. 7 for a detailed explanation). Finally, the gray hatched region indicates where the Higgs mass is too small in the MSSM. If the first sbottom is very light, Fig. 7, suggests that the second sbottom must also be quite light to be compatible with both electroweak and Higgs data. 
In particular, for m_b̃1 below about 15 GeV we find that the second sbottom must be lighter than roughly 300 GeV. Conversely, if the lightest sbottom is somewhat heavier than 15 GeV, the precision electroweak data no longer pose a constraint near θ_b ∼ 0 due to the phase space suppression in the decay Z → b̃1 b̃1* (except possibly through the ∆ρ parameter, which depends in detail on the values of m_U3 and A_t). Therefore, in this regime the second sbottom b̃2 can be much heavier, as is clearly seen in Fig. 7. Again we emphasize that these are conservative constraints, and if the sbottom is "counted" as hadrons or is invisible the constraints are stronger, particularly near the θ_b ∼ 0 region of parameter space. VII. COLLIDER CONSTRAINTS We have seen in the previous sections that the precision electroweak data and the Higgs signal strength measurements impose tight constraints on the possibility of a light sbottom with mass less than m_Z/2. Furthermore, while the bounds from the precision electroweak data depend on how the sbottom is reconstructed, and thus on its precise decay mode, conservative, robust limits are obtained under the assumption that the sbottom is counted as a b quark. Under this assumption we found that 1) if the lightest sbottom is lighter than about 15 GeV, the second sbottom must be lighter than about 300 GeV, and 2) if the lightest sbottom is somewhat heavier than 15 GeV, then the second sbottom can be made heavy while remaining consistent with these data. In this section, we will discuss the constraints coming from searches at colliders, which provide a more direct probe of the light sbottom scenario, but are strongly dependent on the details of the spectrum and the specific decay channels of the sbottom. We will focus on a canonical scenario with a neutral fermion LSP χ̃⁰, which could be a bino, gravitino, or singlino, and a sbottom NLSP. In particular, we will emphasize the importance of the direct constraints on the second sbottom, which, as just mentioned, is predicted to be light if m_b̃1 ≲ 15 GeV. In the scenario under consideration, the lightest sbottom will decay via b̃1 → b χ̃⁰. Direct searches for superpartners at e+e− colliders, such as TRISTAN [55] and LEP [56][57][58][59], rule out such light sbottoms b̃1 unless a degeneracy in the spectrum exists, in which case some of the decay products will be soft. Let us therefore consider these cases in some detail. A. Compressed regime In the compressed regime, m_b̃1 − m_χ̃⁰ ≲ m_b, the LSP carries most of the momentum of the sbottom while the b quark is fairly soft. However, b̃1 b̃1* pairs will, in the absence of initial state radiation (ISR), be produced back-to-back, and therefore the missing momentum in such events will be suppressed, since the two LSPs emitted in the decay also travel back-to-back. Because of this, standard sbottom searches have low efficiencies in this kinematic regime. To get a handle on this region, one can take advantage of events with hard ISR. In such events the sbottom pairs are boosted and the LSPs are therefore misaligned, leading to a significant missing momentum (a toy illustration of this kinematics is sketched below). For example, Ref. [30] investigated the constraints from the CMS monojet search [60], concluding that sbottoms heavier than about 24 GeV are ruled out. Furthermore, the LHC sbottom searches [61][62][63][64] looking for 2 b-jets and missing transverse energy often consider signal regions with an additional hard jet to attack the compressed regime and could be sensitive to such a light sbottom, particularly because the production rate is enormous. We are aware of a forthcoming study considering the limits from the LHC sbottom searches on very light sbottom pairs produced in association with a hard jet [65].
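The kinematic argument just made can be seen in a short momentum-balance toy: with no ISR the two LSPs inherit nearly back-to-back transverse momenta and largely cancel, whereas recoil against a hard ISR jet leaves a net imbalance of order the jet pT. The LSP momentum fraction and all numerical values below are illustrative assumptions, not results from the searches cited above.

```python
import math

def met_estimate(sbottom_pts, sbottom_phis, lsp_fraction=0.95):
    """Toy compressed-regime estimate: each LSP carries a fraction ~1 of its parent
    sbottom's transverse momentum, so the missing transverse momentum is simply the
    magnitude of the vector sum of the (scaled) sbottom transverse momenta."""
    px = sum(lsp_fraction * pt * math.cos(phi) for pt, phi in zip(sbottom_pts, sbottom_phis))
    py = sum(lsp_fraction * pt * math.sin(phi) for pt, phi in zip(sbottom_pts, sbottom_phis))
    return math.hypot(px, py)

# Back-to-back sbottom pair with no ISR: the two LSP momenta nearly cancel.
print(f"no ISR  : MET ~ {met_estimate([100.0, 100.0], [0.0, math.pi]):.1f} GeV")

# The same pair recoiling against a ~120 GeV ISR jet (pair pT chosen to balance the jet):
# the LSPs are now misaligned and the event acquires sizable missing momentum.
print(f"hard ISR: MET ~ {met_estimate([90.0, 51.0], [0.4, -0.757]):.1f} GeV")
```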
Besides direct production of the lightest sbottom, one can look for signatures of the second sbottom b̃2, which, as emphasized already, must be lighter than about 300 GeV if the first sbottom is below 15 GeV. For instance, the ATLAS sbottom search [61] places a limit m_b̃ > 650 GeV under the assumption of a 100% branching ratio for the decay b̃ → b χ̃⁰. In the presence of a very light first sbottom b̃1, there will be additional decay modes b̃2 → Z b̃1, b̃2 → h⁰ b̃1. This will lead to an O(1) suppression in the branching ratio for b̃2 → b χ̃⁰ which, however, will generally not be sufficient to evade the constraints on a 300 GeV sbottom. One way to evade the constraints on b̃2 pair production from these searches would be to consider a gravitino or singlino LSP, since the decay rate for b̃2 → b χ̃⁰ would be suppressed in such a case. However, in this case, one should consider alternative channels involving the Z boson in the b̃2 decay. In particular, the signature may show up in SUSY searches with leptons or in searches for heavy B quarks (which decay via B → Zb). We will return to the constraints from B searches momentarily. Looking forward, it would be useful for ATLAS and CMS to develop searches with the aim of explicitly probing the lightest sbottom, perhaps by taking advantage of events with hard ISR. It would also be desirable to search for the second sbottom decaying via b̃2 → Z b̃1, which is relevant for a gravitino or singlino LSP. Such searches could exploit the dilepton pairs from the Z, in addition to the b-jets and missing transverse energy. B. Stealth regime Besides the compressed regime, another kinematic region which is not covered by sbottom searches at e+e− colliders is the stealth regime, m_b̃1 ∼ m_b ≫ m_χ̃⁰, so named following Refs. [66][67][68]. In this regime, the LSP emitted in the decay of the sbottom is very soft, and the sbottom is essentially indistinguishable from a b quark. Note that this provides one concrete mechanism by which a sbottom would be counted as a b quark in the precision electroweak measurements. First, we note that a very light bino in this case is ruled out by searches for invisible decays of the Υ(1S). Sbottom exchange will mediate the annihilation decay Υ → B̃B̃ into a pair of binos. Using the results of Ref. [69] we obtain the branching ratio Br(Υ → B̃B̃) ≈ 3.5 × 10⁻³ (1 − (3/2) sin²θ_b)². A search from BaBar [70] in events containing Υ(3S) → π+π− Υ(1S) leads to the constraint Br(Υ(1S) → invisible) < 3.0 × 10⁻⁴. In order to evade this bound, a mixing angle of θ_b ≳ 0.76 is needed, which is well outside the region allowed by the precision electroweak data (see Fig. 7 for m_b̃1 = 5.5 GeV); a short numerical check of this threshold is given below. Thus, a pure bino LSP is ruled out in this scenario. In principle, it may be possible to weaken this bound by allowing a large admixture of Higgsino in the lightest neutralino eigenstate, but then one must contend with other challenges, such as new Z and Higgs decays to neutralino pairs and the expectation of very light charged states. Instead, one can consider a gravitino or singlino LSP. In this case, we expect that the second sbottom will preferentially decay via b̃2 → Z b̃1, b̃2 → h⁰ b̃1 rather than directly to the LSP. Therefore, since the b̃1 will be reconstructed as a b quark, the signature of the second sbottom b̃2 will be very similar to that of a heavy B quark which decays via B → Zb, B → h⁰b. Searches for B → Zb have been carried out by ATLAS [71,72], CMS [73][74][75], and CDF [76].
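Taking the branching-ratio expression and the BaBar limit quoted above at face value, the θ_b ≳ 0.76 threshold follows from simple arithmetic; the sketch below just scans the mixing angle until the predicted invisible branching ratio drops below the limit (the two input numbers are those quoted in the text, everything else is plain bookkeeping).

```python
import math

BR_PREFACTOR = 3.5e-3   # Br(Upsilon -> bino pair) at theta_b = 0, as quoted in the text
BABAR_LIMIT = 3.0e-4    # Br(Upsilon(1S) -> invisible) upper limit from BaBar [70]

def br_invisible(theta_b):
    """Sbottom-mediated Upsilon -> bino bino branching ratio as a function of theta_b."""
    return BR_PREFACTOR * (1.0 - 1.5 * math.sin(theta_b) ** 2) ** 2

# Scan theta_b and report where the prediction first drops below the BaBar limit.
theta = 0.0
while br_invisible(theta) > BABAR_LIMIT:
    theta += 0.001
print(f"Br falls below the BaBar limit for theta_b >~ {theta:.2f} rad")
```

The scan returns roughly 0.76 rad, consistent with the value quoted above.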
While the CDF [76] search does not constrain the second sbottom, the ATLAS 7 TeV search [72] explicitly covers masses heavier than 200 GeV, and the quoted upper limit on the cross section appears to rule out the second sbottom in the mass range 200-230 GeV. Furthermore, the ATLAS 8 TeV search [71] explicitly covers masses heavier than 350 GeV and rules out sbottoms in the mass range 350-450 GeV. The CMS searches explicitly cover masses above 350 GeV but may also have sensitivity to lighter sbottoms. Naive extrapolation of these limits to lower b̃2 masses would seem to rule out additional parameter space, although a more detailed study is needed to draw a firm conclusion. The collaborations should carry out searches to explicitly cover the possibility of a second sbottom which mimics a B quark but has a lower production rate. Furthermore, since the lightest sbottom is mostly left-handed, there is also a light stop in the spectrum, which will dominantly decay via t̃1 → W b̃1. Therefore, the stop will mimic in all respects a fermionic top partner, T, but with a much lower production rate. C. Other possibilities We have given a cursory discussion of the collider constraints on the very light sbottom scenario in the case that the LSP is a neutral fermion. The scenario appears to face strong constraints, particularly if m_b̃1 ≲ 15 GeV, since in this case the second sbottom should be lighter than about 300 GeV and is constrained by various searches at colliders. However, a detailed study and recasting of existing searches should be performed to see if any allowed regions exist. One possible way out of these limits is to consider additional light states in the spectrum, e.g., electroweakinos, which may open up new decay modes for the second sbottom. Additionally, one could consider the lightest sbottom to be the LSP and to decay through a small R-parity violating coupling. For example, if the sbottom decays through a UDD operator to a pair of jets, then the direct collider constraints on the scenario will be quite weak due to the huge QCD background. VIII. DISCUSSION AND CONCLUSIONS In this paper we have investigated a scenario containing a very light sbottom with mass m_b̃1 ≲ m_Z/2. We have focused on the indirect, but less model-dependent, constraints obtained from the sbottom contributions to the precision electroweak and Higgs signal strength measurements. Particularly for a light first sbottom with mass below about 15 GeV, these combined datasets leave open only a small region of parameter space and predict that the second sbottom is below about 300 GeV. However, if the first sbottom is a bit heavier, then the mass of the second sbottom can be raised to much higher values. The conclusions above are drawn under the assumption that the sbottom is "counted" as a b-quark. In this case the precision constraints coming from the production of sbottoms through the new decay Z → b̃1 b̃1* and the continuum process e+e− → b̃1 b̃1* are the weakest. However, the sbottom may be counted as hadrons or alternatively may be "invisible", i.e., not populate the signal regions in the searches entering the precision measurements. If so, the constraints from the precision data are much stronger. We emphasize that the measurement of the total hadronic cross section at the Z peak, σ⁰_had, plays an important role in constraining a light sbottom and provides a much stronger constraint than the total Z boson width. This provides a new constraint on the dark matter scenario of Ref.
[30], and in particular disfavors their benchmark models. With the discovery of the Higgs boson, the viable parameter space for a light sbottom has been whittled down to a small region in which the coupling of the Higgs to the lightest sbottom is suppressed. With the current precision of the h → bb signal strength measurements, these constraints are not sensitive to the manner in which the sbottom decays. Rather, it is the dilution of the branching ratios in the γγ, ZZ, and WW channels that is the primary source of the constraints. These constraints are complementary to those provided by direct searches at colliders, since the latter are more strongly dependent on the assumed decay mode of the sbottom. We have presented a qualitative overview of the collider constraints for a canonical scenario with a bino, gravitino, or singlino LSP, with minimal assumptions about the spectrum of the lightest states. Dedicated searches should be carried out by ATLAS and CMS to cover the compressed and stealth kinematic regimes in this scenario.
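The dilution mechanism described above is easy to quantify in a back-of-the-envelope way: an extra partial width into light sbottoms rescales every SM branching ratio by Γ_SM/(Γ_SM + Γ_new), while an enhanced gluon-fusion cross section multiplies the signal strength back up. The sketch below illustrates only this bookkeeping; the SM total width of about 4.1 MeV is a standard reference value, and the new-physics inputs are invented for the example rather than taken from the fits in the text.

```python
GAMMA_H_SM = 4.1e-3  # SM Higgs total width in GeV (approximate, for a ~125 GeV Higgs)

def signal_strength(gamma_new, ggF_enhancement):
    """mu for an SM-like decay channel: production rescaling times branching-ratio dilution.

    gamma_new        -- extra partial width, e.g. h -> sbottom pairs, in GeV (assumed value)
    ggF_enhancement  -- sigma(ggF)/sigma(ggF)_SM from squark loops (assumed value)
    """
    br_dilution = GAMMA_H_SM / (GAMMA_H_SM + gamma_new)
    return ggF_enhancement * br_dilution

# Example: a 1 MeV new partial width dilutes all SM branching ratios by ~20%,
# which a ~25% gluon-fusion enhancement would roughly compensate.
for gamma_new, kg in [(0.0, 1.0), (1.0e-3, 1.0), (1.0e-3, 1.25)]:
    print(f"Gamma_new = {gamma_new*1e3:.1f} MeV, ggF x{kg:.2f} -> mu = {signal_strength(gamma_new, kg):.2f}")
```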
ANALYSIS OF ORGANIZATIONAL DEVELOPMENT PROCESSES AND INTERVENTIONS IN THE DIGITAL TECHNOLOGY INDUSTRY: AN OVERVIEW OF THE PRACTICES TO POLICIES FRAMEWORK ABSTRACT Introduction. Organizational development (OD) processes and interventions are essential to unlocking the substantial benefits that digital technology can offer society and the economy, and the associated challenges must be addressed by government and industry leaders. Detailed analysis of the outcomes of assistive technology challenges and innovations is extremely valuable to experienced OD practitioners (Smith, R. O. 1996). A systematic review of methods for evaluating area-wide and organisation-based interventions in health and health care indicates that many of its best practices can be applied by OD professionals (Ukoumunne, O. C., Gulliford, M. C., Chinn, S., Sterne, J. A., and Burney, P. G. 1999). As digital technology transforms most industries and creates new challenges, it is essential for business leaders to understand the processes and interventions connected with organization development. Contributing factors include the pace at which changes occur, the processes and interventions being developed for cultural organizations, outdated regulations, the identification of skills needed for the future, shortcomings in legacy systems, and the need to fund both digital and physical infrastructure. It is important to note that the digital transformation of organizational development processes and interventions in the information technology industry does not happen in a vacuum. External OD practitioners have a significant role to play. In some cases digitalization can be accelerated, while in others it may be hindered (Tiwari, 2023). An important feature of OD processes and interventions within the information technology industry has been the quantification of the value that digitalization can deliver for both business and society over the next decade, across sectors such as aviation, travel, and tourism. The digital revolution is one of the most fundamental forces of change in our lives today, and it offers a unique chance to shape the future. Organizations in the information technology industry must therefore be aware of the importance of organization development processes and interventions and know how these can be applied to them. As a result of OD interventions, IT organizations have been given the tools they need to understand the implications of these interventions, become more productive, and create better social and business opportunities. Driving Value through Organizational Development Processes and Interventions.
In the IT industry there has been an excellent opportunity for organizations to develop their organizational processes, and the aviation, travel, and tourism ecosystem has been affected as well. This is a significant opportunity for the aviation industry, and for society as a whole, to unlock billions of dollars of value over the next decade. Digital transformation is having a significant impact on every element of the aviation, travel, and tourism value chain. Demand-side dynamics have changed radically as platforms such as Grab and Lazada in Southeast Asia make it possible for smaller entrepreneurs to compete with larger players in terms of organizational development processes and interventions. Meanwhile, other mobile and internet-based travel companies are using up-to-date customer information to change the way travellers discover travel offers. Today's travel ecosystem is in the midst of a transformation, with blurring boundaries, changing roles, and changing organization development processes that are reshaping the industry landscape as a whole. The digital transformation of manufacturing organizations has produced new technologies that are revolutionizing both development processes and interventions, enabling real-time optimization of asset deployment and augmentation of the workforce. A company's core operational processes will be changed by innovative ways of working and by new digital platforms built on artificial intelligence (AI), the Internet of Things (IoT), and augmented and virtual reality technologies. Over the next decade, specific digital initiatives will be identified within each theme to serve as the building blocks for digital transformation in the aviation, travel, and tourism sectors, as well as in other sectors. A number of these initiatives will be an integral part of the organization's development processes and interventions within each theme. In their research, the authors demonstrate how organization development processes and interventions, as currently implemented, can have a significant impact on organizations. Elements of Organizational Development Processes and Interventions in the Digital Technology Industry. A well-designed organization development process and intervention approach integrates, aligns, and strengthens the interconnections between the hard-wired components within a firm. Organization development strategies and interventions in the information technology industry typically consist of structures, processes, policies, implementation practices, information technology systems, and performance metrics. Alongside these are soft-wired elements, such as a common goal, shared values, human abilities, beliefs, and behaviors, that contribute to a successful organization.
As organization development processes in the information technology industry increasingly take place in a digital environment, firms must improve collaboration and inter- and intra-team work, and focus on adding value for their clients in order to stay competitive. A key lever here is the development of organizational development processes and interventions in the information technology industry. The term 'organisational architecture' is used in two ways. The first concerns the organization's physical space and the impact of the physical environment on employees. The second concerns the organisational structure of the company and the establishment of hierarchical roles, procedures, and formal reporting relationships. Behavioural psychology shows that employees' behaviour is directly influenced by the organisational architecture of a company in both senses of the word. In the information technology industry, organization development processes and interventions have already undergone substantial changes and improvements in light of the rapid spread of technologies and their impact on human work. For this reason, managers and employees need guidance on how to organize their working processes, how to share their experiences, and how to make use of the information they receive. In the information technology industry, silos can exist in the organization development processes and interventions built around knowledge-based skills or specific job functions, or they can exist geographically. The authors argue that silos are one of the most crucial factors affecting productivity in many industries, especially in the IT sector. Silos can make it harder for the very parts of a company that need to work together during organizational transformation to do so, because they are unaccustomed to collaborating and in some cases unable to communicate with one another due to cultural misalignment or an inherent sense of distrust and territoriality. These problems can complicate the process of bringing about change, or delay and slow it down before the hoped-for results are reached. Development of research and design methods. The aim of this study was to explore, through a qualitative research design, how organization development processes and interventions operate in the information technology industry. As part of this qualitative research, the relationships between the variables in the survey were examined and analyzed to provide deeper insight into the issues raised in the survey. Involved participants.
In order to formulate the study's recommendations and conclusions, 205 professionals from 35 organizations were interviewed in depth and participated in focus group discussions. The researchers studied 205 professionals from 35 organizations working in areas such as technology, organizational development, and change management. The findings were based on the results of in-depth interviews conducted throughout the research process as well as focus group discussions held during the course of the study. The authors (Dr Marivic Castillo and Siddhartha Paul Tiwari) note that respondents were chosen on the basis of their knowledge of, and experience with, the topic of the study at the time of selection. Respondents were asked about several aspects of organization development processes and interventions in the information technology industry, including trends in throughput, the increase in economic activity resulting from organizational development processes and interventions in the information technology industry, and the development of human resources, education, and social protection, among other factors related to organizational development. Instruments used in the study. We developed a questionnaire during the process and discussed it with experts in the field. A primary expert, a representative of UNESCO, and a secondary expert, a representative of the American Chamber of Commerce, were interviewed for this paper. The purpose of this interaction was to gather their feedback, which was highly valued since they are involved in organization development processes and interventions in the information technology industry as part of their regular work duties. The experts were responsible for ensuring that the questionnaire would be acceptable to, and understood by, the general public. After the interviews had taken place, an expert was consulted to check the quality of the responses provided by the participants. Because the responses contain the information the participants provided, they allow us to draw conclusions about what the participants said. The procedure involved.
The authors of the study conducted face-to-face interviews with representatives of the organizations in order to gather the necessary data for the study. The purpose of this approach was to improve the overall response rate of this research on organization development processes and interventions in the IT industry. The face-to-face interaction also gave respondents an opportunity to clarify any confusion they may have had about the research and to verify that their responses were accurate. Face-to-face interviewing is one of the best methods for obtaining accurate responses, facilitating an accurate interpretation of survey tools, and enhancing the quality of the data collected, which strengthened the validity of the findings. Analyzing the data. This study examined how organizations develop their organizational development processes and interventions in the information technology industry, using Microsoft Excel and Super Decisions software together with the source data collected from the targeted respondents. The purpose of the analysis was to draw conclusions from the data collected from those respondents. The interviews were qualitatively evaluated specifically for organization development processes and interventions in the information technology industry, and the outcome is summarized in the results and conclusions sections of this paper. The result and discussion sections that follow examine the outcome of the interviews and the accompanying discussion. Fig. 1: A framework developed by the authors of this study (Dr Marivic Castillo and Siddhartha Paul Tiwari) on using a tiered OD intervention process to improve operational efficiency in technology companies such as Apple, Alphabet Inc., Microsoft, Amazon, Samsung Group, Tencent Holdings, and Meta Platforms. Challenges.
Identifying and addressing challenges related to the development of organizational processes and interventions within the information technology industry is closely tied to how policies and practices are developed. The majority of the challenges organizations face are attributed to employees, strategies, and ineffective systems within them. The rapid pace of change and the availability of new technology have made it possible for organizations to adapt more easily to the challenges of modern society. Nevertheless, many companies fail to build a pipeline of qualified leaders because they lack effective strategies. The researchers argue, on the other hand, that the development of a business is mainly determined by its workforce. A company with a multigenerational workforce, however, will face challenges in addressing workers' needs for career development. According to the authors' research, management faces difficulties when communicating a firm's organizational priorities to employees, stakeholders, and the general public in a strategy-oriented fashion. Engaging and retaining a diverse workforce, in turn, may prove a very challenging or risky endeavor for an organization, which might inhibit its ability to continue developing innovative approaches to organizational development. In order to make innovative decisions, we need a diverse workforce, as its members bring different perspectives, ideologies, cultures, and languages. Globalisation, together with advances in technology, has made it easier to retain employees from different countries. The majority of these employees work from home, and managing such a workforce can sometimes be a challenge as well. Research has shown that retaining a new workforce can be challenging due to issues with policies or with a company's human resources department. A low budget can also limit a company's ability to advertise job openings. There are usually challenges involved in changing the organizational activities or culture of a company in any sector, such as construction, healthcare, retail, or education. It is imperative for an enterprise to adopt significant top-down changes in order to avoid losing its grip on market competition. With limited funds, an enterprise is unable to develop skill-enhancement programs for its employees. The challenge is to develop the productivity of employees as well as the productivity of the company as a whole. Finally, companies are often challenged in developing talent when they do not have proper policies and strategies in place. Developing and managing talent is essential for companies to ensure that their customers are satisfied, leading to a more positive brand image and value in a competitive market environment. Organizational challenges are on the rise, disrupting entire business operations, because of a lack of leadership skills and poor management skills. Unless leaders are aware of existing cooperation, the result can be a lack of alignment between team members, which has a ripple effect throughout the entire organization. Recommendations.
Following are some recommendations developed by the authors based on their research: 1. Technological advances give organizations the ability to reorganize their structures, activities, and cultures, allowing them to achieve a much higher level of effectiveness. For this to become a reality, emerging technology must be exploited to the fullest extent so that maximum results are achieved. Increases in productivity can be attributed to organizations being able to grasp, appreciate, and absorb current technological advances into their structure, their output, and their culture. Businesses can save money and time by implementing efficient business processes in their daily operations. As part of their strategy to maintain market share, organizations may also try to integrate as much new technology as possible. 2. After the pandemic, a number of newly industrialized countries around the world are facing severe competition in both local and international markets as a result of the turbulent economic situation. As artificial intelligence and other emerging technologies become more prevalent, industries will be required to upgrade to the newest technologies and will no longer be able to rely on the cheap labor costs that have underpinned their competitiveness for decades. Because this situation affects many countries, the question arises of how industries are coping with incorporating the newest technologies into their manufacturing processes. In this paper, we offer recommendations on how organizations can systematically investigate trends in organizational change and technological innovation in relation to quality management practices around the world in order to improve those practices. 3. As the authors advise, tinkering around the edges of existing organizational models will not be sufficient to cure this problem. Organizations have always been designed for stability, efficiency, and predictability through tightly defined jobs, hierarchies, and organizational boundaries, which can now be fully enabled by newly emerged technologies. Adapting to new technologies, and to the new expectations and skills they bring, means that the fundamental logic of organizational mechanics relied on for more than 80 years is being upturned as a direct result of these new technologies. 4. Most discussions about emerging technologies focus heavily on their human implications, in particular the number of jobs that will be replaced. Although using emerging technologies to replace employees may seem attractive, doing so is, for now, a limiting concept. In order to leverage the unique strengths on both sides and unlock their potential, organizations must shape how people and technology work together. 5.
It is imperative that we fundamentally redesign the way work is done in order to prepare for the arrival of new technologies and to ensure success in an increasingly technological world, as operations become less exclusively human-designed and human-run. There is a constant shift under way between humans and technology, so it is important to leave room for ongoing change and to keep the boundaries between them open. To fulfill this goal, the organization will need to constantly rethink its structure and accelerate its planning cycles in order to keep up with changes in skills, work, and emerging technologies. As part of the reskilling process, people will also need to be retrained on a continuous basis rather than only when necessary. Finally, it will take an overall way of working that is more iterative, cross-functional, and inherently human, and that is capable of decentralized decision making within empowered teams rather than centralized decision making at all levels within a department or organization. Conclusion. In conclusion, this research has developed a proposed framework for improving operational efficiency in the technology sector using a tiered OD intervention process. The four stages of the process of organizational development and interventions in the IT industry are as follows: Stage One - Feasibility, Stage Two - Exploratory, Stage Three - Efficacy, and Stage Four - Dissemination and Implementation. A significant outcome of the study was the categorisation of projects based on their impact, as shown on the 'X' axis, and on the requirement for a change management plan in the early stages of an OD process and intervention. As a final learning, the authors discovered that a successful OD intervention of this scale requires a process for tracking and managing change that is designed to enhance the performance of the organization. OD interventions aim to improve the organization by changing leadership, organizational structures, and behavioral patterns.
Application of Electrospun Nanofiber Membrane in the Treatment of Diabetic Wounds Diabetic wounds are complications of diabetes which are caused by skin dystrophy resulting from local ischemia and hypoxia. Diabetes keeps wounds in a pathological state of inflammation, resulting in delayed wound healing. The structure of electrospun nanofibers is similar to that of the extracellular matrix (ECM), which is conducive to the attachment, growth, and migration of fibroblasts, thus favoring the formation of new skin tissue at the wound. The composition and size of electrospun nanofiber membranes can be easily adjusted, and the controlled release of loaded drugs can be realized by regulating the fiber structure. The porous structure of the fiber membrane is beneficial to gas exchange and exudate absorption at the wound, and the fiber surface can be easily modified to give it function. Electrospun fibers can be used as wound dressings and have great application potential in the treatment of diabetic wounds. In this study, the applications of polymer electrospun fibers, nanoparticle-loaded electrospun fibers, drug-loaded electrospun fibers, and cell-loaded electrospun fibers in the treatment of diabetic wounds are reviewed, providing new ideas for the effective treatment of diabetic wounds. Normal Wound Healing Process Normal wound healing can be divided into four stages: hemostasis, inflammation, proliferation, and remodeling. There are complex and dynamic interactions among the four stages [14][15][16]. The wound-healing process is shown in Figure 1. Figure 1. The normal wound-healing process [17] (ACS, 2019). Hemostasis Stage The hemostasis stage of wound healing refers to the body first promoting the contraction of vascular smooth muscle cells through the neural reflex mechanism, which causes rapid contraction of the damaged blood vessels and triggers hemostasis [18]. Then platelets aggregate to form blood clots at the wound and start hemostasis [19]. Finally, platelets rupture and release growth factors (such as platelet-derived growth factor (PDGF), transforming growth factor-β (TGF-β), and epidermal growth factor (EGF)), thus attracting neutrophils, macrophages, and fibroblasts, which play a role in the subsequent healing phases [20][21][22][23]. Inflammation Stage The inflammation stage is the process of protecting the wound from infection by microorganisms by forming an immune barrier. The early inflammatory stage occurs 24-36 h after injury, with neutrophils playing a major role. Neutrophils engulf microorganisms and foreign bodies and then are squeezed to the surface of the wound, after which they are swallowed by macrophages [24]. The later inflammatory stage occurs 36-72 h after injury, and macrophages play a role [25]. Macrophages phagocytose microorganisms, necrotic tissue, and fragments, and secrete a large number of growth factors (such as PDGF, interleukin-1 (IL-1), and tumor necrosis factor (TNF)). Macrophages can also promote granulation tissue formation by activating keratinocytes, fibroblasts, and endothelial cells [26,27]. Proliferation Stage The proliferation stage mainly refers to the process of granulation tissue formation, angiogenesis, and re-epithelialization [28,29]. Under the action of growth factors produced by platelets and macrophages, fibroblasts proliferate and secrete collagen, and increase the amount of ECM components in the wound, thus promoting granulation tissue formation [30]. Angiogenesis is the key process of wound healing. The anoxic environment of wound tissue and growth factors secreted by macrophages can stimulate the proliferation of endothelial cells. Under the action of angiogenic factors, such as the vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF), endothelial cells promote the formation of new blood vessels [31]. At 48-72 h after injury, keratinocytes in the wound proliferate under the action of EGF, TGF-β, and other cytokines, which stimulate re-epithelialization [32,33]. Remodeling Stage The remodeling stage is the final stage of wound healing.
Under the action of cytokines, fibroblasts increase expression of α-smooth muscle actin (α-SMA) and transform into myofibroblasts. Under the action of myofibroblasts, the wound continuously contracts to form a scar, cell apoptosis and cell regeneration reach a balance, and components such as proteins and collagen in the ECM tend to be stable. This remodeling process can take months or even years to complete [34]. Local ischemia and hypoxia lead to skin dystrophy, and skin dystrophy causes diabetic wounds [39,40]. The hemostatic stage and inflammatory stage of diabetic wound healing follow normal wound healing. However, some internal factors (such as vascular disease and neuropathy caused by diabetes) and external factors (such as persistent wound infection) make it difficult for diabetic wounds to transition from the inflammation stage to the proliferation stage, resulting in slow healing [41]. Complications of diabetic vascular disease deprive the wound of the oxygen and nutrients necessary for healing, which makes angiogenesis difficult and hinders the transition of wound healing from the inflammation stage to the proliferation stage [42][43][44][45]. In the later inflammation stage of normal wounds, macrophages change from pro-inflammatory M1 macrophages to repair-promoting M2 macrophages, which accelerates the resolution of inflammation. However, it is difficult for macrophages to change from M1 to M2 in diabetic wounds, which leads to the production of pigment epithelium-derived factor (PEDF) and inhibits angiogenesis. The hyperglycemic environment of diabetic wounds is conducive to bacterial proliferation, and diabetic wounds suffer from repeated bacterial infection over a long period. Diabetic patients are in a state of hyperglycemia for a long time, and their immune function is abnormal. When bacterial infection occurs in the wound, abnormal immune function often leads to an excessive inflammatory reaction, which keeps the wound in the inflammation stage for a long time [46][47][48][49]. Because diabetic wounds remain in the inflammation stage for a long time, neutrophils and macrophages continuously produce large amounts of inflammatory cytokines and reactive oxygen species (ROS), further damaging normal tissues and cells in the wound [50][51][52]. At the same time, the presence of a large amount of ROS causes fibroblasts to lose their normal function and slows the deposition of ECM. The persistent inflammation stage of diabetic wounds also leads to overexpression of MMP-2 and MMP-9, which results in rapid degradation of the ECM. The slow deposition and rapid degradation of ECM affect the adhesion of fibroblasts, resulting in slow wound healing. In addition, the expression of TGF-β and other growth factors in diabetic wounds also decreases significantly because of the prolonged inflammation stage, which hinders the proliferation and migration of keratinocytes and slows down re-epithelialization. Electrospinning Electrospinning is a technology for the preparation of nanofibers. The morphology and mechanical properties of electrospun fibers can be adjusted through the properties of the polymer solution (such as the relative molecular mass of the polymer, solution concentration and viscosity, and solvent properties), the process parameters (such as applied voltage, solution injection speed, and receiving distance of the fiber), and the environmental conditions.
Properties of the Polymer Solution (1) Relative Molecular Mass of the Polymer The relative molecular mass of the polymer is an important parameter affecting electrospinning, as it directly affects the rheological and electrical properties of the electrospinning solution. Electrospinning can be carried out only when the relative molecular mass of the polymer reaches a certain value. Polymers with low molecular weight tend to form beads during the electrospinning process, while long and continuous nanofibers can be prepared by electrospinning polymers with high molecular weight. The larger the molecular weight of the polymer, the larger the fiber diameter [53]. (2) Solution Concentration and Viscosity The viscosity of the polymer solution can affect the morphology of electrospun fibers. In the process of electrospinning, a solution with low viscosity readily forms beads, and an increase in solution viscosity is conducive to the formation of nanofibers [54]. The entanglement of polymer molecular chains in the solution enables the solution to reach a certain viscosity. The larger the molecular weight of the polymer, the more easily the chains entangle, and the higher the viscosity of the solution. When the molecular weight of the polymer is fixed, the concentration of the solution becomes an important factor affecting the fiber morphology. (3) Solvent Properties Solvent properties have an important effect on the electrospinning process. Firstly, the solvent should have good solubility for the electrospun polymer. In addition, it should have good volatility. In the process of electrospinning, solvents that volatilize quickly can ensure the continuity of the electrospun fiber. Process Parameters (1) Applied Voltage In the process of electrospinning, the voltage applied to the polymer fluid must exceed a certain critical value to generate sufficient electrostatic repulsion to overcome its surface tension, resulting in the formation of tiny jets that form fibers. The diameter of fibers prepared at higher voltage is smaller, but too high a voltage will lead to an increase in bead defects and fiber diameter. The type of voltage (direct current or alternating current) will also have an impact on electrospinning. When using direct current, the jet is unstable and the fiber is difficult to deposit on the receiving device, while alternating current can reduce the instability of the jet, and the fiber diameter is smaller [55]. (2) Solution Injection Speed Increasing the solution injection speed will lead to the expansion of the jet diameter, which increases the fiber diameter, while too slow a solution injection speed can prevent fibers from forming or lead to fiber discontinuity. (3) The Receiving Distance of the Fiber The receiving distance of the fiber will affect the volatilization of the solvent, which directly affects the diameter and morphology of the electrospun fiber. When the receiving distance is small, solvent volatilization is not complete, resulting in uneven fiber diameters. When the receiving distance increases to a certain extent, electrospun fibers with small and uniform diameters can be obtained. When the receiving distance is large, the electric field intensity will decrease with increasing receiving distance, which will reduce the jet velocity and weaken the stretching effect, resulting in an increase in fiber diameter. In addition, if the receiving distance is too close or too far, it will lead to the formation of beads. Environmental Conditions The temperature and humidity of electrospinning will affect the deposition behavior of fibers on the collector.
Lower temperature will reduce the rate of solvent volatilization, leading to incomplete solidification of fibers. Increasing the temperature can accelerate the evaporation of solvent and produce continuous fibers, but too high a temperature will block the spinneret. The environmental humidity of the electrospinning process should be suitable; low humidity leads to the formation of bead defects in electrospun fibers, and the fiber surface tends to adopt a porous structure. The properties of the polymer solution, the process parameters, and the environmental conditions together affect the electrospinning process. The solvents used in some electrospinning systems are toxic and have poor environmental friendliness. It is necessary to consider the receiving distance when adjusting the applied voltage, and some humidity-sensitive electrospinning systems have strict requirements on humidity. The influence of all electrospinning parameters on the process must therefore be considered comprehensively in order to obtain environmentally friendly electrospun fibers with excellent properties. Advantages of Electrospun Nanofiber Membranes in the Treatment of Diabetic Wounds At present, dry gauze and hydrophilic gels are mainly used to treat diabetic wounds. Dry gauze is a common wound dressing. However, its high absorption capacity for tissue fluid can easily lead to wound dehydration, which is not conducive to wound healing. When the gauze is removed, it can also easily damage the newly formed skin. Hydrophilic gels can keep the wound in a moist environment, and their structure is beneficial to gas exchange and exudate absorption at the wound site. However, hydrophilic gel materials cannot simulate the natural ECM structure, which is not conducive to the attachment, growth, and migration of fibroblasts and affects the wound-healing process. Nanofiber membranes are a type of nanostructured material prepared by electrospinning technology. An electrospun nanofiber membrane is an aggregate in which electrospun nanofibers with diameters below 1000 nm interconnect to form a web structure. Such membranes have a large specific surface area, high porosity, small pore size, and adjustable composition, structure, and size. The structure of electrospun nanofibers is similar to that of the ECM, and their porous structure is conducive to gas exchange and exudate absorption at the wound site, which promotes the regeneration of skin tissue in the wound area. Electrospun fiber membranes can also be loaded with therapeutic drugs or active ingredients and can achieve controlled and sustained drug release through structural adjustment, which offers great prospects for application in the treatment of diabetic wounds. Uniaxial Electrospinning Uniaxial electrospinning refers to the preparation of electrospun nanofibers by spinning the solution through a single nozzle. The operation of uniaxial electrospinning is simple, and the compounding of many active components can be achieved by adjusting the composition distribution ratio [56].
Xie added vascular endothelial growth factor (VEGF) and platelet-derived growth factor (PDGF) to a poly(lactic-co-glycolic acid) (PLGA) solution, prepared VEGF- and PDGF-loaded PLGA nanoparticles by compound emulsion technology, and then dispersed the nanoparticles in a mixed solution of chitosan (CS) and polyethylene oxide (PEO) to prepare the electrospinning solution. VEGF- and PDGF-loaded PLGA/CS/PEO nanofiber films were prepared by uniaxial electrospinning, with fiber diameters ranging from 130 nm to 150 nm. The fiber membrane can continuously release the two growth factors over 7 d, which significantly improves the wound-healing effect in diabetic rats [57]. Uniaxial electrospinning requires that the polymers and active ingredients dissolve in the same solvent, so the selection of solvent is more stringent. In addition, uniaxial electrospinning tends to lead to uneven distribution of drugs in the fiber, and burst release of drugs often occurs, so stable active ingredients with a high toxicity threshold are generally selected. Emulsion Electrospinning Polymers and active ingredients which are difficult to co-dissolve, as well as active ingredients which are susceptible to inactivation, can be processed by emulsion electrospinning to prepare drug-loaded fibers. Emulsion electrospinning refers to the method of first mixing the aqueous phase and organic phase to form an emulsion, and then electrospinning the emulsion to prepare nanofibers with a core/shell structure [58]. By using emulsion electrospinning, active components which are easily deactivated or unstable can be loaded into the fiber core layer to improve their stability. Raghunath added easily oxidized vitamin C, easily deactivated EGF, and insulin into a PLGA and collagen mixed solution to make an emulsion. PLGA/collagen nanofiber membranes loaded with various bioactive substances were prepared by emulsion electrospinning. The fiber diameters were 210 ± 62 nm. The release of EGF from this fiber membrane in 8 h was as high as 97%, the release of insulin in 25 h was about 80%, and the release of vitamin C in 12 h was 30%. The prepared fiber membrane can simultaneously deliver a variety of bioactive substances, and the synergistic effect of the various bioactive substances can promote the proliferation of keratinocytes and fibroblasts, which is conducive to diabetic wound healing [59]. The construction of a stable emulsion system is key to emulsion electrospinning. At present, the emulsion is mainly prepared by a high-energy emulsification method, which consumes much energy and may destroy the active components of drugs. The exploration of low-energy emulsification methods is currently a research focus in emulsion electrospinning. Coaxial Electrospinning For easily inactivated protein drugs, coaxial electrospinning can be used to protect the easily inactivated ingredient. Coaxial electrospinning refers to a method of preparing nanofibers with a core/shell structure by spinning the solution with a coaxial nozzle. Coaxial electrospinning can simultaneously spin two solutions with different compositions, which prevents mixing interference between different active components. The core/shell structure of coaxially electrospun fibers can also achieve controlled drug release [60]. Lee dissolved PLGA in 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) to obtain a shell spinning solution and used insulin glargine as the core-layer spinning solution.
Insulin-loaded core/shell nanofibrous scaffolds with insulin as the core layer and PLGA as the shell layer were prepared by coaxial electrospinning. Core/shell structure nanofibrous scaffolds can protect the activity of insulin and the fiber membrane can slowly release insulin for 28 d. In vivo experiments in Sprague-Dawley rats showed that nanofibrous scaffolds can increase expression of TGF-β in the wound and promote wound healing in diabetes mellitus [61]. Coaxial electrospinning is widely used in biomedical fields, but requires that the core and shell solutions must be solidified synchronously in the electrospinning process, which means the method has higher requirements for the preparation process [62][63][64]. Application of Electrospun Nanofiber Membranes in the Treatment of Diabetic Wounds Polymer electrospun fibers, nanoparticle-loaded electrospun fibers, drug-loaded electrospun fibers, and cell-loaded electrospun fibers prepared by uniaxial, emulsion and coaxial electrospinning can be used to treat diabetic wounds. Treatment of Diabetic Wounds with Electrospun Synthetic Polymer Fibers Synthetic polymer electrospun fibers have good mechanical strength and stability, and the fibers, with a specific structure and properties, can be directly applied in the treatment of diabetic wounds. Maggay prepared an electrospinning solution by dissolving polyvinylidene fluoride (PVDF) and zwitterionic polymer poly(2-methacryloyloxyethyl phosphorylcholine-co-methacryloyloxyethyl butylurethane) (PMBU) in a mixed solvent of dimethylformamide (DMF)/acetone (v/v = 6:4), and zwitterionic PVDF membranes (P5) were constructed by uniaxial electrospinning. PMBU can improve the hydration of the P5 membrane, reduce the biological pollution of protein and bacteria, improve blood fusion, and thus promote wound healing. A 24 h bacterial adhesion experiment showed that the bacterial adhesion number of P5 was 1500 cells/mm², which can effectively prevent bacterial adhesion. The diabetic wounds of mice were treated with P5, PVDF fiber membrane (P0), and the commercial wound dressing DuoDerm, respectively, and wounds treated with 3M Tegaderm Film were used as a control. After 14 d of wound treatment, the wound closure rate of the P5 group was 85%, while that of the P0 group and the DuoDerm group was 81 and 90% respectively. The wound-healing effect of the P5 group was similar to that of the DuoDerm group, and had a better diabetes wound-healing effect [65]. The photos of the treatment of diabetic wounds with the P5 membrane are shown in Figure 2.
To exploit the easy degradation and good biocompatibility of natural polymers, they can be blended with synthetic polymers (such as polyvinyl alcohol (PVA) and polycaprolactone (PCL)), or synthetic polymer fibers can be surface-modified with natural polymers. The resulting natural/synthetic polymer fibers retain the advantages of natural polymers while overcoming their poor mechanical strength, which broadens the applications of natural polymers. Gholipour-Kanani prepared CS/PVA and PCL/CS/PVA nanofiber membranes by uniaxial electrospinning and used them to treat diabetic wounds. Diabetic rats were randomly divided into three groups: the CS/PVA fiber membrane treatment group (S1), the PCL/CS/PVA fiber membrane treatment group (S2), and the untreated diabetic wound group (the control group). The initial wound area of the diabetic rats was 50.25 ± 0.01 mm². After 20 d, the wounds of the S1 and S2 groups had essentially healed (wound areas of 1 ± 0.5 mm² and 1.8 ± 0.7 mm², respectively), while the wound area of the control group remained larger (14.3 ± 0.5 mm²). In addition, the pathological results after 20 d showed inflammation in the control group but none in the S1 and S2 groups [66]. PCL/type I collagen nanofiber membranes with different fiber spatial arrangements (random, aligned, and crossed) were prepared by uniaxial electrospinning and used to treat the wounds of diabetic rats. After 7 d of treatment, the wound healing rate was 70% in the crossed group, 62% in the aligned group, and 56% in the random group, while it was only 40% in the control group.
After 14 d, the wound healing rate of the crossed group was the highest, in excess of 95%. The results showed that the fiber arrangement has a great influence on the diabetic wound-healing effect [67]. Natural polymer gels can maintain a moist wound environment and promote wound healing, but Gel fibers dissolve immediately on contact with water and lose their fibrous form [68]. To address the ready solubility of Gel nanofibers in water, Sanhueza used a poly-3-hydroxybutyrate (PHB) (8% w/v) chloroform solution and a Gel (30% w/v) acetic acid solution as spinning solutions, and dual-sized Gel/PHB nano/microfibers (Gel/PHB) were prepared by dual-jet electrospinning with double needles. The wounds of diabetic rats were treated with the Gel/PHB fiber membrane as well as with a PHB microfiber membrane and a Gel nanofiber membrane obtained by uniaxial electrospinning, and wounds treated only with saline solution were used as a control. The Gel fibrous membrane dissolved immediately on contact with tissue fluid, and the resulting viscous substance remained in the wound and hindered diabetic wound healing; no such viscous substance formed in wounds treated with the Gel/PHB fibrous membrane. After 7 d, the wound healing rate of the Gel/PHB group was 30%, while those of the control group and the PHB group were 28 and 26%, respectively [69]. SS is a natural protein biomaterial with great potential in tissue regeneration due to its excellent antioxidant and antibacterial activity. A high level of ROS delays wound healing, and the antioxidant activity of SS helps eliminate ROS produced by senescent cells during chronic inflammation, thus promoting wound healing. Gilotra prepared PVA/SS nanofiber films with fiber diameters of 130-160 nm by uniaxial electrospinning. This fibrous membrane slowly released SS for 28 d, promoting the transition of the wound from the inflammation stage to the proliferation stage, which is conducive to diabetic wound healing [70]. Chouhan first mixed a PVA solution (13% w/w) and an Antheraea assama silkworm silk fibroin (AaSF) solution (3% w/w) in equal volumes to obtain a PVA/AaSF solution, from which an AaSF nanofiber membrane was prepared by uniaxial electrospinning; recombinant spider silk fusion proteins (FN-4RC and Lac-4RC) were then coated on the AaSF fiber membrane to obtain the AaSF-FN-Lac film. The prepared fiber membranes ((1) AaSF membrane, (2) AaSF-FN membrane (coated only with FN-4RC), (3) AaSF-Lac membrane (coated only with Lac-4RC), and (4) AaSF-FN-Lac membrane) were used to treat the wounds of diabetic rabbits. The commercially available wound dressing DuoDerm and an untreated group (UNT) were used as controls. The AaSF-FN, AaSF-Lac, and AaSF-FN-Lac groups promoted faster wound healing than the AaSF and DuoDerm groups. Among all groups, the ratio of remaining wound area after 12 d was 8% in the AaSF-FN-Lac group, 15-18% in the AaSF-FN and AaSF-Lac groups, and 24 ± 2.09%, 69 ± 6.45%, and 88 ± 6.39% in the AaSF, DuoDerm, and UNT groups, respectively. Wound healing in the AaSF-FN-Lac group was the fastest, with wounds healing completely within 14 d. The results showed that recombinant spider silk fusion proteins can promote granulation tissue formation, re-epithelialization, and ECM deposition at the wound, resulting in a better diabetic wound-healing effect [71]. The experimental schematic diagram is shown in Figure 3.
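The healing and closure percentages quoted throughout this section are typically derived from planimetric measurements of the wound area. A minimal sketch of that arithmetic is given below; the function and variable names are illustrative rather than taken from any cited study, and the example inputs are the initial and day-20 areas reported for the CS/PVA experiments above.

def wound_closure_percent(initial_area_mm2: float, current_area_mm2: float) -> float:
    """Percent wound closure relative to the initial wound area."""
    return (initial_area_mm2 - current_area_mm2) / initial_area_mm2 * 100.0

# Example values from the CS/PVA study above: ~50.25 mm^2 initially,
# ~1 mm^2 (S1 group) and ~14.3 mm^2 (control) at day 20.
print(round(wound_closure_percent(50.25, 1.0), 1))   # ~98.0% closure
print(round(wound_closure_percent(50.25, 14.3), 1))  # ~71.5% closure (control)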
Nanoparticles/Synthetic Polymer Electrospun Fibers
βG can activate the innate immune system by binding to Dectin-1 receptors on macrophages, dendritic cells, and neutrophils, which contributes to the transformation of M1 macrophages into M2 macrophages and promotes chronic wound healing. Grip added βG to a mixed solution of PEO and HPMC to prepare a spinning solution, and HPMC/PEO nanofiber films loaded with βG were prepared by uniaxial electrospinning. The wound-healing effect of the nanofibers was evaluated on wounds of male diabetic mice. The mice were randomly divided into six experimental groups: four groups were treated with nanofibers, one group was injected with 50 µL of water as the negative control, and one group was injected with 50 µL of growth factor solution (10 µg PDGF and 1 µg TGF-α dissolved in 0.5% (w/v) hydroxypropyl methylcellulose solution) as the positive control. The therapeutic effects of three different doses of βG nanofibers (containing 190, 370, and 990 µg βG, respectively) and of blank HPMC/PEO nanofibers without βG were evaluated. The wound healing of the four nanofiber groups was better than that of the negative control group. After 4 d, the remaining wound area ratio of the βG nanofiber groups was 76.8-82.3%, lower than that of the positive control group (97.9%), indicating that βG nanofibers can promote diabetic wound healing [72]. In the process of wound healing, the introduction of exogenous NO can promote angiogenesis and collagen deposition in the wound; however, high concentrations of NO (over 400 nM) can lead to apoptosis, which is not conducive to wound healing. To regulate the release behavior of NO, Zhang first prepared MOF (HKUST-1) nanoparticles, and then reacted the HKUST-1 with 4-MAP solvothermally in a reactor to prepare secondary-amino-modified HKUST-1 nanoparticles. The modified HKUST-1 nanoparticles were activated at 120 °C for 10 h under vacuum, cooled to room temperature, and then exposed to NO for 1 h at a pressure of 2 atm in a pressurizing device to load NO, yielding NO@HKUST-1. NO@HKUST-1 was dispersed in HFIP, and PCL was then added and stirred to obtain the core layer spinning solution, while Gel was dissolved in HFIP to obtain the shell layer spinning solution. NO@HKUST-1/PCL/Gel nanofiber films with a core/shell structure were prepared by coaxial electrospinning. The fiber membrane released NO for 14 d at an average rate of 1.74 nmol/L/h. The wounds of diabetic mice were treated with PCL/Gel (PG), HKUST-1/PCL/Gel (HPG), and NO@HKUST-1/PCL/Gel (NO@HPG) fiber membranes, respectively; untreated diabetic wounds served as the control group. The NO@HPG nanofiber membrane promoted angiogenesis and inhibited inflammation, and the NO and Cu2+ released by the NO@HPG membrane cooperatively promoted endothelial cell growth. The wound healing rates of the NO@HPG group at 11 and 13 d were 97.80 and 99.57%, respectively, significantly higher than those of the other groups (HPG: 87%, 89%; PG: 77%, 81%; control group: 62%, 81%) [73]. The preparation process of NO@HPG fiber membranes and their mechanism of promoting diabetic wound healing are shown in Figure 4. BGs can change the cell microenvironment by releasing inorganic ions (such as Si4+), and Si4+ can stimulate expression of HIF-α, thereby promoting endothelial angiogenesis, which is conducive to diabetic wound healing.
Elshazly prepared nanofibers loaded with BGs (BGnf) by uniaxial electrospinning. The prepared BGnf was applied to wounds of diabetic rabbits, and untreated wounds of diabetic rabbits were used as the control group. Immunohistochemical analysis showed that the percentage of VEGF expression in the BGnf group (14.08 ± 3.88%) was higher than that in the control group (3.92 ± 0.221%) after one week. After three weeks, the percentage of VEGF expression in the BGnf group (18.48 ± 1.458%) remained higher than that in the control group (16.81 ± 1.65%). At that point, the wounds in the BGnf group were completely closed, new blood vessels had formed, and there was no inflammation, whereas the control wounds showed purulent exudates and less neovascularization [74]. Jiang prepared polydopamine (PDA)-modified PLA/PCL fibers loaded with BGs (BGs/PDA/PM) by uniaxial electrospinning combined with a PDA coating method. The cumulative release concentration of Si4+ from BGs/PDA/PM was 0.517 µg/mL, 1.347 µg/mL, and 2.416 µg/mL on d 1, 3, and 7, respectively. PLA/PCL fibers (PM, uniaxially electrospun PLA/PCL solution), PDA-modified PLA/PCL fibers (PDA/PM), and BGs/PDA/PM fibers were used to treat the wounds of diabetic mice, with the wounds of untreated diabetic mice as the control. After 7 d, the wound healing rates of the control group and the PM group were 48.9 and 43.7%, respectively, which were significantly higher than those of the PDA/PM group (34.7%) and the BGs/PDA/PM group (24.8%). After 15 d, the wounds in the BGs/PDA/PM group had basically healed, and the residual wound area rate in the BGs/PDA/PM group was the lowest (0.98%), followed by the PDA/PM group (1.33%), the PM group (4.83%), and the control group (8.13%). These results suggest that the release of Si4+ from BGs/PDA/PM fibers can effectively treat diabetic wounds [75]. The effects of PM, PDA/PM, and BGs/PDA/PM on wound tissues are shown in Figure 5.
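Cumulative release data such as the Si4+ values above are usually reported at only a few sampling times, and intermediate values are estimated by interpolation. The short sketch below illustrates that estimation; the linear-interpolation assumption and the function names are illustrative and not taken from the cited study.

import numpy as np

# Cumulative Si4+ release of BGs/PDA/PM reported above (day, ug/mL).
days = np.array([1.0, 3.0, 7.0])
cumulative_ug_per_ml = np.array([0.517, 1.347, 2.416])

# Estimate the cumulative release at an unsampled time point (e.g., day 5)
# by linear interpolation between neighbouring measurements.
est_day5 = np.interp(5.0, days, cumulative_ug_per_ml)
print(round(float(est_day5), 3))  # ~1.882 ug/mL under the linearity assumption

# Average release rate between day 3 and day 7.
rate = (cumulative_ug_per_ml[2] - cumulative_ug_per_ml[1]) / (7.0 - 3.0)
print(round(float(rate), 3))      # ~0.267 ug/mL per day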
Hypoxia is one of the main causes of poor vascularization in diabetic wounds. HIF-1α regulates oxygen homeostasis, but the long-term hypoxic environment caused by impaired blood supply in diabetic wounds leads to a lack of HIF-1α, which makes chronic wounds difficult to heal. Zehra prepared a PCL nanofiber membrane loaded with SPC (PCL-SPC) by uniaxial electrospinning, which can produce oxygen continuously for 10 d. Diabetic rats were randomly divided into three groups: the untreated diabetic wound control group, the PCL nanofiber treatment group (uniaxially electrospun PCL solution), and the PCL-SPC fiber treatment group. A wound healing experiment showed that, compared with the control group and the PCL fiber group, the PCL-SPC fiber group effectively improved the structure of the epidermis and dermis and accelerated the rates of epithelialization and wound healing. In addition, the relative expression of the HIF-1α gene was analyzed by quantitative polymerase chain reaction. Expression of the HIF-1α gene in the PCL-SPC group was 2.52 ± 0.26 times that in the control group, significantly higher than in the PCL fiber group (1.68 ± 0.03 times that in the control group). The oxygen supplied by SPC plays an important role in diabetic wound healing [76].
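Fold-changes in gene expression such as the HIF-1α values above are most commonly computed from qPCR data with the 2^(−ΔΔCt) method; the cited study does not restate its exact analysis here, so the sketch below is only an illustration of that standard calculation with hypothetical Ct values chosen to give a result near the reported ~2.5-fold up-regulation.

def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method (Livak & Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values (target gene vs. a reference gene) in treated and control samples.
print(round(fold_change_ddct(24.2, 18.0, 25.5, 18.0), 2))  # ~2.46-fold relative to control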
CeO2 has antibacterial activity, and electrospun nanofiber membranes loaded with CeO2 can also promote diabetic wound healing [82]. Augustine used an ultrasonic device to disperse nCeO2 in a chloroform/dimethylformamide mixture (v/v = 9:1) and dissolved PHBV in this suspension to obtain a 20% w/v PHBV electrospinning solution. PHBV fiber membranes loaded with nCeO2 were prepared by uniaxial electrospinning and used to treat diabetic wounds. The fiber membrane with an nCeO2 loading of 1% (by weight) was designated PHBV/nCeO2-1. The wound healing rate of the PHBV/nCeO2-1 group was significantly higher than that of the PHBV fiber group without nCeO2: on d 10, 20, and 30, the wound healing rate of the PHBV/nCeO2-1 group was 52%, 73%, and 80%, respectively, while that of the PHBV fiber group was 27%, 43%, and 69%, respectively. PHBV fiber membranes loaded with nCeO2 can therefore promote diabetic wound healing [77]. The healing of wounds treated with PHBV and PHBV/nCeO2-1 fiber membranes is shown in Figure 6.
Nanoparticles/Synthetic Polymers/Natural Polymer Electrospun Fibers
Zhang used a PLA solution containing BGs as the core layer electrospinning solution and an HFIP solution of Gel as the shell layer electrospinning solution. A core/shell, patterned BG@PLA/Gel (BG@PG) electrospun nanofiber membrane was prepared using a honeycomb-structured collector through coaxial electrospinning. The wounds of diabetic mice were treated with BG@PG fiber membranes, disordered PLA/Gel electrospun fiber membranes (UPG), and patterned PLA/Gel electrospun fiber membranes (PG), respectively, and untreated diabetic wounds were used as the control group.
The wound areas of the PG group and the BG@PG group at 7 d were 25.4 mm² and 24.9 mm², respectively, significantly smaller than those of the control group (33.7 mm²) and the UPG group (30.2 mm²). At 14 d, the wounds of the BG@PG group were almost completely healed, while the wound areas of the control, UPG, and PG groups were 6.7 mm², 4.0 mm², and 2.1 mm², respectively [78]. Compared with a disordered nanofiber membrane, the patterned fiber membrane provides more adhesion sites and growth space for cells, promotes cell adhesion and proliferation on the membrane, and stimulates cell growth and differentiation. A trilayer nanofibrous membrane (BGs-TFM) was prepared by continuous uniaxial electrospinning, with a CS fiber membrane as the lower layer, CS and PVA as the middle layer, and PVA/BGs as the upper layer. The BGs-TFM fiber membrane has good biocompatibility and strong antibacterial activity and can promote skin regeneration. A diabetic mouse wound model showed that BGs-TFM can up-regulate growth factors such as VEGF and TGF-β, down-regulate inflammatory factors such as TNF-α and IL-1β, and promote epithelial regeneration and collagen deposition, thus promoting wound healing [79]. Using triethyl phosphate, tetraethyl orthosilicate, and calcium nitrate tetrahydrate as raw materials, Lv synthesized NAGEL with a particle size of less than 2 mm by the sol-gel method, and then prepared PCL/Gel nanofiber films loaded with NAGEL by uniaxial electrospinning. Wound experiments in diabetic mice showed that the fibrous membranes promote diabetic wound healing by promoting angiogenesis, collagen deposition, and re-epithelialization and by inhibiting inflammatory reactions. The nanofibers containing 0 and 10% NAGEL particles were designated PL and 10NAG-PL, respectively. The wounds of diabetic mice were treated with PL and 10NAG-PL, and untreated diabetic wounds served as the control group. At 7 d, the wound area of the 10NAG-PL group had decreased by 57%, a larger reduction than in the control group (18%) and the PL group (42%). After 13 d, the wound healing rate of the 10NAG-PL group was 94%, significantly higher than those of the PL group (82%) and the control group (69%) [80]. Ahmed prepared CS/PVA/ZnO and CS/PVA nanofiber membranes by uniaxial electrospinning of CS/PVA solutions with or without ZnO, and used them to treat the wounds of diabetic rabbits. The wound healing rate of the CS/PVA/ZnO nanofiber membrane group was 44.8 ± 4.9% after 4 d of treatment, higher than that of the CS/PVA nanofiber membrane group (22.5 ± 3.0%). In addition, the wound healing rate of the CS/PVA/ZnO group was 90.5 ± 1.7% on the 12th d, much higher than that of the untreated diabetic wound control group (52.3 ± 2.8%) [81].
Drugs/Natural Polymer Electrospun Fibers
Liu prepared CA/zein nanofiber membranes loaded with different amounts of sesamol by uniaxial electrospinning and studied their effect on wound healing in diabetic mice.
Diabetic mice were divided into five groups: Group C (normal mouse wounds, control), Group S (diabetic mouse wounds, untreated), Group M (diabetic mouse wounds, treated with blank nanofiber membranes without sesamol), Group L (diabetic mouse wounds, treated with CA/zein nanofiber membranes loaded with 2% sesamol), and Group H (diabetic mouse wounds, treated with CA/zein nanofiber membranes loaded with 5% sesamol). After 5 d, the wound healing rates of groups C, S, M, L, and H were 80%, 20%, 40%, 60%, and 70%, respectively; after 9 d, they were 100%, 60%, 85%, 95%, and 100%, respectively. The wounds in groups L and H were similar to those in group C and had essentially healed by d 9. Studies have shown that sesamol can down-regulate expression of inflammatory factors such as IL-1β and TNF-α while up-regulating expression of IL-6 (an anti-inflammatory factor), which can promote the rapid healing of diabetic wounds [83].
Drugs/Synthetic Polymer Electrospun Fibers
PL can release PDGF and VEGF, thus promoting collagen deposition and re-epithelialization to promote diabetic wound healing, but PL is easily inactivated when used directly. To improve the stability of PL, Losi prepared a protein/poly(ether)urethane fiber loaded with PL (FB-PL fiber) by uniaxial electrospinning. The wounds of diabetic mice were treated with FB-PL fibers, and wounds treated with Mepore polyurethane film (a transparent breathable dressing) were used as the control group. After 14 d, the remaining wound area in the FB-PL group was 20%, much lower than that in the control group (78%). The cumulative release of PDGF and VEGF from FB-PL fiber membranes was measured by ELISA: 40% of the growth factors were released on the first d and 80% by the 7th d [84]. CTGF is unstable in the highly oxidized diabetic wound environment, and nanofiber membranes loaded with CTGF can improve its stability and promote diabetic wound healing [99]. Augustine mixed a PVA aqueous solution with a CTGF solution to prepare a PVA solution (6%, w/v) containing CTGF (0.1 wt%); at the same time, a PLA solution in dichloromethane (DCM)/DMF (v/v = 1:9) was prepared. A core/shell PVA-CTGF/PLA nanofiber membrane was constructed by coaxial electrospinning using the CTGF-containing PVA solution as the core layer spinning solution and the PLA solution in DCM/DMF as the shell spinning solution. PVA-CTGF/PLA nanofiber membranes can slowly release CTGF for 15 d. In vitro wound healing experiments showed that the fibroblast wound shrinkage rate of the control group (the untreated diabetic wound group) was 32.51 ± 6.44%, while that of the PVA-CTGF/PLA group was 54.34 ± 6.8%. The keratinocyte wound shrinkage rate in the control group was 8.62 ± 2.34%, versus 45.54 ± 6.68% in the PVA-CTGF/PLA group, and the endothelial cell wound shrinkage rate was 43.45 ± 4.58% in the control group versus 58.64 ± 3.46% in the PVA-CTGF/PLA group. Cell viability tests showed that, compared with the control group, the PVA-CTGF/PLA group contained more living cells (fibroblasts, keratinocytes, and endothelial cells), indicating that the PVA-CTGF/PLA membrane is conducive to cell proliferation and migration and beneficial for the treatment of diabetic wounds [85].
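The spinning recipes in this review are stated as w/v polymer concentrations, v/v solvent ratios, and wt% drug loadings. The helper below sketches the corresponding bench arithmetic; it is illustrative only, and interpreting the 0.1 wt% CTGF loading as relative to the PVA mass is an assumption rather than a detail given in the cited study.

def grams_for_w_over_v(percent_w_v: float, solution_volume_ml: float) -> float:
    """Mass of solute (g) needed for a w/v solution, e.g. 6% w/v = 6 g per 100 mL."""
    return percent_w_v / 100.0 * solution_volume_ml

def solvent_split_ml(total_ml: float, ratio_a: float, ratio_b: float):
    """Split a total solvent volume according to a v/v ratio such as DCM/DMF = 1:9."""
    total_parts = ratio_a + ratio_b
    return total_ml * ratio_a / total_parts, total_ml * ratio_b / total_parts

pva_g = grams_for_w_over_v(6.0, 10.0)             # 6% w/v PVA in 10 mL of water -> 0.6 g
ctgf_g = 0.1 / 100.0 * pva_g                       # 0.1 wt% CTGF relative to PVA (assumed) -> 0.0006 g
dcm_ml, dmf_ml = solvent_split_ml(10.0, 1.0, 9.0)  # DCM/DMF (v/v = 1:9) for 10 mL -> 1 mL and 9 mL
print(pva_g, round(ctgf_g, 4), dcm_ml, dmf_ml)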
Su prepared a PCL solution by dissolving PCL in a mixed solvent of DCM/DMF (v/v = 4:1) and added an aqueous solution of the antibacterial peptide 17BIPHE2 to it to obtain electrospinning solution A; an aqueous solution of Pluronic F127 was used as electrospinning solution B. 17BIPHE2-PCL/Pluronic F127 core/shell nanofiber membranes, with 17BIPHE2-PCL as the core layer and Pluronic F127 as the shell layer, were prepared by coaxial electrospinning of solutions A and B. 17BIPHE2 was then coated on the surface of these membranes to obtain 17BIPHE2-PCL/F127-S fiber membranes, which can continuously release the antimicrobial peptide 17BIPHE2 for 28 d. PCL/F127 core/shell nanofiber films were prepared by coaxial electrospinning using a PCL solution without 17BIPHE2 as the core layer spinning solution and solution B as the shell layer spinning solution. The wounds of diabetic mice were inoculated with 10 µL of methicillin-resistant Staphylococcus aureus (MRSA) at a concentration of 1 × 10⁸ CFU/mL, and the wounds were then treated with PCL/F127 and 17BIPHE2-PCL/F127-S fiber membranes. In vivo antibiofilm efficacy tests showed that, without debridement, 6.17 × 10⁶ CFU/g of MRSA was detected at the wound site after 3 d of 17BIPHE2-PCL/F127-S treatment, a 3.08-log reduction compared to the PCL/F127 control group. With debridement, no colonies were found in wounds treated with 17BIPHE2-PCL/F127-S for 3 d, a 9.86-log reduction compared to the PCL/F127 control group. These results indicate that bacterial biofilms in diabetic wounds can be eliminated after 3 d of debridement plus 17BIPHE2-PCL/F127-S treatment, thus promoting diabetic wound healing [86]. Mabrouk first prepared a PAA ethanol solution (7% w/v), PVP ethanol solutions containing CFX (PVP content 20% w/v; CFX/PVP m/m = 1:10, 1:20, and 1:30), and a PCL ethanol solution (10% w/v). A three-layer nanofiber membrane (PAA/PVP/PCL) was prepared by continuous uniaxial electrospinning with a PAA fiber membrane as the lower layer, PVP/CFX as the intermediate layer, and PCL as the upper layer. The fiber membrane can continuously release CFX for 48 h and has antibacterial activity against Gram-negative and Gram-positive bacteria. The antibacterial properties of the PAA/PVP/PCL nanofiber membrane give it the potential to promote diabetic wound healing [87]. DCH is a broad-spectrum antibiotic and an MMP inhibitor. Local administration of DCH can be used to treat chronic wounds, but it suffers from problems such as poor efficacy and strong skin irritation. Cui added DCH to a PLA/HFIP solution and stirred at room temperature for 30 min to prepare a DCH-containing polymer solution, and then prepared DCH-loaded PLA nanofiber membranes (DCH/PLA) by uniaxial electrospinning. The fiber membrane can release DCH for two weeks. The wounds of diabetic rats were treated with normal saline (the control group), uniaxially electrospun PLA nanofiber membranes (the PLA group), DCH solution applied dropwise combined with PLA nanofiber membranes (the DCH+PLA group), or DCH/PLA nanofiber membranes (the DCH/PLA group). After 7 d, the wound area of the PLA group (44.3 ± 5.9 mm²) was almost the same as that of the control group (47.4 ± 2.6 mm²).
In the DCH+PLA group, the wound was significantly reduced when the DCH concentration was 10 and 15% (wound areas of 16.6 ± 3.6 mm² and 18.1 ± 4.4 mm², respectively), but when the DCH concentration was increased to 20%, the wound healing speed decreased significantly (wound area of 29.3 ± 9.6 mm²). The wound area of the DCH/PLA group on the 7th d was only 6.3 ± 2.7 mm², indicating that loading DCH into PLA fibers can effectively improve the therapeutic effect of DCH on diabetic wounds [88]. Alhusein dissolved PCL in CHCl3/MeOH (v/v = 9:1) to prepare a PCL solution and added a MeOH solution of tetracycline (Tet) to it to prepare a PCL electrospinning solution containing 3% w/w Tet (solution A). Polyethylene-co-vinyl acetate (PEVA) was then dissolved in CHCl3/MeOH (v/v = 9:1), and the MeOH solution of Tet was added to obtain a PEVA electrospinning solution containing 3% w/w Tet (solution B). A three-layer PCL/PEVA/PCL fiber membrane was prepared by uniaxially electrospinning solutions A and B in succession. This fibrous membrane can continuously release Tet for 14 d, which effectively inhibits the formation of Staphylococcus aureus biofilms and kills bacteria, thus promoting diabetic wound healing [100]. DMOG is a non-specific small-molecule inhibitor of prolyl hydroxylases, which can inhibit the degradation of HIF-α to create a cell microenvironment similar to hypoxia. In this micro-hypoxic environment, angiogenesis and fiber regeneration are activated, thus accelerating wound healing [101]. Zhang prepared DMOG-loaded PCL fiber membranes (PCLF/DMOG) and drug-free PCL fiber membranes (PCLF) by electrospinning. The wounds of diabetic rats were treated with PCLF and PCLF/DMOG membranes, and untreated diabetic wounds served as the control group. The wound healing rates on the 3rd, 9th, and 14th d were 7%, 56%, and 70% in the control group and 11%, 60%, and 75% in the PCLF group, while those in the PCLF/DMOG group were 20%, 62%, and 89%, respectively. Histological analysis showed that the rate of re-epithelialization after 14 d was 47% in the control group, 50% in the PCLF group, and 75% in the PCLF/DMOG group. The high rates of re-epithelialization and wound healing in the PCLF/DMOG group indicate that PCLF/DMOG can promote diabetic wound healing [89]. Ren first prepared DMOG-loaded mesoporous silica nanospheres (DS), then prepared a PLLA nanofiber membrane loaded with DS (10 DS-PL) by uniaxial electrospinning; a drug-free PLLA electrospun membrane (PL) was used as the control membrane. Wounds of diabetic mice were treated with PL and 10 DS-PL. After 11 d, the wound healing rates of the PL group and the 10 DS-PL group were 76 and 82%, respectively, while that of the untreated diabetic wound group was only 70%. After 15 d, the wound healing rate in the 10 DS-PL group was 97%, higher than that in the PL group (94%) and the untreated diabetic wound group (84%) [102]. To address the poor water solubility and unstable absorption of repaglinide, Thakkar prepared drug-loaded PVA/PVP nanofibers by uniaxial electrospinning of a PVA/PVP electrospinning solution containing repaglinide and investigated their effect on diabetic wound healing. Four experimental groups were designed, and a cast film was prepared from the same polymer solution.
Diabetic rats were randomly divided into four groups: the first group was untreated diabetic wounds (the control group), the second group was the repaglinide group, and the third and fourth groups were treated with repaglinide-loaded PVA/PVP electrospun nanofibers and a repaglinide-loaded cast film (obtained by the film casting method), respectively. The drug release experiment showed that the amounts of drug released after 10 min in the third and fourth groups were 90 and 73%, respectively, while that of the second group was only 10%. The oral glucose tolerance test showed that the glucose level of the nanofiber group after 120 min (107.66 ± 6.72 mmol/L) was lower than those of the control group (154.66 ± 6.47 mmol/L), the repaglinide group (142.33 ± 5.817 mmol/L), and the cast film group (110.00 ± 15.55 mmol/L). These results show that repaglinide-loaded nanofibers release more repaglinide and lower the blood glucose level, which is beneficial to diabetic wound healing [90]. Drug combination therapy can promote diabetic wound healing more effectively. Lee dissolved PLGA, vancomycin, and gentamicin in HFIP to produce a PLGA-antibiotic solution as the shell spinning solution and dissolved PDGF in PBS to produce the core spinning solution. Core/shell PDGF/PLGA-antibiotic nanofibers (A) were prepared by coaxial electrospinning. PBS/PLGA-antibiotic nanofibers (B) with a core/shell structure were prepared by coaxial electrospinning with PBS as the core spinning solution and the PLGA-antibiotic solution as the shell spinning solution, and antibiotic/PLGA nanofibers (C) were prepared by uniaxial electrospinning of the PLGA-antibiotic solution. Nanofiber A can continuously release the antibiotics (vancomycin and gentamicin) and PDGF for more than 3 weeks. A wound healing experiment in diabetic mice showed that the wound area of fiber A (20.4 ± 1.7 mm²) was significantly smaller than those of fiber B (26.4 ± 1.0 mm²) and fiber C (26.4 ± 1.0 mm²) after 7 d. After 14 d, the wound areas of fibers B and C were reduced to 14.0 ± 0.7 mm² and 20.8 ± 1.3 mm², respectively, while the wound area of fiber A was only 12.2 ± 0.1 mm². The PDGF/PLGA-antibiotic nanofiber membrane can promote angiogenesis and epidermal hyperplasia through the synergistic effect of PDGF and the antibiotics, improving the diabetic wound-healing effect [91]. The preparation of PDGF/PLGA-antibiotic nanofibers and their mechanism of promoting diabetic wound healing are shown in Figure 7. Dwivedi prepared a GS-loaded Eudragit RL/RS nanofiber membrane (A) and a GS-free Eudragit RL/RS nanofiber membrane (B) by uniaxial electrospinning. rhEGF was then immobilized on the surface of membrane A by a covalent immobilization technique to obtain a GS- and rhEGF-co-loaded Eudragit RL/RS nanofiber membrane (C).
In the wound healing experiment in diabetic mice, five groups were designed: the first group was untreated diabetic wounds (the negative control group), the second group was wounds treated with GS solution (the positive control group), the third group was wounds treated with B fiber membranes, the fourth group was wounds treated with C fiber membranes, and the fifth group was wounds treated with A fiber membranes. The residual wound area rates in the fourth group at 4, 8, and 12 d were 14.31 ± 2.61%, 10.76 ± 1.92%, and 8.91 ± 1.95%, respectively, much lower than those in the other groups (first group: 94%, 92%, and 89%; second group: 55%, 50%, and 48%; third group: 91%, 90%, and 88%; fifth group: 64%, 60%, and 58%), indicating that GS and rhEGF have a synergistic effect in the treatment of diabetic wounds [92].
Drugs/Synthetic Polymer/Natural Polymer Electrospun Fibers
Chemokines are small cytokines that induce chemotactic responses. MCP-1 is a chemokine that can recruit macrophages to participate in the process of wound healing. Yin prepared an MCP-1-loaded Gel-PGA nanofiber membrane (DES) and a cytokine-free Gel-PGA nanofiber membrane (NES) by uniaxial electrospinning, and DES and NES were used to treat the wounds of diabetic mice. After 3 d, the number of F4/80+ macrophages in the DES group was 1400 cells/mm², much higher than the 750 cells/mm² in the NES group and 800 cells/mm² in the control group. After 5 d, the wound closure rate of the DES group was 48.34 ± 10.23%, that of the NES group was 73.27 ± 11.45%, and that of the untreated diabetic wound group was 80.27 ± 15.56%. The wounds of the DES group had fully recovered after 10 d, while the wounds of the untreated diabetic wound group needed 14 d to fully recover [93]. EACCs can promote diabetic wound healing, but their survival rate is low in a high-glucose environment. SRT1720 can improve the survival rate of EACCs and thereby promote the healing of diabetic wounds [103,104]. Cheng prepared PLGA-collagen-silk nanofiber membranes loaded with SRT1720 (PCSS) by uniaxial electrospinning. EACCs (5 × 10⁵) were seeded on the surface of the PCSS fiber membranes to obtain PCSS-EACCs. PCSS-EACCs can steadily release SRT1720 at a rate of about 7.14% per d for 15 d, thus promoting the release of VEGFA and IL-8 from the EACCs. The released VEGFA and IL-8 promote endothelial cell proliferation, migration, and angiogenesis. The wounds of diabetic mice were treated with PCSS-EACCs, and the wounds of normal mice served as the control group. After 14 d, wound healing in the PCSS-EACCs group was fast, similar to that of normal mice, and the residual wound area rate of the PCSS-EACCs group was 2% [94]. Pioglitazone is a thiazolidinedione antidiabetic drug and an insulin sensitizer. Pioglitazone activates the peroxisome proliferator-activated receptor PPAR-γ, thereby regulating the transcription of insulin-related genes that control glucose and lipid metabolism and maintaining normal blood glucose levels to promote diabetic wound healing. Yu first prepared a formic acid/acetic acid solution of PCL as spinning solution A, and then dissolved Gel and pioglitazone in a formic acid/acetic acid mixed solvent (v/v = 7:3) and stirred for 2 h to prepare spinning solution B.
Using a nylon mesh with a fixed pore size of 40 µm as the collector, a micropatterned PCL nanofiber membrane was prepared by electrospinning solution A; solution B was then electrospun and the fibers deposited on the PCL layer to prepare PCL/Gel-pioglitazone nanofiber membranes (PCL/Gel-pio). PCL/Gel-pio has an asymmetric structure with a hydrophobic outer layer and a hydrophilic inner layer, which effectively mimics the epidermis and dermis of natural skin (Figure 8A). The prepared PCL/Gel-pio fiber membrane was stripped from the nylon mesh and soaked in an ethanol solution containing 2% w/w genipin to undergo a genipin-based cross-linking reaction, which improves the stability of the PCL/Gel-pio fiber membrane (Figure 8B). The cross-linked PCL/Gel-pio nanofiber membrane can promote diabetic wound healing by preventing bacterial adhesion and controlling the release of pioglitazone (Figure 8C). After the wounds of diabetic mice were treated with cross-linked PCL/Gel-pio nanofiber membranes, the membranes significantly up-regulated expression of MIP-2, TNF-α, and VEGF in the wound on the 7th d, which promoted wound healing, and after 10 d they significantly reduced expression of MMP-9, IL-1β, and IL-6 at the wound site, reducing inflammation. This fiber membrane can effectively promote wound healing in type 1 and type 2 diabetic mice [95]. Shin dissolved the synthetic polymer PLGA and the natural drug EGCG in HFIP to prepare the shell spinning solution, and an aqueous solution of the natural polymer HA was used as the core spinning solution. HA/PLGA core/shell nanofiber membranes loaded with EGCG (HA/PLGA-E) were prepared by coaxial electrospinning. The wounds of diabetic rats were treated with HA/PLGA-E nanofiber membranes, PLGA nanofiber membranes (prepared by uniaxial electrospinning of the PLGA solution), and HA/PLGA core/shell nanofiber membranes (prepared by coaxial electrospinning of the PLGA solution and the HA aqueous solution). After two weeks, the residual wound area rate in the HA/PLGA-E group was 10.84%, significantly lower than those in the other groups (49.96%, 48.43%, and 40.18% for the untreated diabetic wound group, the PLGA group, and the HA/PLGA group, respectively) [96].
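Several of the membranes above are described as releasing their payload at a nearly constant rate; for example, the PCSS-EACC membranes release SRT1720 at about 7.14% per day, which corresponds to complete release in roughly 14 days (100/7.14). Such behaviour is an (idealized) zero-order release profile; the sketch below is a generic illustration of that model, not the analysis used in any cited work.

def zero_order_cumulative_release(rate_percent_per_day: float, day: float) -> float:
    """Cumulative release (%) for an ideal zero-order profile, capped at 100%."""
    return min(rate_percent_per_day * day, 100.0)

# ~7.14% per day: ~7.1% at day 1, ~50% at day 7, ~100% by day 14-15 (capped).
for d in (1, 7, 14, 15):
    print(d, round(zero_order_cumulative_release(7.14, d), 1))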
Cod liver oil can promote wound healing by increasing the blood supply to the wound and changing the phospholipid composition of cell membranes [105,106]. Khazaeli first added a water/ethanol (v/v = 18:3) solution of PLA to a water/dimethylformamide (v/v = 2:1) solution of CS to prepare a PLA/CS polymer solution. A PLA/CS electrospinning solution containing 30% w/w cod liver oil was then prepared by adding Tween and cod liver oil to the PLA/CS polymer solution and refluxing. The cod-liver-oil-loaded PLA/CS nanofiber membrane was prepared by uniaxial electrospinning and used to treat the wounds of diabetic mice. After 14 d, the wound healing rate of the fiber group was 94.5%, much higher than those of the free cod liver oil group (40%) and the untreated diabetic wound group (13%) [97]. Chouhan prepared PVA-SF nanofiber membranes co-loaded with EGF, bFGF, and the antimicrobial peptide LL-37 by uniaxial electrospinning. Different kinds of SF (Bombyx mori silk fibroin (BMSF), A. assama silk fibroin (AASF), and P. ricini silk fibroin (PRSF)) showed different effects on wound healing in diabetic rabbits. The wound healing rate of the AASF and PRSF groups was 85-90% on the 14th d, significantly higher than that of the BMSF group (73%). The LL-37 antimicrobial peptide can reduce inflammation in wounds, and EGF and bFGF can promote the proliferation of fibroblasts, keratinocytes, and endothelial cells, thus promoting diabetic wound healing. Histological analysis showed that granulation tissue regeneration, angiogenesis, and re-epithelialization were faster in the AASF and PRSF groups, which had a better effect on diabetic wound healing [98].
Drug/Nanoparticle/Polymer Electrospun Fibers
Studies have shown that polymer electrospun fibers loaded with antibacterial agents and zinc oxide nanoparticles (n-ZnO) have a positive effect on diabetic wound healing. Jafari added amoxicillin (AMX) (15 wt%) to a Gel solution and stirred for 1 h, then mixed it with a PCL solution to prepare electrospinning solution A, while a PCL solution containing n-ZnO (4 wt%) was prepared as electrospinning solution B. An AMX-loaded PCL-Gel nanofiber film was prepared by uniaxial electrospinning of solution A, and the fibers obtained by uniaxial electrospinning of solution B were deposited on the AMX-loaded PCL-Gel membrane, giving an n-ZnO-AMX double-layer nanofiber membrane. An in vitro drug release test showed that the n-ZnO-AMX fiber membrane can slowly release AMX for 144 h. The antibacterial effect of AMX can reduce the inflammatory reaction of diabetic wounds and promote the transition of wound healing from the inflammation stage to the proliferation stage. The ZnO released from the n-ZnO-AMX fibrous membrane acts on the wound to produce ROS, and the ROS initiate chemical reactions that promote the production of vasoregulatory growth factors, thus promoting angiogenesis. After 3 d, the wound healing rate of rats treated with n-ZnO-AMX fiber membranes (46.58 ± 3.66%) was significantly higher than that of the untreated diabetic wound control group (36.73 ± 4.93%). Histological analysis showed that n-ZnO-AMX fiber membranes can increase collagen deposition, promote neovascularization, and reduce scar formation through the synergistic effect of AMX and n-ZnO, thus promoting diabetic wound healing [107].
Cell-Loaded Electrospun Fiber Membranes for Diabetic Wound Treatment
Cells can be cultured and induced to differentiate on electrospun fiber membranes, and the angiogenesis promoted by the differentiated cells can accelerate the healing of diabetic wounds. Bone marrow mesenchymal stem cells (BMSCs) can promote angiogenesis and thus diabetic wound healing, but BMSCs cannot survive in a high-glucose environment, while Klotho protein has a protective effect on BMSCs under high-glucose conditions [108]. Liu first prepared Klotho-protein-loaded CS microspheres, added them to an aqueous Gel solution, and stirred for 30 min to obtain a Gel solution containing CS microspheres. This solution was then uniformly coated onto a pre-prepared PLGA fiber membrane (the spinning solution was prepared by dissolving PLGA in a CHCl3/DMF mixed solvent (v/v = 9:1), and the PLGA fibers were prepared by uniaxial electrospinning). After natural solidification of the Gel, PLGA/Gel fibers were obtained. A further PLGA layer was then electrospun onto the PLGA/Gel fibers from the same PLGA spinning solution to obtain PLGA/Gel/PLGA nanofiber membranes, a sandwich-like structure in which two layers of PLGA nanofiber membrane enclose the intermediate Gel layer containing the CS microspheres. Finally, BMSCs were seeded on the surface of the PLGA/Gel/PLGA nanofiber membranes to obtain Klotho + BMSCs nanofiber membranes. The Klotho + BMSCs fiber membrane can slowly release Klotho protein for 7 d. EdU (5-ethynyl-2′-deoxyuridine) experiments showed that the proliferation rate of BMSCs increased by 126% when Klotho protein was applied directly to diabetic wounds, indicating that Klotho protein can promote the proliferation of BMSCs under high-glucose conditions. The wounds of diabetic mice were treated with Klotho + BMSCs nanofiber membranes, Klotho, and BMSCs, respectively, and untreated diabetic wounds were used as controls. After 10 d, the wound healing rate in the BMSCs + Klotho group was 80%, higher than those in the Klotho group (16%) and the BMSCs group (17%), and much higher than that in the control group (39%). These results show that when BMSCs are incubated on the Klotho-protein-loaded electrospun fibers, the fibers promote BMSC differentiation and thus angiogenesis in diabetic wounds, achieving effective diabetic wound healing [109].
Outlook
Compared with normal wounds, diabetic wounds often remain in the inflammatory stage for a long time because of an uncontrolled inflammatory reaction, and angiogenesis in the wound is difficult. In addition, diabetic wounds suffer from long-term and recurrent bacterial infection. Electrospun nanofiber membranes have great potential for the treatment of diabetic wounds because of their advantageous properties and structure. The structure of electrospun nanofibers is similar to that of the ECM, which favors the attachment, growth, and migration of fibroblasts and thereby the formation of new skin tissue in the wound. Electrospun nanofibers are also easily modified, which facilitates their structural tailoring.
The two-dimensional fiber membrane can be transformed into a three-dimensional structure through multi-layer stacking, gas foaming, and other new methods, which helps improve the proliferation rate of cells and thus accelerates diabetic wound healing [110,111]. Electrospun fibers also have the advantage of easy loading of active components. Recent studies have shown that M2 macrophages play an important role in diabetic wound healing. Electrospun nanofiber membranes can directly promote the transformation of M1 macrophages (pro-inflammatory) into M2 macrophages (anti-inflammatory) and thereby promote diabetic wound healing. Incubating cells on electrospun fiber membranes for diabetic wound healing is currently a hot research topic, and it is expected that M2 macrophages could be used for diabetic wound healing after incubation on the fiber membrane. In addition, silver nanoparticles have good antibacterial effects and can be used in the treatment of normal wounds; however, there have been no reports of their use in the treatment of diabetic wounds [112][113][114]. Given the complex microenvironment of diabetic wounds, preparing electrospun fiber membranes that combine silver nanoparticles with other active components may be an effective treatment strategy. Hydrogel fiber dressings are a new type of wound dressing with high specific surface area, high liquid absorption, and good air permeability. Hydrogel fibers combine the functional properties of hydrogels (e.g., high water content, high elasticity, and stimulus responsiveness) with the structural advantages of fibers (e.g., high specific surface area and ease of weaving). The development of electrospinning methods for preparing hydrogel fibers, and their combination with active ingredients, also provide opportunities for the treatment of diabetic wounds. It should be noted that many in vivo animal experiments have been performed in diabetic wound healing studies, but most evaluate the healing effect only by healing time, while analysis of the healing process and of tissue sections is insufficient. The relative lack of research on the biocompatibility and biodegradability of electrospun fibers also creates challenges for their safety evaluation.
Remarkably Enhanced Methane Sensing Performance at Room Temperature via Constructing a Self-Assembled Mulberry-Like ZnO/SnO2 Hierarchical Structure
Development of metal oxide semiconductor-based methane sensors with good response and low power consumption is one of the major challenges in realizing real-time monitoring of methane leakage. In this work, a self-assembled mulberry-like ZnO/SnO2 hierarchical structure is constructed by a two-step hydrothermal method. The resultant sensor works at room temperature with an excellent response of ~56.1% to 2000 ppm CH4 at 55% relative humidity. It is found that the strain induced at the ZnO/SnO2 interface greatly enhances the piezoelectric polarization on the ZnO surface and that the band bending results in the accumulation of chemically adsorbed O2− ions close to the interface, leading to a significant improvement in the sensing performance of the methane gas sensor at room temperature.
Introduction
[3][4] Being colorless and odorless, methane (CH4) can form an explosive mixture with ambient air when its concentration reaches 5%. [5] Reliable and sensitive sensors are therefore required to monitor the concentration of methane in the environment in real time. Methane has a special tetrahedral nonpolar molecular structure, which is extremely stable because the hydrocarbon (C-H) bond energy is as high as 413 kJ mol−1. This makes the sensing of methane very difficult, particularly at room temperature. It is usually believed that noble metals promote catalytic oxidation at low temperatures and enhance the low-temperature sensing response. Nevertheless, the CH4 sensing response of recently developed sensors containing noble metals is still low at room temperature. For example, the response of a sensor based on Pd-doped SnO2/reduced graphene oxide was lower than 10% to 14 000 ppm of CH4, and the corresponding response time was longer than 5 min. [6] Currently, various types of ultrahighly sensitive CH4 sensors based on metal oxide semiconductors work only at high temperatures from 100 °C to 420 °C. [7] An additional heater is therefore required, resulting in complexity in device design and fabrication, high power consumption, and many safety issues. Recent studies have shown that, with the assistance of light irradiation, the response of a metal oxide-based CH4 sensor can be significantly enhanced at room temperature. Wang et al. [8] reported that ZnO nanosheets can sense CH4 at room temperature under UV irradiation, with a response of 48% to a concentration of 1000 ppm. Chen et al. [9] found that an oxygen vacancy-enriched ZnO/Pd hybrid worked at a temperature as low as 80 °C with a response of 36.8% to 1000 ppm CH4 under visible light illumination. Xia et al. [10] further demonstrated that the response of Pd-decorated ZnO/rGO to 10 000 ppm CH4 at room temperature increased from 4.1% to 63.4% under visible-light illumination. It was shown that the light-active catalysis, the efficient charge transfer, and the multiple heterojunctions formed within the hybrids synergistically enhanced the response at room temperature. [10] Synthesized ZnO/g-C3N4 porous hollow microspheres showed a response of 42% at room temperature under UV illumination, with the response time reduced to 28 s. [11] It was also demonstrated that the oxygen vacancies formed at the hybrid ZnO/g-C3N4 interface attracted more chemically adsorbed O2− ions at the interface under UV irradiation, thus promoting the rate of CH4 oxidation.
[11] Although light-assisted CH4 sensors exhibit attractive sensing performance at room temperature, the additional power consumption and the need for a stable light source make device control more complicated. It is desirable to design a material with high CH4 sensing performance without the assistance of heat or light. The main breakthrough depends on finding a suitable energy substitute to provide the extra energy needed to activate CH4 dissociation at low temperature. ZnO is a polar semiconductor with a strong piezoelectric property owing to the lack of central symmetry in its wurtzite crystal structure. [12] A polarization electric field can be built up under strain. It is expected that, by selecting a suitable metal oxide to form a multi-heterojunction interface with ZnO, deliberately created strain at the oxide/ZnO interface may enhance the polarization electric field and thereby provide the extra energy for CH4 dissociation. ZnO/SnO2 hybrid composites have been extensively studied in the past decades. The diverse nanostructures produced by different methods make the hybrid composites function effectively in sensing different gases, such as NO2, ethanol, and CO. [13,14] To the best of the authors' knowledge, however, there is not yet any report on CH4 sensing at room temperature.

In this work, a ZnO/SnO2 hierarchical structure was deliberately constructed. Self-assembled hexagonal ZnO nanorods growing along the [001] direction on AZO glass substrates served as the trunks, and SnO2 nanoparticles self-assembled on the ZnO trunks to form a mulberry-like structure. The self-assembled nanorod structure is expected to provide excellent carrier transport to enhance the gas response at low temperature, as in the TiO2 system reported previously. [15,16] Heterojunction barriers and a polarization field are expected to be created at the interface in this specific mulberry-like structure. The resultant sensor fabricated in this work showed excellent CH4 sensing performance at room temperature, providing a promising route to real-time monitoring of methane leakage with low power consumption.
Microstructural Characterization Figure 1a gives the XRD patterns obtained for the AZO substrate, ZnO, SnO2 and ZnO/SnO2 films over the 2θ range from 20° to 50°. A sharp peak appears at 2θ = 34.2° for the AZO substrate, indicating that the AZO film is well oriented along the ZnO [002] direction. A peak at a similar position appears in the XRD spectra of the other three samples, suggesting that the ZnO, SnO2 and ZnO/SnO2 films are highly oriented, with the ZnO growing along its [002] direction and the SnO2 growing along its [101] direction. The enlarged XRD spectra around this peak in Figure 1b show that the ZnO (002) plane is located at 2θ = 34.35°, while the SnO2 (101) plane is located at 2θ = 34.55°. It is interesting to notice that the sharp peak of the ZnO/SnO2 film can be decomposed into three peaks. In addition to the two peaks located at the same positions as in the ZnO and SnO2 films, a newly emerged peak is located at 2θ = 34.47°, indicating that there exist compressive and tensile strain on the ZnO and SnO2 sides of the interface, respectively. These may be caused by the anti-site substitution between Sn4+ (atomic radius 0.069 nm) and Zn2+ (atomic radius 0.074 nm) at the interface. The interplanar spacing of ZnO (002) in the strained part, calculated from the newly emerged peak, is about 96% of that of the unstrained part, suggesting the existence of compressive stress along the [002] direction, which changes the polarization electric field within the ZnO nanorods.

The SEM images in Figure 1c(i-iv) show the morphologies of the ZnO and ZnO/SnO2 films viewed from the surface and in cross-section. The oriented hexagonal ZnO nanorods have grown along the c-axis and are uniformly and well aligned on the AZO seed layer in the ZnO film, in good agreement with the XRD results in Figure 1a. On average, the ZnO nanorods are about 800 ± 50 nm in length and 100 ± 30 nm in diameter, based on the statistical analysis inset in the figure. In the ZnO/SnO2 film, spherical SnO2 nanoparticles decorate the sides and the top surface of the ZnO nanorods, forming a mulberry-like hierarchical structure. It is believed that at the beginning of the reaction, many oxygen dangling bonds are formed due to the weak corrosion of the ZnO surface. Acting as nucleation sites, the dangling bonds enable the nucleation and rapid growth of SnO2 nanoparticles on the ZnO nanorods, leading to the formation of the mulberry-like hierarchical structure. The firm contact between SnO2 and ZnO provides a fast carrier transport path.

Oxygen Species The high-resolution XPS spectra of the films are shown in Figure 2. The two peaks located at around 1020.4 and 1044.5 eV in the Zn 2p spectrum are assigned to the Zn 2p3/2 and Zn 2p1/2 of Zn2+ species, respectively (Figure 2a). The Zn 2p3/2 peak position in ZnO/SnO2 is red-shifted by about 0.11 eV compared with that of the ZnO film. The core levels of Sn 3d5/2 and Sn 3d3/2 are located at 485.67 and 494.12 eV in the ZnO/SnO2 film, showing blue shifts of about 0.78 and 0.69 eV, respectively, compared with the SnO2 film (Figure 2b). The large blue shift suggests that there exists a strong interaction between the Zn2+ and Sn4+ species. The blue shift of the Sn 3d peaks and the red shift of the Zn 2p peaks in ZnO/SnO2 indicate that charge is transferred from SnO2 to ZnO.
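The strain estimate above rests on converting the (002) peak positions to interplanar spacings via Bragg's law. The snippet below is a minimal sketch of that conversion, assuming the Cu Kα wavelength (0.154 nm) quoted in the Experimental Section; the exact strained/unstrained ratio depends on the fitted peak positions used by the authors, so the rounded 2θ values quoted in the text will not necessarily reproduce the ~96% figure.

```python
import numpy as np

# Bragg's law (n = 1): lambda = 2 * d * sin(theta), with 2-theta peak positions in degrees.
WAVELENGTH_NM = 0.154  # Cu K-alpha source quoted in the Experimental Section

def d_spacing(two_theta_deg, wavelength_nm=WAVELENGTH_NM):
    """Interplanar spacing (nm) corresponding to a 2-theta diffraction peak."""
    theta_rad = np.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * np.sin(theta_rad))

d_unstrained = d_spacing(34.35)  # ZnO (002) peak position in the pure ZnO film
d_strained = d_spacing(34.47)    # newly emerged peak attributed to strained ZnO

print(f"d(002), unstrained ZnO: {d_unstrained:.4f} nm")
print(f"d(002), strained ZnO:   {d_strained:.4f} nm")
print(f"strained/unstrained ratio: {d_strained / d_unstrained:.4f}")
```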
The typical O 1s peak of the ZnO, SnO2 and ZnO/SnO2 films can be decomposed into two peaks (Figure 2c). The low-energy one at ~529 eV is attributed to the lattice oxygen in ZnO (L_O), while the high-energy one at ~531 eV is assigned to the adsorbed oxygen in oxygen vacancies (V_O). [17,18] No peak related to adsorbed water molecules (H-O-H, at ~532.5 eV) appears in the three samples. The concentration of V_O on the surface of the SnO2 film is 17.7%, much lower than that of the ZnO film (28.5%). However, in the ZnO/SnO2 film, where the SnO2 nanoparticles fully cover the surface of the ZnO nanorods, the V_O content is 24.9%. This suggests that more oxygen vacancies are formed on the SnO2 side due to the anti-site substitution between Sn4+ and Zn2+ at the ZnO/SnO2 interfaces. The large amount of V_O acts as active sites and thus allows more O₂⁻ to be chemically adsorbed close to the interfaces.

The EPR spectra in Figure 3a further verify the formation of oxygen vacancies. Similar EPR signals located around g = 2.003 in the ZnO, SnO2 and ZnO/SnO2 samples demonstrate the existence of single-electron-trapped surface defects, V_O or O_s⁻. [19] In addition, no signal related to lattice electron trapping sites, such as Zn+ or V_Zn⁻, is observed at g = 1.960, suggesting that the surface oxygen vacancy is the dominant defect type. [18] The PL spectra of the ZnO, SnO2 and ZnO/SnO2 films are shown in Figure 3b. There are two main peaks located at 378 and 620 nm in the ZnO film, corresponding respectively to near-band-edge recombination and to transitions involving oxygen vacancies (V_O). [20] No PL peak was observed for the SnO2 film. In the ZnO/SnO2 film, the near-band-edge emission is completely suppressed, indicating that the photogenerated electron-hole pairs are effectively separated. The position of the V_O defect-related peak is found to red-shift to 640 nm, and its intensity decreases by 90%. From the XPS results, it is believed that the V_O move to the SnO2 side in the ZnO/SnO2 film and thus the PL intensity greatly decreases.

Electronic Structure The electronic structure of the ZnO/SnO2 film was determined by combining UV-Vis absorption spectroscopy and XPS valence-edge spectroscopy. Both ZnO and SnO2 are direct-bandgap semiconductors, so the optical band gap can be obtained from Tauc plots based on the Kubelka-Munk treatment, (αhν)² = A(hν − E_g) for a direct allowed transition (Equation 1), [21] wherein α, hν, A and E_g are the optical absorption coefficient, the photon energy, a proportionality constant and the optical band gap, respectively. There are two absorption edges in the ZnO film: one at 3.30 eV, related to the ZnO itself and consistent with the near-band-edge emission in the PL spectra; and the other at 3.03 eV, which originates from the light-trapping effect of the ZnO nanorods. The band gap of the SnO2 film is 3.53 eV. The valence spectrum gives the energy difference between the Fermi level and the top of the valence band (Figure 4b), from which the band alignment at the interface between ZnO and SnO2 is obtained, as shown in Figure 4c. At the SnO2/ZnO interface, electrons transfer from SnO2 to ZnO and holes transfer in the opposite direction under the built-in electric field, resulting in effective charge separation. This is in good agreement with the PL results (Figure 3b) and the XPS analysis (Figure 2).
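As a concrete illustration of the Tauc-plot extraction described above, the sketch below fits the linear rising edge of (αhν)² versus hν and extrapolates to zero to estimate E_g. This is a generic sketch, not the authors' analysis script; the function name, the fit window, and the use of NumPy are assumptions.

```python
import numpy as np

def tauc_direct_bandgap(photon_energy_ev, alpha, fit_window):
    """Estimate a direct optical band gap (eV) from a Tauc plot.

    photon_energy_ev : photon energies h*nu in eV
    alpha            : absorption coefficient (or Kubelka-Munk F(R) for reflectance data)
    fit_window       : (low, high) energy range covering the linear rising edge

    For a direct allowed transition, (alpha * h*nu)^2 = A * (h*nu - Eg); the band gap
    is the extrapolated intercept of the linear region with the energy axis.
    """
    e = np.asarray(photon_energy_ev, dtype=float)
    y = (np.asarray(alpha, dtype=float) * e) ** 2
    lo, hi = fit_window
    mask = (e >= lo) & (e <= hi)
    slope, intercept = np.polyfit(e[mask], y[mask], 1)
    return -intercept / slope  # energy-axis intercept, i.e. Eg

# Hypothetical usage: hv and alpha_zno would come from the UV-Vis measurement,
# with the fit window chosen on the ZnO absorption edge near 3.3 eV.
# eg_zno = tauc_direct_bandgap(hv, alpha_zno, fit_window=(3.25, 3.40))
```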
CH4 Gas Sensing Properties The concentration dependence of the response can be described by the Langmuir isotherm adsorption model for a unimolecular layer, as described in Equation (2), [22] where R is the response, C_CH4 is the CH4 concentration (in ppm), R_s is the saturation response, and a is a constant related to the adsorption coefficient. As shown in Figure 5b, there is very good agreement between the experimental data and the fitting curve, indicating that the Langmuir isotherm adsorption model reliably explains the CH4 response behavior, with R_s and a determined to be 65% and 0.004, respectively, for the ZnO/SnO2 sensor. The limit of detection (LOD) can be theoretically calculated using Equation (3), based on the responses in the low-concentration region (inset in Figure 5b), [23]

LOD = 3R_noise/L_slope (3)

where R_noise is the measured noise of the sensor and L_slope is the slope of the fitting curve. The R_noise value of the ZnO/SnO2 sensor is calculated to be 0.045 based on Equation (4), where R_i are the experimental data (i.e., the responses at the various CH4 concentrations) and R̄ are the fitted values. The LOD of the ZnO/SnO2 sensor is therefore 9.2 ppm.

The sensing performance shows good repeatability and stability, as demonstrated by five repeated cycle tests of the response to 800 ppm CH4, shown in Figure 5c. The device maintains about 75% of its original response after 50 days of storage in ambient conditions without any protection. Figure 5d gives the result of the selectivity test of the sensor against various gases, including H2, CO, NO2, NH3, H2S, and C2H6, at the same concentration of 800 ppm. The ZnO/SnO2 sensor clearly exhibits superior selectivity for CH4 over the other gases. Figure 5e,f show the dynamic response of the ZnO/SnO2 sensor to 800 ppm CH4 at various relative humidities. The initial resistance decreases gradually as the humidity increases and stabilizes at high humidity. The response is 52% at 36% relative humidity and decreases as the humidity increases. Notably, a response as high as 40% is still achieved even under 100% saturated humidity, demonstrating that the ZnO/SnO2 sensor is applicable across a wide range of humidity conditions.

Table 1 gives a direct comparison of the CH4 sensing performance of the ZnO/SnO2 sensor with that of other metal oxide-based sensors. Significantly, the ZnO/SnO2 sensor developed in this study works at room temperature without the assistance of visible or UV light, with a higher and faster response, superior even to other sensors working at higher temperatures. This outstanding performance makes the ZnO/SnO2 sensor one of the most promising candidates for commercial application in CH4 sensing.
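A minimal sketch of this calibration workflow is given below. The standard Langmuir form R = R_s·aC/(1 + aC) is assumed for Equation (2), R_noise is taken as the RMS deviation of the measured responses from the fit (our reading of Equation (4)), and the calibration points are hypothetical values used only to illustrate the workflow, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_response(c_ppm, r_sat, a):
    """Langmuir-type unimolecular adsorption isotherm for the sensor response (%)."""
    return r_sat * a * c_ppm / (1.0 + a * c_ppm)

# Hypothetical calibration points (CH4 concentration in ppm, response in %).
c = np.array([100.0, 200.0, 400.0, 800.0, 1200.0, 1600.0, 2000.0])
r = np.array([18.0, 27.0, 38.0, 47.0, 51.0, 54.0, 56.0])

(r_sat, a), _ = curve_fit(langmuir_response, c, r, p0=(60.0, 0.005))

# Noise taken as the RMS deviation of the measured responses from the fit,
# and LOD = 3 * R_noise / slope of the low-concentration calibration line.
residuals = r - langmuir_response(c, r_sat, a)
r_noise = np.sqrt(np.mean(residuals ** 2))
low = c <= 400.0                        # low-concentration region used for the slope
l_slope, _ = np.polyfit(c[low], r[low], 1)
lod_ppm = 3.0 * r_noise / l_slope

print(f"R_s = {r_sat:.1f} %, a = {a:.4f}, LOD ~ {lod_ppm:.1f} ppm")
```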
CH4 Gas Sensing Mechanism It is generally accepted that the CH4 sensing performance of a sensor depends on the thermally activated oxidation reaction with the target gas and on the carrier concentration of the semiconductor, both of which are closely related to the working temperature of the sensor. Hence, CH4 molecules are usually activated with the assistance of heat or light, which supplies the energy required for breaking the C-H bonds. The activated CH4 molecules react with the oxygen species (O₂⁻, O⁻ or O²⁻) adsorbed on the surface of the metal oxide, producing CO2 and H2O and releasing electrons in the meantime. At temperatures lower than 100 °C, the adsorbed oxygen species on the surface are O₂⁻, and the reaction is described by Equation (5). [26] The released electrons are injected into the semiconductor, reducing the resistance and thus producing the CH4 sensing response.

The key point in realizing CH4 sensing at room temperature is to lower the activation energy or to provide enough energy for CH4 dissociation. ZnO is a polar semiconductor, and the strain-induced polar electric field provides the additional energy required for CH4 dissociation.

The polarization of ZnO was probed by Raman spectroscopy. The typical Raman spectrum of the AZO in Figure 6a is similar to that of the low-Al:ZnO sample described in the previous study. [27] There are four main peaks located at 105, 281, 443 and 587 cm−1, related to the E2g(low), A1g(TO), E2g(high) and E1g(LO) modes, respectively. The intensity of the E2g modes is related to the oriented growth of ZnO along its c-axis. A1g(TO) is the Zn-O stretching vibration along the c-axis. The E1g(LO) mode is forbidden in a perfect crystal but is activated by the electric field on the surface of the ZnO nanorods. [28] The Raman spectra of the ZnO and ZnO/SnO2 films show peaks similar to those of the AZO. Although no SnO2-related peak was observed in the ZnO/SnO2 film due to the small amount of SnO2, the ZnO-related peaks shift owing to the interaction between ZnO and SnO2. The much enhanced E2g peak of the ZnO compared with that of the AZO indicates the directional growth of the ZnO nanorods along the c-axis, in agreement with the XRD and SEM results (Figures 1 and 2). The A1g(TO) peak of ZnO is at 283.2 cm−1 and red-shifts to 280.9 cm−1 in the ZnO/SnO2 film, as shown in Figure 6b, suggesting the shortening of the Zn-O bond along the c-axis. The intensity of the E1g(LO) peak gradually increases as ZnO grows on the AZO and then as the SnO2 nanoparticles decorate the ZnO, indicating that the polarization electric field is gradually enhanced. All these features in the Raman spectra demonstrate that the ZnO nanorods are compressed along the c-axis and that the polar electric field is strongest in the ZnO/SnO2 film.
The piezoelectric properties of the ZnO/SnO2 film were measured by piezoelectric force microscopy (PFM) and compared with those of the ZnO film. The corresponding PFM images are shown in Figure 7a,b. The typical butterfly-shaped loop, shown in Figure 7c, indicates the existence of the piezoelectric effect. The averaged phase angle is 183° and 175° for the ZnO and ZnO/SnO2 films, respectively (Figure 7d). A phase angle close to 180° verifies that the response is piezoelectric rather than electrostatic. It has been reported that the polarization coefficient d33 of pristine bulk ZnO is ~9.9 pm V−1, while that of well-oriented films is ~12.4 pm V−1. [29] In this work, the average d33 of the ZnO film is 18.5 pm V−1, and that of the ZnO/SnO2 film is further enhanced to 24.1 pm V−1. The lattice strain induces permanent local electric dipoles, and the well-aligned ZnO nanorods result in the great enhancement of the piezoelectric polarization (d33) of the ZnO/SnO2. This functions like the assistance of heat or light, providing the additional energy required for CH4 dissociation and thus enabling the enhancement of CH4 sensing at room temperature.

The activation energy was further measured and calculated according to the Arrhenius equation. The gas sensing performance of the ZnO and ZnO/SnO2 sensors was measured at 25 °C, 100 °C and 150 °C. The plot of the logarithm of the normalized resistance change rate, ln[d(R/R0)/dt], as a function of 1/T produces a straight line (Figure 8). From the slope, based on the Arrhenius equation, the activation energy (ΔEa) of the process was calculated to be 2.91 kJ mol−1 for ZnO/SnO2, less than one third of the corresponding value for ZnO (10.18 kJ mol−1). This further proves that the enhanced polarization field in the new mulberry-like hierarchical structure can greatly reduce the activation energy and thus significantly enhance the CH4 sensing performance at room temperature.

Oxygen vacancies usually provide sites for O₂⁻ adsorption on the surface. The analysis of the energy band structure indicates that electrons transfer from SnO2 to ZnO, and the accumulated electrons readily attract O2 molecules, which are chemically adsorbed close to the interface to form O₂⁻. This makes the reaction between CH4 and O₂⁻ more likely to occur at the interface, where the polar electric field is strongest.

Conclusions In this study, the methane sensing performance at room temperature is remarkably enhanced by constructing a self-assembled mulberry-like ZnO/SnO2 hierarchical structure. The synergistic effect of the strongest polarized electric field and the maximum density of chemically adsorbed O₂⁻ in the region close to the ZnO/SnO2 interface provides not only the energy required for the activation of CH4 dissociation but also a large number of reaction sites, leading to a significantly enhanced response at room temperature. This work provides an effective way to integrate a polarized electric field into gas sensing to reduce the reaction temperature down to room temperature.

Experimental Section Fabrication of ZnO, SnO2 and ZnO/SnO2 films: Aluminum-doped zinc oxide (AZO)-coated glass substrates were used to provide a suitable seed layer. A hydrothermal method was used to grow ZnO nanorods on the AZO, similar to that described in the previous work.
[15] Two AZO substrates were placed diagonally, with the conductive surface facing downward, in a 200 mL stainless-steel Teflon-lined autoclave. The precursor solution was prepared by adding 30 mM hexamethylenetetramine (C6H12N4, HMT) and 30 mM Zn(NO3)2·6H2O powders to 100 mL of deionized water and stirring to form a homogeneous solution. After the hydrothermal reaction at 100 °C for 4 h, the samples were cleaned, dried and then annealed at 400 °C for 20 min in air to form ZnO nanorod films (referred to as ZnO films).

SnO2 films were also grown on AZO substrates using the hydrothermal method. The SnO2 precursor solution was prepared by mixing 3 mM SnCl4·5H2O and 10 mM NaOH powders with 100 mL of deionized water. The hydrothermal reaction lasted 6 h at 180 °C. The samples were cleaned, dried and then annealed at 550 °C for 30 min in air to form SnO2 films.

On the basis of the hydrothermally grown ZnO films, the ZnO/SnO2 films were fabricated via a second hydrothermal step. Two AZO glass substrates with ZnO nanorod films were placed back into the autoclave to allow growth of the ZnO/SnO2 nanostructure. The precursor solution and the hydrothermal reaction conditions were the same as those used for growing the SnO2 films.

Characterization: The phases and growth orientation of the films were identified using X-ray diffraction (XRD, Bruker D8 Advance, Germany) with a Cu Kα X-ray source (λ = 0.154 nm). The morphology of the films was examined by field-emission scanning electron microscopy (FESEM, Sigma 500, Zeiss). X-ray photoelectron spectroscopy (XPS, Escalab 250Xi, Thermo Fisher, USA) was conducted to analyze the chemical composition and valence states of the elements. Photoluminescence (PL) spectra were collected with a fluorescence spectrometer (PicoQuant, FluoTime 300) at an excitation wavelength of 320 nm. UV-Vis spectroscopy (UV-3600, Shimadzu, Japan) was employed to examine the optical absorption properties. Raman spectra were collected by high-resolution confocal micro-Raman spectroscopy (Horiba JY LabRAM HR800, France) with a laser wavelength of 532 nm. Unpaired electrons in the film samples, i.e., oxygen free radicals, were probed by electron paramagnetic resonance (EPR, Bruker EMXplus spectrometer, Germany). The piezoelectric polarization was measured by piezoelectric force microscopy (PFM, SPA 400, Seiko Inc.).

Gas sensing test: Platinum interdigitated electrodes were deposited onto the thin films by DC magnetron sputtering, as described in our previous work. [30] The sensing performance was assessed in air at room temperature (about 25 °C). A Keithley DAQ6510 multimeter was used to measure the resistance change of the sensors. The sensing response (S) is calculated using Equation (6), where R_air and R_gas are the resistances of the sensor in air and in the gas atmosphere (i.e., mixed 5% CH4 and 95% Ar), respectively. The response/recovery time (t_res/t_recov) is defined as the time taken for the resistance change to reach 90% of ΔR upon gas exposure and gas removal, respectively. [26] (A sketch of these calculations is given below, after the figure and table captions.)

Figure 1. a) XRD patterns of the AZO, ZnO, SnO2 and ZnO/SnO2 films; b) the zoomed XRD spectra; and c) SEM images showing the morphology of the ZnO and ZnO/SnO2 films viewed from the surface and in cross-section, respectively.
Figure 5. CH4 gas sensing properties of the ZnO/SnO2 film: a) dynamic response curve for 100-2000 ppm CH4; b) the corresponding response curve fitted with the Langmuir adsorption isotherm equation; c) repeatability over five exposure-release cycles at 800 ppm CH4; d) selectivity measurement at 800 ppm; e) dynamic response curves and f) responses under various relative humidities.

Figure 7. PFM images of the domain structures of a) ZnO and b) ZnO/SnO2 films; c) a representative PFM amplitude versus applied bias voltage curve; d) phase-voltage hysteresis loop versus applied bias voltage.

Table 1. Comparison of the CH4 gas sensing properties of the ZnO/SnO2 sensor with those of sensors fabricated from various nanohybrids and nanostructures in previous reports.
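The sketch below illustrates, under stated assumptions, the quantities defined in the Gas sensing test section and the Arrhenius analysis in the mechanism discussion: the response is assumed to follow the usual n-type/reducing-gas convention S = (R_air − R_gas)/R_air × 100% (the body text does not reproduce Equation (6) explicitly), the response/recovery times use the 90%-of-ΔR criterion quoted above, and the activation energy comes from the slope of ln(rate) versus 1/T. Function names and the example numbers are illustrative, not the authors' data.

```python
import numpy as np

GAS_CONSTANT = 8.314  # J mol^-1 K^-1

def response_percent(r_air, r_gas):
    """Sensor response S (%), assuming S = (R_air - R_gas) / R_air * 100."""
    return (r_air - r_gas) / r_air * 100.0

def time_to_90_percent(t, r, r_start, r_end):
    """Time for the resistance to cover 90% of the change from r_start to r_end.

    Works for both the falling transient (gas in, response time) and the rising
    transient (gas out, recovery time); assumes the target level is actually reached.
    """
    t = np.asarray(t, dtype=float)
    r = np.asarray(r, dtype=float)
    target = r_start + 0.9 * (r_end - r_start)
    idx = np.argmax(r <= target) if r_end < r_start else np.argmax(r >= target)
    return t[idx] - t[0]

def activation_energy_kj_per_mol(temps_c, rates):
    """Apparent activation energy from an Arrhenius fit of ln(rate) versus 1/T."""
    inv_t = 1.0 / (np.asarray(temps_c, dtype=float) + 273.15)
    slope, _ = np.polyfit(inv_t, np.log(np.asarray(rates, dtype=float)), 1)
    return -slope * GAS_CONSTANT / 1000.0  # slope = -Ea / R

# Hypothetical usage with made-up resistance values and resistance-change rates:
# s = response_percent(r_air=1.0e6, r_gas=4.4e5)                     # -> 56 %
# ea = activation_energy_kj_per_mol([25, 100, 150], [0.010, 0.013, 0.016])
```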
6,190.8
2023-03-31T00:00:00.000
[ "Materials Science", "Environmental Science", "Chemistry" ]
An empirical appraisal of eLife’s assessment vocabulary Research articles published by the journal eLife are accompanied by short evaluation statements that use phrases from a prescribed vocabulary to evaluate research on 2 dimensions: importance and strength of support. Intuitively, the prescribed phrases appear to be highly synonymous (e.g., important/valuable, compelling/convincing) and the vocabulary’s ordinal structure may not be obvious to readers. We conducted an online repeated-measures experiment to gauge whether the phrases were interpreted as intended. We also tested an alternative vocabulary with (in our view) a less ambiguous structure. A total of 301 participants with a doctoral or graduate degree used a 0% to 100% scale to rate the importance and strength of support of hypothetical studies described using phrases from both vocabularies. For the eLife vocabulary, most participants’ implied ranking did not match the intended ranking on both the importance (n = 59, 20% matched, 95% confidence interval [15% to 24%]) and strength of support dimensions (n = 45, 15% matched [11% to 20%]). By contrast, for the alternative vocabulary, most participants’ implied ranking did match the intended ranking on both the importance (n = 188, 62% matched [57% to 68%]) and strength of support dimensions (n = 201, 67% matched [62% to 72%]). eLife’s vocabulary tended to produce less consistent between-person interpretations, though the alternative vocabulary still elicited some overlapping interpretations away from the middle of the scale. We speculate that explicit presentation of a vocabulary’s intended ordinal structure could improve interpretation. Overall, these findings suggest that more structured and less ambiguous language can improve communication of research evaluations.

Introduction Peer review is usually a black box: readers only know that a research paper eventually surpassed some ill-defined threshold for publication and rarely see the more nuanced evaluations of the reviewers and editor [1]. A minority of journals challenge this convention by making peer review reports publicly available [2]. One such journal, eLife, also accompanies articles with short evaluation statements ("eLife assessments") representing the consensus opinions of editors and peer reviewers [3]. In 2022, eLife stated that these assessments would use phrases drawn from a common vocabulary (Table 1) to convey their judgements on 2 evaluative dimensions: (1) "significance"; and (2) "strength of support" (for details see [4]). For example, a study may be described as having "landmark" significance and offering "exceptional" strength of support (for a complete example, see Box 1). The phrases are drawn from "widely used expressions" in prior eLife assessments and the stated goal is to "help convey the views of the editor and the reviewers in a clear and consistent manner" [4]. Here, we report a study which assessed whether the language used in eLife assessments is perceived clearly and consistently by potential readers. We also assessed alternative language that may improve communication.
Our understanding (based on [4]) is that eLife intends the common vocabulary to represent different degrees of each evaluative dimension on an ordinal scale (e.g., "landmark" findings are more significant than "fundamental" findings and so forth); however, in our view the intended ordering is sometimes ambiguous or counterintuitive. For example, it does not seem obvious to us that an "important" study is necessarily more significant than a "valuable" study, nor does a "compelling" study seem necessarily stronger than a "convincing" study.

Table 1. Phrases and their definitions (italicised) from the eLife vocabulary representing 2 evaluative dimensions: significance and strength of support. The significance dimension is represented by 5 phrases and the strength of support dimension is represented by 6 phrases. In a particular eLife assessment, readers only see 1 phrase from each of the evaluative dimensions. Phrases are accompanied by eLife definitions, but these are not shown in eLife assessments (though some words from the definitions may be used).

Box 1. A complete example of an eLife assessment. This particular example uses the phrase "important" to convey the study's significance, and the phrase "compelling" to convey the study's strength of support: "The overarching question of the manuscript is important and the findings inform the patterns and mechanisms of phage-mediated bacterial competition, with implications for microbial evolution and antimicrobial resistance. The strength of the evidence in the manuscript is compelling, with a huge amount of data and very interesting observations. The conclusions are well supported by the data. This manuscript provides a new co-evolutionary perspective on competition between lysogenic and phage-susceptible bacteria that will inform new studies and sharpen our understanding of phage-mediated bacterial co-evolution." [5].

Additionally, several phrases, like "solid" and "useful," could be broadly interpreted, leading to a mismatch between intended meaning and perceived meaning. The phrases also do not cover the full continuum of measurement and are unbalanced in terms of positive and negative phrases. For example, the "significance" dimension has no negative phrases: the scale endpoints are "landmark" and "useful." We also note that the definitions provided by eLife do not always map onto gradations of the same construct. For example, the eLife definitions of phrases on the significance dimension suggest that the difference between "useful," "valuable," and "important" is a matter of breadth/scope (whether the findings have implications beyond a specific subfield), whereas the difference between "fundamental" and "landmark" is a matter of degree. In short, we are concerned that several aspects of the eLife vocabulary may undermine communication of research evaluations to readers.
In Table 2, we outline an alternative vocabulary that is intended to overcome these potential issues with the eLife vocabulary. Phrases in the alternative vocabulary explicitly state the relevant evaluative dimension (e.g., "support") along with a modifying adjective that unambiguously represents degree (e.g., "very low"). The alternative vocabulary is intended to cover the full continuum of measurement and be balanced in terms of positive and negative phrases. We have also renamed "significance" to "importance" to avoid any confusion with statistical significance. We hope that these features will facilitate alignment of readers' interpretations with the intended interpretations, improving the efficiency and accuracy of communication.

The utility of eLife assessments will depend (in part) on whether readers interpret the common vocabulary in the manner that eLife intends. Mismatches between eLife's intentions and readers' perceptions could lead to inefficient or inaccurate communication. In this study, we empirically evaluated how the eLife vocabulary (Table 1) is interpreted and assessed whether an alternative vocabulary (Table 2) elicited more desirable interpretations. Our goal was not to disparage eLife's progressive efforts, but to make a constructive contribution towards a more transparent and informative peer review process. We hope that a vocabulary with good empirical performance will be more attractive and useful to other journals considering adopting eLife's approach.

Our study is modelled on prior studies that report considerable individual differences in people's interpretation of probabilistic phrases [6-12]. In a prototypical study of this kind, participants are shown a probabilistic statement like "It will probably rain tomorrow" and asked to indicate the likelihood of rain on a scale from 0% to 100%. Analogously, in our study participants read statements describing hypothetical scientific studies using phrases drawn from the eLife vocabulary or the alternative vocabulary and were asked to rate the study's significance/importance or strength of support on a scale from 0 to 100. We used these responses to gauge the extent to which people's interpretations of the vocabulary were consistent with each other and consistent with the intended rank order.

Research aims Our overarching goal was to identify clear language for conveying evaluations of scientific papers. We hope that this will make it easier for other journals/platforms to follow in eLife's footsteps and move towards more transparent and informative peer review. With this overall goal in mind, we had 3 specific research aims:
• Aim One. To what extent do people share similar interpretations of phrases used to describe scientific research?
• Aim Two. To what extent do people's (implied) ranking of phrases used to describe scientific research align with (a) each other; and (b) with the intended ranking?
• Aim Three. To what extent do different phrases used to describe scientific research elicit overlapping interpretations and do those interpretations imply broad coverage of the underlying measurement scale?

Methods Our methods adhered to our preregistered plan (https://doi.org/10.17605/OSF.IO/MKBTP) with one minor deviation: our target sample size was 300, but we accidentally recruited an additional participant, so the actual sample size was 301.

Ethics This study was approved by a University of Melbourne ethics board (project ID: 26411).
Design We conducted an experiment with a repeated-measures design. Participants were shown short statements that described hypothetical scientific studies in terms of their significance/importance or strength of support using phrases drawn from the eLife vocabulary (Table 1) and from the alternative vocabulary (Table 2). The statements were organised into 4 blocks based on vocabulary and evaluative dimension; specifically, block one: eLife-significance (5 statements), block two: eLife-support (6 statements), block three: alternative-importance (5 statements), block four: alternative-support (5 statements). Each participant saw all 21 phrases and responded using a 0% to 100% slider scale to indicate their belief about each hypothetical study's significance/importance or strength of support.

Materials There were 21 statements that described hypothetical scientific studies using one of the 21 phrases included in the 2 vocabularies (Table 1). Statements referred either to a study's strength of support (e.g., Fig 1) or a study's significance/importance (e.g., Supplementary Figure A in S1 Text). For the alternative vocabulary, we used the term "importance" rather than "significance." To ensure the statements were grammatically accurate, it was necessary to use slightly different phrasing when communicating significance with the eLife vocabulary ("This is an [phrase] study") compared to communicating importance with the alternative vocabulary ("This study has [phrase] importance"; e.g., Supplementary Figure B in S1 Text). Additionally, there was 1 attention check statement (Supplementary Figure C in S1 Text), a question asking participants to confirm their highest completed education level (options: Undergraduate degree (BA/BSc/other)/Graduate degree (MA/MSc/MPhil/other)/Doctorate degree (PhD/other)/Other), and a question asking participants the broad subject area of their highest completed education level (options: Arts and Humanities/Life Sciences and Biomedicine/Physical Sciences/Social Sciences/Other). The veridical materials are available at https://osf.io/jpgxe/.

Sample Sample source. Participants were recruited from the online participant recruitment platform Prolific (https://www.prolific.co/). As of 23rd August 2023, the platform had 123,064 members. Demographic information about Prolific members is provided in S8 Text.

Sample size. As data collection unfolded, we intermittently checked how many participants had met the inclusion criteria, aiming to stop data collection when we had eligible data for our target sample size of 300 participants. Ultimately, 461 participants responded to the survey. Of these 461 participants, 156 participants failed the attention check and 12 participants took longer than 30 min to complete the study and were therefore excluded. No participants failed to respond to all 21 statements or completed the study too quickly (<5 min). We applied these exclusion criteria one-by-one, which removed data from 160 participants and retained eligible data from 301 participants (we unintentionally recruited 1 additional participant).

Sample size justification. The target sample size of 300 was based on our resource constraints and expectations about statistical power and precision (see S2 Text).

Inclusion criteria. Participants had to have a ≥95% approval rate for prior participation on the recruitment platform (Prolific). Additionally, Prolific prescreening questions were used to ensure that the study was only available to participants who reported that they speak fluent English, were aged between 18 and 70 years, and had completed a doctorate degree (PhD/other).

Procedure.
1. Data collection and recruitment via the Prolific platform began on September 13, 2023 and was completed on September 14, 2023.
2. After responding to the study advert (https://osf.io/a25vq), participants read an information sheet (https://osf.io/39vay) and provided consent (https://osf.io/xdar7). During this process, they were told that the study seeks to understand "how people perceive words used to describe scientific studies so we can improve communication of research to the general public."
3. Participants completed the task remotely online via the Qualtrics platform. Before starting the main task, they read a set of instructions and responded to a practice statement (S3 Text).
4. For the main task, statements were presented sequentially, and participants responded to them in their own time. The order of presentation was randomized, both between and within the 4 blocks of statements. After each statement, there was a 15-s filler task during which participants were asked to complete as many multiplication problems (e.g., 5 × 7 = ?) as they could from a list of 10. The multiplication problems were randomly generated every time they appeared using the Qualtrics software. Only numbers between 1 and 15 were used to ensure that most of the problems were relatively straightforward to solve. A single "attention check" statement (Supplementary Figure C in S1 Text) appeared after all 4 blocks had been completed.
5. Participants were required to respond to each statement before they could continue to the next statement. The response slider could be readjusted as desired until the "next" button was pressed, after which participants could not return to or edit prior responses.
6. After responding to all 21 statements and the attention check, participants were shown a debriefing document (https://osf.io/a9gve).

Participant characteristics Participants stated that their highest completed education level was either a doctorate degree (n = 287) or graduate degree (n = 14). Participants reported that the subject areas that most closely represented their degrees were Life Sciences and Biomedicine (n = 97), Social Sciences (n = 77), Physical Sciences (n = 57), Arts and Humanities (n = 37), and various "other" disciplines (n = 33).

Response distributions The distribution of participants' responses to each phrase is shown in Fig 2 (importance/significance dimension) and Fig 3 (strength of support dimension); these "ridgeline" plots [15] are kernel density distributions that represent the relative probability of observing different responses (akin to a smoothed histogram). Tables 3 and 4 show the 25th, 50th (i.e., median), and 75th percentiles of responses for each phrase (as represented by the black vertical lines in Figs 2 and 3). The tables include 95% confidence intervals only for medians to make them easier to read; however, confidence intervals for all percentile estimates are available in Supplementary Tables A and B in S5 Text.

Implied ranking of evaluative phrases Do participants' implied rankings match the intended rankings? Although participants rated each statement separately on a continuous scale, these responses also imply an overall ranking of the phrases (in order of significance/importance or strength of support). Ideally, an evaluative vocabulary elicits implied rankings that are both consistent among participants and consistent with the intended ranking. Fig 4 shows the proportion of participants whose implied ranking matched the intended ranking (i.e., "correct ranking") for the different evaluative dimensions and vocabularies.
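To make the notion of an "implied ranking" concrete, the sketch below orders one participant's 0-100 ratings and checks the result against an intended ordering. It is an illustrative sketch only: the phrase labels, the handling of ties (not addressed here), and the normal-approximation confidence interval are our assumptions, not the paper's analysis code.

```python
import numpy as np

# Intended low-to-high order of the alternative importance phrases (illustrative labels).
INTENDED = ["very low", "low", "moderate", "high", "very high"]

def implied_ranking(ratings):
    """Order phrases from lowest- to highest-rated for one participant.

    `ratings` maps phrase -> slider response on the 0-100 scale. Ties are not
    handled here and would need an explicit rule in a real analysis.
    """
    return sorted(ratings, key=ratings.get)

def matches_intended(ratings, intended=INTENDED):
    return implied_ranking(ratings) == list(intended)

def proportion_matching(all_ratings, intended=INTENDED):
    """Proportion of participants whose implied ranking matches the intended one,
    with a simple normal-approximation 95% confidence interval."""
    n = len(all_ratings)
    p = sum(matches_intended(r, intended) for r in all_ratings) / n
    se = np.sqrt(p * (1.0 - p) / n)
    return p, (p - 1.96 * se, p + 1.96 * se)

# One hypothetical participant whose implied ranking matches the intended order.
example = {"very low": 8, "low": 27, "moderate": 50, "high": 74, "very high": 93}
print(implied_ranking(example), matches_intended(example))
```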
On the significance/importance dimension, 59 (20% [15% to 24%]) participants' implied rankings of the eLife vocabulary aligned with the intended ranking and 188 (62% [57% to 68%]) participants' implied rankings of the alternative vocabulary aligned with the intended ranking. We performed an "exact" McNemar test and computed the McNemar odds ratio with Clopper-Pearson 95% confidence intervals adjusted with the "midp" method, as recommended by [16]. The McNemar test indicated that observing a difference between the vocabularies this large, or larger, is unlikely if the null hypothesis were true (odds ratio = 8.17, 95% CI [5.11, 13.69], p = 1.34e-26). The intended ranking was the most popular for both vocabularies; however, participants had 55 different implied rankings for the eLife vocabulary and 8 different implied rankings for the alternative vocabulary (for details, see Supplementary Tables A-D in S6 Text).

On the strength of support dimension, 45 (15% [11% to 20%]) participants' ratings of the eLife phrases were in accordance with the intended ranking, relative to 201 (67% [62% to 72%]) participants who correctly ranked the alternative vocabulary. A McNemar test indicated that observing a difference between the vocabularies this large, or larger, is unlikely if the null hypothesis were true (odds ratio = 11.4, 95% CI [6.89, 20.01], p = 5.73e-35). The intended ranking was the most popular for both vocabularies, though for the eLife vocabulary an unintended ranking that swapped the ordinal positions of "convincing" and "solid" came a close second, reflected in the ratings of 44 (15% [10% to 19%]) participants. Overall, there were 34 different implied rankings for the eLife vocabulary, relative to 10 implied rankings for the alternative vocabulary. Note that these values should be compared with caution, as for the strength of support dimension the eLife vocabulary had more (6) phrases than the alternative vocabulary (which had 5 phrases and therefore fewer possible rankings).

Quantifying ranking similarity. Thus far, our analyses have emphasised the binary difference between readers' implied rankings and eLife's intended rankings. A complementary analysis quantifies the degree of similarity between rankings using Kendall's tau distance (Kd), a metric that describes the difference between 2 lists in terms of the number of adjacent pairwise swaps required to convert one list into the other [17,18]. The larger the distance, the larger the dissimilarity between the 2 lists. Kd ranges from 0 (indicating a complete match) to n(n-1)/2 (where n is the size of one list). Because the eLife strength of support dimension has 6 phrases and all other dimensions have 5 phrases, we report the normalised Kd, which ranges from 0 (maximal similarity) to 1 (maximal dissimilarity). Further explanation of Kd is provided in S7 Text.

Fig 5 illustrates the extent to which participants' observed rankings deviated from the intended ranking in terms of normalised Kd. This suggests that although deviations from the intended eLife ranking were common, they only tended to be on the order of 1 or 2 discordant rank pairs. By contrast, the alternative vocabulary rarely resulted in any deviations, and when it did, these were typically only in terms of one discordant rank pair.
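A minimal sketch of the normalised Kd computation described above is given below; the example compares the intended eLife significance ordering with a hypothetical observed ranking containing a single adjacent swap. This is a straightforward pair-counting implementation, not the authors' analysis code.

```python
from itertools import combinations

def kendall_tau_distance(ranking_a, ranking_b):
    """Number of discordant item pairs between two rankings of the same items
    (equivalently, the number of adjacent swaps needed to turn one into the other)."""
    pos_a = {item: i for i, item in enumerate(ranking_a)}
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    discordant = 0
    for x, y in combinations(ranking_a, 2):
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
            discordant += 1
    return discordant

def normalised_kendall_tau_distance(ranking_a, ranking_b):
    """Kendall tau distance scaled to [0, 1] by the maximum n(n-1)/2 discordant pairs."""
    n = len(ranking_a)
    return kendall_tau_distance(ranking_a, ranking_b) / (n * (n - 1) / 2)

# Intended eLife significance order (low to high) versus a hypothetical observed
# ranking with a single adjacent swap of "valuable" and "important".
intended = ["useful", "valuable", "important", "fundamental", "landmark"]
observed = ["useful", "important", "valuable", "fundamental", "landmark"]
print(normalised_kendall_tau_distance(intended, observed))  # 0.1
```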
Locus of ranking deviations. So far, we have examined how many participants adhered to the intended ranking (Fig 4) and the extent to which their implied rankings deviated from the intended rankings (Fig 5). However, these approaches do not optimally illustrate where the ranking deviations were concentrated (i.e., which phrases were typically being misranked). The heat maps in Fig 6 show the percentage of participants whose implied rankings matched or deviated from the intended ranking at the level of individual phrases. Ideally, a phrase's observed rank will match its intended rank for 100% of participants. For example, the heat maps show that almost all participants (98%) correctly ranked "moderate importance" and "moderate support" in the alternative vocabulary. The heat maps also reveal phrases that were often misranked with each other, for example: "solid," "convincing," and "compelling" in the eLife vocabulary.

Discussion Research articles published in eLife are accompanied by evaluation statements that use phrases from a prescribed vocabulary (Table 1) to describe a study's importance (e.g., "landmark") and strength of support (e.g., "compelling"). If readers, reviewers, and editors interpret the prescribed vocabulary differently from the intended meaning, or inconsistently with each other, it could lead to miscommunication of research evaluations. In this study, we assessed the extent to which people's interpretations of the eLife vocabulary are consistent with each other and consistent with the intended ordinal structure. We also examined whether an alternative vocabulary (Table 2) improved consistency of interpretation.

Overall, the empirical data supported our initial intuitions: while some phrases in the eLife vocabulary were interpreted relatively consistently (e.g., "exceptional" and "landmark"), several phrases elicited broad interpretations that overlapped a great deal with other phrases' interpretations (particularly the phrases "fundamental," "important," and "valuable" on the significance/importance dimension (Fig 2) and "compelling," "convincing," and "useful" on the strength of support dimension (Fig 3)). This suggests these phrases are not ideal for discriminating between studies with different degrees of importance and strength of support. If the same phrases often mean different things to different people, there is a danger of miscommunication between the journal and its readers. Responses on the significance/importance dimension were largely confined to the upper half of the scale, which is unsurprising given the absence of negative phrases. It is unclear if the exclusion of negative phrases was a deliberate choice on the part of eLife's leadership (because articles with little importance would not be expected to make it through editorial triage) or an oversight. Most participants' implied rankings of the phrases were misaligned with the ranking intended by eLife: 20% of participants had aligned rankings on the significance/importance dimension and 15% had aligned rankings on the strength of support dimension (Fig 4). The degree of mismatch was typically in the range of 1 or 2 discordant ranks (Fig 5). Heat maps (Fig 6) highlighted that phrases in the middle of the scale (e.g., "solid," "convincing") were most likely to have discordant ranks.
By contrast, phrases in the alternative vocabulary tended to elicit more consistent interpretations across participants and interpretations that had less overlap with other phrases (Figs 2 and 3 and Tables 3 and 4). The alternative vocabulary was more likely to elicit implied rankings that matched the intended ranking: 62% of participants had aligned rankings on the significance/importance dimension and 67% had aligned rankings on the strength of support dimension (Fig 4). Mismatched rankings were usually misaligned by one rank (Fig 5). Although the alternative vocabulary had superior performance to the eLife vocabulary, it was nevertheless imperfect. Specifically, interpretations of phrases away from the middle of the scale on both dimensions (e.g., "low importance" and "very low importance") tended to have some moderate overlap (Figs 2, 3, and 6). We do not know what caused this overlap, but, as discussed in the next paragraph, one possibility is that it is overly optimistic to expect people's intuitions to align when they judge phrases in isolation, without any knowledge of the underlying scale.

Rather than presenting evaluative phrases in isolation (as occurs for eLife readers and occurred for participants in our study), informing people of the underlying ordinal scale may help to improve communication of evaluative judgements. eLife could refer readers to an external explanation of the vocabulary; however, prior research on the interpretation of probabilistic phrases suggests this may be insufficient, as most people neglect to look up the information [6,19]. A more effective option might be to explicitly present the phrases in their intended ordinal structure [19]. For example, the full importance scale could be attached to each evaluation statement with the relevant phrase selected by reviewers/editors highlighted (Fig 7A). Additionally, phrases could be accompanied by mutually exclusive numerical ranges (Fig 7B); prior research suggests that this can improve consistency of interpretation for probabilistic phrases [19]. It is true that the limits of such ranges are arbitrary, and editors may be concerned that using numbers masks vague subjective evaluations in a veil of objectivity and precision. To some extent we share these concerns; however, the goal here is not to develop an "objective" measurement of research quality, but to have practical guidelines that improve the accuracy of communication. Specifying a numerical range may help to calibrate the interpretations of evaluators and readers so that the uncertainty can be accurately conveyed. Future research could also explore the relationship between the number of items included in the vocabulary and the level of precision that reviewers/editors wish to communicate.

Our study has several important limitations. First, we did not address whether editor/reviewer opinions provide valid assessments of studies or whether the vocabularies provide valid measurements of those opinions. We also note that eLife assessments are formed via consensus, rather than representing the opinions of individuals, which raises questions about how social dynamics may affect the evaluation outcomes. It may be more informative to solicit and report individual assessments from each peer reviewer and editor, rather than force a consensus (e.g., see Fig 7C). Although these are important issues, they are beyond the scope of this study, which is focused on clarity of communication.
Second, we are particularly interested in how the readership of eLife interpret the vocabularies, but because we do not have any demographic information about the readership, we do not know the extent to which our sample is similar to that population. We anticipated that the most relevant demographic characteristics were education status (because the content is technical), knowledge of the subject area (because eLife publishes biomedical and life sciences research), and language (because the content is in English). All of our participants reported speaking fluent English, the vast majority had doctoral degrees, and about one third had a degree in the Biomedical and Life Sciences. Relative to this sample, we expect the eLife readership probably consists of more professional scientists, but otherwise we think the sample is likely to be a good match to the target population. Also note that eLife explicitly states that eLife assessments are intended to be accessible to non-expert readers [4]; therefore, our sample is still a relevant audience, even if it might contain fewer professional scientists than eLife's readership.

Third, to maintain experimental control, we presented participants with very short statements that differed only in terms of the phrases we wished to evaluate. In practice, however, these phrases will be embedded in a paragraph of text (e.g., Box 1) which may also contain "aspects" of the vocabulary definitions (Table 1) "when appropriate" [4]. It is unclear if the inclusion of text from the intended phrase definitions will help to disambiguate the phrases, and future research could explore this.

Fourth, participants were asked to respond to phrases with a point estimate; however, it is likely that a range of plausible values would more accurately reflect their interpretations [9,11]. Because asking participants to respond with a range (rather than a point estimate) creates technical and practical challenges in data collection and analysis, we opted to obtain point estimates only.

Conclusion Overall, our study suggests that using more structured and less ambiguous language can improve communication of research evaluations. Relative to the eLife vocabulary, participants' interpretations of our alternative vocabulary were more likely to align with each other, and with the intended interpretation. Nevertheless, some phrases in the alternative vocabulary were not always interpreted as we intended, possibly because participants were not completely aware of the vocabulary's underlying ordinal scale. Future research, in addition to finding optimal words to evaluate research, could attempt to improve interpretation by finding optimal ways to present them.
Fig 1. An example summary statement referring to a study's strength of support and the corresponding response scale, with an arbitrary response shown. https://doi.org/10.1371/journal.pbio.3002645.g001

Fig 6. Heat maps showing the percentage of participants (N = 301) whose implied rankings were concordant or discordant with the intended ranking at the level of individual phrases. Darker colours and higher percentages indicate greater concordance between the implied rank and the intended rank of a particular phrase. The data underlying this figure can be found in https://osf.io/mw2q4/files/osfstorage. https://doi.org/10.1371/journal.pbio.3002645.g006

Table 2. Phrases from the alternative vocabulary representing 2 evaluative dimensions: importance and strength of support. Each dimension is represented by 5 phrases.

Table 4. Percentile estimates for participant responses to phrases on the strength of support dimension for the eLife and alternative vocabularies. The data underlying this table can be found in https://osf.io/mw2q4/files/osfstorage.
6,118
2024-08-01T00:00:00.000
[ "Education", "Computer Science" ]
Short-Term Memory in Signed Languages: Not Just a Disadvantage for Serial Recall The higher short-term memory (STM) capacity for spoken language compared to signed language is well-documented: speakers have a digit span of 7 ± 2, signers only 5 ± 1 (see Hall and Bavelier, 2010, for a review). A consensus has been developing that speech is “special” in supporting the temporal sequencing of linguistic information, giving spoken-language users a serial recall advantage (e.g., Bavelier et al., 2008; Conway et al., 2009). The “speech supports temporal sequencing” hypothesis predicts that the difference between signed and spoken languages should disappear in tasks that do not require recall of a sequence of signs. However, recent data from a non-sign repetition task do not support this prediction. We created a sign language equivalent of the non-word repetition task, a task widely used to investigate phonological STM in speakers. Test items were phonotactically plausible but meaningless signs, manipulated for complexity of handshape and movement (Mann et al., 2010). Importantly, serial recall was not involved in this task. At most, a non-sign contained a change from one handshape to another or from one location to another (or both). We tested 91 deaf children aged 3–11 years, divided into three age-bands. All had early and continued regular exposure to BSL, comprehension skills within the normal range (as measured by the BSL Receptive Skills Test; Herman et al., 1999), normal nonverbal cognitive development and no identified special educational need additional to deafness. The task proved to be surprisingly difficult, with low scores across the age groups in comparison to results from non-word repetition studies of hearing, English-speaking children of equivalent ages, including those for whom English is an additional language (Figure 1).

Figure 1. Comparison between performance on the non-sign repetition test and studies of non-word repetition in English. Bars show SD. Non-sign repetition: Mann et al., 2010; non-word repetition: native English speakers; • At age 3–5 years: ...

Generally, signs are of longer average duration than words, so we measured the duration of a selection of 3 and 4 syllable non-words used in a new non-word repetition study (Marshall et al., 2011). Duration for non-words ranged from 0.97 to 1.36 s and non-signs were slightly longer, with a mean duration of 1.31 s for phonologically simple and 1.35 s for phonologically complex non-signs (overall range 1–1.84 s). Is it possible that non-signs are more difficult to repeat because of these differences in temporal duration? After all, hearing children are less successful at repeating longer non-words (for a review, see Gathercole, 2006). For hearing children, STM capacity increases rapidly throughout childhood, and at the age of 3 years it is already equal to about 3 digits (Chi, 1977). Yet, in non-sign and non-word repetition tasks, stimuli are presented singly and not in a span, so there is no new material to block rehearsal, and our stimuli do not exceed the estimated 2 s of available time capacity in STM (Baddeley et al., 1975). Rather than duration, we argue that the way in which phonological material is structured is likely to be a more important limiting factor in repetition.
The “speech supports temporal processing” view has recently been modified by Hall and Bavelier (2010), who argue that the advantage for speech arises from speakers being more likely to rely on the temporal chunking of units and on articulatory rehearsal. Under this more nuanced view, they might predict that the repetition of non-signs would be disadvantaged relative to the repetition of non-words because the latter benefit from the chunking of temporally adjacent units. However, in a direct comparison between serial recall and non-word repetition Archibald and Gathercole (2007) found that there was a role for phonology in explaining individual differences in non-word repetition accuracy above and beyond the impact of serial recall. We suggest therefore that STM differences between signed and spoken languages are not due solely to the advantages offered by temporal chunking, but also to differences in phonological structure. Here we speculate as to what those differences might be. Gozzi et al. (2011) propose a “same store, bigger units” explanation, according to which “signs are ultimately more difficult to retain because they are phonologically heavier than words” (p. 6). They suggest that signed material is “heavier” because even the simplest syllable requires the signer to process information about the four formational parameters of a sign, namely handshape, orientation, movement, and location. In contrast, they argue, a spoken-language syllable can consist of just a single vowel (for example, the spoken forms of the English words “eye” and “oh!”). That may be the case, but in non-words the amount of phonological material to be remembered is considerably larger than just a vowel. However, there are formational constraints on the construction of spoken syllables: from the inventory of sounds that a language allows its words to be built from, only a subset of those sounds can occur in word-initial, middle, and final positions. In contrast, it is not clear that there are equivalent limits on the permutations of handshape, orientation, movement and location within signs. While there are well-formedness constraints on signs in terms of how many parameters can be combined (e.g., one specified handshape or location change) there do not appear to be restrictions, for example, on which handshapes can occur with which locations (Orfanidou et al., 2010). As a result, signers have to be prepared to encounter many possible combinations of each formational parameter while processing novel signs, rather than following predictive routes. Furthermore, with respect to the phonological features that make up phonemes and sign parameters, signed languages have arguably around twice as many features as spoken languages (Sandler, 2008). We speculate that these differences in structural organization between signed and spoken phonology mean that signers, when faced with an unfamiliar sign, have to monitor a larger repertoire of parameter values and parameter combinations. One way of construing “phonological heaviness” is in terms of there being more “degrees of freedom” in the phonological composition of a sign. Having fewer limits on what to expect in terms of the linguistic input's phonological form imposes a greater STM load. Thinking in terms of “degrees of freedom” also makes predictions for non-word repetition in spoken languages. 
Speakers of languages with larger inventories of segments, syllable types, and metrical patterns (and therefore arguably less predictability in terms of how segments are sequenced and where stress falls) might repeat non-words less accurately than speakers of languages with smaller inventories, all other characteristics (e.g., syllable number) being equal. The "degrees of freedom" hypothesis needs fleshing out, but it has promise in helping us to understand why modality differences in STM exist, and why STM deals particularly effectively with speech.
The "speech supports temporal processing" view has recently been modified by Hall and Bavelier (2010), who argue that the advantage for speech arises from speakers being more likely to rely on the temporal chunking of units and on articulatory rehearsal. Under this more nuanced view, they might predict that the repetition of non-signs would be disadvantaged relative to the repetition of non-words because the latter benefit from the chunking of temporally adjacent units. However, in a direct comparison between serial recall and nonword repetition Archibald and Gathercole We speculate that these differences in structural organization between signed and spoken phonology mean that signers, when faced with an unfamiliar sign, have to monitor a larger repertoire of parameter values and parameter combinations. One way of construing "phonological heaviness" is in terms of there being more "degrees of freedom" in the phonological composition of a sign. Having fewer limits on what to expect in terms of the linguistic input's phonological form imposes a greater STM load. Thinking in terms of "degrees of freedom" also makes predictions for non-word repetition in spoken languages. Speakers of languages with larger inventories of segments, syllable types, and metrical patterns (and therefore arguably less predictability in terms of how segments are sequenced and where stress falls) might repeat non-words less accurately than speakers of languages with smaller inventories, all other characteristics (e.g., syllable number) being equal. The "degrees of freedom" hypothesis needs fleshing out, but it has promise in helping us to understand why modality differences in STM exist, and why STM deals particularly effectively with speech. (2007) found that there was a role for phonology in explaining individual differences in non-word repetition accuracy above and beyond the impact of serial recall. We suggest therefore that STM differences between signed and spoken languages are not due solely to the advantages offered by temporal chunking, but also to differences in phonological structure. Here we speculate as to what those differences might be. Gozzi et al. (2011) propose a "same store, bigger units" explanation, according to which "signs are ultimately more difficult to retain because they are phonologically heavier than words" (p. 6). They suggest that signed material is "heavier" because even the simplest syllable requires the signer to process information about the four formational parameters of a sign, namely handshape, orientation, movement, and location. In contrast, they argue, a spoken-language syllable can consist of just a single vowel (for example, the spoken forms of the English words "eye" and "oh!"). That may be the case, but in non-words the amount of phonological material to be remembered is considerably larger than just a vowel. However, there are formational constraints on the construction of spoken syllables: from the inventory of sounds that a language allows its words to be built from, only a subset of those sounds can occur in word-initial, middle, and final positions. In contrast, it is not clear that there are equivalent limits on the permutations of handshape, orientation, movement and location within signs. While there are wellformedness constraints on signs in terms of how many parameters can be combined (e.g., one specified handshape or location change) there do not appear to be restrictions, for example, on which handshapes can occur with which locations (Orfanidou et al., 2010). 
As a result, signers have to be prepared to encounter many possible combinations of each formational parameter while processing novel signs, rather than following predictive routes. Furthermore, with respect to the phonological features that make up phonemes and sign parameters, signed languages have arguably around twice as many features as spoken languages (Sandler, 2008).
3,054.8
2011-05-18T00:00:00.000
[ "Linguistics" ]
Aging of Concrete Structures and Infrastructures: Causes, Consequences, and Cures (C3) This special issue addresses the causes, consequences, and cures (C3) of deteriorating concrete structures, incorporating various uncertainties. Aging of any concrete structure is a natural process, but it has become an urgent and critical problem in recent years, during which long-operating dams and nuclear power plants have begun to lose their reliable service life. A large number of infrastructures all over the world are over 50 years old and suffer from extensive deterioration that affects their serviceability. The high costs associated with preserving aging structures, along with the limited funds allocated for their maintenance, pose significant technical and financial challenges, which require systematic approaches for risk-informed condition assessment. In the USA alone, the American Society of Civil Engineers (ASCE) estimates a required investment of about 3.6 trillion dollars by 2020 to improve the condition of infrastructures to an acceptable level. This is more than twice the anticipated available funding level. Aging usually begins to appear in individual elements of the structures, leading to nonuniform or heterogeneous behavior. The most well-known and widespread sign of structural aging is the weakening of concrete mechanical properties. Figure 1 schematically presents the performance assessment of concrete structures and infrastructures over their lifetime, highlighting the aging effects. Besides typical uncertainties such as uncertainty in environmental loads, several other sources of uncertainty exist, including unknown initial and boundary conditions, unknown damage history of the structure, uncertainties in current laboratory test methods, and, finally, considerable uncertainty in the available predictive models. In order to develop a comprehensive performance assessment methodology, the past behavior (through diagnosis) should be combined with current observations and tests, and all should be used to predict the future life of the structure. Typically, these hybrid uncertainties increase with the lifetime of the system. Aging and deterioration is a factor that accelerates/intensifies the uncertain response of the structures to imposed environmental actions. Aging and deterioration of concrete structures and infrastructures can be incorporated in both the design and analysis phases. In the design of new structures, factors such as creep and shrinkage, temperature gradients, sustainability and life cycle cost, and resiliency of the system should be considered. In the analysis of existing structures, material uncertainty and the current damage pattern are the key parameters. The articles presented in this special issue are focused on the state-of-the-art techniques, methods, and applications employed in aging, deterioration, and damage analysis and assessment in concrete structures and infrastructures. Overall, 17 submissions were received by the editorial team, and 8 manuscripts have been accepted for publication. Freeze-thaw cycling conditions are a primary cause of durability deterioration of concrete structures in regions with extreme temperature variations. In the paper by Yang et al., "Equation for the Degradation of Uniaxial Compression Stress of Concrete due to Freeze-Thaw Damage," the authors conducted a series of experiments on concrete specimens and determined the freeze-thaw-based damage variable. Subsequently, they proposed an equation for the stress-strain constitutive relation including the freeze-thaw damage variable.
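The specific form of the equation proposed by Yang et al. is not reproduced here; as an illustrative sketch only (the symbols below are assumptions, not the authors' notation), damage-mechanics formulations of this kind typically scale the undamaged stress-strain law by a damage variable that grows with the number of freeze-thaw cycles:

```latex
% Illustrative damage-mechanics sketch (assumed form, not the authors' actual equation).
% \sigma: uniaxial compressive stress, \varepsilon: strain,
% E_0: initial (undamaged) elastic modulus, E_n: modulus measured after n freeze-thaw cycles,
% D_n: freeze-thaw damage variable after n cycles.
\[
  \sigma = \left(1 - D_n\right) E_0 \,\varepsilon ,
  \qquad
  D_n = 1 - \frac{E_n}{E_0}, \qquad 0 \le D_n \le 1 .
\]
% D_n increases as cycling degrades the measured modulus, which is consistent with the
% elastic-modulus changes the authors report.
```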
They pointed out the observed changes in the elastic modulus with increasing freeze-thaw cycle number. Concrete paving stones are fabricated by mixing cement, aggregate, water, and additives in certain ratios. They are widely used in urban roads, pavements, and recreation areas. Therefore, increasing their durability and strength is a key factor in sustainable design. In the paper by Bakis, "Increasing the Durability and Freeze-Thaw Strength of Concrete Paving Stones Produced from Ahlat Stone Powder and Marble Powder by Special Curing Method," the author proposed a method to use construction waste materials, i.e., Ahlat stone powder and marble powder, in fabricating interlocked paving stones. Both the durability and the freeze-thaw strength of the produced material were increased by the special curing method. Sulfate attack on cement is one of the causes of degradation of concrete durability and consequently reduces the service life of concrete structures. In the paper by Liu et al., "Research on Sulfate Attack Mechanism of Cement Concrete Based on Chemical Thermodynamics," the authors characterized the relationship between temperature and the Gibbs free energy of erosion products generated during sulfate attack on cement. The proposed model, which was based on principles of chemical thermodynamics, determined the phase composition, microstructure, crystal form, and morphology of erosion products before and after sulfate attack. They pointed out that sulfate attack has dual effects on the mechanical properties of specimens. Accurate performance evaluation of expressway pavement is a vital factor in determining the pavement design scheme and the future maintenance program. In the paper by Yang et al., "Highway Performance Evaluation Index in Semiarid Climate Region Based on Fuzzy Mathematics," the authors proposed a method based on fuzzy mathematics in order to evaluate the performance of a case study pavement. They incorporated multiple sources of fuzziness and randomness in their calculations. The performance grade is quantified by an iterative scheme and contrasted with traditional methods. Although long-span concrete girder bridges (including continuous rigid-frame bridges) have been widely used in construction, they suffer from excessive long-term deflection. In the paper by Niu and Tang, "Effect of Shear Creep on Long-Term Deformation Analysis of Long-Span Concrete Girder Bridge," the authors developed a systematic framework for long-term creep calculation of girder bridges using a commercial finite element package. Based on linear creep and the superposition principle, the proposed method can consider both shear creep and the effect of segmental, multi-age concrete. For a case study bridge, they reported that shear creep contributes more than 10% of the long-term deformation. However, the impact of shear creep is similar for bridges with different degrees of prestressing. Post-tensioning by monostrands in substitute cable ducts is a highly efficient method for strengthening existing bridges in order to increase their load-bearing capacities under current traffic loads and to extend their service life. In the paper by Svoboda et al., "Strengthening and Rehabilitation of U-Shaped RC Bridges Using Substitute Cable Ducts," the authors described the strengthening and rehabilitation of bridges more than 100 years old. The advantages of the proposed method are minimal intervention into the structure, a concealed cable arrangement, and no impact on the bridge's appearance. 
Adjacent precast concrete box-beam bridges have been a popular solution for small- and medium-span bridges worldwide. Although nonlinear FEA provides an accurate redundancy assessment of box-beam segments, its application is not always feasible for practitioners. In the paper by Leng et al., "Structural Redundancy Assessment of Adjacent Precast Concrete Box-Beam Bridges in Service," the authors proposed a simplified approach based on linear FEA coupled with field load testing to address the particular structural features and topology of adjacent precast concrete box-beam bridges. This method reduces the computational complexity and improves reliability. Multi-hazard resilience and sustainability of structures decrease with the aging of their components. In the paper by Hayashi et al., "Fundamental Investigation on Seismic Retrofitting Method of Aging Concrete Structural Wall using Carbon Fiber Sheet-Constitutive Law of Rectangular Section," the authors developed a method for seismic strengthening of aging RC buildings by wrapping the structural members with carbon fiber sheets. According to a series of monotonic uniaxial compression tests, they found that the compressive strength decreases and the ultimate strain increases as the ratio of the long to short side of the rectangular cross section increases. They also proposed evaluation formulas for the constitutive law of concrete elements with rectangular cross sections. We hope that this special issue will shed light on the recent advances and developments in the area of aging concrete structures and infrastructures and attract the attention of the scientific community to pursue further research and studies on the causes, consequences, and cures (C3) of aging and deteriorating concrete components at various scales (micro, meso, and macro). Conflicts of Interest The editors declare that they have no conflicts of interest regarding the publication of this Special Issue.
1,806
2020-05-07T00:00:00.000
[ "Engineering" ]
Researcher perspectives on challenges and opportunities in conservation physiology revealed from an online survey We used an online survey of researchers to determine the challenges that the field of conservation physiology currently faces. We found that many participants cited communication difficulties, lack of funding, logistical constraints and a lack of physiological baseline data as barriers. We provide recommendations for overcoming some of these challenges. Introduction Conservation science inherently involves combining various disciplines (e.g. conservation genetics, conservation behaviour, conservation social science; see Kareiva and Marvier, 2012) to solve complex problems, which is both laudable and necessary (Lubchenco, 1998;Dick et al., 2016). The field of conservation physiology seeks to apply physiological tools, techniques and knowledge to identify and solve conservation challenges (Cooke et al., 2013). It is relatively new, having only been named and described as a discipline with cohesive goals ∼15 years ago (Wikelski and Cooke, 2006). It is clear that the field is growing: it now boasts a dedicated journal and textbook (Madliger et al., 2021), is increasingly represented at international scientific conferences and includes many early-career researchers identifying conservation physiology as the main focus of their research programmes. Nonetheless, given the nascent nature of the field, the fact that it merges two often-disparate subdisciplines, and its mission-oriented goal of contributing to on-the-ground conservation action, it could inherently face a number of challenges (Cooke and O'Connor, 2010). For the similarly interdisciplinary field of conservation behaviour, Caro and Sherman (2013) outlined 18 reasons why animal behaviourists may avoid working in the realm of conservation science. These barriers included a lack of targeted funding, differences in scale of study (e.g. individuals versus populations), lack of expertise and a perception that conservation science is less intellectually stimulating. We anticipate that physiologists may cite similar reasons as to why incorporating conservation applications as a component of their research goals is challenging and that conservation scientists may be hesitant to employ physiological techniques due to a lack of baseline physiological data (i.e. reference ranges of a physiological metric's 'normal' or pre-environmental change levels) and the invasive nature of some physiological techniques (Cooke and O'Connor, 2010;Lennox and Cooke, 2014;Madliger and Love, 2015). With the conservation physiology toolbox rapidly expanding in terms of the number of tools available and their validation (Madliger et al., 2018), it is a worthwhile time to ascertain the challenges in the discipline that could be hindering growth to ensure that this toolkit can be applied as extensively as possible to promote conservation gains. We are unaware of any attempt to survey scientists across the globe about their experiences navigating the field of conservation physiology, despite there being immense potential to gain information that is not readily shared in publications, focus groups or other forums. Identifying the specific challenges that researchers are facing in the field could indicate where misconceptions lie, provide information on which validations need to be performed to better apply physiology to conservation endeavours and provide starting points for improvement in communication (Cooke et al., 2020). 
Explicitly articulating barriers can also represent a way to share frustrations; knowing others are experiencing similar challenges can strengthen the feeling of community among conservation physiologists and provide a rallying point to share strategies and approaches (McMillan and Chavis, 1986;Michaut, 2011). Overall, identifying barriers and opportunities allows researchers and practitioners to prioritize challenges and view them in a more objective way for problem-solving purposes, further helping to create a community of practice (McMillan and Chavis, 1986). Additionally, exploring the perspectives of those working across an entire discipline (i.e. physiology) rather than simply with a specific research tool (i.e. biotelemetry; as per Young et al., 2018) has the potential to identify what researchers could do to make meaningful advances in conservation practice and policy. To begin better characterizing the field of conservation physiology and identifying where challenges exist, we surveyed scientists (using an online survey) with experience at the intersection of physiology and conservation science to identify the following: (i) the extent to which researchers engage in conservation physiology work and their demographic composition; (ii) the barriers researchers have experienced when integrating physiology and conservation science and the level of difficulty they have faced in overcoming them; (iii) whether participants believe conservation physiology is accomplishing its primary goals; and (iv) whether their own work linking physiology and conservation has led to on-the-ground conservation success. We conclude with recommendations for addressing the challenges the survey unveiled that also arose from ideas shared by survey respondents. We recognize that there would also be value in conducting a similar survey with conservation practitioners, but that was beyond the scope of the current study. In addition, we note that a future study that enables hypothesis testing using quantitative tools would be desirable; however, this introductory survey to highlight possible barriers and opportunities was not designed to do so. Instead, we consider this to be a relatively modest, exploratory study that can be used to identify hypotheses worthy of formal testing in a more extensive follow-up endeavour. As such, we consider this a 'perspective' article in that we are synthesizing and sharing the perspectives of the members of the conservation physiology research community. Participant pool To form a pool of potential participants with experience working in the realm of conservation physiology, we performed a search in Web of Science (Core Collection) to identify research articles that combined physiology and conservation tools and approaches. We completed 4 separate searches on 1 December 2016 with the goal of identifying papers published in the following: (i) conservation journals that used physiological approaches; (ii) physiology journals that considered conservation implications; (iii) general ecology journals that combined physiological and conservation science approaches; and (iv) any scientific journal that used the term 'conservation physiology'. The search strings we used for each scenario can be viewed in the Supplementary Information (Part 1). We retained all 3287 results from search (i), the first 3000 results from search (ii) (sorted on relevance), 3000 results from search (iii) and all 134 results from search (iv). 
We chose the number of results to retain to balance the contribution of papers across each search type and by examining the search results to choose a cut-off when results were no longer relevant (i.e. results were not fulfilling the above search criteria). Through the metadata stored in the Web of Science results, we extracted the email addresses of the corresponding authors on all 9421 publications, published between 1997 and 2016. After deleting duplicate emails, we reached a final potential participant email list of 7080. Survey instrument We conducted an anonymous, international online survey (Supplementary Information, Part 2) of scientists, which was approved by the University of Windsor's Research Ethics Board (#16-193) with adjunct clearance from the Carleton University Research Ethics Board-B (CUREB-B). The findings we present here are reported in aggregate, although we use quotes from open-ended questions to provide context. No self-identifying information was collected from participants. The survey was available from 5 December 2016 to 17 January 2017 and was administered via FluidSurveys. Participants were sent an email on 5 December 2016 inviting them to participate in the survey, with reminders sent on 19 December 2016 and 9 January 2017. Of 707 individuals that opened the survey, 180 were filtered outside of the desired sample due to a response of 'no' to an initial question: 'Have you ever participated in research or other work that combines physiology and conservation?' Of the remaining participants, 468 completed the survey and submitted their responses. As a result, the overall response rate was 9%, which is similar to other targeted e-mail-based surveys (e.g. Cooke et al., 2016;Sappleton and Lourenco, 2016), which notoriously have lower response rates than mail surveys (Coderre et al., 2004). We cannot exclude the possibility that some spam filters categorized our survey invitations as 'junk mail', or that there was survey fatigue within the scientific community. We did not track the country of origin or any demographic parameters for individuals on our initial recruitment list, so it is not possible to determine if there was any geographic or demographic bias in the respondents. As stated above, participants were drawn from a pool of research publications that spanned 1997-2016, and we acknowledge that the period when researchers worked at the interface between physiology and conservation could influence their conceptualization of the field; however, we are unable to ascertain whether this bias existed in our data. The survey was only administered in English so we must assume that it is biassed towards researchers with a command of English. In addition, because we generated our survey panel by using published authors, our sample is inherently biassed towards those scientists who publish their work in journals. The survey consisted of 27 questions covering demographics, perceptions of barriers in conservation physiology, perceptions of the success of conservation physiology and research dissemination venues and framing. The barriers we included in the survey were identified in the existing literature (Cooke and O'Connor, 2010;Caro and Sherman, 2013;Cooke et al., 2013;Coristine et al., 2014), and respondents were provided with the opportunity to add other barriers. We used a mix of Likert-style, yes/no, multiple choice and open-ended questions. 
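As a minimal sketch of the recruitment-list construction described above (the export file names and the 'email' column name are assumptions for illustration and are not taken from the study), the deduplication step could look like this:

```python
import csv

# Hypothetical CSV exports of the four Web of Science searches (file names assumed).
exports = ["search_i.csv", "search_ii.csv", "search_iii.csv", "search_iv.csv"]

emails = set()
for path in exports:
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            address = row.get("email", "").strip().lower()
            if address:              # skip records with no corresponding-author email
                emails.add(address)  # the set drops duplicate addresses across searches

print(f"{len(emails)} unique potential participants")  # the study reports 7080 after deduplication
```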
The number of participants answering each question varied, and we therefore provide sample sizes for each question separately with the results. Given the breadth and number of questions we posed, we only present the data corresponding to a sub-set of the questions here. Specifically, we omitted questions asking participants about their research dissemination activities, journal choices and framing, and questions asking respondents to describe conservation physiology techniques they feel are well-validated versus those requiring more validation for application (Supplementary Information, Part 2, questions 21 and 24–25), as they did not pertain to the purpose of the current manuscript (i.e. identifying the major barriers and perception of success of conservation physiology). An overview of the questions we assessed for this manuscript partitioned by topic, along with response rates, can be found in Supplementary Table 1. Open-ended questions were manually coded thematically by the lead author to provide context to the patterns in the data. Thematic codes were determined inductively after reading all of the responses and assigned during a second reading (Thomas, 2006). We also use the open-ended question responses as a source of quotes below to better articulate some of the underlying viewpoints that the survey uncovered. Who is engaging in conservation physiology research? As stated above, any individual that had experience at the intersection of physiology and conservation was permitted to complete the survey, meaning that we would obtain a range of perspectives spanning those that have only worked briefly on conservation physiology research to those whose major research focus is centred on the discipline. Our first goal was therefore to characterize the general composition of the research community contributing to the field of conservation physiology by determining where individuals work, their career stage and their taxa of study, as well as whether they consider conservation physiology to be a major disciplinary focus of their work. (n = 48) in a multi-sector capacity (mostly joint between academics and government), 9.4% (n = 44) at a governmental agency, with the remaining individuals (n = 32) employed in private sectors, research institutes or currently unemployed (Supplementary Table 2). Not surprisingly given the methodology we used to locate the participant pool, 44.3% (n = 207) of respondents were research faculty, with other major percentages being represented by graduate students/postdoctoral fellows (17.6%; n = 82), governmental scientists (13.7%; n = 64), educators/lecturers (9.6%; n = 45) and nongovernmental scientists (9.0%; n = 42). In addition, most respondents were male (62.5%; n = 290). Full demographic data can be found in the Supplementary Information (Table 1). While our survey did not ask participants to provide information on the geographic regions in which they have lived, studied or worked, our sample included respondents located around the globe including North America (45%), South America (3%), Australia and New Zealand (9%), Africa (2%), eastern and western Europe (28%) and Asia (4%) (based on IP addresses, with 9% unidentified). Respondents work on a diversity of taxa (Supplementary Figure 1), with fish (excluding elasmobranchs) comprising the research foci of 18% (n = 127) of participants, plants 16% (n = 109), mammals 15% (n = 106), invertebrates 14% (n = 98), birds 12% (n = 80), reptiles 9% (n = 64), amphibians 7% (n = 45) and algae 3% (n = 19). 
The remaining respondents (n = 47) focus on elasmobranchs (2%), bacteria (2%), fungi (1%), lichens (0.1%) or take a non-taxonomic or full ecosystem approach to their work (1%). It should be noted that some respondents work on more than one taxonomic group and percentages are calculated over total responses. We find it interesting that there was such a large constituent of respondents identifying plants and invertebrates as their taxa of focus. Generally, these taxonomic groupings tend to be under-represented in much of the conservation physiology literature (Lennox and Cooke, 2014;van Kleunen, 2014;Madliger et al., 2018). Overall, one-third of the respondents (35%; n = 148) considered themselves to be a 'conservation physiologist', while the remaining two-thirds (65%; n = 275) did not (based on the question 'Do you consider yourself a Conservation Physiologist?'). We did not see a difference in age, gender or job title/position between participants that identified as conservation physiologists versus those that did not (Supplementary Table 3), indicating that these demographic characteristics do not likely dictate entry into the field. Overall, within each taxonomic focus, <50% of researchers surveyed self-identify as conservation physiologists, with those studying fish constituting the largest absolute number of conservation physiologists. Indeed, some of the best-documented, earliest success stories in conservation physiology have been related to the management of native fishes, such as Pacific salmon (see Cooke et al., 2021 for a summary). Research on species of fisheries importance is often able to link more easily with conservation applications because of the long-standing policy channels accessible to fishery biologists. Both of these reasons could have led to a greater influx of individuals formulating research foci or entire research programmes on the conservation physiology of fishes. When asked to list disciplines that their research falls under, respondents provided a diversity of fields ranging from behavioural ecology to botany to evolutionary physiology to restoration ecology (Supplementary Table 4). The field of conservation physiology being relatively new in name could mean it has lacked overall exposure. For example, one respondent who self-identified as a seagrass ecologist and ecophysiologist stated: Until being asked to participate, I was unaware that there was a field of conservation physiology. I think that my work broadly fits into this category but I had never heard it phrased in this manner. [Governmental scientist, USA] Similarly, another participant indicated: I would be really curious to know how many of us are out there. I am the only conservation physiologist I know. [Nongovernmental scientist, USA] There are also likely conservation scientists that have only briefly employed physiological approaches but have not incorporated physiology into their ongoing research programs. Indeed, 36% of respondents never (n = 19), rarely (n = 56) or only sometimes (n = 93) incorporate physiological techniques into their current research program (Figure 1A). Likewise, some physiologists may have collaborated on a conservation endeavour, but do not do so regularly given that 30% of respondents never (n = 2), rarely (n = 21) or only sometimes (n = 112) take an applied conservation approach in their work (Figure 1B). 
Participants who considered themselves a 'conservation physiologist' more often take a physiological approach to their work, and more often consider the applied implications of their work, compared to nonconservation physiologists (Supplementary Figure 2). This is not entirely surprising, as we expect conservation physiologists to be merging the two disciplines on a regular basis. While we do not believe it is necessary to self-identify as a conservation physiologist to accomplish fruitful integrations between conservation science and physiology, we believe it is useful to encourage the formation of a community of scientists that can share ideas and establish an evidence base. It is therefore possible that the field of conservation physiology is missing out on collaborations and perspectives that could encourage growth, especially if some researchers feel isolated. We provide recommendations for increasing the visibility of the discipline in our concluding section. How are individuals entering the field? We aimed to determine whether educational or training experience could influence the likelihood an individual would become a conservation physiologist. We found no differences between conservation physiologists and non-conservation physiologists in regard to formal training experiences (i.e. coursework, laboratory techniques and fieldwork) in either conservation or physiology. Of respondents self-identifying as conservation physiologists, 99% (n = 146) had formal training in physiology and 87% (n = 129) had training in conservation science. With regard to non-conservation physiologists, 93% (n = 255) had formal training in physiology, while 82% (n = 226) had training in conservation science. We acknowledge that receiving training at the university level in a classroom setting can be very different than hands-on training. We therefore asked respondents to further indicate the type of training they received. A total of 73% (n = 106) of conservation physiologists identified laboratory and/or field work as part of their physiological training compared to a similar 69% (n = 175) of non-conservation physiologists. For conservation training, 71% (n = 92) of conservation physiologists indicated that they received hands-on laboratory or field training in comparison to 64% (n = 145) of non-conservation physiologists. Overall, these comparable proportions indicate that exposure to formal training is not likely to dictate entry into the field. However, hands-on training in conservation science may slightly increase chances of students pursuing futures in conservation physiology. It is logical that exposure to concepts in conservation physiology specifically (i.e. course sections or entire courses dedicated to conservation physiology) may influence future interest in the discipline, but this remains to be causally explored. Is the field of conservation physiology perceived as successful? In 2013, leaders in the field refined the definition of conservation physiology and outlined its eight primary goals (Cooke et al., 2013; Figure 2). The two goals that were viewed by survey respondents as most often accomplished are 'identifying the sources and consequences of different stressors' and 'predicting how organisms will respond to environmental change' (Figure 3), with 86% (n = 200) and 81% (n = 185) of respondents, respectively, indicating that the goal is accomplished 'often' or 'sometimes'. 
In contrast, the goals that respondents felt were least often accomplished were 'evaluating and improving the success of conservation interventions', 'informing the selection between various conservation actions' and 'understanding reproductive physiology to inform ex situ conservation activities' (Figure 2). These three goals have the strongest ties to on-the-ground conservation efforts, indicating that respondents may view conservation physiology as needing to make more progress in achieving its ultimate goal of solving conservation problems (Cooke et al., 2013). Only a little over 1% (n = 6) of total respondents (n = 431) indicated that conservation physiology 'very often' leads to conservation success (defined as 'a change in human behaviour, management or policy'). It was much more common for respondents to feel that conservation physiology sometimes (47%; n = 203) or rarely (43%; n = 186) leads to success, and a small proportion of respondents (2%, n = 7) felt that conservation physiology has never led to success. Respondents who indicated that they believe conservation physiology rarely or never leads to conservation success were prompted to articulate their reasons. The responses varied, but a number of common reasons emerged (Figure 3). For example, 14% (n = 22) of respondents believed the field is too new or requires more validation before it can lead to measurable conservation success. One respondent stated: I think the fields of conservation and physiology are not well integrated, e.g. we hardly ever see a physiologist at the table of our conservation workshops in which we set management priorities for the conservation of specific species. A similar proportion of participants (14%; n = 21) believed that the gap between science and policy/management precludes findings in conservation physiology from being translated into conservation solutions. Most of these respondents indicated that the primary literature is not translated into on-the-ground action or used as a decision-making tool by agencies, or that professionals involved in environmental policy have little knowledge of physiological work or the necessary training to interpret it. Other reasons included a general difficulty in influencing human behaviour with science (9%; n = 13), lack of communication between conservation physiologists and practitioners (8%; n = 12), lack of awareness of the field of conservation physiology as a potential contributor (7%; n = 11) and the opinion that managers focus on other methods or timeframes apart from physiology (7%; n = 10) (Figure 3; Supplementary Information). These responses suggest that the merits of understanding mechanism for conservation (Seebacher and Franklin, 2012), and the fact that physiological traits can be linked to the demographic processes that drive population change over longer time periods (Bergman et al., 2019), may be unclear. We believe that conservation physiologists have the willingness and power to make stronger connections and promote the value of their evidence-based science. Despite many cogent arguments available in the scientific literature on the value of using mechanistic physiological measures for determining cause-effect relationships (Carey, 2005;Pörtner and Peck, 2010;Ellis et al., 2011;Blaustein et al., 2012;Seebacher and Franklin, 2012), we believe explicit success stories (e.g. Tracy et al., 2006;Cooke et al., 2012;Donaldson et al., 2013;Madliger et al., 2016;Madliger et al., 2021) will speak more loudly than theoretical arguments. 
Interestingly, participants were comparatively more confident that their own work will result in conservation success, with 41% (n = 177) stating that their research is in the process of contributing to conservation success and 19% (n = 83) indicating that their work has already done so. It is important to note that self-reporting of successes is inherently subject to bias with potential for level of success to be inflated (van de Mortel, 2008). Of the 33% (n = 140) of respondents that indicated their work has not led to conservation success, the reasons varied (thematized open-ended question). The majority of respondents (40%; n = 56) did not provide a reason or stated that they were unsure. Approximately 17% (n = 24) indicated that their results did not generate actionable data for conservation science (i.e. the work was too theoretical, the research was only a small part of a larger project or the work was completed on a small scale). A similar number of respondents (15%; n = 21) expressed that their work was discouraged or ignored in some way by decision-makers, citing that policy is often more interested in economic interests, that policy-makers often support research that is already in line with their goals or that it would take overwhelming evidence to change existing policy. Other reasons respondents felt their work had not been translated into success included limited time (9%; n = 13), lack of connection to conservation practitioners (9%; n = 12), a feeling that physiology is not yet being accepted by conservation science (5%; n = 7), that they did not try (2%, n = 4) or that their work showed there was no conservation issue (2%, n = 3). Given that many participants indicated that their work is in the process of contributing to on-the-ground success, we anticipate that there could be many new opportunities to highlight the benefits of physiological approaches to conservation science in the near future. We provide further recommendations for increasing the reach and success of conservation physiology below. What are the challenges that prevent conservation physiology from achieving conservation success? We queried participants about the barriers they perceived to be negatively impacting the growth of the field. Nearly 50% of participants believe researchers 'often' face the challenges of lack of physiological baseline data (n = 222), lack of funding (n = 224), lack of expertise of conservation scientists with physiological tools (n = 212) and lack of communication between scientists and practitioners (n = 205) (Figure 4). In contrast, physiological techniques being too invasive (n = 61), lack of success stories (n = 98) and a lack of interest among physiologists to work on applied questions (n = 74) were perceived to be less-common problems, with fewer than 20% of participants citing them as a barrier that is 'often' faced ( Figure 4). A key pattern, however, is that all of the barriers presented were perceived as relatively persistent, in that over half of participants indicated they were 'often' or 'sometimes' occurring ( Figure 4). When we compare patterns in perceived barriers to realized barriers (i.e. the frequency with which participants personally faced the same barriers), the trends are similar (Figure 4). Again, lack of funding (n = 223) and lack of physiological baseline data (n = 202) are still experienced 'often' by nearly 50% of respondents. 
However, lack of communication among scientists and practitioners and a lack of expertise of conservation scientists with physiological tools are not faced as often as they are perceived to be. Indeed, there is a greater proportion of individuals who 'rarely' or 'never' personally face these barriers compared to a participant's perception of what the field is generally experiencing (Figure 4). This provides some optimism, in that the actual challenges that must be overcome may be less common than imagined and we likely need greater communication even among conservation physiology researchers on where to place future effort for progressing the field. A barrier may be frequent, but if easily overcome, it may not amount to a great impediment. As a result, we asked participants to indicate their level of difficulty in overcoming each barrier (Likert-scale) and found that, apart from lack of funding, very few respondents (<15%) viewed any barrier as 'very difficult' to surmount (Figure 5). However, many barriers were still considered 'difficult' to address, with lack of funding, logistical constraints (e.g. sample size, permits), lag time between acquiring physiological data and applying it to conservation and lack of baseline physiological data representing the most arduous barriers (Figure 5). In contrast, other limitations are viewed as more surmountable, being rated by many (>55%) as 'easy' or 'very easy' to tackle. Many researchers believed that barriers associated with knowledge or awareness can be reversed, and we advocate that this can be achieved through education or increased communication among physiologists and conservation scientists (see below). Finally, we gauged which barrier individuals have found to be the most difficult to face personally and the reason why (thematized open-ended responses). Over 26% (n = 107) of responses were focused on funding (Figure 6). For example, one respondent stated: There seems to be significantly more funding for physiological research in an evolutionary context and/or research directly applicable to human health than funding available for physiology with intentional conservation applications. [Educator/lecturer, USA] A number of respondents also indicated that they feel pressure to downplay the applied aspects of their research when applying for large grants. One participant stated: Conservation needs to be hidden within a question that large funding bodies find more relevant. Others mentioned that funding for conservation-focused research often necessitates working on an imperilled species, but that applying physiological tools in such species is seen as too invasive, or it is impossible to attain the necessary sample sizes for biological/statistical relevance. This speaks to the fact that some challenges are intertwined; funding (27%; n = 107), logistical constraints (11%; n = 46) and invasiveness of techniques (7%; n = 27) were often mentioned in combination. Lack of physiological baseline data was identified as the most difficult barrier to overcome by 11% (n = 42) of participants (Figure 6). Again, there was some inter-connectedness identified between barriers as a number of participants indicated that funding to collect this type of data is hard to attain. One participant stated: ...very often in conservation the need for baseline data isn't realized until some acute problem expresses itself, at which point it is too late to collect baseline data. 
[Governmental scientist, USA] And still others expressed concern that baseline data often must come from proxies. For example, one respondent indicated: [Physiological baseline data] comes predominantly from more lab-friendly model species, which tend not to be those of conservation interest, and are often so taxonomically different that extrapolation of the baseline data is speculative... This makes it very difficult to interpret the physiological data from the species of interest sufficiently robustly that it can be confidently applied to conservation actions. [Graduate student/post-doctoral fellow, Finland] Approximately 8% (n = 30) of respondents identified a barrier that was not provided in the survey ('Other' in Figure 6). Half of these pointed to lack of interest in conservation physiology among conservation scientists as the most difficult barrier they have faced. Participants cited a number of reasons that appear to account for the lack of interest, including that conservation scientists do not believe physiological tools are useful, have unreasonable expectations of sample sizes or that the tools appear confusing or expensive (in some cases because physiologists have trouble showing or explaining how their tools can be useful). Related to this, another barrier that came up repeatedly (n = 7) was that conservation scientists and physiologists lack an understanding of one another's disciplines, having different underlying priorities, concerns, histories and viewpoints and, occasionally, a lack of respect for one another's disciplines. In many ways, this is symptomatic of working in interdisciplinary fields. Interdisciplinarity, especially in the conservation sciences, is absolutely essential (Dick et al., 2016), yet there are many challenges to doing so (Rhoten and Parker, 2004). However, there are a number of proactive strategies for overcoming the barriers that were reinforced by some of the survey respondents and also previously discussed in a reflective article on conservation physiology in practice as it relates to Pacific salmon (Cooke et al., 2012). Twelve suggested actions for overcoming barriers in conservation physiology We conceptualized the following 12 actions based on our own experiences, by considering the barriers identified above and through feedback we received from survey respondents regarding what they feel is needed to inspire an up-coming generation of biologists to consider becoming conservation physiologists (Figure 7, thematized open-ended question; Figure 8). In particular, respondents indicated that success stories, education/training, funding, job prospects and inspiring mentors are most needed to raise the profile of conservation physiology in the minds of students and young professionals (Figure 7). Together, these action items also attempt to address the challenges related to funding acquisition, logistics, communication, lack of knowledge/awareness, baseline data and time lags that many respondents cited as the most difficult barrier they have attempted to overcome (Figure 8). (1) Share success stories and increase the visibility of the discipline: Be vocal across news media, social media, personal websites and blogs, conferences, invited lectures at institutions and government facilities, public outreach events and traditional publications with success stories in conservation physiology. 
Propose symposia and workshops at both physiological/integrative biology and conservation conferences that highlight how physiological approaches have helped to address conservation challenges (Madliger et al., 2017). Conservation physiology special issues can also be proposed to journals with a readership interested in integrative techniques for conservation science. By focusing on 'bright spots' where policy/practice has been successfully influenced by conservation physiology approaches, we can promote an optimistic outlook that can inspire action, promote team collaboration and coordination and support creativity in addressing challenges (Cvitanovic and Hobday, 2018). (2) Create conservation physiology 'hubs': Following from a need to increase the visibility of the discipline, we suggest that researchers begin amalgamating their conservation physiology networks into 'hubs' (sensu Taylor et al., 2017) with shared properties (e.g. taxonomic focus, sub-discipline of physiology, conservation challenge of interest). Having core groups of experts on given topics could increase the accessibility of conservation physiology techniques for managers and practitioners who are keen to begin collaborating. In addition, this type of action could provide opportunities for collaborative grants that span geography, ecosystem type and taxonomy, potentially attracting large-scale funding that could not be obtained by projects or laboratories in isolation. We also see the possibility of such collaborative groups being active on social media to share their conservation physiology work with a broader public. The process of creating such hubs could be well-suited to some granting programs, such as the National Science Foundation's Research Coordination Networks funding, which supports projects that 'advance a field or create new directions in research or education by supporting groups of investigators to communicate and coordinate their research, training and educational activities across disciplinary, organizational, geographic and international boundaries' (National Science Foundation, 2021). As the discipline grows, there may also be the possibility to create a universal 'hub' in the form of an online repository that draws on knowledge and experience across the entirety of the field. Within such a repository, both academic and non-academic users could access a full index of conservation physiology literature, search for research based on a topic, generate and contribute to shared ideas, engage in discussion or pose questions to the community, identify research gaps, locate other researchers with similar interests and find new collaborations with other researchers or practitioners (e.g. Veterans Research Hub: Cooper, 2016). (3) Encourage education and training opportunities: Exposure to the diversity of unanswered questions in conservation physiology can inspire curiosity and passion. Those teaching courses in conservation science and wildlife management have the ability to expose students to physiological approaches, just as those teaching animal physiology classes have the capacity to expose students to the connections that physiology can make to conservation science. In particular, exposing students to conservation physiology early in their undergraduate studies could stimulate more students to choose courses in both topics moving forward, gaining expertise that will allow them to combine tools and theory more effectively in professional settings. 
As the field continues to grow, we see the opportunity for upper-year undergraduate or graduate courses dedicated entirely to the topic of conservation physiology (Madliger et al., 2021). (4) Seek out new collaborations: Physiologists seeking to apply their tools more directly to conservation initiatives could begin by looking in their own backyard, contacting (i) fellow faculty members working in conservation science to brainstorm collaborative opportunities; (ii) local conservation authorities (near their institution or field sites) to understand their mandate, as well as the specific challenges they are working on addressing; and (iii) local government agencies that focus on wildlife management to inquire about the opportunity to present their work and the value of their tools. (5) Maintain existing partnerships: While the process of building and maintaining trusting relationships may require a non-trivial time investment, it is well-established that it is a critical component for meaningful information and knowledge exchange (Jacobs et al., 2005;Gibbons et al., 2008;Roux et al., 2006;Young et al., 2014). For example, workshops, field trips, secondments, fellowships and sabbaticals can represent opportunities to focus on bridging the gap between physiology and conservation science and maintaining connections (Gibbons et al., 2008). This continued contact will act to build trust and respect between all parties, which is considered integral to interdisciplinary success (Daily and Ehrlich, 1999;Chapman et al., 2015). If honest, transparent partnerships are maintained, even outside of the timeframe of targeted projects, it is likely that additional opportunities for informing management will develop (Brooks et al., 2018). (6) Acknowledge disciplinary differences and develop a shared language: Disciplinary differences between physiologists and conservation scientists can undoubtedly create barriers in communication. However, it is this diversity of opinion, techniques and history that can spark innovative approaches to addressing conservation challenges. Be open and honest about lack of expertise and use collaborations as an opportunity to grow and find a shared language (Bracken and Oughton, 2006). The act of establishing a shared goal can be a reasonable first step in streamlining communication, and we suggest openly sharing any reservations, such as the level of invasiveness of techniques, timeframes, costs and sample sizes. This type of dialogue is essential to dispelling misconceptions and gaining the trust necessary for long-term interdisciplinary partnerships (Pooley et al., 2014;Dick et al., 2016). (7) Co-create projects: Conservation physiologists need to become involved early-on in targeted projects where their physiological tools can have relevance (i.e. co-create success stories with practitioners and managers) (Brooks et al., 2018;Laubenstein and Rummer, 2021). Indeed, the idea of cocreation of the research agenda and co-production of knowledge are regarded as fundamental to achieving success in applied realms such as conservation science (Colloff et al., 2017). The conservation toolbox is quite vast (Madliger et al., 2018), and it will be through careful planning at the onset of projects that data will become useful for management purposes (Fazey et al., 2013;Clark et al., 2016). 
(8) Design experiments with evidence-based management in mind: The systematic review of accumulated evidence is becoming a growing part of environmental management decisions (Sutherland et al., 2004;Pullin and Knight, 2009;Cooke et al., 2017b). Conservation physiologists can contribute to available evidence bases by ensuring their empirical work is included in systematic reviews (Cooke et al., 2017a). Carefully planning sample sizes, using replicates, acknowledging bias, using controls and reporting descriptive data are all essential to formulating a study that can contribute to evidence synthesis (Cooke et al., 2017a). (9) Be a supportive mentor: Provide mentees with connections to other students and professionals working in the field, opportunities to connect with on-the-ground projects and skill-building workshops. Connect students to your own current and past mentors and other pioneers in the field. Overall, promote diversity and an environment where students of any identity or background feel supported and included (Gould et al., 2018). (10) Promote funding and job opportunities: As a scientific community, we have never been more connected, in large part due to social media. With this comes the opportunity to share successes and challenges that can assist the conservation physiology community in identifying where and how to apply for funding, as well as job postings that are relevant to conservation physiologists. When proposing large, collaborative grants, consider how physiological investigations could simultaneously be valuable in both pure and applied contexts. (11) Communicate risks and seek out minimally invasive and non-invasive alternatives where appropriate: It is possible that many physiological techniques are viewed as more invasive than they truly are in practice, and it will therefore be important for physiologists to take time to communicate the details of methodologies when approaching new collaborations. Just as important, conservation scientists beginning to work with physiologists should communicate the acceptable degree of animal handling and anticipated sample sizes. In many cases, there may be minimally invasive options or physiological measurements that can be taken in conjunction with other data already requiring animals to be handled. (12) Contribute baseline data: Since conservation physiology was formally conceptualized, lack of baseline data has been outlined as an impediment to employing physiology as a conservation monitoring tool (Wikelski and Cooke, 2006). However, physiological data have been accumulating for many species, including information on both inter-and intra-individual variation (e.g. how physiology changes with development and age, reproductive states, seasons), which is essential for accurately interpreting results and providing recommendations based on physiological changes over time or between populations. For example, HormoneBase is a recently launched, freely accessible online database of over 6500 glucocorticoid and androgen measurements taken in adult vertebrates (Vitousek et al., 2018) that continues to grow. We anticipate that the expansion of existing databases or creation of new databases for other physiological traits could become similarly well-populated, and we urge researchers, managers, and wildlife veterinarians to commit their data to these pursuits. 
Conclusion Identifying the challenges faced by researchers integrating conservation science and physiology represents the first step in working collaboratively to find solutions. While some barriers will likely prove more difficult to solve in the short term (e.g. funding), we believe the growth in interdisciplinarity across all facets of conservation science (see Dick et al., 2016) will open more doors for conservation physiology. In particular, the current cohort of graduate students and early career researchers will have much to be optimistic about as the list of success stories and dedicated mentors and educators in conservation physiology continues to grow. The 12 actions for overcoming conservation physiology challenges identified here (Figure 8) are exclusively from the perspective of the 'researcher'. There is also much that could be learned from conducting a survey of practitioners to better understand their perspectives on conservation physiology, not unlike a recent study conducted on conservation genomics (see Kadykalo et al., 2019). Moreover, there would be great value in investigating how geographic location (e.g. where individuals have studied, been employed and/or completed field or laboratory studies) or their level of local or international collaboration has impacted their experience in the field. Since our study was designed at the outset to be exploratory, we have not been able to test explicit hypotheses here. However, this work provides an important foundation for future hypothesis-driven quantitative studies, something we strongly encourage as the next productive step to move this field forward. Supplementary material Supplementary material is available at Conservation Physiology online.
9,542.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Sensor-Fusion Based Navigation for Mobile Robot in Outdoor Environment Autonomous navigation of the vehicles or robots is very challenging and useful task used by many scientists and researchers these days. By keeping this fact in mind, an algorithm for autonomous navigation of mobile robot in outdoor environment is proposed. This navigation track consists of colored border containing obstacles and some unplanned surfaces along with some specific points for GPS (Global Positioning System) alarms. The main goal is to avoid colored border and obstacles. For the vision problem webcam is used. First the colored border is detected by using OpenCV library by following HSI (Hue Saturation Intensity)technique. The Canny edge algorithm is used to find both edges of the border, then for detecting straight lines on both sides of track, Hough transformation is used. Finally, the closest border line is detected and its center point is calculated for which the mobile robot has to steer to avoid it. Second step is to avoid the obstacles, which is done by LRF (Laser Range Finder), first the range of LRF is defined, because not all obstacles have to be avoided, only those obstacles are detected and avoided which are in specified range as defined before, and finally GPS receiver is used to make alarms at some specific points. As a result, a successful navigation of the mobile robot in the outdoor environment is implemented. INTRODUCTION [1]. Some authors adopted vision system and encoder to localize the target approximately by introducing the concept of decision-making space, forward passageway [2]. Some propose a novel-based navigation method in contrast with appearance-based approaches. This algorithm is based on motion estimation by a camera to plan the next moment of a robot and robust feature matching to recognize home and destination locations [3]. Some use mobile robot by control station and control this robot by receiving the image data from camera by sending to the control station [4]. Some use the differential GPS and odometry data by detecting the curbs by laser range finder and building the map [5]. For boundary detection, many vision techniques have been used by using image segmentation algorithm and have shown the experimental results [6]. Tracking the people for mobile robot in motion is discussed in [7]. The kinematics of 4wheeld mobile has been derived by using two 2-wheeled mobile robots [8]. Motion in skid-steering mobile robots is achieved by varying the speed on opposite sides of wheels [10]. For vision some reference use hybrid localization approach which switchs between local, in strong illumination changes and global image features [11]. Some authors have used both the vision and laser data for tracking moving obstacles like humans on outdoor environment depending on the speed of the obstacles [12].A hybrid approach is used to decrease the computational time of type-reduction process [13]. The navigation is mainly dependent on the parameters like perception, localization, cognition and motion control [14]. The WLPS (Wireless Local Positioning System) provides local positioning with sufficient coverage, reliability and accuracy [15]. This paper has two main portions. In the first portion, hardware design of mobile robot is explained, and in second portion, navigation algorithm is proposed. For the hardware, 4-wheeled skid steering mobile robot is used. These 4-wheels are controlled by four BLDC (Brushless DC motor) having four motor drivers [9]. 
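To make the kinematic relation between the commanded platform motion and the four wheel speeds concrete, the following is a minimal sketch, assuming the common skid-steer/differential-drive relation in which the two wheels on each side share one speed; the function name, the notation (v, w, d) and the example values are illustrative and not taken from the paper.

```python
# Minimal sketch of skid-steer wheel-speed computation, assuming the usual
# differential-drive relation v_left = v - w*d/2, v_right = v + w*d/2 applied
# to both wheels on each side. Names and values are illustrative assumptions.

def wheel_speeds(v, w, d):
    """Return (front_left, rear_left, front_right, rear_right) wheel speeds.

    v : commanded linear velocity of the platform [m/s]
    w : commanded angular velocity of the platform [rad/s]
    d : distance between the left and right wheel pairs [m]
    """
    v_left = v - w * d / 2.0   # both wheels on the left side share one speed
    v_right = v + w * d / 2.0  # both wheels on the right side share one speed
    return v_left, v_left, v_right, v_right


if __name__ == "__main__":
    # Example: drive at 0.3 m/s while turning gently to the right.
    print(wheel_speeds(v=0.3, w=-0.2, d=0.5))
```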
It also has GPS receivers which provide one with the latitude and longitude of the mobile robot platform by using WGS-84 The Navigation track was about 150m in length consisting of obstacles, unplanned surface, and speed breakers. Also it has some specific positions for GPS alarms. There was a colored border of the track. The successful navigation was to pass through the navigation track without striking the obstacles, without moving it out of the track and making GPS alarm at some specific positions. Fig. 1 shows the sample navigation track in which we can see the border, obstacles, and some slope paths and Fig. 2 shows the size of obstacle. Finally, by using this algorithm and kinematic design, navigation of the mobile robot in the outdoor environment has been successfully implemented. Kinematics of the 4-Wheeled Mobile Robot The kinematic description of a 4-wheeled mobile robot is shown in Fig. 3. Basically it is obtained by combining two 2-wheeled mobile robots [8]. , and v 4 are the velocities of the four wheels 'v' and 'w' are the linear and angular velocities of the mobile robot platform respectively, 'd' is the distance between two opposite wheels, and  is the angle between the wheel and the perpendicular axis of each 'd'. The kinematic equations of the 4-wheeled mobile robot are shown below: VISION SYSTEM FOR DETECTING COLORED BORDER In vision process OpenCV library is used. The steps in the vision process are shown in the flow chart in Fig. 6. In the vision process, following steps have been done on the original image. Median Filtering In image processing, it is usually necessary to perform high degree of noise reduction in an image before performing higher-level processing steps, such as edge detection. The image filter is a non-linear filtering technique, often used to remove noise from images or other signals. Median filtering is a common step in image filtering. It is particularly useful to reduce noise and salt and pepper noise. Its edge-preserving nature makes it useful in cases where edge blurring is undesirable. Color Extraction in HSV There are two main kinds of color models. RGB, and the other one is HSI. Here, we have used HSI model because it is more tolerant to the intensity of light and also has more accurate color relationship as compared to RGB. Canny Edge Detection The Canny edge detection characterizes boundaries and is therefore a fundamental step in image processing. Hough Transformation The Hough Transformation is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of this technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in a parameter space, from which object candidates are obtained as local maxima in a socalled accumulator space that is explicitly constructed by the algorithm for computing the Hough transform. Vision Results The original picture taken by vision camera is shown in If the center of closest line lies within some specific range, then the mobile robot needs to avoid it otherwise it has to go straight. In Fig. 11 we can see the range of the line. We can see in Fig where v normal =30cm/sec and 'd' is the distance between the mobile robot and obstacle at that specific point. OBSTACLE DETECTION So far, the colored border has been detected by vision data using a vision camera. 
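As an illustration of the vision pipeline described above (median filtering, color extraction, Canny edge detection and the Hough transformation), the following is a minimal OpenCV sketch in Python. The HSV bounds, kernel size and Hough thresholds are placeholder assumptions rather than the values used by the authors, and the "closest line" heuristic (the line whose midpoint lies lowest in the image) is one possible reading of the described step.

```python
import cv2
import numpy as np

def detect_border_center(frame, lower_hsv, upper_hsv):
    """Return the center point (x, y) of the closest border line, or None.

    Follows the steps described in the text: median filtering, color
    extraction in HSV, Canny edge detection and the probabilistic Hough
    transform. All threshold values are illustrative placeholders.
    """
    # 1. Median filtering to suppress salt-and-pepper noise.
    filtered = cv2.medianBlur(frame, 5)

    # 2. Color extraction: keep only pixels inside the border color range.
    hsv = cv2.cvtColor(filtered, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)

    # 3. Canny edge detection on the color mask.
    edges = cv2.Canny(mask, 50, 150)

    # 4. Probabilistic Hough transform to obtain straight line segments.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=10)
    if lines is None:
        return None

    # 5. Pick the line whose midpoint is lowest in the image (closest to a
    #    forward-facing camera) and return that midpoint as the steering cue.
    def midpoint(line):
        x1, y1, x2, y2 = line[0]
        return (x1 + x2) / 2.0, (y1 + y2) / 2.0

    return max((midpoint(l) for l in lines), key=lambda p: p[1])


# Usage sketch (the HSV bounds are assumptions for an orange-ish border):
# frame = cv2.imread("track.jpg")
# center = detect_border_center(frame,
#                               lower_hsv=np.array([5, 100, 100]),
#                               upper_hsv=np.array([25, 255, 255]))
```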
However, the performance of the vision-based technique is very sensitive to the conditions of camera setting such as view point and angle of pixels, and so on. Moreover, the common change of illumination and weather conditions are another major obstacle to the reliability and robustness of vision system so to detect obstacles LRF is used [7]. Fig. 12 shows how the obstacle is detected within the range of LRF. Where r c is the radius of the obstacle, and x p1 , y p1 , , and  can be calculated by the formulas shown below: In the same way, the coordinates of p 2 can be calculated by: x p2 = r 2 cos q 2 (9) y p2 = r 2 sin q 2 (10) The coordinates of the centre point p m can be calculated by: Case-1: We can see the vision of the camera showing the border at position 1 on the track in Fig. 18(a). Here in this case the mobile robot will steer towards right because of detecting the border on the left side by the vision camera. Case-2: We can see the vision of the camera showing the border at position 2 on the track in Fig. 18(b). Here we can see that because of steering towards right side, the length of border is getting reduced within the range of vision camera, but it will keep on steering little towards right side because of the border. Case-3: We can see the vision of camera showing the border at position 3 on the track in Fig. 18(c). Here we can see that because of constantly steering towards right side, the length of border within the range of camera is getting reduced, but still it is inside the range of vision camera and thus it will keep on steering towards right side. Case-4: We can see the vision of camera showing the border at position 4 on the track in Fig. 18(d). Here Right Side There are four cases if the mobile robot is moving on the right side of the track. Case-1: We can see the vision of camera showing the border at position 1 on the track in Fig. 19(a). Here the mobile robot will steer towards left because of detecting the border on the right side by vision camera. Case-2: We can see the vision of camera showing the border at position 2 on the track in Fig. 19(b). Here, the vision camera has detected the right side corner of the border, thus still it will keep on steering towards the left side. Fig. 19(c). Here the vision camera has not detected any border line as it has come out of the range of the right side border and has not entered in the range of any side border yet, So it will keep on moving straight until it came across any border. Case-4: We can see the vision of camera showing the border at position 4 on the track in Fig. 19(d). Here the mobile robot has detected the front line border. Here it has to decide whether it has to move towards left side or right side. but here the correct side is the right side, so the algorithm is written in such a way that if it comes across the situation that before detecting front line border, it has no border line within the camera range as mentioned in the case 3, it must have to steer on the same side where it has detected the last side border before getting no border data within the camera range, as it has detected the border on the right side as in case 2, it will move towards right side and vice versa. VISION AND LASER RANGE FINDER So far we have only discussed the vision process to detect the border and LRF to detect obstacles individually, but now we will combine the data of vision and LRF to avoid border and obstacles simultaneously. Different cases having both vision camera and laser range finder have been discussed. In Fig. 
20(a), we can see the mobile Robot on navigation track with obstacle and in Fig. 20(b), we have two cases whether the robot will be on the right side of the track or on the left side of the track. Case-1: If the mobile robot is moving on the left side as shown in Fig. 20(b) and comes across obstacle, then it will behave in the same way as front side border and as before detecting front line border or obstacle it has left side border in the record and by following the same algorithm it will steer towards right side. Case-2: If the mobile robot is moving on the right side, then it will go straight by avoiding the right side border. Now we can consider another case by taking the obstacles in different positions as shown in Fig. 21(a-b). Case-1: If the mobile robot is on the left side, it will go straight by avoiding the border. Case-2: If the mobile robot is moving on the right side of border and it comes across an obstacle, then it will consider it as a front line border and by following the algorithm it will steer towards left to avoid the obstacle. Now we can see another case of mobile robot with obstacle in Fig. 22(a-b). In Fig. 22(b), the mobile robot can follow any direction depending on the situation and distance between the obstacle and the border. VISUAL C++ Microsoft Visual C++ tool is used for making the algorithm of navigation. Fig. 23 shows all the control program of computer simulation window. Computer Simulation Window of Vision Camera First the simulation window of vision camera is shown in Computer Simulation Window of Laser Range Finder The simulation window of the laser range finder is shown in Fig. 25. Computer Simulation Window of Motor Control The computer simulation window for the motor control is shown in Fig. 26. Here we have two control buttons. One is Motor Enable Computer Simulation Window of GPS The computer simulation window for the GPS is shown in There is a TIME edit box which is used to show time at the current moment and finally the Set GPS Points button which is used to show the GPS data. CONCLUSION As it was a long navigation track thus it has made some motor problem but as it was a redundantly actuated system so it kept on moving and finally by applying all these techniques, successful navigation of mobile robot has been completed and successfully implemented. FUTURE WORK In future, this work can be implemented to more complicated navigation track by changing algorithm and hardware parameters as this is the first step and can be extended in many ways.
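The obstacle-detection step described earlier (Eqs. 9-10) reduces to a polar-to-Cartesian conversion of the two laser readings that bound the obstacle, followed by a midpoint computation. The sketch below assumes that reading of the equations; the range-gating parameter and the function name are illustrative.

```python
import math

def obstacle_midpoint(r1, theta1, r2, theta2, max_range):
    """Return the obstacle's center point p_m, or None if it is out of range.

    (r1, theta1), (r2, theta2): range [m] and bearing [rad] of the obstacle's
    two edges as seen by the laser range finder.
    max_range: only obstacles closer than this are considered (range gating,
    since not all obstacles have to be avoided).
    """
    if min(r1, r2) > max_range:
        return None  # outside the specified detection range, ignore it

    # Polar -> Cartesian for both edge points (cf. x_p2 = r2 cos(theta2), ...).
    x1, y1 = r1 * math.cos(theta1), r1 * math.sin(theta1)
    x2, y2 = r2 * math.cos(theta2), r2 * math.sin(theta2)

    # Center point p_m as the midpoint of the two edge points.
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0
```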
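The case analysis above can be summarized as a small decision rule; the sketch below is one possible reading of it, with illustrative names, and makes no claim to match the authors' Visual C++ implementation.

```python
def steering_decision(side_border, front_blocked, last_seen_side):
    """Return 'left', 'right' or 'straight' following the case analysis above.

    side_border    : 'left', 'right' or None -- side on which a border line is
                     currently detected by the vision camera.
    front_blocked  : True if a front-line border or an obstacle (from the LRF)
                     is detected ahead.
    last_seen_side : side on which a border was last detected before losing it.
    """
    if side_border == "left":
        return "right"        # border on the left -> steer right, away from it
    if side_border == "right":
        return "left"         # border on the right -> steer left, away from it
    if front_blocked:
        # No side border in view: turn back toward the side where a border was
        # last seen (Case 4 of the right-side scenario described above).
        return "right" if last_seen_side == "right" else "left"
    return "straight"
```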
3,154.2
2019-01-01T00:00:00.000
[ "Computer Science" ]
Investigation of the Turkish university rankings using InterCriteria Analysis: In the current investigation Turkish university rankings are analyzed. The dataset for the overall ranking of universities for 2021-2022 is used. The information is downloaded from the University Ranking by Academic Performance website. Dependencies and independencies between Turkish universities are analyzed, and the relationships between the university ranking indicators are investigated. Introduction to InterCriteria Analysis InterCriteria Analysis (ICA) is a decision-making method based on two fundamental concepts: intuitionistic fuzzy sets [2,5] and index matrices [1]. Intuitionistic fuzzy sets were introduced in 1983 by Krassimir Atanassov as an extension of fuzzy sets. The intuitionistic fuzzy pairs are calculated on the basis of the comparison of different values for criteria and objects. Index matrices are used as a tool for representing the values and the degrees of dependence and independence between the criteria and objects. ICA determines the correlations by pairwise comparison to reveal possible relationships between different objects or criteria [3,6]. Different extensions of ICA have been developed over the years [4,7,8,13]. ICA has been successfully applied to analyze correlations and opposite behaviors between the objects/criteria of datasets in areas such as genetic algorithms, medicine, crude oils and the university rankings of different countries [9,12]. InterCriteria Analysis applied to Turkish university rankings The dataset for the Turkish university ranking for 2021-2022 is downloaded from the University Ranking by Academic Performance website. The methodology of indicator selection is described in the Methodology section of the website. The input dataset contains information for 179 Turkish universities assessed by 5 indicators: paper score, total citation score, total scientific document score, number of graduated doctoral students and scientist/student score. The universities are numbered from C001 to C179 [14]. Application of ICA to Turkish university rankings for identifying the relationships between the indicators Firstly, ICA is applied to investigate the relationships between the indicators. The analysis is performed using the IcrAData software [11]. The investigation is a continuation of the applications of ICA for the years 2019-2020 and 2020-2021 [10]. The results of the ICA application for 2021-2022 are presented in Table 1, and a description of the pairs of indicators is presented in Table 2. ICA determines 3 pairs of indicators in positive consonance, 3 pairs of indicators in weak dissonance and 4 pairs of indicators in strong dissonance. The pairs of indicators in the area of dissonance are independent, while the pairs of indicators in positive consonance have weak dependencies. The results are presented in the intuitionistic fuzzy triangle (Figure 1) and compared with the results of the previous investigation published in [10]. The behavior of the indicators is constant in time, with small exceptions. The pair of indicators "Total scientific document score - Number of graduated doctoral students" varies, spending one year in weak dissonance and three years in weak positive consonance. Therefore, apart from the indicator pairs in the area of positive consonance, the selected indicators of the Turkish university rankings are appropriate and their behavior is constant in time; the weak variation is observed in only a single case. The indicators in the dissonance field are constant in time.
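For readers unfamiliar with ICA, the following is a minimal sketch of the pairwise comparison as it is commonly described in the ICA literature: for two criteria evaluated over the same objects, the fractions of object pairs ranked the same way and ranked oppositely form the intuitionistic fuzzy pair (mu, nu). The coarse classification threshold below is an assumption; the paper uses a finer scale (weak/strong consonance and dissonance).

```python
from itertools import combinations

def intuitionistic_fuzzy_pair(a, b):
    """Compare two criteria a and b evaluated over the same objects.

    a, b : sequences of scores, a[i] and b[i] belonging to object i.
    Returns (mu, nu): the fractions of object pairs on which the two criteria
    agree (same ordering) and disagree (opposite ordering). Ties contribute to
    neither fraction, so mu + nu <= 1.
    """
    n_pairs = agree = disagree = 0
    for i, j in combinations(range(len(a)), 2):
        n_pairs += 1
        da, db = a[i] - a[j], b[i] - b[j]
        if da * db > 0:
            agree += 1      # both criteria rank objects i and j the same way
        elif da * db < 0:
            disagree += 1   # the criteria rank i and j in opposite order
    if n_pairs == 0:
        return 0.0, 0.0
    return agree / n_pairs, disagree / n_pairs


def label(mu, nu, threshold=0.75):
    """Coarse illustrative classification; the threshold is an assumption."""
    if mu >= threshold:
        return "positive consonance"
    if nu >= threshold:
        return "negative consonance"
    return "dissonance"
```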
The three pairs of indicators in positive consonance are: Paper score - Total citation score, Paper score - Total scientific document score and Total citation score - Total scientific document score. Application of ICA to Turkish university rankings for identifying the relationships between the universities ICA is then applied to identify the relationships between the 179 universities. The distribution of the pairs of universities according to their dependencies or independencies is given in Table 3. The pairs of universities in strong negative consonance function differently: they work in different ways to achieve their goals, show opposite characteristics in their educational procedures and have strongly opposite relations between them. The pairs of universities in strong positive consonance are strongly dependent on each other and have very close relationships. The pairs of universities in weak positive consonance and positive consonance have weak similarities; these universities have similar functionalities with small differences. The pairs of universities in dissonance, weak dissonance and strong dissonance are independent and have no relationship to one another. The pairs of universities in weak negative consonance and negative consonance have opposite properties. The results are presented in the intuitionistic fuzzy triangle (Figure 2). Conclusion In the presented investigation the Turkish university rankings are analyzed. ICA is applied to the dataset for the year 2021-2022. The correlations between the pairs of indicators and the pairs of universities are determined, and the universities are segmented into different areas of similarities and differences. The pairs of indicators are mainly independent, excluding the 3 pairs of indicators in positive consonance; this outcome indicates that most of the indicators are well chosen. The universities are segmented according to their similarities. Future research will track how the distribution of the universities evolves over time.
1,121.4
2023-08-01T00:00:00.000
[ "Economics", "Education", "Mathematics" ]
Data Science in the Chemical Engineering Curriculum : With the increasing availability of large amounts of data, methods that fall under the term data science are becoming important assets for chemical engineers to use. Methods, broadly speaking, are needed to carry out three tasks, namely data management, statistical and machine learning and data visualization. While claims have been made that data science is essentially statistics, consideration of the three tasks previously mentioned make it clear that it is really broader than just statistics alone and furthermore, statistical methods from a data-poor era are likely insu ffi cient. While there have been many successful applications of data science methodologies, there are still many challenges that must be addressed. For example, just because a dataset is large, does not necessarily mean it is meaningful or information rich. From an organizational point of view, a lack of domain knowledge and a lack of a trained workforce among other issues are cited as barriers for the successful implementation of data science within an organization. Many of the methodologies employed in data science are familiar to chemical engineers; however, it is generally the case that not all the methods required to carry out data science projects are covered in an undergraduate chemical engineering program. One option to address this is to adjust the curriculum by modifying existing courses and introducing electives. Other examples include the introduction of a data science minor or a postgraduate certificate or a Master’s program in data science. of Science: An Action Plan for Expanding the Technical Areas in the Field of Statistics”, plan for how academic statistics their work The abstract reads: action plan to expand the technical areas of statistics focuses on the data analyst. The plan sets out six technical areas of work for a university department and advocates a specific allocation of resources devoted to research in each area and to courses in each area. The value of technical work is judged by the extent to which it benefits the data analyst, either directly or indirectly. The plan is also applicable to government research labs and corporate research organizations”. Cleveland then goes on to propose six areas of activity and even indicates what percentage a department should devote to each. They are: (1) Multidisciplinary investigations (25%), (2) Models and Methods for Data (20%), (3) Computing with Data (15%), (4) Pedagogy (15%), (5) Tool Evaluation (5%) and (6) Theory (20%). Donoho [1] points out that many departments overemphasize the last area, Theory, therefore resisting the recommendations by colleagues like Tukey and Cleveland towards a much broader definition of the field. Introduction The terms "Data Science", "Big Data" and "Data Analytics" are becoming pervasive, affecting many aspects of our lives, many professional disciplines and certainly the profession of chemical engineering. The reason for this is of course the increasing availability of data brought on by the proliferation of inexpensive sensors and instrumentation, new measurement capabilities related to the development of the Internet of Things and smart sensors, and improved data storage power like cloud computing. In order to exploit the data that is becoming available, chemical engineers need to use data science methods. The questions are then what are these methods and what role can the curriculum play in preparing graduates to be able to use them. 
This paper reviews some of the trends and developments that are occurring related to Data Science, explores the relationship between Data Science and Statistics, addresses some of the limitations and failures of data science projects and lists some applications and methodologies in Data Science that are relevant to Chemical Engineering. Based on this discussion, the implications for an undergraduate chemical engineering curriculum are discussed. Several approaches for teaching Data Science at different institutions are reviewed and finally the desire to include Data Science in the curriculum is set in context with respect to many other pressures that exist for a modern engineering program. Trends in Data Science As mentioned above, the field of Data Science is affecting our lives in a multitude of ways and Institutions and Industry are recognizing its importance and making significant investments. For example, on 8 September 2015, the University of Michigan announced a $100 M "Data Science Initiative" involving the hiring of 35 new faculty members [1]. "Data Science has become the fourth approach to scientific discovery, in addition to experimentation, modeling and computation" said Provost Martha Pollack. Similarly, Purdue University announced that it plans to embed data science into every Major [2]. In addition, there is an urgent need to support Canadians in transitioning to new jobs in data science, machine learning and big data analytics so that they can confidently acquire the relevant skills to work in high-demand jobs that currently go unfilled. The Information and Communications Technology Council (ICTC) projects that by 2020, 43,300 data analytics specialists will be directly employed in Canada [3]. Chiang et al. [4] report that over 70 universities in the US offer Master's programs in related areas, a trend that is occurring at many Canadian institutions as well. Broadly speaking, as discussed by Beck et al. [5], Data Science can be divided into three tasks. The first is data management, which is core to data science in that it deals with how the data is organized, stored, accessed and shared. The most basic tool for this purpose is the spreadsheet, however for large datasets they are inadequate and it is necessary to use relational database management systems (RDBMs) and some form of structured query language (SQL). The second task is statistical and machine learning which are methods that can be used for supervised or un-supervised learning. In supervised learning, the objective is to develop a predictive model that predicts the outputs from the inputs. In un-supervised learning, the objective is to determine the underlying structure of the data based on some features. The third task is data visualization which consists of methods that can be used to explore the data and help to make decisions based on the analysis of the data. The EDISON project [6] gives a broader, more detailed description of Data Science competencies, which include: The emergence of Data Science as a scientific and academic domain is related to the notion of Big Data, which is comprised of data sets that are too large for commonly used software tools to deal with and therefore require specialized approaches and tools. Some of the key concepts or characteristics related to Big Data are often referred to as the three Vs, namely Volume, Velocity and Variety [4]. The Volume refers to the amount of data that has to be managed, while the Velocity describes the rate of the incoming data. 
Variety is related to the type of data be it structured or unstructured. Two additional characteristics that are important are the Veracity, meaning the quality and accuracy of the data and Value which addresses the question: will the data lead to value? Is it useful and informative? Data Science and Statistics In trying to define what data science really is, the question arises is data science really different from statistics? In 1997, Jeff Wu in his inaugural lecture entitled "Statistics = Data Science" for his appointment to the H.C. Carver Professorship at the University of Michigan, suggested that statistics, which he defined as a trilogy of data collection, data modeling and analysis, and decision making, be renamed data science and statisticians be called data scientists [7]. As far back as 1962, Tukey promoted the idea that the field of statistics and its research scope needed to be broadened, enlarged and redirected [8]. The emphasis on mathematical statistics with theorems and proofs was too narrow and he introduced the term "Data Analysis" with a focus on techniques for analyzing and interpreting data and the design of experiments for collecting data having high information content, which he felt should become the new areas of focus. In his 2001 paper entitled, "Data Science: An Action Plan for Expanding the Technical Areas in the Field of Statistics", William S. Cleveland suggested a plan for how academic statistics departments should reframe their work [9]. The abstract reads: "An action plan to expand the technical areas of statistics focuses on the data analyst. The plan sets out six technical areas of work for a university department and advocates a specific allocation of resources devoted to research in each area and to courses in each area. The value of technical work is judged by the extent to which it benefits the data analyst, either directly or indirectly. The plan is also applicable to government research labs and corporate research organizations". Cleveland then goes on to propose six areas of activity and even indicates what percentage a department should devote to each. They are: (1) Multidisciplinary investigations (25%), (2) Models and Methods for Data (20%), (3) Computing with Data (15%), (4) Pedagogy (15%), (5) Tool Evaluation (5%) and (6) Theory (20%). Donoho [1] points out that many departments overemphasize the last area, Theory, therefore resisting the recommendations by colleagues like Tukey and Cleveland towards a much broader definition of the field. Despite the fact that statisticians, engineers and scientists working in the area of applied statistics have promoted the methods and ideas behind data science for many years, statistics and the role of statisticians seems to have become marginalized in the recent developments around Data Science. Leaders and leading initiatives in Data Science seem to be coming from various engineering disciplines, computer science, business schools, etc. This despite the fact that many descriptions of what Data Science is, will be very familiar to statisticians. Vincent Granville at the Data Science Central Blog writes "Data Science without statistics is possible, even desirable" [10]. He differentiates between "old" outdated statistical methods that are not particularly useful and "new" statistical methods that are, but that are not recognized by traditional statisticians as being "statistics". 
In his 2013 paper, Vasant Dhar addresses what the terms data science and big data mean, what skills individuals working in data science need and what the implications might be for scientific inquiry [11]. He makes the distinction between data analysis, which he states has been used to explain phenomena and data science which "aims to discover, and extract actionable knowledge from the data, that is knowledge that can be used to make decisions and predictions not just to explain what is going on". He also points out that data takes many forms including text, videos and images. As far as the skills required Dhar lists (1) Machine Learning which must build on statistics including Bayesian statistics and multivariate analysis, (2) Computer Science including data structures, algorithms and systems including distributed computing, databases, parallel computing, and fault-tolerant computing (3) Knowledge about correlation and causation and finally (4) The ability to formulate problems in a way that results in effective solutions. On the last point, Wladawsky-Berger points out that being able to effectively formulate the problems requires domain expertise to identify what the important problems in an area are, how to formulate the questions properly and how to present the results so that they are useful to the domain practitioner [12]. In addition to the technical skills, Dhar also points out that a significant change in a manager's mindset from intuition and past practices to a data-driven decision-making approach is required. Here he reminds us of a quote from W.E. Demming "In God we trust-everyone else please bring data". While there certainly have been many successes reported in the application of data science in industry, the financial sector, healthcare, transportation, education, professional services, etc., there have also been a number of reports about limitations and failures that have been experienced. An article in Wired Magazine [13] makes the controversial claim that the abundance of data that is becoming available will make the hypothesize-model-test approach to science obsolete. The latter was necessary in the age when only small samples of data were available, but now scientists have access to the entire population and therefore do not need statistics or theory. However, as Reis et al. point out [14], a sample no matter how big, may not accurately reflect the target population and give an excellent example related to the 1936 US presidential election to illustrate the point. This supports the point that domain knowledge is needed even when massive data are available which is particularly true in the process industries. They go on to discuss a number of challenges that need to be considered. These include the issue of meaningful data referring to the difference between happenstance data, which may be suitable for process monitoring or fault detection, but not for prediction, where data from designed experiments for process optimization or system identification experiments for process control may be necessary. In addition, it may be possible that a very large industrial data set may be information poor, when interesting information happens only infrequently. They also discuss issues related to multiple data structures, heterogeneous data, multiple data management systems, the incorporation of a priori knowledge, uncertainty data, unstructured variability, data with high time resolution and adaptive fault detection and diagnosis. 
They conclude that while big data has the potential to be of great benefit in the process industries there are many issues that still need to be addressed and that data science and domain knowledge should be used synergistically. A number of publications [15][16][17][18] also discuss organizational and managerial issues that need to be addressed to prevent failure of data science applications. These include: For data science to be successfully deployed in an organization, data scientists and data science projects must be managed properly, including deploying data scientists in the right spots, and giving them the tools and opportunities to make a convincing case. Data Science in Chemical Engineering There are of course many examples of the use of data science methodologies in the chemical engineering community that have been ongoing for many years, including multivariate analysis, on-line fault detection, inferential sensors, batch data analytics, experimental design approaches, parameter estimation and model discrimination. Chiang et al. [4] mention in their review paper that the process industries were for example early adopters of computer-based control. They point to the use of multivariate methods including principle component analysis, partial least squares and canonical variate analysis which have been used to analyze large volumes of data to develop predictive models and for fault detection. They provide examples from five different industries including chemicals, energy, semi-conductors, pharmaceuticals and food. A number of technical challenges, software platform challenges and culture challenges are also addressed. They conclude that the application of data science methods requires additional skills outside of the traditional chemical engineering curriculum and point to the large number of Master's programs in data science that have been introduced. Qin [19] points out that for well understood chemical mechanisms, first-principle mechanistic models can be derived for process operations, but that processes for which mechanisms are not well understood, data analytics are a valuable tool for gaining insight and developing predictive models. A four-part series in Chemical Engineering Progress 2016 special issue on big data analytics discusses what big data is [20], gives some success stories [21], describes how to get started [22] and what the challenges are [14]. In their 2016 paper, Beck et al. [23] address the question "What is Data Science and Why Should Chemical Engineers Care About It?" They also discuss a number of research areas in chemical engineering that have benefited from data science including computational molecular science and engineering, synthetic biology and energy systems and management. Finally, Holdaway [24] uses specific case studies to show how data analytics can be used for optimization in exploration, development, production and rejuvenation of oil and gas assets. Curriculum Implications Chemical engineers have a very strong background in mathematics and problem solving and are therefore well poised to engage in data science. Historically, chemical engineering undergraduate programs, certainly in Canada, have also incorporated some form of statistical training. This has been accomplished by incorporating one or more courses in applied statistics into the undergraduate core curriculum, in many cases taught by chemical engineering faculty. 
However, it can be argued that chemical engineering education and training may not have kept pace with the proliferation of data described above and that therefore courses tend to teach approaches used in a data-poor era. An informal survey of twelve chemical engineering programs in Canadian schools show that all contain a fundamental probability and statistics course, an introductory computing programming course, a course on engineering computation/numerical methods and in some a course on advanced statistics, usually experimental design taught as an elective. As mentioned above, the skills required to apply data science methods include data access and management, databases and data warehousing, statistical methods including classification and clustering, time series, various regression methods and multivariate statistics and data visualization. It is unlikely that existing courses are sufficient to cover all the required topics. The question then is what can be done to better prepare graduate chemical engineers for the realities of today when it comes to data analysis? Beck et al. [23] propose that this can be accomplished by making small "tweaks" to the existing curriculum to include data science methods to existing courses, by adding elective course work or professional development workshops, or via the use of free on-line self-guided tutorials. The key is to ensure of course that this material is not added to the detriment of the core chemical engineering curriculum since as mentioned in Section 3 above, domain knowledge is an important component required for the successful outcome of data science projects. One also has to remember however that there are many pressures on engineering programs to add additional material on topics such as Life Cycle and Socio-Economic Analyses, Life Sciences, Nanotechnology, Renewable Energy, Advanced Materials and Additive Manufacturing, Virtual and Augmented Reality, etc. The EDISON project [25] links the data science competencies listed in Section 2 above to learning outcomes and even proposes courses that could be used in a Master's program. There is also some discussion on how to accommodate students with diverse educational backgrounds by assessing their competencies and having students take pre-requisite courses and bootcamps. An approach offered by, for example the University of Calgary is to offer a Data Science Minor, which is taken co-currently with the student's particular core program. Another variant on this is the Certificate in Data Analytics offered by Ryerson University's Chang School of Continuing Education in which a six-course compressed program is offered to students who already have completed an undergraduate degree. The latter is a very high-touch program which provides one-on-one support, especially for students whose background does not include the competencies normally required for data science studies. While adding data science components to an existing undergraduate program may be beneficial, examining the competencies required would indicate that acquiring a strong background may require additional studies. Targeted Master's programs in Data Science may be one option; however, many chemical engineering departments have faculty working in the area of process systems engineering and offer graduate programs which allow for the interdisciplinary training that is required. Funding: This research received no external funding.
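As an illustration of the multivariate monitoring methods mentioned above (e.g. PCA-based fault detection in the process industries), the following is a minimal scikit-learn sketch; the synthetic data, the number of components and the empirical control limit are placeholder assumptions rather than an industrial recipe.

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of PCA-based process monitoring: fit a PCA model on data
# collected during normal operation, then flag new samples whose squared
# prediction error (SPE / Q statistic) exceeds an empirical control limit.
# The synthetic data and the 95th-percentile limit are illustrative only.

rng = np.random.default_rng(0)
normal_data = rng.normal(size=(500, 10))          # "normal operation" records
pca = PCA(n_components=3).fit(normal_data)

def spe(model, X):
    """Squared prediction error of each row of X under the PCA model."""
    residual = X - model.inverse_transform(model.transform(X))
    return np.sum(residual ** 2, axis=1)

limit = np.percentile(spe(pca, normal_data), 95)  # empirical control limit

new_sample = rng.normal(size=(1, 10)) + 3.0       # shifted sample = "fault"
print("fault detected:", spe(pca, new_sample)[0] > limit)
```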
4,463.2
2019-11-08T00:00:00.000
[ "Computer Science" ]
Sequential parameter optimization for algorithm-based design generation using data from multiphysics simulations The implementation of algorithmic modelling in CAD technologies is an opportunity to reduce manual design work in repetitive design tasks. This increases the importance of design automation and digital workflows. This technology can process the results from computational fluid dynamics (CFD) and finite element analysis (FEA) automatically and can optimize designed geometries sequentially. Nevertheless, this design process often sets high computational requirements. The aim of this paper is to present a design automation workflow that reduces the computational time of a design process. The computational resource requirements of the system are reduced by using knowledge-based engineering techniques to obtain information from previous successful designs, decomposing the design into sub-parts according to their functions and optimizing each sub-part individually. Furthermore, through algorithmic modelling, the different input geometries required for the physical description of each simulation are made separately. This allows different design simplifications to be made for each simulation domain. Once the output of the simulations is obtained, the design is evaluated in CAD to optimize the geometry. After each sub-part has been optimized, the sub-parts are composed to obtain the final design. The case study of a reactor for methanol synthesis supports the results of this paper. Introduction The capabilities of modern production methods such as additive manufacturing (AM) allow an increasing complexity of products today.Moreover, fierce global competition increases the demand for innovative complex products and lengthens the product development process considerably [1].How quickly companies bring innovative products to market plays a crucial role regarding success of the product in the market.Product development plays an active role in the time to market [1].Companies launch different generations or variants of a product to meet different standards and to keep the product attractive for customers that makes the product tree more complex.However, they also tend to use repetitive design features to simplify the product development and manufacturing process in order to keep the high success rate in qualification tests.Some industries, such as process engineering, have even more repetitive design features due to high qualification standards.According to Stokes [2], 80% of all manual design activities are routine design activities that do not add value to the design.Automation of such routine design activities can be the key to reduce product development time.Although design automation systems are one of the popular topic currently, such systems require high computational resources to provide reasonable results in the presence of multiphysics optimization problems. The present paper provides an automated system to generate repetitive design features for multiphysics optimization problems where human creativity is not required, thus freeing design resources for creative, value-adding tasks.Section 2 describes the state of the art of design automation systems and explains why a new methodology is required for these systems.Section 3 provides the overall workflow as well as the details of each steps in the workflow.The workflow is supported by a case study in section 4.After conducting the case study, conclusions and possible future improvements to the workflow are discussed. 
State of the Art Cederfeldt and Elgh [3] define design automation as developing reusable computer functions, which support the design process.There are two main approaches namely computational design synthesis (CDS) and knowledge based engineering (KBE) in design automation [1]. Computational Design Synthesis CDS aims to generate design alternatives computationally in the early stages of the design process in accordance with defined requirements.In this manner, optimal structures can be found for the application.One way to generate design alternatives is to use topology optimization (TO).However, the results obtained from the TO should be interpreted and applied by an expert designer because the result may not always be manufacturable or reasonable [5].Moreover, since the results obtained from TO are non-parametric, this has a negative impact on the next stages of the automation task.Therefore, a parameter based CDS system should be developed to remove user influence from the automation system [5]. Knowledge Based Engineering KBE focuses on avoiding repetitive tasks and reducing development time of a design.Repetitive tasks are reduced by incorporating knowledge from previous designs into new designs.KBE has three steps: knowledge capture, formalization and representation [6].The knowledge capture phase gathers information from proven design concepts to support the design process.The captured information is then organized in a structured way into rules, objects or agents in the KBE system by adding geometric information [6]. Previous works developed valuable methods to identify repetitive design tasks with a KBE system.These methods mostly serve to create a feature taxonomy for large assembly designs [7,8].However, each part in an assembly can also have features that do not require creativity.The knowledge from these parts should also be captured by KBE systems.In other previous studies, KBE systems were developed for each part and the information captured by the KBE system was used for generating various design variants with different CDS systems that developed for multi-flow nozzles [5] and crankshaft [9], respectively.Such a CDS system with knowledge capture capability improves the feasibility in the market and the accuracy of design automation.However, some products, such as reactors in process engineering, require long multiphysics simulations.In these products, it takes days for a simulation to converge [10].Automatically building and simulating these products with workflows from previous studies can take weeks of computational time.Therefore, a new optimization system is necessary for the products that need long simulations.This paper addresses this research gap. Methodology Algorithmic modelling provides high flexibility in design in response to changes.Compared to other modelling techniques such as parametric or direct modelling, logical connections can be established with this technique.This enables users to design not only an object, but also a process [11].With this kind of understanding, CAD software can evaluate simulation results and optimize geometry with decision-making structures accordingly.The presented paper utilizes this knowledge from the literature and proposes a novel methodology for the product development that require extensive simulations.We illustrate stages of our methodology in Figure 1. 
Knowledge Capturing Mechanism In a classical product development process, a concept is developed by defining the functions and their structures of the part according to the requirements [12].Then the design is divided into modules to realize the solution principles.Once all modules are properly designed, the modules are combined to form the complete part [12].In this process, these modules can be called sub-parts, and the part structure, which is the knowledge of sub-part dependency, can be thought of as the part architecture. In the first step, we reversed the above workflow to capture knowledge on the part architecture of a proven part concept from the literature.Once this knowledge is captured, the part is decomposed into sub-parts according to the functions in the part architecture.Each sub-part has a single function that is related to other functions, but each of them needs different validations.Besides they should have no conflicting objectives with other functions.At this stage, all sub-parts are listed and prioritized according to the importance of their function.However, there are sometimes conflicting goals between the sub-parts, making them difficult to verify individually.Here, they are considered as a combination of sub-parts and verified together.This boundary definition brings the advantage that optimum sub-parts can be considered also as optimum to the whole part. The verification is done through simulations which are linked to the algorithmic model to have closed loop automation in the whole optimization process.Suitable types of simulations are selected according to the functions of each subpart.These simulations can verify structural mechanics, flow properties, other physical properties of the sub-parts or manufacturability.However, in this research we evaluate only the operational performance of the part.Therefore, we assume that the part is manufacturable and focus the verification on functionality. In this research, we used object-oriented programming to define the part as main object and each sub-part as class.According to an object-oriented programming, every class has functions and attributes.Attributes refer to information of the classes.Functions can modify or update the object [13].Every sub-part has the knowledge to modify the part through its function and to make decisions through its attributes on which simulations are done to verify the structure of the sub-part.A case study demonstrates this method in section 4. Design Synthesis The captured information is used to define the algorithmic model.However, firstly the requirements are investigated to synthesize the information obtained in the design.The performance of a design is directly related to the concept idea.Also, creativity is required in the concept phase.Therefore, only this phase is done manually by the designer.In this step, manufacturing constraints are also taken into account, since there is no automatic control mechanism in the workflow due to the additional requirement of computational resource as described in Section 3.1.Once the concept is developed, an algorithmic model is created in a 3D-CAD software such as Rhinoceros® or Siemens NX using pre-programmed design elements from an available database.The database is used to increase the number of variants of each sub-part [5]. 
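A minimal sketch of the object-oriented representation described in the knowledge-capturing step is given below: the part is the main object and each sub-part is a class whose attributes record its parameters and whose functions decide which simulations verify it. The class and method names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SubPart:
    """One functional module of the part, verified by its own simulations."""
    name: str
    priority: int               # importance of the sub-part's function
    parameters: dict            # geometry parameters to be optimized
    simulations: list = field(default_factory=list)  # e.g. ["CFD", "FEA"]

    def verify(self, run_simulation):
        """Run this sub-part's simulations in order; return their results."""
        return {sim: run_simulation(sim, self.parameters)
                for sim in self.simulations}


@dataclass
class Part:
    """Main object: the part architecture as a prioritized list of sub-parts."""
    sub_parts: list

    def ordered(self):
        """Sub-parts in the order in which they should be optimized."""
        return sorted(self.sub_parts, key=lambda s: s.priority)
```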
In the classical development workflow, a rough solid model is first created to optimize the structure of the part. The part is then simulated according to the requirements, and in each optimization iteration it is redesigned according to the results obtained from the simulation or optimization software.

The method proposed in this paper integrates and synthesizes these classical product development steps. In our method, not only the 3D model but also the development process itself is built in the CAD software. First, possible simplifications of each sub-part simulation are investigated, and it is examined which parameters have an impact on the simulation results. If more than one simulation is required to verify a sub-part, the simulations are sequenced in a logical order so that the optimal sub-part is reached as quickly as possible. After this decision, the input geometries for the first simulation are designed, and the simulation software is connected to the CAD software through an automation tool. To automate this connection, only one simulation is prepared manually, in order to define the physics correctly. Thereafter, the input geometry is imported into the simulation software in every iteration and the simulation is executed automatically. Using logical decision-making structures in the CAD software, a small loop is created that optimizes the design based on the simulation result; the parameters with the highest impact on the simulation results are optimized in this loop. After the first optimization loop, the optimized structure is used to build the input geometry for the next simulation task. This process is repeated sequentially for all simulations, so geometries are optimized in local loops driven by the algorithmic model. This approach reduces the required computational power because the whole model does not have to be recalculated for every parameter change.

Once a sub-part is optimized, the same design synthesis process is applied to all sub-parts. As mentioned earlier, each sub-part needs different verifications; with this methodology, the verification simulations are carried out only locally within the part. In addition, the methodology allows different variants of the sub-parts to be optimized automatically. Once the verification process is complete, all sub-parts are merged with each other and the part is created, then optimized and verified as a whole. For regulatory reasons, the design still needs to be validated at the system level before it can be operated.
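The sequential local-loop idea can be summarised in pseudocode. The sketch below is a generic illustration of the workflow described above, not the authors' implementation; the simulation calls, parameter updates and convergence criteria are placeholders, and the sub-part object is the hypothetical SubPart class from the earlier sketch.

```python
def optimize_sub_part(sub_part, simulations, run_simulation, update_parameters,
                      max_iterations=20):
    """Optimise one sub-part by running its simulations sequentially.

    run_simulation(sim, geometry) and update_parameters(...) stand in for the
    CAD/simulation coupling handled by the automation software.
    """
    geometry = dict(sub_part.parameters)       # initial input geometry (parameters)
    for sim in simulations:                    # simulations in their logical order
        for _ in range(max_iterations):        # small local optimisation loop
            result = run_simulation(sim, geometry)
            if result["converged"] and result["objective_met"]:
                break                          # this simulation's local loop is done
            # only the high-impact parameters for this simulation are updated
            geometry = update_parameters(geometry, result, sim)
        # the optimised geometry becomes the input for the next simulation task
    return geometry
```

Because each loop touches only the parameters relevant to its own simulation, the full model never has to be rebuilt for every parameter change, which is where the computational savings come from.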
Case Study
In this section, a case study is presented to demonstrate the effectiveness of the proposed design automation methodology for products that require long simulations. One example of such products are reactors in chemical process engineering. Various types of reactors are widely used in sectors such as the pharmaceutical or energy industry, and each application imposes different requirements on a successful reactor design. Validating a reactor design requires detailed reaction simulations, because reactor design is a complex multiphysics problem with variable material parameters. In addition, an optimum reactor design depends on many input parameters such as volumetric flow rate, concentration, temperature, pressure, and catalyst volume, which increases the number of optimization cycles needed. Reactor design is therefore chosen as the application for the proposed methodology. In this research, we used the developed design automation system to design a reactor for methanol synthesis.

Methanol (CH3OH) is a chemical intermediate widely used to produce alternative fuels such as dimethyl ether (DME) or to store electricity from renewable sources such as wind or solar energy. Because its synthesis is exothermic, a cooling system is required to keep the operating temperature of the methanol reactor within an acceptable range. As an example, multitubular packed-bed reactors are operated at high pressure (50-80 bar) and relatively high temperature (200-300 °C) on an industrial scale for methanol synthesis [14]. Due to the high-pressure operation of methanol reactors and the strict safety regulations for acceptance according to the DIN EN 13445 standard, reactors need to be validated by an extensive testing or simulation program. In addition, the wall thickness of the reactor channels should be as thin as possible to increase heat transfer. This makes it necessary to test the design not only from a structural point of view but also from a chemical one.

The test process required for acceptance, together with the restrictions of traditional manufacturing methods, limits the number of different reactor types. Classical reactor types such as multitubular packed-bed reactors are well-known, proven systems in industry, and analytical solutions have been developed for their dimensioning. With technological advances in production systems such as additive manufacturing (AM), however, more complex methanol reactors can be produced. As a consequence, analytical solutions are no longer sufficient to optimize a methanol reactor and numerical solutions are needed. For the simulation of the part according to DIN EN 13445 Annex B, the entire part must be simulated together; therefore, only the structure that affects the heat transfer is optimized in an optimization cycle, while the external structure can be optimized at the system level. In order to reduce the computational resources required for reactor optimization, the methodology proposed in this paper is implemented as follows.
Knowledge Capturing from Proven Concept
To understand the part architecture of methanol reactors, a multitubular packed-bed reactor is selected as the proven concept. The classical multitubular packed-bed reactor has two inlets and two outlets. In operation, the reactants, i.e. the raw materials of the reaction, enter through the first inlet. In the case of methanol synthesis, carbon dioxide (CO2) and hydrogen (H2) are the reactants, and an inert gas such as nitrogen (N2) is added to avoid unwanted side reactions. In some cases, carbon monoxide (CO) is also used to produce methanol. The reactants react with each other over the catalyst until the reaction equilibrium is reached. The products and the remaining reactants leave the reactor through the outlet. Since the reaction is exothermic and releases heat, the reactor is cooled by a cross-flow, countercurrent cooling system with its own inlet and outlet.

Figure 2 depicts this type of reactor as an arrangement of four sub-parts. The first sub-part, the "Distribution Structure", has the single function of evenly distributing the mixed gas flow over the channels. There is no catalyst in this sub-part, so no reaction takes place. For verification, this sub-part only needs to be simulated to optimize the flow properties; no complex reaction simulation is required. Moreover, since no heat transfer occurs in this sub-part, it does not need to be optimized with respect to structural mechanics, as mentioned above.

Once the reactants reach the catalyst particles in the tubular channels, the reaction takes place through the interaction between reactants and catalyst. These tubular channels are named the "Reaction Chamber". Because of the importance of the catalyst surface mechanism and of the heat transfer properties of the channel structure, complex reaction simulations must be performed on this sub-part; with these simulations, the flow and heat transfer properties of the sub-part geometry are studied. However, these simulations cannot be performed without knowing the cooling behaviour of the reactor system, so the "Cooling Chamber" cannot be considered separately from the "Reaction Chamber". In addition, the channel structure must be optimized from a structural mechanics point of view. The channel walls, and in particular their thickness, affect the heat transfer as well as the mechanical integrity; they therefore cannot be optimized afterwards like the "Distribution Structure" and must be included in the optimization loop. Consequently, multiple simulations are required for the "Reaction Chamber" and "Cooling Chamber" sub-parts.

After the reaction has taken place, the products and residual reactants leave the channels. They are collected in the "Collection Chamber" and leave the reactor through the outlet. This sub-part has no function other than collecting all products and reactants, so no flow simulation is needed; only the external structure must withstand the reactor pressure, and the designer must consider the pressure drop.
Table 1 lists the sub-parts and prioritizes them according to their importance for the reactor. The most important function of a methanol reactor is the conversion of the reactants to methanol, which is determined in the "Reaction Chamber". The "Cooling Chamber" has the same priority because, as described above, it cannot be optimized separately. After this optimization loop, the "Distribution Structure" is optimized [15], because its function is more important than that of the "Collection Chamber". The interface geometries of the "Reaction Chamber" are taken into account in the optimization of the "Distribution Structure" and the "Collection Chamber". The computational design synthesis of the sub-parts is carried out in this order.

Design Synthesis of Methanol Reactor
The design synthesis and optimization workflow is implemented in Siemens HEEDS. The software is used to open and connect the CAD and simulation tools, as shown in Figure 3, which turns the optimization of the structural parameters into a closed loop. Within this loop, the decision of when to run a simulation is made in the CAD software through algorithmic modelling.

Algorithmic modelling makes it possible to build decision-making structures in CAD software, with which different optimization cycles can be defined in a single CAD model. To optimize the "Reaction Chamber" first, an algorithmic model is created using the Grasshopper plug-in of Rhinoceros®. A parametric model is built in Grasshopper to define the geometry of the sub-part mathematically. The number of variants of the "Reaction Chamber" is increased by using pre-programmed design elements, as demonstrated for multi-flow nozzles by Biedermann et al. [5]. Production constraints are considered in the modelling, and the concept is developed with this information. The other requirements of the methanol reactor are also taken into account, and the objective functions and constraints are defined for the necessary simulations as shown in Table 1.

Because multiple simulation types are required (Table 1), two simulation loops are created in the model. In the first loop, a pseudo-static model is created in ABAQUS to simulate the behaviour of the structure (see Figure 3). The pressure inside the channels from the reactants and the pressure outside due to the coolant are applied as the loads. Since the whole chamber consists of identical channels, the model can be simplified: a single channel is taken as a meta-model for the subsequent simulation phase (see Figure 4). A meta-model is an inexpensive deterministic approximation function for calculating the quality criteria of the simulation [16]. The decision-making structure in Grasshopper avoids having to recalculate the whole CAD model for the parameter optimization; until the end of the first optimization loop, the other features are frozen with the Metahopper plug-in.

Once the parameters are optimized for the mechanical requirements, HEEDS starts a second simulation loop on a CAD model representing the thermal management components (see Figure 3). This loop uses Star-CCM+ as the CFD software to simulate the methanol synthesis.
To define the reaction kinetics, the widely used model of Vanden Bussche et al. [17] is used. Furthermore, the initial conditions for the reaction and the type of catalyst are taken to be the same as in the kinetics of Vanden Bussche et al. Using the conjugate heat transfer mechanism of Star-CCM+, the interaction between the "Reaction Chamber" and the "Cooling Chamber" is simulated. Because of the symmetry of the model, a meta-model of the geometry is created, as shown in Figure 4. After the second optimization loop, the sub-part is complete. The whole optimization process for the "Reaction Chamber" took only 50 minutes, which shows the computational advantage of the method over classical optimization. However, the required computational time can increase when more complex design elements are used. To avoid unnecessary repeated calculations in Grasshopper during the automation process, the same feature-freezing idea of the Metahopper plug-in is used, as described above.

As an advantage of the proposed methodology, the other sub-parts are not simulated with the complex reaction definition. The sub-part "Distribution Structure" is created from the results of the "Reaction Chamber", such as the positions of the channels and their diameters; its geometry is only optimized to distribute the reactants equally over the channels. Likewise, the last sub-part, the "Collection Chamber", is created using the knowledge from the "Reaction Chamber". After all sub-parts have been optimized, they are combined with each other in Grasshopper, and a new design is created with the proposed methodology. The number of reactor variants is increased by changing the design elements in the database. The design and simulation steps are embedded in the HEEDS software to automate the whole process.

Discussion
This work introduces a digital workflow for the automated design of products that need lengthy simulations for verification. The workflow shows how the part is divided into sub-parts and what verifications are required for each sub-part. The knowledge capturing mechanism helps the designer to organize the requirements for each sub-part individually. In addition, the method greatly reduces the use of computational resources by verifying all sub-parts separately; this separation allows dedicated simplifications of the sub-part geometry for each simulation. The method thus not only reduces computational time but also yields an integrated product development approach.

Currently, we use pre-programmed design elements to increase the number of design variants. In terms of design and production, the workflow works flawlessly. However, it has the following limitations:
• Although the database increases the number of variants, it also increases the required computational time, because the workflow must be recalculated for each variant.
• Because all sub-parts are verified individually, it is difficult to validate the workflow experimentally; a basic experimental validation setup is required.
• Because of the complexity of the multiphysics problems, pre-programmed design elements are used in this method only within the same part architecture. However, the method can be further improved by exploiting the function integration advantage of AM.
The next steps are to overcome these limitations and to validate the workflow experimentally. Furthermore, the workflow will be adapted to different tasks and applications.
Conclusion
In this paper, a digital integrated product development workflow is presented for the automatic generation of algorithm-based designs. Starting with a knowledge capturing mechanism that extracts the part architecture from a proven concept, the part is decomposed into sub-parts and an algorithmic model is defined for each of them. In the algorithmic model, the sub-parts are optimized dynamically with different simulations in an integrated way. The digital workflow is supported by a case study of reactor design for methanol synthesis. After the case study, the results are discussed and possible further improvements to the workflow are outlined.

Figure 1: Design automation workflow for products that need long simulations
Table 1: The sub-parts and required simulations for a
Figure 4: Meta-models for the simulations in the "Reaction Chamber"
Figure 3: Algorithm used from HEEDS for the optimization of the "Reaction Chamber"
5,409.6
2023-01-01T00:00:00.000
[ "Physics" ]
Ontology Mapping of Indian Medicinal Plants with Standardized Medical Terms
Problem statement: The World Wide Web (WWW) contains a large volume of information related to medicinal plants. However, health care recommendation with Indian medicinal plants is complicated because valuable information about medicinal resources such as plants is scattered, in text form and unstructured. Search engines are not very efficient for this task and require excessive manual processing, so it is difficult for ordinary users to find the medicinal uses of herbal plants on the web. A further problem is that domain experts are not able to map the medicinal uses of herbal plants to the existing standardized medical terms. Mapping to an existing ontology raises the problem of finding the similarity between terms and relationships, and performing this mapping automatically is another major challenge to be solved.
Approach: To address these issues we developed a Knowledge framework for the Indian Medicinal Plants (KIMP). The knowledge framework includes the ontology creation and a user interface for querying the system. Jena is used to build the semantic web application with the ontology represented in the Resource Description Framework (RDF) and the Web Ontology Language (OWL). The SPARQL Protocol and RDF Query Language (SPARQL) is used to retrieve various query patterns. Automated mapping is achieved by considering lexical and edge-based relatedness.
Results: The user interface is demonstrated for five thousand concepts and returns the related information from Wikipedia web pages in three languages. The lexical Jaccard similarity gives a mapping recommendation of 27% and the Jaro-Winkler algorithm gives 60%. The edge-based Wu-Palmer algorithm gives a 93% mapping recommendation. These results are analyzed and compared with our algorithm based on Wu-Palmer, which, at 71%, gives more specific mapping results than Wu-Palmer alone.
Conclusion: It is thus possible to find the specific resultant web page based on the user requirement in three different languages. The mapping with a standardized ontology further improves the analysis of medicinal plants and their uses.

INTRODUCTION
India is the largest producer of medicinal herbs and is called the botanical garden of the world. India is blessed with a rich and diverse heritage of cultural traditions. In the modern world it has been realized that herbal drugs strengthen the body without side effects. The web holds a large volume of information related to herbal plants, and it is very difficult to search for the required information; finding specific information is a difficult process for the general user. Search engines are used to retrieve these documents, but the documents still have to be interpreted by the user before any useful information can be extracted. Moreover, the text-based herbal plant details are not mapped to the standardized medical terms required by domain experts. As text-based information, the use of medicinal plant data has some limitations:
• Searching text-based documents is very difficult
• They provide general information which is not well suited to the user's need
• There is no mapping to the standardized medical terms
This study addresses these limitations in order to provide useful information. To cope with the existing web-based problems of information searching, the augmentation of the web with meaningful content, i.e. a semantic-based solution, is adopted.
Semantic Web was introduced by Berners- Lee et al. (2001). Semantic Web is an intelligent incarnation and advancement in World Wide Web to collect, manipulate and annotate the information by providing categorization, uniform access to resources and structuring the information in machine process able format. To structure the information in machine process able form, Semantic Web has introduced the concept of "Ontology" (Antoniou and Harmelen, 2004). India possesses a rich traditional knowledge of ways and means practiced to treat diseases afflicting people. This knowledge has generally been passed down by word of mouth from generation to generation. A part of this knowledge has been described in ancient classical and other literature, often inaccessible to the common man and even when accessible rarely understood. Documentation of this existing knowledge, available in public domain, on various traditional systems of medicine has become imperative to safeguard the sovereignty of this traditional knowledge. References are also collected from Tamil (one of the regional language of India), English and Hindi (one of the regional language of India) Wikipedia related to medicinal plants (http://www.tkdl.res.in/tkdl/langdefault/common/Home.a sp?GL=Eng; http://en.wikipedia.org/wiki/Main-Page). Ontology describes the concepts, their relationships and properties within their domain and it can be utilized both to offer automatic inferring and interoperability between applications. This is an appropriate vision for knowledge management. Ontology provides understanding of the structure of information. With a common ontology, information that is spread out in many different applications and documents can be viewable in an easy way to understand and navigate. The ontology makes it possible to search both explicit and tacit knowledge, thereby bridging the gap between the tacit and explicit knowledge. The advantages of ontology are: knowledge sharing, logic inference and reuse of knowledge. Ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. In practical terms, developing ontology includes: • Defining classes in the ontology • Arranging the classes in a taxonomic (subclasssuperclass) hierarchy • Defining properties (or slots) • Filling in the values for properties of instances Related works: Ontology based E-Health system with Thai Herb recommendation project is created the ontology for Thai herbs and based on the user input as symptoms, province of living, chronic disease details; the recommendations are given for treating the symptoms. But it is not considering the MeSH terms for treating the symptoms (Kato et al., 2010). Designing a conceptual model for herbal research domain using ontology technique, discussed on how ontology technique can be used to represent conceptual model database design for herbal research domain (Mamat and Rahman, 2009). The role of domain ontologies in database design: An ontology management and conceptual modeling environment, this study demonstrated how ontology representation can assist database design. Common ontology representation or basic relationships for conceptual modeling are-a, synonym and related-to. The purpose of this application is to simplify in defining the rules exist in herbal industry. 
The following four types of relationship components are also explained: prerequisite, temporal, mutually inclusive and mutually exclusive (Sugumaran and Storey, 2006). Organizing herbs knowledge: is an ontology or taxonomy the answer? This study identified that an ontology can be used to organize information covering the variety of concepts needed for sharing herb knowledge; although most problem solvers point to ontology, taxonomy is also important for the identification and classification of herbs (Azlida et al., 2008). A model-driven ontology-based architecture for supporting the quality of services in pervasive telemedicine applications discusses an ontology-based architecture model enabling intelligent management of pervasive telemedicine tasks. For message exchange among different actors, the messages exchanged by the system are encapsulated in XML format; for example, if a patient needs coronary angioplasty, an emergency physician and the closest hospital can be identified and communicated as a message (Nageba et al., 2009). The interactive aspect of relationship discovery is discussed in (Heim et al., 2010): real discovery is only possible with a human involved, since only the user can ultimately decide whether a found relationship is relevant in a certain situation. A Methodology for Ontology Integration notes that ontology reuse is an important research issue; merging is only one of its sub-processes, the other being integration. That study described the activities that compose this process and a methodology to perform ontology integration (Pinto et al., 2004).

The ontology is built with Protégé (Horridge et al., 2007) and stored as the knowledge base for further processing. Protégé is a free, open-source ontology editor based on the Java platform; it is extensible, provides a plug-and-play environment and supports graphical visualization. Noy and McGuinness (2001) discussed ontology creation techniques using Protégé. Jena is the Java-enabled semantic web API framework that can read and process the information from the knowledge base. In the Knowledge framework for Indian Medicinal Plants (KIMP), the class classification of plants is based on botanical classification (Joy et al., 1998). Disease terms mentioned in the KIMP ontology are mapped to the MeSH ontology automatically. A user interface is created for general users by presenting the list of diseases and the corresponding properties available for those diseases. The overall architecture is shown in Fig. 1.

Defining classes in the ontology, arranging the classes in a hierarchy: Classes are the main focus of most ontologies. A class can have subclasses that represent concepts more specific than the superclass. Plant Kingdom contains the Kingdom details and is a subclass of Thing; Order is a subclass of Kingdom, Family is a subclass of Order, Genus is a subclass of Family, Species is a subclass of Genus and Plant is a subclass of Genus. A sample of the plant classification is shown in Fig. 2. For the disease ontology, no classification is done at this stage, since the details of plants and diseases appear as text in the input sources (http://www.tkdl.res.in/tkdl/langdefault/common/Home.asp?GL=Eng; http://en.wikipedia.org/wiki/Main-Page). OWL properties (slots) represent relationships among classes and instances. There are two main types of properties, object properties and data type properties. Object properties are relationships between two individuals.
Object properties are used to relate two instances, whereas a data type property relates an instance to one of the built-in data types. For example, the object property usedToCure relates a Plant instance to a Disease instance, while a data property relates an instance to built-in data types and their values (Vadivu et al., 2011). The development of ontology-based knowledge querying applications is simplified by the Jena programming toolkit, and its procedure is shown in Fig. 3. Class, property and individual creation is done using Protégé, as shown in Fig. 4. Jena (http://jena.sourceforge.net/) aims to provide a consistent programming interface for ontology application development on the basis of Java programming. "OntClass" is used to represent an OWL class or RDFS class. "OntModel" extends the support for the kinds of objects expected in an ontology: classes (in a class hierarchy), properties (in a property hierarchy) and individuals. For knowledge querying, SPARQL, the Simple Protocol and RDF Query Language, is used. SPARQL is a syntactically SQL-like language for querying RDF graphs via pattern matching; its features include basic conjunctive patterns, value filters and optional patterns. Using SPARQL in Jena, more specific and semantically related resources can thus be identified without affecting the existing data models (Vadivu and Hopper, 2010).

MeSH, the Medical Subject Headings (http://www.nlm.nih.gov/pubs/factsheets/mesh.html), is the National Library of Medicine's controlled vocabulary thesaurus. It consists of sets of terms naming descriptors in a hierarchical structure that permits searching at various levels of specificity. Integrating the plant ontology and the medicinal uses of the plants with the existing Medical Subject Headings (MeSH) helps to find further uses of the medicinal plants. Mapping is one of the sub-processes of integration, which is the process of building an ontology for one subject and reusing it for one or more other subjects. The steps of the mapping process are to identify the available ontologies and then to find the possible terms to be mapped. To find these terms, the semantic similarity between the ontology terms has to be calculated in an automated way, since for large-scale data it is not possible to perform the mapping manually. Mapping of ontologies requires class mapping, property mapping and instance mapping. The following algorithm shows the mapping procedure:
1. Initialize the sets of values t ∈ C, t ∈ P, t ∈ I, where C denotes classes, P properties and I instances/individuals.
2. Repeat
3. Select values from C, P and I.
4. Let G and G' be the corresponding graphs from KO and MO.
5. For each (t, t') ∈ G × G' do
   a. Compute the similarity of t and t'.
   b. Choose the highest similarity value for t and t'.
   c. Add the mapping m(t, t') to M.
6. End for
7. Until no more values are available.
8. Return M.
Similarity measures: Similarity measuring methods are discussed in (Farooq et al., 2010). The similarity between classes, properties and individuals is used to find the mapping between the terms. In this study, we have implemented lexical and edge-based counting measures. Lexical algorithms are based on string matching. We have used Jaccard (http://en.wikipedia.org/wiki/Jaccard-index) and Jaro-Winkler (http://aliasi.com/lingpipe/docs/api/com/aliasi/spell/JaroWinklerDistance.html), another lexical algorithm, to find the lexical similarity between KIMP ontology and MeSH ontology terms.
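To make the mapping procedure concrete, the following Python sketch implements a token-based Jaccard similarity and the greedy best-match loop of the algorithm above. It is an illustrative re-implementation under our own naming, not the KIMP code; the Jaro-Winkler and Wu-Palmer measures would simply be passed in as alternative similarity functions.

```python
def jaccard_similarity(term_a, term_b):
    """Token-level Jaccard similarity: |A ∩ B| / |A ∪ B| (1 = identical, 0 = disjoint)."""
    a, b = set(term_a.lower().split()), set(term_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def map_terms(kimp_terms, mesh_terms, similarity=jaccard_similarity, threshold=0.25):
    """Greedy mapping: for every KIMP term keep the best-scoring MeSH term above a threshold."""
    mappings = {}
    for t in kimp_terms:
        best_term, best_score = None, 0.0
        for t_prime in mesh_terms:
            score = similarity(t, t_prime)
            if score > best_score:
                best_term, best_score = t_prime, score
        if best_term is not None and best_score >= threshold:
            mappings[t] = (best_term, best_score)
    return mappings

# Toy example with made-up terms:
print(map_terms(["skin disease", "fever"], ["Skin Diseases", "Fever", "Headache"]))
```

In practice an edit-distance measure such as Jaro-Winkler catches morphological variants (e.g. "Diseases" versus "disease") that a plain token Jaccard misses, which is consistent with the higher recommendation rate reported for Jaro-Winkler in this study.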
The WordNet database (Fellbaum, 1998) is used as the base lexical resource for computing the similarity scores. WordNet is a large lexical database of English: nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations, and the resulting network of meaningfully related words and concepts can be navigated with the browser.

The distance values are calculated between 0 and 1: a distance of 0 means the character sequences share all of their terms, whereas a distance of 1 means they have no characters in common. The following is the core of the Jaccard distance code, which returns a value between 0 and 1 based on the string similarity:
for (String x : s1) if (s2.contains(x)) ++numMatch;
int numTotal = s1.size() + s2.size() - numMatch;
return ((double) numMatch) / ((double) numTotal);

The Jaro and Winkler lexical similarity algorithm is also used for the same purpose. Following Jaro, the distance d_j of two given strings s1 and s2 is
d_j = (1/3) (m/|s1| + m/|s2| + (m − t)/m), with d_j = 0 if m = 0,
where m is the number of matching characters and t is half the number of transpositions. Winkler's modification uses a prefix scale p, which gives more favourable ratings to strings that match from the beginning over a common prefix of length l. Given two strings s1 and s2, their Jaro-Winkler distance d_w is
d_w = d_j + l · p · (1 − d_j),
where d_j is the Jaro distance for strings s1 and s2, l is the length of the common prefix at the start of the string (up to a maximum of 4 characters) and p is a constant scaling factor for how much the score is adjusted upwards for having common prefixes. p should not exceed 0.25, otherwise the distance can become larger than 1; the standard value in Winkler's work is p = 0.1. The Jaro part is computed as
double weight = (numCommonD/len1 + numCommonD/len2 + (numCommonD - numTransposed)/numCommonD) / 3.0;
The distance values are again between 0 and 1: a distance of 0 means the character sequences share all of their terms, whereas a distance of 1 means they have no terms in common. Both the Jaccard and Jaro-Winkler algorithms measure only the lexical similarity between strings; no conceptual similarity is included at this stage of the mapping.

An edge-based counting algorithm is used to find the conceptual relationship among the terms. We have used the Wu and Palmer algorithm (Wu and Palmer, 1994) as the basis of the edge-based measure, and the related diagram is shown in Fig. 5; our implementation follows this algorithm. We analyzed the Wu-Palmer algorithm and identified that it does not always give accurate values because it always considers the depth of the terms from the root node. Calculating the edge distance from the common node at which the terms diverge into different paths gives better results. Based on this, we developed the KIMP_WuPalmer algorithm, which gives more accurate similarity values than Wu-Palmer.

RESULTS
A sample mapping recommendation of the Jaccard lexical measure is shown below.

DISCUSSION
The Jaccard mapping values were analyzed with different threshold values and verified manually; this gives a mapping recommendation of 27.81%. The results obtained with Jaro-Winkler were likewise analyzed with different similarity thresholds and verified manually, giving a mapping recommendation of 60.96%. With Wu and Palmer, more similar words are identified based on the hierarchical structure of the MeSH ontology, and 93% of the terms are mapped with the Wu-Palmer algorithm.
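The edge-based measure can also be reproduced with off-the-shelf tools. The sketch below uses NLTK's WordNet interface, which ships a Wu-Palmer similarity; it is only an illustration of the classical measure (not of the modified KIMP_WuPalmer variant), and it assumes the WordNet corpus has been downloaded beforehand.

```python
# pip install nltk; then run nltk.download("wordnet") once to fetch the corpus.
from nltk.corpus import wordnet as wn

def wu_palmer_similarity(word_a, word_b):
    """Best Wu-Palmer score over all noun-sense pairs of the two words (0..1, or None)."""
    scores = [
        s1.wup_similarity(s2)
        for s1 in wn.synsets(word_a, pos=wn.NOUN)
        for s2 in wn.synsets(word_b, pos=wn.NOUN)
    ]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else None

if __name__ == "__main__":
    print(wu_palmer_similarity("fever", "disease"))  # conceptually related terms
    print(wu_palmer_similarity("fever", "leaf"))     # weakly related terms
```

Classical Wu-Palmer normalises by the depths of the two concepts relative to the root; the KIMP_WuPalmer modification described above instead weights the path from the lowest common node, which would require walking the MeSH hierarchy directly rather than WordNet.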
A comparison of Jaccard, Jaro-Winkler and Wu-Palmer is shown in Fig. 8. Figure 9 shows the comparative results of Wu-Palmer and KIMP_WuPalmer; the KIMP_WuPalmer result is more accurate than that of Wu and Palmer (1994).

CONCLUSION
It is thus possible to find the specific resultant web page based on the user requirement in three different languages. The Jaccard and Jaro-Winkler algorithms are used to find the lexical similarity, which considers only string matching. Wu and Palmer (1994) consider the edges between the terms to capture the conceptual relationship, which yields more related terms. Our algorithm based on Wu-Palmer considers the depth of the terms with a more appropriate weighting and finds better results. The mapping with a standardized ontology will be useful for analyzing and improving the identification of the uses of medicinal plants.
3,896.2
2012-08-16T00:00:00.000
[ "Computer Science" ]
Flap gate farm: From Venice lagoon defense to resonating wave energy production. Part 2: Synchronous response to incident waves in open sea We consider a flap gate farm, i.e. a series of P arrays, each made of Q neighbouring flap gates, in an open sea of constant depth, forced by monochromatic incident waves. The effect of the gate thickness on the dynamics of the system is taken into account. By means of Green’s theorem a system of hypersingular integral equations for the velocity potential in the fluid domain is solved in terms of Legendre polynomials. We show that synchronous excitation of the natural frequencies of Sammarco et al. (2013) yields large amplitude response of gate motion. This aspect is fundamental for the optimisation of the gate farm for energy production. equation approach to solve the radiation and scattering problem for a submerged horizontal circular plate. Here we find the solution in terms of Legendre polynomials. The Haskind–Hanaoka relation is utilised to check the accuracy and the computational cost of the semi-analytical method. We show that in the open sea there are P × ( Q − 1) out-of-phase natural modes similar Introduction The flap gate systems, i.e. one or more floating bodies hinged at the bottom of the sea and rolling under incoming waves, have recently proved very effective to extract energy from the sea (Whittaker et al. [1]). The mechanical behaviour of a rolling flap gate was initially investigated during the design phase of the storm barriers for protecting Venice Lagoon from flooding. For one array of gates spanning the entire width of a channel, experiments showed that the gates can be excited to oscillate at half the incident wave frequency with a very large amplitude (Mei et al. [2]). In that case, resonance occurs through a nonlinear mechanism when the frequency of the incoming wave is twice the eigenfrequency of the system (Sammarco et al. [3,4]). Li and Mei [5] found the (Q − 1) eigenfrequencies of one array made by Q identical gates spanning the full width of a channel. Later, Sammarco et al. [6] in Part 1 of this paper considered a P × Q gate farm, and showed that there exist P × (Q − 1) eigenfrequencies and associated modal forms. If the gates are not completely confined in a channel, radiation damping is always present, i.e. wave trapping is imperfect and therefore linear resonance of the eigenmodes is possible (Adamo and Mei [7]). In this paper a linear theory is developed in order to analyse the resonant behaviour of the P × Q gate farm in an open sea of constant depth. Unlike in previous models available in the literature (Renzi et al. [8], Renzi and Dias [9][10][11][12][13], Renzi et al. [14,15], Sarkar et al. [16]), all based on the "thin-gate hypothesis" (Linton and McIver [17]), in this work the gate thickness is assumed finite, i.e. comparable with the other gate dimensions. By means of Green's theorem a system of hypersingular integral equations for the radiation and scattering potential on the boundaries of the gate farm is obtained. Achenbach and Li [18] and Martin and Rizzo [19] adopted a similar procedure to solve crack and acoustic problems, while Parsons and Martin [20][21][22] used this method to solve scattering and trapping of water waves by rigid plates. Subsequently, Martin and Farina [23] and Farina and Martin [24] used the hypersingular integral equation approach to solve the radiation and scattering problem for a submerged horizontal circular plate. Here we find the solution in terms of Legendre polynomials. 
The Haskind-Hanaoka relation is utilised to check the accuracy and the computational cost of the semi-analytical method. We show that in the open sea there are P × (Q − 1) out-of-phase natural modes similar in shape to the case of the gate farm in a channel. The irregular frequencies (Linton and McIver [17]-Mei et al. [25]) are then evaluated. We also investigate the response of the gate farm to plane incident waves of varying frequency. The gate farm is designed to work in the nearshore, hence normal incidence of the waves is assumed. Large amplitude motions of the gates occur when the incident wave frequency approaches the eigenfrequencies. Hence a linear resonant mechanism of the natural modes in the open sea is effective. Finally, the P × Q gate farm and a system of P × Q isolated and independent gates are compared in terms of energy production. Governing equations for the P × Q gate farm As shown in Fig. 1, consider P arrays of neighbouring flap gates. Each array, p = 1, 2, . . ., P, is composed by Q identical floating gates (q = 1, 2, . . ., Q). Let a and 2b be, respectively, the width and the thickness of each gate and let w = Qa. Consider a three dimensional Cartesian coordinate system with the x and y axes lying on the mean free surface and the z axis pointing vertically upward. The y-axis bisects the first array (p = 1), while the x-axis is orthogonal to the arrays and is centred among them. All the gates of the pth array are hinged on a common axis lying on x = (p − 1)L, z = − h, where L is the distance between the arrays and h the sea constant depth. The symbol G pq denotes the qth gate of the pth array, while pq indicates the angular displacement of G pq , positive if clockwise. Monochromatic plane normal incidence waves of amplitude A, period T and angular frequency ω = 2 /T, coming from x =+ ∞, force the gates to oscillate back and forth. Let p (y, t) indicate the angular displacement function of the pth array: p (y, t) is a piece-wise function of y, still unknown. The analysis is performed in the framework of irrotational flow and in the limit of small-amplitude oscillations. Therefore, the velocity potential ˚(x, y, z, t) must satisfy the Laplace equation in the fluid domain ˝: On the free surface, the kinematic-dynamic boundary condition reads: while the no-flux condition on the seabed requires: On the p = 1, . . ., P arrays the kinematic boundary conditions are: Note that the no flux condition (6) is given on the finite edges of each array facing the open sea, without channel walls. The time dependence of ˚ and p can be separated by assuming a harmonic motion of given frequency ω: p (y, t) = Re{ p (y)e −iωt }. Semi-analytical solution The linearity of the problem allows the following decomposition of the potential (x, y, z): where is the potential of the plane incident waves incoming from x =+ ∞, S is the potential of the scattered waves and R pq is the potential of the radiated waves due to the moving gate G pq while all the other gates are at rest. In (10), k denotes the wave number, root of the dispersion relation ω 2 = gkth kh, while i is the imaginary unit. ch, sh and th indicate shorthand notation respectively for cosh, sinh and tanh. According to the separation (7) and (8) and the decomposition (9), both R pq and S must satisfy the Laplace equation (2), the kinematic-dynamic boundary condition on the free surface (3), and the no-flux condition on the seabed (4). 
Let x ± p indicate the x-coordinate of the rest position of the vertical surface of the pth array: Each gate G pq spans a y-width given by: The kinematic boundary conditions on the gate-farm surfaces then become: p = 1, . . ., P, q = 1, . . ., Q. Finally R pq and S must be outgoing when x 2 + y 2 → ∞. Separation of variables gives: where Z n (z) represents the normalized eigenfunctions: which satisfy the orthogonality property 0 −h Z n (z)Z m (z) dz = ı nm , n, m = 0, 1, . . . with ı nm the Kronecker delta. In (15), k n are the roots of the dispersion relation: Following (14), for each of the ϕ R n,pq , ϕ S n , the Laplace equation becomes the Helmholtz equation Now define the boundary S pq of the gate G pq as and the end boundaries of the pth array of width 2b We can so refer to the entire gate farm boundary S G as: The boundary conditions (13a)-(13e) become Note that in (24) only d 0 is non-zero. We also require ϕ R n,pq and ϕ S n to be outgoing as x 2 + y 2 → ∞. The solution of the boundary value problem defined by the Helmholtz equation (18) and by the boundary conditions (22a)-(22e) can be found by using Green's theorem and Green's functions. Consider the plane fluid domain ˙ enclosed within the boundary of the gate farm S G and a circle of large radius S ∞ surrounding the gate farm. Define the Green function G n (x, y ; , Á) as the solution of the Helmholtz equation: LG n (x, y; , with G n must be outgoing as r→ ∞, hence the solution of (25) and (26) is: In the latter, H 0 is the Hankel function of the first kind and order zero. Application of Green's theorem yields ˙ ϕ R n,pq (x, y)ϕ S n (x, y) LG n (x, y; , Á) − G n (x, y; , Á)L ϕ R n,pq (x, y)ϕ S n (x, y) d= = S G +S∞+S ϕ R n,pq (x, y)ϕ S n (x, y) ∂G n (x, y; , Á) ∂n − G n (x, y; , Á) ∂ ∂n ϕ R n,pq (x, y)ϕ S n (x, y) dS (28) where ˙ = ˙ \ ( , Á), S is a semicircle of radius → 0 centred at ( , Á) and finally ∂(·)/∂n is the derivative of (·) in the direction of the outward normal to the boundaries of ˙. Because of the governing Eqs. (18)- (25) and the behaviour of G n for r → 0 (26) and r→ ∞, Eq. (28) simplifies to (see also Linton and McIver [17]-Mei et al. [25]) where the line integral is now evaluated in terms of ( , Á) on the boundary S G . The radiation potential ϕ R n,pq and the scattering potential ϕ S n are expressed in integral form. Define ± p and Á q as follows: Since: substitution of the boundary conditions (22a)-(22e) inside Eq. (29), yields: Note that (32) and (33) are more complex than their thin-gate counterparts of Renzi et al. [14]. Since the radiation potential ϕ R n,pq and the scattering potential ϕ S n on the boundary of the gate-farm are unknown, the first four integrals inside the expressions (32) and (33) are still unknown. The integrals inside the summations are evaluated on the boundary of each array, except for the last integral of (32) which is evaluated on the boundary of the moving gate G pq . Imposing the boundary conditions (22a)-(22e) to (32) and (33) yields a system of hypersingular integral equations for ϕ R n,pq and ϕ S n evaluated on the boundaries of the gate farm. The solution of the system is found by expanding ϕ R n,pq and ϕ S n in terms of Legendre polynomials P m of integer order m = 0, . . ., M (see Appendix for details). Finally the radiation potential R pq due to the motion of the gate G pq , on the lateral surfaces of each arrayp = 1, . . 
., P, is expressed as follow: while the scattering potential on the same surfaces is given by: where x p and y are dimensionless variables defined in [−1, 1]: while ˛R ± nmp,pq , ˛S ± 0mp , ˇR ± nmp,pq and ˇS ± 0mp are complex constants determined by solving the linear systems (A.38a)-(A.38c) and (A.39a) and (A.39b) with a numerical collocation scheme (see Appendix for further details). Gate dynamics Consider each gate G pq coupled with an energy generator at the hinge. Assume that the generator exerts a torque proportional to the angular velocity of the gate G pq , pto˙ pq , where pto is the power take-off coefficient. Conservation of angular momentum requires: where I is the moment of inertia of the gate about the hinge and C is the net restoring torque: with: where S A denotes the cross sectional area of the gate at the water line and V the water volume displaced by the gate in its rest vertical position. M g and z g are respectively the mass and the vertical coordinate of the center of mass of the gate. For the geometry of Fig. 1, I A xx and I V z are: Using (7)- (9) and the expressions of the potentials (10), (34) and (35), the momentum equation (37) gives where is the exciting torque due to the incident and scattered waves, while: represent, respectively, the added inertia and the radiation damping of the gate G pq due to the unit rotation of the gate G pq . Eq. (41) can be written in matrix form: where  is a column vector of length s = P × Q that contains all the angular displacements of the gates: . . I is the identity matrix of size s × s, M and N are respectively the added inertia matrix and the radiation damping matrix also of size s × s: where both M m m and N p p are symmetrical square matrices of size Q × Q: Finally, once the angular displacements of the gates are known, the average power absorbed over a wave cycle by the gate farm, is equal to: Eigenfrequencies and eigenvectors The momentum equations given by (45) are equivalent to a system of P × Q linear damped harmonic oscillators with given mass, stiffness and damping. In order to find the eigenfrequencies of the system, the exciting torque and the damping terms are set equal to zero. System (45) becomes homogeneous: To find non-trivial solutions the following implicit non linear eigenvalue condition must then be solved: Once the eigenfrequencies are known, the respective modal forms can be obtained by setting the displacement of the gate G 11 = 1 and then solving system (50). The radiation potential in the far field Consider the polar coordinates r and defined by (x, y) = r(cos , sin ). Following a similar procedure as in Renzi and Dias [10], the radiation potential in the far field (i.e. for r→ ∞), for unit rotational velocity of the gate G pq , can be approximated as R pq (r, , z) where represents the angular variation of the radially spreading wave (Mei et al. [25]). The latter can be used to derive some useful formulas that relate the hydrodynamic parameters. The Haskind-Hanaoka relation for the gate farm Consider the 3D Haskind-Hanaoka relation (Mei et al. [25]) where F pq is the exciting torque given by expression (42) while A R pq (0) represents the wave amplitude in the direction opposite to the incident waves Expression (55) has been used to check the numerical computation via the relative error One gate in the open sea: the effects of the gate thickness In order to evaluate the effects of the finite gate thickness 2b, the simplest case of P = Q = 1, i.e. the case of one gate in the open sea is considered. 
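As an aside for readers implementing the forced-response calculation described above: at each frequency the problem reduces to a complex linear solve in the gate rotations, from which the wave-cycle-averaged power follows. The sketch below is generic and only indicative; the exact matrix assembly and sign conventions must follow Eqs. (41)-(49) of the paper, and all inputs here are placeholders rather than the paper's data.

```python
import numpy as np

def forced_response(omega, I, C, nu_pto, added_inertia, rad_damping, torque):
    """Solve a generic frequency-domain oscillator system for the gate rotations.

    added_inertia, rad_damping : (s, s) arrays evaluated at omega
    torque                     : (s,) complex exciting-torque vector
    Returns the complex rotation amplitudes and the averaged absorbed power.
    """
    s = torque.shape[0]
    inertia = I * np.eye(s) + added_inertia
    damping = rad_damping + nu_pto * np.eye(s)
    stiffness = C * np.eye(s)
    # Assumed e^{-i omega t} convention:
    # [-omega^2 (I + mu) - i omega (lambda + nu_pto) + C] Theta = F
    A = -omega**2 * inertia - 1j * omega * damping + stiffness
    theta = np.linalg.solve(A, torque)
    power = 0.5 * nu_pto * omega**2 * np.sum(np.abs(theta) ** 2)  # PTO-absorbed power
    return theta, power
```

Sweeping omega over the range of interest and recording the peak rotation amplitudes reproduces the type of resonance curves discussed later for the forced response.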
Inertia, buoyancy and width of the gate, and water depth, are listed in Table 1. Different values of the thickness 2b have been chosen, i.e. 2b ∈ [0.1; 1.5] m. The limit value of 2b = 0.1 m corresponds to the case where the "thin-gate" hypothesis can be applied (b/a 1 -Renzi and Dias [9]). Fig. 2 shows the values of the added inertia , the radiation damping and the magnitude of the exciting torque |F| versus the frequency of the incident waves for different values of b. The effects of the gate thickness on the added inertia and radiation damping are significant for ω ∈ [1, 3.5] rad s −1 . In particular, the larger the gate thickness the larger the added mass and radiation damping. As a consequence the eigenfrequency of the system decreases if the gate thickness increases. The eigenfrequency ω 1 of the single gate for five different values of 2b is listed in Table 2. The gate farm in the open sea With reference to Fig. 1, we consider P = 3 arrays each with Q = 5 gates. The input parameters are defined in Table 1. Eigenfrequencies and eigenvectors The eigenvalue condition (51) has been solved in order to find the eigenfrequencies of the system within a range of ω from 0 to 1.2 rad s −1 . The frequency range includes the P × (Q − 1) = 12 eigenfrequencies of the out-of-phase motion and the first two eigenfrequencies of the in-phase motion, where the pth array moves at unison. The numerical values of the eigenfrequencies are listed in Table 3 for the out-of-phase motion and in Table 4 for the in-phase motion. Solution of the momentum equations (50) gives the corresponding modal forms. Note that the generic out-of-phase natural mode N ij follows the same definition of Sammarco et al. [6], that is: for modes N 11 , N 21 , N 31 , and N 41 , each array has the same modal shape, but for the central array (p = 2); modes N 12 , N 22 , N 32 , and N 42 , are characterized by having the middle array (p = 2) with null angular displacement, while the last array (p = 3) is in opposite phase with respect to the first (p = 1); for the remaining modes N 13 , N 23 , N 33 , and N 43 , modal deformation is the same, but for the middle array (p = 2), which is in opposition of phase with the other two. N(ω 1 ) represents the in-phase natural mode characterized by the middle array in opposite phase with respect to the first and the last array. Similarly N(ω 2 ) represents the in-phase natural mode characterized by the middle array (p = 2) with null angular displacements while the arrays p = 1 and p = 3 are in opposition of phase. Let K be the number of the gates per modal wavelength of the first array, p = 1; the eigenfrequencies of the out-of-phase modes decrease as K increases. Irregular frequencies Because of the geometry of the gate farm, the integral equations (32) and (33) possess the so-called irregular frequencies when n = 0 (Linton and McIver [17]-Mei et al. [25]). Define the boundaries of the pth array as and let ˙ p be the interior of S p . We can so define ϕ p as the interior potential that satisfy the Helmholtz equation in ˙ p with boundary conditions The eigensolutions of the homogeneous Dirichlet problem (59) and (60) are found by separation of variables: where A nm is an arbitrary constant and n, m = 0, 1, . . .. The corresponding eigenvalues are while the related eigenfrequencies ω nm can be found via the dispersion relation These eigenfrequencies are the so-called irregular frequencies (Linton and McIver [17]-Mei et al. [25]). 
The lowest value of ω nm corresponds to the case of n = 0 and m = 1 and it is equal to ∼2 rad s −1 , i.e. higher than the range of our interest. For this reason we do not need to exclude them from the analysis. Forced response Extensive computations have been carried out for the range of interest of the incident wave frequencies ω = 0.1 − 1.2 rad s −1 without the PTO. The amplitude of the incident wave is A = 1 m. Resonance occurs at eight frequencies whose values are near the natural frequencies of the homogeneous system previously calculated. Because of the direction of the incident wave, orthogonal to the axes of the arrays, only the symmetric natural modes with respect to the x-axis can be excited; i.e, P × (Q − 1)/2 =6 out-of-phase and 2 in-phase natural modes are resonated. Let ω ij be the eigenfrequency of the out-of-phase mode N ij . In Fig. 3 we show the amplitude of the angular displacements versus the incident wave frequency and indicate the eigenfrequencies of the resonating natural modes. Note that the high and unrealistic values of the peaks are related to the weakness of the radiation damping corresponding to the resonance frequencies. In this case the gate-farm is almost undamped and radiates low energy at infinity. Fig. 4 and Fig. 5 the shapes of the gate-farm forced at the resonance frequencies ω ij are shown. Note that the number near each gate G pq represents Re{ pq } normalized with respect to Re{ 11 }. The values of Re{ 11 } at the resonance frequencies are listed in Table 5. The influence of the power take-off on the capture width A parametric analysis is performed to investigate the effect of the power take-off coefficient pto on the generated power P over a wave cycle (see (49)). Define the capture width ratio C F as the ratio of the generated power P per unit gate-farm width to the incident power per unit width of the crest (see Renzi et al. [15]): where C g is the group velocity: Waves of amplitude A = 1 m are normally incident on the flaps. Different values of the PTO coefficient have been chosen, i.e. pto ∈ 10 4 ; 10 8 kg m 2 s −1 . Fig. 6 shows the behaviour of the capture width ratio C F versus the incident wave frequency for three different values of the PTO coefficient. When pto = 10 6 kg m 2 s −1 and ω > 0.6 rad s −1 , the capture width ratio is equal to ∼0.5 for a wide range of frequencies. Consider the case of pto = 10 8 kg m 2 s −1 and the behaviour of the magnitude of the exciting torque |F p3 | on each gate G p3 shown in Fig. 7: the behaviour of C F is quite similar to |F p3 |. In other words, the dynamics is dominated by the exciting torque due to diffracted waves (see Renzi and Dias [10]). Differently, the behaviour of the capture width ratio for pto = 10 4 kg m 2 s −1 , resembles that of the amplitude of the angular displacements shown in Fig. 3, hence in this case the dynamics is dominated by the resonance effects. Wave power generation and efficiency: (P × Q) gate farm versus (P × Q) isolated gates In this section the (P × Q) gate farm and a system of (P × Q) isolated and independent gates are compared in terms of energy production. The single flap gate has the same characteristics for both systems (see Table 1 for the values). Consider the PTO coefficient that maximize the power output for incident wave frequency ω = 0.9 rad s −1 , i.e. a typical value in the Mediterranean Sea. 
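The hydrodynamic quantities entering the capture width ratio can be computed with a few lines of code. The sketch below solves the linear dispersion relation ω² = gk tanh(kh) by Newton iteration and evaluates the group velocity and the incident wave power per unit crest width; it is a standard textbook calculation, independent of the gate-farm solver itself, and the numerical values in the example are illustrative only.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(omega, h, tol=1e-12, max_iter=100):
    """Solve omega^2 = g*k*tanh(k*h) for k with Newton iteration."""
    k = omega**2 / G  # deep-water initial guess
    for _ in range(max_iter):
        f = G * k * np.tanh(k * h) - omega**2
        df = G * (np.tanh(k * h) + k * h / np.cosh(k * h) ** 2)
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

def group_velocity(omega, h):
    k = wavenumber(omega, h)
    c = omega / k  # phase speed
    return 0.5 * c * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))

def capture_width_ratio(power, farm_width, amplitude, omega, h, rho=1000.0):
    """C_F = (P / farm_width) / (incident power per unit crest width)."""
    incident_flux = 0.5 * rho * G * amplitude**2 * group_velocity(omega, h)
    return (power / farm_width) / incident_flux

# Example with illustrative numbers: omega = 0.9 rad/s, depth h = 10 m
print(wavenumber(0.9, 10.0), group_velocity(0.9, 10.0))
```

With these helpers, the optimal PTO coefficient for a fixed frequency can be found simply by sweeping the coefficient and retaining the value that maximises the averaged power, as done numerically in the comparison below.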
The optimal PTO coefficient for a system of isolated gates pto,IG can be designed such that ( where and represent respectively the added inertia and the radiation damping of a single isolated gate at ω = 0.9 rad s −1 (see Fig. 2 for the values). The optimal PTO coefficient for the gate farm pto,GF is found numerically by maximizing the function (49) for a fixed ω. For ω = 0.9 rad s −1 , pto,GF = 7 ×10 6 kg m 2 s −1 . The difference between pto,IG and pto,GF is related to the behaviour of the exciting torque. Inspection of the different relations between radiation damping and exciting torque (Renzi and Dias [11]-Mei et al. [25]) shows that when ω is far from resonance the larger the exciting torque the larger the optimal PTO coefficient. In the present case the value ω = 0.9 rad s −1 is very close to the peaks of the exciting torque for the gate farm (see Fig. 7), while is distant from the peak of the exciting torque for a single isolated gate (see Fig. 2). As a consequence, pto,GF is larger than pto,IG . Hereafter, both pto,GF and pto,IG are fixed. Now define the capture width ratio of the gate farm C GF and the capture width ratio of (P × Q) isolated gates C IG as where P GF and P IG represent respectively the averaged power generated by the gate farm and by the single isolated flap gate. Fig. 8 shows the capture width ratio curves of both systems. The gate farm captures significantly more energy than a system of isolated gates. Also the bandwidth of the gate farm curve is larger than the other. Note that C GF behaves as the exciting torque magnitude shown in Fig. 7, hence the performance is dominated by diffracted waves. In Renzi et al. [15] have been obtained similar results. Now consider the amplitude of the angular displacements  33 of the gate G 33 and the amplitude of the angular displacements  IG of the isolated gate shown in Fig. 9. The maximum value for | 33 | is ∼0.2 rad, hence the influence of the PTO coefficient decreases significantly the unrealistic amplitudes of the gates without PTO damping (see Fig. 2 for the gate farm). This fact justifies the hypothesis of small-amplitude oscillations and the applicability of the linear theory. Conclusions A semi-analytical model has been developed in order to solve the dynamic behaviour of the P × Q gate farm when excited by planar incident waves. By means of the Green theorem, a system of hypersingular integral equations for the radiation and scattering potential on the wet surfaces of the gate farm is obtained. The system is solved in terms of Legendre polynomials of integer order. Then the expressions of the added inertia, the radiation damping and the exciting torque are derived. The theory takes into account the thickness of each gate without resorting to the "thin-gate" hypothesis. A parametric analysis of one gate in the open sea reveals the effect of the gate thickness on the eigenfrequency and on the gate response to incident waves. We have shown that the larger the thickness the larger the added inertia and the lower the eigenfrequency. Moreover, the radiation damping increases as the thickness increases, while the exciting torque shows negligible variations. The solution of the eigenvalue condition for the P × Q gate farm, gives P × (Q − 1) out-of-phase natural modes similar in shape to those of the P × Q gate farm in a channel of Sammarco et al. [6]. The system response is then evaluated for a wide range of incident wave frequencies. 
Numerical results show that the resonant peaks are close to the natural frequencies of the system. In particular, the narrow resonant peaks indicate that the radiation damping is small, hence synchronous excitation of the natural modes is significant. An asymptotic expression of the radiation potential is obtained in order to apply the Haskind-Hanaoka relation to the gate farm. The (P × Q) gate farm and a system of (P × Q) isolated gates are compared in terms of energy production. The results show that the gate farm capture more energy than a system of isolated gates. The amplitude response at the resonance frequencies is large and non-realistic, hence the hypothesis of small-amplitude oscillation at the basis of this linear theory, is not satisfied. However, the amplitude response is significantly reduced when the gates are coupled with a PTO device at the hinge. Also fluid viscosity and vortex shedding should be considered in order to better evaluate dissipation effects (see Wei et al. [27]). For this reason, the development of a non-linear theory is necessary. This will also allow the evaluation of the gate response when the natural modes are excited sub-harmonically by incident waves. Expressions (A.5a)-(A.6b) form two systems of 4 × P integro-differential equations whose unknowns are respectively ϕ R n,pq and ϕ S n evaluated on the boundary of the gate farm. Consider the case where the index of the summation p * is equal top. The integrals inside (A.5a)-(A.6b), given by ∂ ∂x are hypersingular when Á = ± y and = ± x. In this case, the inversion between the outer derivative and the integral sign is possible by means of the Hadamard finite-part integral H . Recalling the expression of the Hankel function H Expressions (A.38a)-(A.38c) and (A.39a)-(A.39b) define two systems of linear equations whose unknowns are respectively ˛R ± nmp * ,pq and ˇR ± nmp * ,pq for the radiation problem, ˛S ± nmp * and ˇS ± nmp * for the scattering problem. Each system has 4 × P × M + 1 unknowns, hence M + 1 evaluation points must be chosen for each side of the single array. A good choice for the collocation points (x p,j , y j ) is given by the roots of Chebyshev polynomials of the first kind ( R pq x ± p , y, z
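The Chebyshev collocation points mentioned at the end of the Appendix excerpt above can be generated as follows; mapping the roots from (−1, 1) to a side of half-length a is an assumption about how each gate face is parametrised.

```python
import numpy as np

def chebyshev_collocation(M, half_length):
    """Roots of the Chebyshev polynomial of the first kind T_{M+1}, mapped to (-half_length, half_length).

    These give the M+1 evaluation points used per side of the array when collocating
    the truncated Legendre-polynomial expansions.
    """
    j = np.arange(1, M + 2)
    xi = np.cos((2 * j - 1) * np.pi / (2 * (M + 1)))   # roots of T_{M+1} in (-1, 1)
    return half_length * xi

print(chebyshev_collocation(M=7, half_length=10.0))
```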
6,871.2
2015-08-01T00:00:00.000
[ "Physics", "Engineering", "Environmental Science" ]
Concentration-based velocity reconstruction in convective Hele–Shaw flows We examine the process of convective dissolution in a Hele–Shaw cell. We consider a one-sided configuration and we propose a newly developed method to reconstruct the velocity field from concentration measurements. The great advantage of this Concentration-based Velocity Reconstruction (CVR) method consists of providing both concentration and velocity fields with a single snapshot of the experiment recorded in high resolution. We benchmark our method vis–à–vis against numerical simulations in the instance of Darcy flows, and we also include dispersive effects to the reconstruction process of non-Darcy flows. The absence of laser sources and the presence of one low-speed camera make this method a safe, accurate, and cost-effective alternative to classical PIV/PTV velocimetry processes. Finally, as an example of possible application, we employ the CVR method to analyse the tip splitting phenomena. Introduction The Hele-Shaw cell consists of two paralleled and transparent plates located within a uniform gap. When the gap is sufficiently thin, the flow inside the cell can reproduce a Stokes flow. This feature together with optical accessibility represents great advantages of the Hele-Shaw cell, making it a suitable tool to investigate in detail different flow configurations . In this work, we will focus on buoyancy-driven Hele-Shaw flows. The driving Electronic supplementary material The online version of this article (https ://doi.org/10.1007/s0034 8-020-03016 -3) contains supplementary material, which is available to authorized users. 3 195 Page 2 of 16 force of the flow consists of density differences induced by the presence of a solute and, more precisely, by the solute concentration gradients existing in the domain. In these conditions, under certain assumptions, the Hele-Shaw apparatus may accurately mimic a Darcy flow in a porous medium. This problem is of practical interest for a number of geophysical and environmental applications such as water contamination (Molen and Ommen 1988;LeBlanc 1984), carbon sequestration (Huppert and Neufeld 2014;Emami-Meybodi et al. 2015), sea ice formation (Feltham et al. 2006;Wettlaufer et al. 1997;Middleton et al. 2016), and granular flows (Lange et al. 1998), to name a few. Velocity measurements in these flows are of crucial importance for the characterisation of the flow patterns, and it is a very active topic of research (Ehyaei and Kiger 2014;Kreyenberg et al. 2019;Kislaya et al. 2020;Anders et al. 2020). While numerical simulations can give detailed information in terms of both velocity and concentration fields, Hele-Shaw experiments provide qualitative (Fernandez et al. 2002;Salibindla et al. 2018;Thomas et al. 2018;Lemaigre et al. 2013;Pringle et al. 2002) and quantitative observations of the concentration field (Slim et al. 2013;De Paoli et al. 2020). However, accurate experimental measurements of the Eulerian velocity field are hard to obtain. In this work, we aim precisely at bridging this gap. We propose a new method to simultaneously reconstruct the concentration and velocity distributions in buoyancydriven Hele-Shaw flows. We investigate experimentally the evolution of a buoyancy-driven flow in a one-sided configuration (Hewitt et al. 2013;De Paoli et al. 2017). We consider a rectangular domain in which the fluid density is maximum at the top wall, so that the configuration is unstable. 
Density differences existing across the fluid layer are induced by the presence of a solute that, dissolving and mixing in time, controls the evolution of the flow. We developed a Concentration-based Velocity Reconstruction (CVR) method to accurately estimate the Eulerian velocity field in Hele-Shaw flows. The method is solely based on the fields obtained from the concentration measurements and relies on two main steps. First, we record the experiment by taking images of the solute distribution and we infer the solute concentration field over the cell. Then, we use the concentration distribution to solve the momentum equation of the flow and we reconstruct the Eulerian velocity field. The flow inside the Hele-Shaw cell is usually considered as an ideal Darcy flow. However, when the strength of viscous and diffusive contributions is comparable to that of convection, additional non-Darcy effects come into play and may considerably change the evolution of the flow. As a result, corrections to the ideal Darcy flow should be taken into account (Letelier et al. 2019;De Paoli et al. 2020). We show that the CVR algorithm can be easily adapted to non-Darcy flows and we employ this method to analyse the tip splitting phenomenon. The paper is organised as follows. In Sect. 2, we describe the experimental setup. In Sect. 3, we introduce the CVR algorithm and we benchmark our method vis-à-vis against numerical Darcy simulations. Finally, in Sect. 4, we employ the CVR to analyse the problem of tip splitting. Experimental setup We consider the process of convective dissolution in a Hele-Shaw cell. In particular, we chose a one-sided configuration, consisting of a rectangular domain initially filled with pure fluid. All boundaries are impenetrable to fluid and solute, but the top boundary, where the solute concentration is kept constant during the experiments. As a result, the solute dissolves and, in correspondence of the top boundary, convective structures form and control the evolution of the flow. The system is sketched in Fig. 1. We record the evolution of the flow and we infer density fields at high resolution from transmitted light intensity (Slim et al. 2013;Ching et al. 2017;De Paoli et al. 2020). The cell consists of two glass plates (10 × 140 × 250 mm 3 ) separated by rubber seals of different thickness, b, which define the dimensions of the fluid layer (width L and height H). The material used for the sealings is a high-quality impermeable rubber, based on aramid fibre with nitrile binder (Klinger-sil C-4400). Metal shims are located between the plates, which are held in place by an external metal framework and a set of 16 screws. The torque applied to all screws is constant to ensure the uniformity of the gap. Along the top wall, lies a steel mesh ( 100 m grid size) that contains the solute powder. The cell is illuminated from the backside by a tunable system, consisting of an array of 150 LED lamps covered by an opaque glass, which makes the light distribution uniform over the cell. In the following, we will describe the fluids adopted and the processes of concentration reconstruction. Working fluids An aqueous solution of potassium permanganate (KMnO4) and water are used to mimic, respectively, the denser and lighter fluids. In the present configuration, the cell is initially filled with water (pure fluid), i.e.,, the solute concentration C at time t = 0 is C(x, z, t = 0) = 0 , where x, z are the spatial coordinates in horizontal and vertical directions, respectively. 
The dissolution of KMnO4 takes place just below the grid on top of the cell (i.e., at z = H , with H fluid layer height). Therefore, we assume that at the top boundary, the fluid is saturated of solute and the concentration, C, is constant during the experiment [ C(x, z = H, t) = C s , being C s the saturation value]. Viscosity and diffusion coefficient are assumed constant and independent of the solute concentration (Slim et al. 2013), and are, respectively, = 9.2 × 10 −4 Pa ⋅ s and D = 1.65 × 10 −9 m 2 ∕s . Based on the correlations proposed by Novotný and Söhnel (1988), we assume that the density of an aqueous solution of KMnO4 depends on fluid temperature, , water density, (0) , and KMnO4 concentration. In particular, we measure the fluid temperature and we use it to estimate the water density. The density difference between the saturated solution and pure water is, s − (0) = 38 kg∕m 3 , being s = (C = C s ) and C s = 46 kg∕m 3 the effective saturation value of concentration. The density of the mixture, (C) , can be well approximated as a linear function of the solute concentration, as shown in (1) Grains of potassium permanganate (grain size greater than 200 m ) are poured on the grid sitting on top of the cell and are pressed to form a compact layer. In this way, the porosity of the layer is reduced and, as a result, it will be difficult for the liquid subsequently injected to form a liquid layer on top of the grid. The light fluid (water) is injected with the aid of a syringe pump from two channels located at the bottom of the cell and the fluid level is increased up to the upper boundary. We remark here that it is crucial that the level of water has to reach the height at which the grid is located, and the fluid has to get in contact with it along the whole cell width. During the first contact, water is sucked by the solute grains, which are initially dry. Afterwards, the water content of the solid powder increases, keeping the KMnO4 layer solid but humid. Finally, when the powder layer is saturated with water, the suction process stops and the dissolution of KMnO4 in water takes place. After the conclusion of this phase, which takes about 2 s from first contact to powder saturation, the pump is shut down. Solute dissolves in water and a fluid layer denser than water forms (x, z), and main components of the apparatus, consisting of camera, pump, illumination system. Along the steel mesh located at the top wall, the dye is poured, and therefore, the mixture is considered as saturated (i.e., solute concentration is constant, C = C s ) below the grid: After an initial diffusive phase, the layer of heavy mixture thickens, becomes unstable, and finger like structures form (Slim 2014). The onset of convection may start earlier than expected (i.e., fingers form earlier) in the presence of local perturbations of solute concentration (Riaz et al. 2006;Riaz and Cinar 2014, and references therein) and interface movement (Myint and Firoozabadi 2013). We preformed experiments in order to limit the solute and interface perturbations induced by the shutdown of the pump, which have an effect only on the initial flow evolution (Slim 2014). Reconstruction of the concentration field The solute concentration field is inferred from light intensity distribution. A flowchart of the reconstruction process is shown in Fig. 3. First, the fluid temperature is measured and it is used to compute water density, (0) . The evolution of the flow is recorded by the camera (Canon EOS 1300D, 3456 × 5184 px ). 
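The reconstruction of the concentration field described here (and detailed in the next paragraph, where the intensity-mass fraction calibration is fitted with a third-order polynomial) can be sketched as follows. The calibration samples, the pure-water density of 998 kg m⁻³ and the closed-form conversion from mass fraction to concentration through the linear density model are illustrative assumptions, not the paper's measured values.

```python
import numpy as np

# Hypothetical calibration samples: normalised light intensity I(omega)/I(0) vs mass fraction omega
I_ratio_samples = np.array([1.00, 0.93, 0.85, 0.76, 0.66, 0.55, 0.44])
omega_samples   = np.array([0.000, 0.007, 0.014, 0.021, 0.028, 0.035, 0.044])

# Third-order polynomial fit omega(I/I0), as done for the calibration curve
coeffs = np.polyfit(I_ratio_samples, omega_samples, deg=3)

def concentration_field(I, I0, rho0=998.0, drho=38.0, Cs=46.0):
    """Map a corrected intensity image I (same shape as the pure-water reference I0) to mass
    fraction via the fitted cubic, then to concentration using the linear density model
    rho(C) = rho0 + (drho/Cs)*C together with C = rho*omega (solved in closed form)."""
    omega = np.polyval(coeffs, I / I0)
    beta = drho / Cs
    return rho0 * omega / (1.0 - beta * omega)   # solves C = (rho0 + beta*C)*omega

# Example on a synthetic 4x4 intensity patch
I0 = np.full((4, 4), 200.0)
I = np.linspace(100.0, 200.0, 16).reshape(4, 4)
print(concentration_field(I, I0))
```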
The collected images, which contain the value of light intensity in each pixel, I 0 (px, px) , are then corrected with basic image processing operations (conversion to grey scale, correction of trapezoidal distortion) to obtain the corrected light distribution, I(px, px). Finally, the spatial calibration is applied and the corrected light intensity distribution over the cell, I(x, z), is found. At this stage, the solute mass fraction field is determined using the intensity-mass fraction calibration data. The calibration process is summarised as follows. We prepared a number of samples of KMnO4 aqueous solution, corresponding to different values of the solute mass fraction , from pure water to saturation. To this aim, we used a highprecision scale (Sartorius Acculab Atilon model ATL-423-I, ±10 −4 g ). The cell is then filled with each fluid sample and the light intensity distribution is measured by the camera. In correspondence of each sample, the mean light intensity is computed. The calibration curve obtained in correspondence of fixed tension applied to the LEDs and gap width is shown in Fig. 2b, where the solute mass fraction is reported as a function of the normalised light intensity, i.e., the space-average light intensity measured for the sample, I( ) , divided by the space-averaged light intensity measured for pure water, I(0) (we defined as space average as the average taken over the fluid layer). We remark here that a proper calibration is required for each value of the cell gap and illumination condition (Alipour and De Paoli 2019). With the aid of the calibration curves (I) (fitted here via third-order polynomials), the solute mass fraction distribution, (x, z) , is reconstructed over the entire image. Since the solute concentration is defined as C(x, z) = (C) (x, z) and given Eq. (1), the concentration distribution over the cell is finally inferred. The concentration field C(x, z) may be subsequently used as input for the Concentration-based Velocity Reconstruction algorithm, as well as to estimate other quantities of the flow, such as the solute dissolution rate ( Concentration-based velocity reconstruction (CVR) Consider the system sketched in Fig. 4, where a side view of the Hele-Shaw cell is proposed. We refer to our experimental setup, in which the fluid properties ( Δ , , D ) are fixed. The only property that can be varied is the gap thickness, b. When the gap thickness is small, ideally infinitesimal, the flow in the Hele-Shaw cell is a Stokes flow and reproduces a two-dimensional Darcy flow (Fernandez et al. 2002;Slim et al. 2013;Oltean et al. 2008). More precisely, this condition occurs when the Reynolds number of the flow ( Re , based on the gap-averaged velocity and cell thickness) is Re → 0 . However, when a solute is present, density-driven convection represents the driving force of the flow and the system is controlled by the complex interplay of vertical advection and transverse (i.e., wall-normal) diffusion. In particular, if transverse diffusion dominates over vertical advection, the velocity at which the solute diffuses in y direction is much higher than the vertical advective velocity. The consequent wall-normal concentration profile is nearly flat (Fig. 4a), and the system can still be considered two-dimensional and controlled by the Darcy law. We define this regime as Darcy flow or Darcy regime. If the cell gap is increased, the hydrodynamic resistance experienced by the flow decreases, with a consequent increase of the velocity. Fig. 
3 Process of Concentration-based Velocity Reconstruction. The algorithm consists of two parts: the concentration reconstruction (yellow box) and the velocity reconstruction (grey box). First, the light intensity distribution, I(px, px), is taken from the camera at a frame rate of 1fps. Using spatial and intensity calibration data, the spatial distribution of the solute mass fraction, (x, z) , is obtained. The fluid temperature ( ) is used to find the correct pure water density, (0) , and the concentration distribution C(x, z) is obtained with the correlations proposed by Novotný and Söhnel (1988). The solute concentration distribution is then used as input for the velocity reconstruction: after finding the stream function, (x, z) , from the momentum equation, the velocity components, u and w, are determined. The procedure is iterated for all snapshots considered (time loop) to find the time-dependent evolution of the flow. However, one frame is sufficient to reconstruct the instantaneous velocity field As a result, the wall-normal velocity gradient may induce an additional solute dispersion, also defined as Taylor or hydrodynamic dispersion (Taylor 1953). In this situation, a concentration gradient exists across the cell and the flow field is not anymore well described by a Darcy model. However, the flow can still be considered two-dimensional provided that the effect of solute dispersion induced by the presence of the walls is taken into account (Fig. 4b). In this case, the flow is defined as a Hele-Shaw flow. Letelier et al. (2019) have shown theoretically and numerically that corrections can be applied to the Darcy equation to recover the additional solute spreading induced by the hydrodynamic dispersion. This has been also confirmed experimentally in a recent study by De Paoli et al. (2020). Finally, if the cell gap is further increased, the formation of a second finger takes place (Fig. 4c), and the system has a full three-dimensional character (three-dimensional regime). To identify the flow regime, a quantitative estimation of the relative importance of vertical advection, transverse diffusion, and hydrodynamic dispersion is required, and two-dimensionless parameters are considered: the Rayleigh number and the anisotropy ratio, respectively: The first, Ra , measures the relative strength of vertical advection and diffusion along the cell height. The second governing parameter, , scales as the characteristic time of transverse diffusion (b 2 ∕D) and longitudinal advection . (H∕w) , where w is the gap-averaged vertical velocity. The combination of these two quantities gives a further dimensionless parameter, 2 Ra , which determines the flow behaviour (Letelier et al. 2019). When the gap thickness is infinitesimal ( b → 0 ), we have that 2 Ra → 0 : diffusion in wall-normal direction is faster then longitudinal convection, and the flow is purely two-dimensional and well described by the Darcy model (Fig. 4a). When b in increased, the flow is influenced by the presence of the walls, which introduce an additional solute dispersion. However, the behaviour of the flow can still be considered two-dimensional, and only one finger is present in y direction (Fig. 4b). In this case, we have that 2 Ra ≪ . Finally, if b is further increased and 2 Ra ≥ 1 , the flow assumes a three-dimensional character (Fig. 4c): more than one finger forms in wall-normal direction and the system cannot be considered uniform across the gap anymore. In Sect. 
3.1, we describe the method of velocity reconstruction in a Hele-Shaw cell in the presence of a Darcy flow, i.e., when 2 Ra → 0 . When 2 Ra ≪ 1 (Hele-Shaw flow), the effect of dispersion has to be taken into account via corrective terms to the Darcy equation. However, the Concentration-based Velocity Reconstruction can still be applied, and it is presented in detail in Appendix A. Velocity reconstruction of Darcy flows We are in the presence of a Darcy flow if transverse diffusion dominates over convection (i.e., 2 Ra → 0 ), and the momentum equation is described by the Darcy law. We consider a Newtonian, incompressible fluid and we assume that the Boussinesq approximation can be applied. 1 Therefore, the two-dimensional velocity field is controlled by the equations: where is the Darcy velocity (i.e., the gap-average velocity), p is the pressure, and is the acceleration due to gravity (see electronic supplemental material for the derivation of the Darcy equation from the Navier-Stokes solution of a where P = p − (0)gz is the reduced pressure. The procedure of velocity reconstruction is based on the information available from the experimental measurements of concentration, assuming that the system is controlled by Eqs. (3) and (5). We introduce the stream function (Batchelor 1968, p. 78) defined as (u, w) = ( ∕ z, − ∕ x) . Taking the curl of Eq. (5) and assuming two-dimensional flow, we correlate the concentration gradient to the stream function via the Poisson equation: being = b 2 gΔ ∕(12 C s ) . The right-hand side of Eq. (6) is obtained from the experimental measurements and the problem is now formulated in terms of stream function, (x, z). The system is sketched in Fig. 5, with indication of the boundary conditions used for concentration (5a), velocity, and stream function (5b). At the top boundary, the concentration is constant [C(x, z = H) = Cs] and no-penetration condition is applied in z-direction [ w(x, z = H) = 0 ]. Equation (6) and definition of give: We assume no flux of solute ( C∕ x = 0 ) and no penetration ( u = 0 ) across the side walls, which give: for the left and right boundaries, respectively. Finally, at the bottom boundary, no-penetration condition ( w = 0 ) gives: From Eqs. (7)- (10), we conclude that is constant along the boundaries, and since the streamlines are impenetrable to fluid, all boundaries belong to the same streamline. We arbitrarily set the value of the stream function on the boundaries to zero, so that being Ω the boundary of the system. The Concentrationbased Velocity Reconstruction (CVR) proposed here consists of solving numerically Eq. (6), where the r.h.s. is determined experimentally and the boundary conditions applied are obtained from (7)-(11). To this aim, we employ a sixthorder nine-point compact finite-difference scheme (Nabavi et al. 2007), where velocity components and all the other spatial derivatives are estimated using a sixth-order finitedifference scheme. Algorithm validation with numerical results (Darcy flow) The CVR algorithm is initially validated with data obtained from numerical, two-dimensional Darcy simulations. To this aim, we proceed as follows: we generate a (numerical) database, consisting of a set of concentration and velocity fields. Then, we use these concentration fields as input for the CVR algorithm. Finally, the velocity fields reconstructed with CVR are compared with those obtained numerically. 
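A minimal numerical sketch of the Darcy-regime CVR step: solve the Poisson problem for the stream function with homogeneous Dirichlet conditions and differentiate to obtain the velocity components. A second-order five-point scheme and a sparse direct solve are used here instead of the paper's sixth-order compact scheme, and the sign of the right-hand side corresponds to z pointing upward with (u, w) = (∂ψ/∂z, −∂ψ/∂x).

```python
import numpy as np
from scipy.sparse import diags, kron, identity
from scipy.sparse.linalg import spsolve

def cvr_darcy(C, dx, dz, b, drho, Cs, mu=9.2e-4, g=9.81):
    """Reconstruct (u, w) from a concentration field C[iz, ix] under the Darcy model.

    Solves lap(psi) = gamma * dC/dx with psi = 0 on the boundary, then
    u = dpsi/dz and w = -dpsi/dx.
    """
    gamma = b**2 * g * drho / (12.0 * mu * Cs)
    nz, nx = C.shape
    rhs = gamma * np.gradient(C, dx, axis=1)

    # 1D second-derivative operators on interior nodes (Dirichlet psi = 0 on the boundary)
    def d2(n, h):
        return diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n - 2, n - 2)) / h**2

    lap = kron(identity(nx - 2), d2(nz, dz)) + kron(d2(nx, dx), identity(nz - 2))
    psi = np.zeros_like(C)
    psi[1:-1, 1:-1] = spsolve(lap.tocsr(),
                              rhs[1:-1, 1:-1].flatten(order="F")).reshape((nz - 2, nx - 2), order="F")

    u = np.gradient(psi, dz, axis=0)      # u = dpsi/dz
    w = -np.gradient(psi, dx, axis=1)     # w = -dpsi/dx
    return u, w, psi

# Minimal synthetic demo: a single "finger" of dense fluid
z, x = np.meshgrid(np.linspace(0, 0.05, 60), np.linspace(0, 0.1, 120), indexing="ij")
C = 46.0 * np.exp(-((x - 0.05) / 0.01)**2) * z / 0.05
u, w, psi = cvr_darcy(C, dx=x[0, 1] - x[0, 0], dz=z[1, 0] - z[0, 0], b=0.3e-3, drho=38.0, Cs=46.0)
```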
We will first describe the numerical model used for the simulations and then compare the results obtained in terms of velocity distributions. We assume that the flow field is incompressible and described by the Darcy law, i.e., it obeys the continuity (3) and the Darcy Eq. (5). In the absence of dispersion and assuming a constant diffusion coefficient, solute concentration is controlled by the advection-diffusion equation (Nield et al. 2013, p. 42): The idea behind this model is that the time scale of molecular diffusion is much smaller than the convective time scale In particular, = 0 on the boundaries is used to solve Eq. (6), based on the experimental configuration adopted. The reference frame (x, z) is also indicated, while stands for the unit vector normal to the boundary or, in other terms, the effect of inertia in negligible. This assumption is reasonable particularly in the instance of flows in narrow gaps. We consider a rectangular domain (height H = 50 mm and length L = 100 mm) . Since we want to validate the CVR algorithm in the instance of Darcy flow ( 2 Ra → 0) , we assumed a very narrow gap, b = 0.03 mm, to estimate the permeability value used in the two-dimensional simulation. We discretise Eqs. (5) and (12) numerically in space using second-order finite volume and sixth-order compact finite-difference schemes, respectively, in a uniform domain consisting of 400 × 200 nodes. Discretisation in time is achieved using an explicit third-order Runge-Kutta scheme. The code has been adapted from Hidalgo et al. (2012Hidalgo et al. ( , 2015, where further numerical details are available. The boundary conditions are the same shown in Fig. 5, and are here summarised. For the solute transport equation (12), concentration is fixed along the top boundary ( C = C s ) and no-flux condition is assumed at the side and lower boundaries ( C = 0 ). For the flow field, no penetration is applied along all the boundaries ( ⋅ = 0 ). The domain is initially filled with pure water, i.e., C(x, z < H, t = 0) = 0 , and characterised by a velocity = (u, w) = 0 . The concentration field is initially perturbed with a random noise of amplitude 10 −3 C s . We remark here that the initial perturbation influences only the onset of convection, whereas the late convective dynamics is independent of the perturbation amplitude (Slim 2014;De Paoli et al. 2017). A snapshot of the concentration field obtained numerically is reported in Fig. 6a, where the concentration distribution is shown together with the iso-contours of concentration. Numerical data are used for the validation as follows. The horizontal concentration gradient in Eq. (6) is computed from the numerical concentration field C(x, z). Following the algorithm described in Sect. 3.1, we obtain the stream function and reconstruct the velocity field. A comparison of the velocity fields obtained with numerical simulations and CVR is provided in Fig. 6b-c, where the fingers, identified by the black rectangle in Fig. 6a, are considered. We observe qualitatively that the flow field (black vectors) is similar in both Fig. 6b, c, and the flow structures are well captured by the CVR. The core of the finger is characterised by a strong downward velocity and, due to continuity, the flow in the region between two fingers has a remarkable upward velocity that supplies the upper layer with fresh fluid. A more quantitative validation is proposed in Fig. 
6d-g: the horizontal (u) and vertical (w) velocity components are measured along the solid ( z = 45 mm ) and the dashed lines in Fig. 6a ( z = 25 mm ), respectively, shown in Fig. 6d-e and 6f-g. We observe that, in correspondence of both values of z considered, the results obtained via CVR are in excellent agreement with the numerical data. The reconstructed velocity is compared with the numerical results for a long time range, and further results can be found in the electronic supplementary material available online (Movie 1). This validation refers to the case of Darcy flows 2 Ra → 0 , which represents an ideal situation that can be attained in the instance of infinitesimal gap thickness, b, and low-density differences, Δ . However, the dynamics in the case of Hele-Shaw flows is controlled by the effect of the gap-induced solute dispersion, and some corrections to the Darcy equation should be taken into account (see Appendix A and electronic supplementary material). Quantification of uncertainties We analyse here the uncertainty on the values of the velocity reconstructed. The uncertainty values are obtained from propagation of error analysis (Taylor 1997). For the velocity components reconstructed as in Eq. (5), the uncertainty is given by: where x i indicates the uncertainty on the quantity x i . We assume that the properties , Δ , C s , and g are estimated with constant relative uncertainty equal to 1% . We consider the accuracy on the gap thickness b = 10 m. All the uncertainties introduced are constant and independent of the value of concentration measured, but the uncertainty on the concentration itself, C , varies with the value of local concentration. In particular, since C = , we estimate the uncertainty on C as: The mass fraction is obtained from the transmitted light intensity ratio, I( )∕I(0) , as reported in Fig. 2b. We compute [I( )∕I(0)] = , where indicates the standard deviation of I( )∕I(0) , measured over ten images in correspondence of each mass fraction. Then, the uncertainty is estimated from [I( )∕I(0)] (Fig. 2b). Although the uncertainty is very limited for high-light intensities, we observed that the corresponding relative uncertainty is independent of I( )∕I(0) , and it is ∕ ≈ 0.08 . We approximate the uncertainty on the density measurement as ∕ ≈ (0)∕ (0) ≤ 0.02% . Therefore, the contribution of the uncertainty of the mass fraction to the uncertainty of the concentration measurement is dominant, and we assume C∕C ≈ ∕ . As a result, due to Eq. (13), the uncertainty on the velocity measurement for a cell of thickness b = 0.30 mm is u∕u = 0.1. Example application: tip splitting dynamics The mechanism by which the tip of a finger separates in two branches is defined as tip splitting (Malhotra et al. 2015). This phenomenon is currently subject of interest for the implications which it can bear in porous media flows (Amooie et al. 2018), e.g., the enhancement of the dissolution efficiency in buoyancy-driven flows (Jha et al. 2011). In the context of miscible fluids, the first detailed study on this phenomenon dates back to the seminal work of Tan and Homsy (1988). With the aid of numerical simulations, they have shown that once a finger is sufficiently large, the concentration gradient across the finger front becomes steep, as a result of the stretching produced by the flow. In turn, the tip of the finger becomes unstable and splits. When the finger splits into two parts and both new fingers continue to grow, the mechanism is defined as even tip splitting. 
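Returning to the uncertainty analysis above: assuming the reconstructed velocity scales as b² g Δρ C/(12 μ C_s), so that the relative uncertainties combine in quadrature with the gap thickness entering squared, the value of about 0.1 quoted for b = 0.30 mm is recovered.

```python
import numpy as np

def velocity_relative_uncertainty(b, db=10e-6, rel_props=0.01, rel_C=0.08):
    """Propagation-of-error estimate for the reconstructed velocity.

    Contributions: gap thickness (enters squared, hence the factor 2), the four fluid
    properties mu, drho, Cs, g at 1% each, and the concentration at ~8% (from the mass
    fraction). The quadratic combination is an assumed form of the paper's Eq. (13)."""
    terms = [2.0 * db / b] + [rel_props] * 4 + [rel_C]
    return np.sqrt(sum(t**2 for t in terms))

print(velocity_relative_uncertainty(b=0.30e-3))   # ~0.1 for the 0.30 mm cell
```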
When after the splitting, one finger shields the growth of the other one, the mechanism is defined as uneven tip splitting (Malhotra et al. 2015). More complicated scenarios occur when the branches are more than two. The tip splitting phenomenon is the result of complex non-linear interactions between the fingers, which makes predictions on its evolution hard to obtain. As a result, a variety of different flow configurations has been studied in the last decades. Numerical works (De Wit and Homsy 1997;De Wit 2004) report that variations in the permeability of the medium and domain size have an influence on the splitting process of the fingers. The presence of reactive components in the mixture will also foster the tip splitting process, via a mechanisms called chemically induced tip splitting (De Wit and Homsy 1999). A similar behaviour is observed when the permeability distribution of the medium is anisotropic (Kawaguchi et al. 2001). The situation becomes even more complicated when three-dimensional flow dynamics are taken into account. In the numerical work of Oliveira and Meiburg (2011), the full three-dimensional flow in a Hele-Shaw cell is investigated in detail for the first time. They observed that fingers may split also longitudinally, and not only at their tip, giving rise to a different phenomenon defined as inner splitting. This result is also supported by the previous experimental work of Wooding (1969). Recently, the first three-dimensional experimental observation of tip splitting in porous media has been reported by Suekane et al. (2017). The authors preformed concentration measurements in a cylindrical porous medium, and observed that the splitting mechanism is similar to that observed in the two-dimensional case. However, an accurate experimental characterisation of the flow around the fingers during the tip splitting is still missing and we aim precisely at this gap. In the following, we will first investigate the tip splitting phenomenon via detailed analyses of the concentration and vorticity fields (Sect. 4.1). Then, using this information, we briefly present two different splitting dynamics observed in our experiments (Sects. 4.2-4.3). Vortex formation and tip splitting We consider an experiment characterised by b = 0.30 mm . The concentration field measured experimentally is used to reconstruct the velocity field via CVR. In the present configuration, since b = 0.30 mm , non-Darcy corrections have been applied (see Appendix A). The corresponding vorticity field, defined as (x, z) = ∇ × , is then obtained and used for the analysis of the flow dynamics in correspondence of the finger tip. The evolution of the concentration field, C, and out-of-plane vorticity component, Ω y , are shown in Fig. 7, in top and bottom panels, respectively. The vorticity contours are coloured according to the value of the vorticity: positive (negative) values of vorticity correspond to blue (red) contours and are associated with vortices rotating in clockwise (counter-clockwise) direction. We observe that the evolution of the fingers is characterised by a complex dynamics. Initially, the finger spreads and the width of the structure increases (Fig. 7a, f). This process is driven by the flow field: The fluid is redirected away from the axis of the finger. As a result, the concentration gradient at the finger tip becomes steeper, preparing the ground for the instability to take place. Two pairs of counter-rotating vortices have the effect of stretching the structure. 
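The vorticity field used in this analysis, Ω = ∇ × u, reduces to its out-of-plane component for the gap-averaged two-dimensional flow. A minimal sketch from the CVR output follows; whether positive values correspond to clockwise or counter-clockwise rotation depends on the axis orientation chosen.

```python
import numpy as np

def out_of_plane_vorticity(u, w, dx, dz):
    """Out-of-plane vorticity Omega_y = du/dz - dw/dx from the reconstructed
    gap-averaged velocity components u(z, x) and w(z, x)."""
    return np.gradient(u, dz, axis=0) - np.gradient(w, dx, axis=1)
```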
Moreover, small deviations from the vertical direction produce a symmetry breaking in the finger structure, making the finger unstable and promoting, eventually, the bifurcation of the tip (Fig. 7b, g). The two counter rotating vortices coexist for about 10 s (Fig. 7c, h to d, i), during which they grow in a nearly symmetric fashion. Finally, approximately 30 s after the tip splitting has started (Fig. 7e, j), the right-hand branch detaches from the main body due to a slightly unbalanced growth of the two fingers tips. We conclude that the tip splitting observed here is controlled by the local velocity field. This observation is the first accurate simultaneous measurement of velocity and concentration field and is in excellent agreement with the previous numerical observations (Chen and Meiburg 1998). Uneven tip splitting We consider now an experiment again characterised by a cell gap b = 0.30 mm , but we focused on a different phenomenon consisting of the differential growth of the finger branches. An example is shown in Fig. 8. The finger is initially symmetric and grows downward [8a,e]. However, when the splitting takes place [8b,f], a pair of counter rotating vortices forms at the finger tip. This time, their growth is strongly unbalanced from the beginning, and within few seconds, the left-hand finger is considerably larger than the right-hand counterpart [8c,g]. The amount of solute present in the two branches is also unbalanced: The larger finger mixes within surrounding fluid more slowly compared to the right-hand side branch. In this case, the combined action of flow and convective dissolution leads to the formation of a dominant finger that prevents the growth of the other branch. This phenomenon is also known as shielding (Tan and Homsy 1988). Post-merging tip splitting Finally, we analyse the most recurrent phenomenon which we observed in our experiments, consisting of finger merging and subsequent separation. We consider the system shown in Fig. 9. Initially, two fingers merge and form the large finger reported in Fig. 9a, e. The merging process took place only at the boundaries of the fingers: the footprint of the core of the two initial fingers is still apparent and represented by the two high-concentration regions [dark distinct parts in Fig. 9b, f]. This heterogeneity in the solute distribution triggers the subsequent splitting (Fig. 9c, g) which, in this case, is nearly symmetric (even splitting). However, we observed that in our experiments, the uneven splitting is more likely to happen. It is also interesting to note the presence of a stagnation region in correspondence of the finger splitting point (located approximately at (x, z) = (30, 125) mm). Conclusions We proposed a new method for the simultaneous measurement of concentration and velocity fields of Newtonian fluids in convective Hele-Shaw flows. We developed an algorithm, defined here Concentration-based Velocity Reconstruction (CVR), to obtain the gap-averaged velocity distribution from one single snapshot of the concentration field. An example is shown in Fig. 10, where the input data, consisting of a high-resolution experimental image (Fig. 10a), are used to reconstruct the concentration field (Fig. 10b). The flow stream function is computed (Fig. 10b) via CVR, and the horizontal and vertical gap-averaged velocity components are reconstructed (Fig. 10c-d). The system considered is controlled by the complex interplay of convection and diffusion. 
If solute diffusion across the gap dominates over vertical advection (e.g., when the cell gap is infinitesimal), the flow can be considered as two-dimensional and described by a Darcy model. When the cell gap is increased, the velocity gradient in wall-normal direction promotes solute dispersion (Taylor dispersion), making the flow to depart from the ideal two-dimensional Darcy behaviour. However, when the strength of vertical convection is equivalent to that of transverse diffusion, the gap-averaged flow can still be considered two-dimensional and it is defined as a Hele-Shaw flow (Letelier et al. 2019;De Paoli et al. 2020). The CVR algorithm has been validated with two-dimensional numerical simulations and with particle tracking velocimetry measurements (further information is available in the electronic supplementary material), in the instance of Darcy and the Hele-Shaw flows, respectively. When the gap width is large enough to allow the formation of more than one finger in transverse direction, the flow is three-dimensional and the CVR algorithm cannot be applied. We employed the CVR algorithm to analyse the phenomenon of tip splitting in convective Hele-Shaw flows. This well-known problem received renovated attention due to the beneficial effects which it can bear in mixing in porous flows. Via high-resolution reconstruction of the flow velocity around the fingers, we are able to analyse experimentally, for the first time with this level of detail, the flow configuration and dynamics during the splitting process. In this work, we applied the CVR to a buoyancy-driven Hele-Shaw flow in one-sided configuration. However, this method can be easily adapted to different systems with minor modifications (e.g., Rayleigh-Taylor flows in confined Hele-Shaw geometries, De Paoli et al. 2019). A further advantage of the present method consists of the possibility of performing velocity measurements at high resolution and on the whole domain, including the finger cores, where solute concentration is high and other methods based on particle tracking may fail. Finally, the absence of laser sources makes current approach a cost-effective and safe alternative to classical PIV/PTV methods. The need of a single camera and the possibility of relying on one picture for a detailed reconstruction of the solute concentration and velocity fields, reducing costs and complexity of the experimental facilities, represent further advantages of the reconstruction method proposed. where Sc = ∕ D is the Schmidt number, standing for the ratio of momentum to mass diffusivity, and ẑ is the unit vector aligned with the vertical axis, z. The influence of inertia has been investigated in this specific configuration by De Paoli et al. (2020), who observed that it has a minor role and, therefore, the corresponding terms in the momentum equation can be neglected. Moreover, we have here that the (15) 2 Ra Sc Schmidt number is large, Sc ∼ O(10 3 ) . As a result of these assumptions, Eq. (15) reduces to: By taking the curl of Eq. (16), the momentum equation can be written in terms of stream function as: where = 10 ∕b 2 and = ∕(21D) . Numerical discretisation adopted for the Darcy flow [Eq. (6)] is used to solve Eq. (17). We remark here that Eq. (17) consists of a Helmholtz problem and the numerical solution is computationally more demanding than the corresponding Darcy problem. Moreover, dueto the high order of the derivatives in Eq. (17), the solution is more sensitive to the input data. 
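In practice the choice between the Darcy solve of Eq. (6) and the corrected Hele-Shaw solve of Eq. (17) can be automated from the governing parameters. The thresholds below are indicative only and are taken from the regime discussion (ε²Ra ≤ 0.01 Darcy-like, ε²Ra ≥ 1 three-dimensional), with Ra and ε as defined in Sect. 3; the (Ra, ε) pairs in the example are hypothetical.

```python
def select_model(Ra, eps):
    """Pick the CVR momentum model from the combined parameter eps^2 * Ra (indicative thresholds)."""
    eps2Ra = eps**2 * Ra
    if eps2Ra <= 0.01:
        return eps2Ra, "Darcy model (Eq. (6))"
    if eps2Ra < 1.0:
        return eps2Ra, "Hele-Shaw model with dispersion corrections (Eq. (17))"
    return eps2Ra, "three-dimensional: gap-averaged CVR not applicable"

for Ra, eps in [(5e3, 1e-3), (2e4, 4e-3), (1e5, 5e-3)]:
    print(select_model(Ra, eps))
```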
The same boundary conditions considered for the Darcy model and discussed in Sect. 3.1 are applied here. In the instance of Hele-Shaw flows, the CVR has been validated against Particle Tracking Velocimetry (PTV) measurements, showing both quantitative and qualitative agreement. Further information is available in the electronic supplementary material. We analysed the results of the CVR algorithm in the instance of Hele-Shaw flows. In particular, we compared the predictions of the Darcy model, Eq. (4), with the Hele-Shaw model, Eq. (16), in different experimental conditions. To this aim, we used the database of De Paoli et al. (2020), consisting of 18 experiments split over three different gap thicknesses ( b = 0.15 , 0.30 and 0.50 mm). Each test has been repeated three times, and the results reported represent the average of these three experiments. The flow field is reconstructed with the CVR algorithm using the Darcy model and the Hele-Shaw model. To evaluate the difference in the predictions given by the Darcy and the Hele-Shaw model, following Kähler et al. (2016), we use the Pearson's correlation coefficient, r. We estimate, in each sample corresponding to time t, the space-average magnitude of the local fluid velocity: (17) [(i, j)] 2 the local fluid velocity magnitude, i, j the pixel indices, and N x , N z the image dimensions. Then, we compute the time-averaged correlation coefficient r between the velocity magnitude reconstructed in the Darcy (D) and in the Hele-Shaw (HS) cases as: where T is the time at which the domain saturation occurs (see electronic supplementary material for further details). Results are shown in Fig. 11 in terms of r( 2 Ra) . When 2 Ra ≤ 0.01 , the value of the correlation coefficient is close to 1, suggesting that the velocities obtained with the two models are strongly correlated and, therefore, we are in the presence of a Darcy flow. For larger values of 2 Ra , the value of the correlation coefficient r is lower, but still positive. This means that the velocities, V D and V HS , lie on the same side of their respective means, V D and V HS , but are less correlated compared to the cases when 2 Ra ≤ 0.01 . The correlation factor diminishes when 2 Ra is further increased: The prediction given by the Darcy model is less and less accurate and vertical advection has a more dominant role. Fig. 11 Correlation factor r between the Darcy and the Hele-Shaw model, averaged from the beginning to the shutdown phase, for 18 experimental data (De Paoli et al. 2020). When 2 Ra ≤ 0.01 , the results given by the CVR with the Darcy and the Hele-Shaw model are in excellent agreement. For larger values of 2 Ra , the correlation factor diminishes, indicating that the prediction given by the Darcy model will be less and less accurate when the role of convection becomes dominant
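The time-averaged Pearson correlation used to compare the Darcy and Hele-Shaw reconstructions can be computed as sketched below; the averaging window (up to the saturation time T) follows the description above, and the snapshot lists are assumed to hold the velocity-magnitude fields of the two models at matching times.

```python
import numpy as np

def pearson_r(V_D, V_HS):
    """Pearson correlation coefficient between two velocity-magnitude fields."""
    a = V_D.ravel() - V_D.mean()
    b = V_HS.ravel() - V_HS.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def time_averaged_r(snapshots_D, snapshots_HS):
    """Average of the per-snapshot correlation over the snapshots up to saturation."""
    return np.mean([pearson_r(d, h) for d, h in zip(snapshots_D, snapshots_HS)])
```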
9,431.8
2020-08-10T00:00:00.000
[ "Physics" ]
High efficient metasurface quarter-wave plate with wavefront engineering Metasurfaces with local phase tuning by subwavelength elements promise unprecedented possibilities for ultra-thin and multifunctional optical devices, in which geometric phase design is widely used due to its resonant-free and large tolerance in fabrications. By arranging the orientations of anisotropic nano-antennas, the geometric phase-based metasurfaces can convert the incident spin light to its orthogonal state, and enable flexible wavefront engineering together with the function of a half-wave plate. Here, by incorporating the propagation phase, we realize another important optical device of quarter-wave plate together with the wavefront engineering as well, which is implemented by controlling both the cross- and co-polarized light simultaneously with a singlet metasurface. Highly efficient conversion of the spin light to a variety of linearly polarized light are obtained for meta-holograms, metalens focusing and imaging in blue light region. Our work provides a new strategy for efficient metasurfaces with both phase and polarization control, and enriches the functionalities of metasurface devices for wider application scenarios. functionalities with great potential in applications are demonstrated, such as metalens [11][12][13][14][15], meta-holograms, [16][17][18] and polarizers, [19][20][21] to name a few. Among them, simultaneous control of the polarization and phase plays a vital role and has already aroused numerous researches to explore its full potential. [22][23][24][25][26] Attempts such as utilizing two plasmonic nanopillars per period with different distance and orientation angle only work for oblique incidence. [27] Another method is exploiting the superposition of the two output circular polarization (CP) beams through two sets of nanopillars with different dimensions and starting orientation angles under linear polarization (LP) incidence. [28] Both of them are based on super unit cells with spatial superposition of inclusions, which would result in lower efficiency, inferior image quality, and lower space-bandwidth product. Recently, it is of great interest to combine the propagation phase and geometric phase (i.e., Pancharatnam-Berry (PB) phase) to realize full control of the polarization and phase on a single subwavelength unit cell. [29][30][31] However, they are usually focused on the polarization multiplexing to enhance the information capability, e.g., differe nt functionalities are encoded with different polarization states. In fact, incorporating the same wavefront engineering with different polarization states to realize specific polarizatio n conversion is quite useful but remains rarely explored. As a basic optical component, quarter waveplate (QWP) (normally convert the CP light to LP light and vice versa) plays an important role in light manipulation. [32,33] It would be highly desirable to find ways to implement QWPs on a single metasurface. Here, we provide a straightforward design principle for metasurfaces (e.g. meta-hologra ms and metalens) to achieve the QWP functionality by utilizing both the co-and cross-polarized spin light. By combing the propagation phase with PB phase, we first demonstrate the modulate capacity with spin-selected holographic images (the commonly unmodulated co-polarized light and cross-polarized light) with SiNx metasurfaces in the visible spectrum. 
We further realize the QWPs with wave manipulation abilities by controlling the superposition of two output CP light. Other elliptical polarizations with designed wavefront are also produced experimenta lly. The polarization reconstruction and wave manipulation based on two orthogonal CP bases certainly expands the practical application possibilities and could trigger versatile functio n integrations for advanced compact systems. The Design Principles PB phase-based metasurfaces can achieve a full phase control by adjusting the orientation angle of the meta-atoms with identical geometry. [34] The cross-polarized light will have extra ∓i2σθ phase modulation under normal CP light incidence, where θ is the rotation angle from the x-axis, and σ indicates the handness of the CP light. However, the co-polarized scattered light is usually ignored, which unavoidably results in background noises or a dazzling spot in the hologram image. [28,31,35,36] In order to surmount this restriction, we propose to modulate the co-and cross-polarized light independently by combining the propagation phase and PB phase. The total Jones matrix describing the relation between the input electric field (Ein) and the output electric field (Eout) in a circular base can be written as: where Rc(θ) is the rotation matrix, ϕRL, ϕLR, ϕRR and ϕLL is the propagation phase. In this work, we consider the widely used rectangular nanopillars, so the phase shift ϕRR=ϕLL and ϕRL=ϕLR due to the mirror symmetry. In this case, if the incident wave is right circularly-polarized (RCP), the output electric field Eout becomes: Phase modulation capability As a proof of concept, we consider the Silicon nitride (SiNx) metasurfaces consisting of nanopillars with different shapes covered on the fused-silica substrate. SiNx was chosen because of its low loss in visible light and compatibility with CMOS processes. The design wavelength is 470 nm and the metasurface works in a transmitted way. As illustrated in Figure 1(a), the unit cell period is chosen as 300 nm satisfying the Nyquist theory. [37] The nanopillar height is set as 800 nm to provide phase change covering 0~2π. To verify the phase modulation ability, we first demonstrated an independent spin polarization hologram metasurface with RCP light incidence. The transmitted RCP light is modulated to produce a far-field hologram image with "NJU" based on propagation phase (ϕRR) and LCP light is manipulated to present a representative building of Nanjing University (i.e. a 600-years building named Bei-Da) in the far field based on propagation phase (ϕRR) and PB phase (ϕRL-2θ). The optimized phase profiles are based on Gerchberg-Saxton algorithm. [38][39][40] The metasurface with a footprint of 150 μm × 150 μm is fabricated using a conventio na l nanofabrication process (see Experimental Section) and its scanning electron microscopy (SEM) image is shown in Figure 2 Meta-holograms with QWP effect After verifying the phase modulation capability, we further demonstrate the polarization state manipulation. The output RCP and LCP light are designed with the same function (e.g. the same focal length or the same hologram image) by modulating the propagation phase ϕRR(x,y) and the compensatory PB phase -2θ(x,y). To realize the QWP functionality, we choose the same nanopillars as marked in Figure 2(b) to obtain equal aR and aL. The reference phase φ (related to the extra rotation angle θ0) is modulated to get different LP light. 
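A minimal sketch of the superposition underlying the meta-QWP: under RCP incidence the output is assumed to be the co-polarised term a_R exp(i φ_RR) plus the cross-polarised term a_L exp(i(φ_RL − 2θ)), so equal amplitudes with a suitable reference phase give linear output whose orientation follows the extra rotation. The circular-basis Jones vectors and Stokes-parameter signs follow one common convention, and the amplitudes and phases used are placeholders.

```python
import numpy as np

R = np.array([1.0, -1.0j]) / np.sqrt(2)   # RCP unit Jones vector (linear x/y basis, one convention)
L = np.array([1.0,  1.0j]) / np.sqrt(2)   # LCP

def output_state(aR, aL, phi_RR, phi_RL, theta):
    """Output field of a nanopillar rotated by theta under RCP incidence:
    co-polarized term aR*exp(i*phi_RR) plus cross-polarized term aL*exp(i*(phi_RL - 2*theta))."""
    return aR * np.exp(1j * phi_RR) * R + aL * np.exp(1j * (phi_RL - 2.0 * theta)) * L

def stokes_angles(E):
    """Orientation and ellipticity angles of the polarization ellipse from the Stokes parameters."""
    Ex, Ey = E
    S0 = abs(Ex)**2 + abs(Ey)**2
    S1 = abs(Ex)**2 - abs(Ey)**2
    S2 = 2.0 * (Ex * np.conj(Ey)).real
    S3 = -2.0 * (Ex * np.conj(Ey)).imag
    psi = 0.5 * np.arctan2(S2, S1)                       # ellipse orientation
    chi = 0.5 * np.arcsin(np.clip(S3 / S0, -1, 1))       # ellipticity (0 for linear light)
    return psi, chi

# Equal aR and aL give linear output; the extra rotation / reference phase sets its orientation
for theta0 in (0.0, np.pi / 8, np.pi / 4):
    print(np.degrees(stokes_angles(output_state(1.0, 1.0, 0.0, 0.0, theta0))))
```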
As shown in Figure 3(a), the red dots on the Poincaré sphere mark the designed output linear polarization state under RCP incidence. The SEM image of the metasurface with a footprint of 150 μm × 150 μm for xpolarized (dot A) hologram is shown in Figure 3(b). The insert picture is the enlarged image. The directly (without analyzer) measured holographic image is shown in Figure 3(c). Due to the utilization of both the co-and cross-polarized light, the middle zero spot is nearly negligib le, indicating a very high diffraction efficiency (98%). The zero spot is difficult to completely disappear because of k-space imaging of the light passing through the gap between nanopillars. To verify its polarization properties, we add a polarizer to detect the relative intensity profile. When the polarizer is set as 0° (the transmission axis is parallel to the x-axis), the hologram intensity profile (see Figure 3(d)) is nearly the same as the measured image without analyzer (Figure 3(c)). When the polarizer is rotated as 45° and 90°, the intensity of the output hologram image change from attenuation to disappearance that surely verifies the linear polarization (xpolarized) properties (see Metalens with QWP effect To further demonstrate the manipulation capability, we design a meta-QWP for focusing and imaging. As illustrated in Figure 4(a), the designed single-layer metasurface can act as a QWP and a lens at the same time. The phase profile follows [41]   where the focal length f=100 μm. Figure 4(b) shows the SEM image of the fabricated metalens (D=150 μm, NA=0.6) with x-polarization output under RCP incidence (the detailed analysis are provided in Supplementary Figure S2). As demonstrated above, other LP output can also be obtained as long as rotating the metasurface. The directly measured focus spot is shown in Metalens with elliptic polarizations generation In addition, this method can be scaled to other polarization states, such as elliptic polarizations E and F on the Poincaré sphere (see Figure 5(a)). Specifically, for the polarizatio n state E, aR/aL=1/2 and φ=0, the amplitude distribution of the selected nanopillars are shown in Figure 5(f) and the focusing efficiency is calculated as 87%. Figure 5(g) illustrates the image of Sector Star Target which resolution at the central circle is 8.7 μm. SEM images of these samples can be found in Figure S6. Conclusion In conclusion, we have demonstrated a straightforward method for realizing phase Experimental Section Numerical simulations: The simulated material parameters of SiNx is adopted from the experimental measurement. The wavelength is fixed at 470 nm, and the refractive index is Supporting Information High efficient metasurface quarter-wave plate with wavefront engineering Chen Chen 1,2
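The metalens phase profile used above (f = 100 μm at λ = 470 nm on the 300 nm lattice) can be sampled as in the sketch below, assuming the standard hyperbolic profile φ(x, y) = −(2π/λ)(sqrt(x² + y² + f²) − f). The uniform φ_RL and reference phase φ_ref used to derive the rotation-angle map are placeholders, since in the actual design each site also selects a nanopillar whose propagation phase matches the target from a simulated library.

```python
import numpy as np

lam, f, period = 470e-9, 100e-6, 300e-9        # wavelength, focal length, lattice period
n = round(150e-6 / period)                      # 150 um footprint
x = (np.arange(n) - n / 2) * period
X, Y = np.meshgrid(x, x)

# Hyperbolic lens phase, wrapped to [0, 2*pi)
phi_lens = np.mod(-2.0 * np.pi / lam * (np.sqrt(X**2 + Y**2 + f**2) - f), 2.0 * np.pi)

# Both output channels must carry phi_lens:
#   co-polarized  : pick a pillar whose propagation phase phi_RR matches phi_lens
#   cross-polarized: phi_RL - 2*theta = phi_lens + phi_ref, so the in-plane rotation is
phi_RL, phi_ref = 0.0, np.pi / 2                # hypothetical pillar phase and reference phase
theta = 0.5 * np.mod(phi_RL - phi_lens - phi_ref, 2.0 * np.pi)
print(phi_lens.shape, theta.min(), theta.max())
```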
2,027.2
2020-11-10T00:00:00.000
[ "Physics" ]
Seismic Full Waveform Inversion Accelerated by Overlapping Data Input and Computation Seismic full waveform inversion (FWI) is a powerful technology to obtain high-precision and high-resolution images of subsurface structures. However, FWI is a data-intensive algorithm that needs to read extensive seismic data from disks, which significantly affects its performance. We propose a portable parallel framework to improve FWI by overlapping data input and computation (ODIC). The framework is based on POSIX threads (Pthreads), a standard thread API library, and creates a parent thread and a child thread in the FWI process; the former performs computation and the latter reads data from disks, both running simultaneously. This framework has two attractive features. First, it is broadly applicable; it can run on almost any computer, from a laptop to a supercomputer. Second, it is easy to implement; it can be readily applied to existing FWI programs. A 3D FWI example shows that the framework speeds up FWI considerably. Introduction Seismic full waveform inversion (FWI) can be used to obtain sufficiently accurate information from seismic recordings to reconstruct velocity models of the subsurface, yielding highly accurate, high-resolution images of subsurface structures (Rao & Wang, 2013; Rao et al., 2016; Virieux & Operto, 2009; Wang & Rao, 2009). However, this often requires reading massive amounts of data from disks, which hinders its wide application, especially in 3D cases with TBs or PBs of data. Therefore, it is necessary to develop parallel frameworks that work well for FWI. For FWI, efficiently accessing an enormous amount of data has always been a major challenge. Distributed storage and computing is one of the most effective technologies (Arrowsmith et al., 2022). Distributed file systems, such as the Google File System (GFS) (Ghemawat et al., 2003) and the Hadoop Distributed File System (HDFS) (Shvachko et al., 2010), and parallel computing frameworks, such as MapReduce (Dean & Ghemawat, 2008), Hadoop (Shvachko et al., 2010) and Spark (Zaharia et al., 2010), enable high-throughput processing of TB or PB data. Using Hadoop and HDFS, Addair et al. (2014) implemented a global-scale cross-correlation analysis of a 1 TB seismic waveform dataset, achieved an average data processing rate of 16.7 GB/min, and accelerated the processing by 19 times. Magana-Zook et al. (2016) extended this experiment to analyze a dataset of over 40 TB using Spark and accelerated the analysis by 15 times. However, distributed storage and computing depend on high-performance computers such as clusters and supercomputers, limiting their use on traditional computers, e.g. laptops and desktops. Moreover, migrating large amounts of data from conventional storage to distributed storage and converting existing code from C/C++ or other languages to Hadoop or Spark are quite difficult. This paper is primarily about developing a novel and portable parallel framework that works well for FWI. The CUDA stream technique overlaps computation on the GPU and data transfer between the CPU and GPU (Cheng et al., 2014). Inspired by this technology, we propose a framework that parallelizes computations and data accesses in FWI. We then apply the proposed framework to a shot-encoded FWI (Krebs et al., 2009). We will show that the framework is widely applicable and simple and can significantly speed up FWI.
Recap of POSIX Threads (Pthreads) In modern Unix/Linux operating systems, a process is the instance of a program executed by one or more threads (Silberschatz et al., 2004), and a thread is the smallest sequence of programmed instructions that can be managed independently by a scheduler (Lamport, 1979). A thread can thus be considered a component of a process, and several different threads in a given process share resources, such as memory, and can be executed concurrently via multithreading technologies. Pthreads is a parallel execution model that allows a program to control several different workflows which can be overlapped in time. Each flow is called a thread, and the creation and control of these flows is done by calling the Pthreads APIs specified by the POSIX standard (Butenhof, 1997). As one of the basic API libraries in the Linux operating system, Pthreads can run on almost all computers, namely laptops, desktops, workstations, servers, clusters and supercomputers. Moreover, it is a portable library developed in the C language, so it can be easily applied to existing FWI programs. For a serial program (Fig. 1), a parent thread and a child thread are created by Pthreads. The first performs computations (subfunction1) and the second reads data from the disks, both executing simultaneously. This parallelizes the serial program and thus improves its performance. Full Waveform Inversion Improved by Pthreads In FWI, the reconstruction of subsurface velocity models is implemented iteratively. The synthetic seismic response based on the estimated models increasingly matches the observed field data. Therefore, the objective function is generally defined in terms of the data misfit as J(m) = (1/2) ||d_cal(m) − d_obs||², where m is the model to be inverted, and d_obs and d_cal are vectors of the observed data and the synthetic data, respectively. The objective function is minimized by iteratively updating the model. The model updating is described as m_{k+1} = m_k − α_k B_k g_k (Wang, 2017), where k is the number of iterations, α_k is the optimal step length determined by a line search method, g_k = ∇J(m_k) is the gradient vector determined by cross-correlating theoretical and back-propagated wavefields, and B_k is an approximate inverse Hessian matrix. The model updating follows the negative direction of the gradient vector (Wang, 2017). In an L-BFGS method, B_k g_k can be calculated by a recursive algorithm (Nocedal & Wright, 2006; Rao & Wang, 2017). In a shot-encoded FWI (Fig. 2), which is performed serially, one inversion iteration can be divided into five steps: (1) reading and encoding the data from hard disks (RED); (2) calculation of the theoretical wavefield of the initial model estimate (CTW); and steps (3)-(5), which complete the remainder of the iteration. Of these steps, step (1) and step (2) are independent and can therefore be carried out concurrently, while the others may only be implemented after the first two steps have been completed. In a single inversion iteration (Fig. 3a), a child thread and a parent thread are created in an FWI process. The former is used to implement step (1) and the latter to implement step (2), both of which are executed simultaneously. This allows us to parallelize the computation and data input to improve FWI.
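To make the notation above concrete, here is a minimal sketch of the least-squares misfit and the gradient-style model update just written out. The forward-modelled data, gradient and inverse-Hessian approximation are placeholders, not the paper's actual wave-propagation code.

```python
import numpy as np

def misfit(d_cal, d_obs):
    """Least-squares data misfit J(m) = 0.5 * ||d_cal(m) - d_obs||^2."""
    residual = d_cal - d_obs
    return 0.5 * np.dot(residual, residual)

def update_model(m_k, grad_k, inv_hessian_k, step_k):
    """One FWI iteration: m_{k+1} = m_k - alpha_k * B_k * g_k."""
    return m_k - step_k * inv_hessian_k @ grad_k

# Toy example with placeholder quantities (no wave-equation solver involved):
m = np.zeros(4)                       # current model estimate
g = np.array([0.3, -0.1, 0.2, 0.05])  # gradient from adjoint cross-correlation (assumed)
B = np.eye(4)                         # identity stands in for the L-BFGS inverse Hessian
m_next = update_model(m, g, B, step_k=0.5)
print(m_next)
```

In practice B_k is never formed explicitly; the L-BFGS two-loop recursion applies it to g_k directly, which is what the recursive algorithm cited above refers to.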
In the two adjacent inversion iterations (Fig. 3b), step (1) of the next iteration is independent of steps (3)-(5) of the current iteration, and step (1) and step (2) are also independent in the next iteration. Thus, when step (1) is completed in the current iteration, a new child thread for step (1) of the next iteration can be started immediately and executed simultaneously with steps (3)-(5) of the current iteration and step (2) of the next iteration. This further parallelizes data input and calculation. If the time to execute step (1) equals the time to execute steps (2)-(5), FWI reaches the maximum speed-up of two times. Effectiveness of the Parallel Framework To test the effectiveness of the proposed framework, we apply it to a 3D FWI. The SEG/EAGE Overthrust model (Fig. 4a) is used as the true velocity model. The size of the model is 8 × 8 × 1.86 km³. This velocity model is discretized into 401 × 401 × 94 grid points with a cell size of 20 × 20 × 20 m³. We set a Ricker wavelet with a peak frequency of 15 Hz as the source signature and generate synthetic shot-gathers from 961 shots located at the surface with a shot interval of 240 × 240 m². Each shot-gather is composed of the traces from 34,596 receivers with a trace interval of 40 × 40 m². The total volume of the entire dataset is about 751 GB. All individual shots are combined, according to a shot-encoding method (Krebs et al., 2009), to form a super shot-gather, which is used as the input for FWI. We use a multi-scale inversion strategy (Ravaut et al., 2004) to implement the shot-encoded FWI and split the super shot-gather by bandpass filtering into three frequency bands: 0.2-6, 6-18 and 18-30 Hz. A smoothed Overthrust model is used as the initial estimate for the inversion of the first frequency band. Then, the inverted model of the lower frequency band is used as the initial estimate for the inversion of the next higher frequency band; 100 iterations are performed in each inversion segment, and the final inversion results (Figs. 4b, 5c, 6c) are obtained after 3 × 100 iterations. Figure 7 compares velocity slices of the true model, the initial model and the inverted model (at depths of 0.4 km, 0.6 km and 0.8 km). From these inversion results, we can see that the FWI implementation accelerated by the proposed parallel framework can reconstruct the Overthrust model stably and reliably. The 3D FWI is implemented on a single-node server, and Table 1 lists the main computer configuration. To evaluate the performance of the proposed framework, we give the computation time for 100 iterations in the last inversion segment (18-30 Hz). Figure 8 shows the computation timelines of the improved shot-encoded FWI and the traditional version. The two timelines (Fig. 8a) have almost the same values for some iterations because the source-encoding codes are kept fixed for five iterations at a time, so no new data need to be read and encoded in those iterations. It is obvious that the new parallel framework can speed up FWI considerably. Table 2 gives the computation time of 80 iterations in Fig. 8b. It should be noted that, before applying the proposed framework, our FWI had already been improved by a heterogeneous parallel scheme (MPI + CUDA), achieving a speed-up of about 20 times, and a shot-encoding method had reduced the number of waveform simulations in FWI from 3n (where n is the total number of shots) to 3. Building on these improvements, our framework further accelerates FWI and provides an additional speed-up of 1.55 times.
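The following is a minimal Python sketch of the cross-iteration overlap described above (the paper's framework itself is written in C with Pthreads); the reader and compute functions are illustrative placeholders, and the bounded queue plays the role of the hand-off between the child and parent threads.

```python
import threading
import queue

def read_and_encode(iteration):
    """Placeholder for step (1): read shot data from disk and encode it."""
    return f"encoded data for iteration {iteration}"

def compute_iteration(data):
    """Placeholder for steps (2)-(5): wavefield modelling, gradient, model update."""
    return f"model updated using {data}"

def fwi_with_odic(n_iterations):
    prefetched = queue.Queue(maxsize=1)

    # Child thread: keep reading the *next* iteration's data while the
    # parent thread is still computing the current one.
    def reader():
        for it in range(n_iterations):
            prefetched.put(read_and_encode(it))

    child = threading.Thread(target=reader)
    child.start()
    for it in range(n_iterations):
        data = prefetched.get()   # waits only if reading is slower than computing
        print(compute_iteration(data))
    child.join()

fwi_with_odic(3)
```

Because file I/O releases the interpreter lock in CPython, the read genuinely proceeds in parallel with the computation, mirroring the ODIC behaviour; if reading and computing take comparable time, the ideal speed-up approaches two, as stated above.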
Conclusions In this paper, we have proposed a novel and portable parallel framework that works well for FWI. We have shown that the framework significantly speeds up FWI by overlapping data input and computation (ODIC). The advantages of this framework are its wide applicability and low complexity. Unlike distributed storage and computation, the framework can run on conventional computers such as laptops, desktops, workstations and servers. It also has the potential to achieve higher performance when run on clusters or supercomputers. In addition, applying this framework to existing FWI programs is easy. As a basic API library in the Linux operating system, Pthreads is universally applicable to various scientific computation tasks. Therefore, the developed framework is also suitable for other numerical computation tasks.
Figure 1. Overlapping data input and computation (ODIC) in a serial program via Pthreads.
Figure 2. Workflow of a shot-encoded FWI.
Figure 5. Velocity profiles at the position y = 4 km. (a) The true model. (b) The initial estimate. (c) The inverted model.
Figure 6. Velocity profiles at the position x = 4 km. (a) The true model. (b) The initial estimate. (c) The inverted model.
Figure 7. Velocity slices at depths of 0.4 km (left), 0.6 km (middle) and 0.8 km (right). (a) The true model. (b) The initial estimate. (c) The inverted model.
2,536.8
2023-09-06T00:00:00.000
[ "Computer Science", "Engineering" ]
From Flatworms to Humans: Demonstration of Learning Principles Using Activities Developed by the Laboratory of Comparative Psychology and Behavioral Biology – Additional Exercises Since the mid-1990s, the Laboratory of Comparative Psychology and Behavioral Biology at Oklahoma State University has developed a number of exercises appropriate for classroom use to demonstrate principles of learning and other forms of behavior. These activities have primarily focused on the use of invertebrates such as planarians, houseflies, earthworms, and honey bees. We have also developed exercises using fish based on an inexpensive apparatus called the "Fish Stick." Other exercises to be discussed are "Salivary Conditioning in Humans"; "Project Petscope," which turns local pet stores into animal behavior research centers; "Prey Preferences in Snakes"; and "Correspondence in the Classroom," which helps students learn to write letters to scientists in the field of learning research. These various teaching activities are summarized, and their advantages and limitations are discussed. Additional material developed since 2011 is included. This material includes a low-cost microcontroller, history of comparative psychology projects, and additional animal exercises. We may not be voicing the popular opinion, but, in our experience, the similarity between using an animal and a computer simulation is at best superficial. In one study, we compared a classical conditioning computer demonstration with a live earthworm conditioning demonstration. The results showed that, of 63 students from an introductory psychology class and an experimental psychology class, 97% indicated that the live demonstration gave them a better feel of what it is like to conduct a classical conditioning experiment, and over 77% thought that the live demonstration gave them a better understanding of conditioning. Their comments were revealing. One student stated that, "With computers you might think you know what is going on, but, when it comes time to prove it with real animals, you know what is only on the screen." Another student wrote, "It was really cool to see it work on the worms. It helped me understand the concepts in a realistic way" (Abramson et al., 1996). In addition to teaching students about the nuances of conditioning, working with live animals engages students and encourages them to actively participate in the learning process. Students gain a better appreciation for life, the natural world around them, and the influence of animals on the local environment (Place & Abramson, 2006). As mentioned in our previous publications, invertebrates have several advantages for classroom use (Abramson, 1986, 1990, 2004; Abramson et al., 1996). They are inexpensive to buy, feed, and house. Cockroaches, earthworms, and houseflies, for example, can all survive for weeks with minimal care. They can be ordered from biological supply houses such as Ward's, Carolina Biological Supply Company, and Connecticut Valley Biological Supply, or, in some cases, be procured at home! Unlike laboratory rats, invertebrates can be released into their home environment when the demonstration is finished. Students can train their own animals in a variety of apparatuses that cost dollars rather than hundreds of dollars. A Y- or T-maze for flies, planarians, and ants, for example, can be nothing more than an appropriately shaped plastic tubing connector. If a multiple-unit maze is needed, it is easily constructed from more connectors.
A set of Legos® also makes an excellent maze for crawling invertebrates, and a styrofoam ball placed in a cup of water makes an effective running wheel. Invertebrates can be used in conjunction with existing demonstrations or alone to illustrate and gain an appreciation of experimental design, taxonomies of learning, inconsistencies in the definition of learning phenomena, comparative analyses of behavior, homologies, analogies, and limitations of cognitive concepts (Abramson, 1997). Although our demonstrations have been tested primarily on students in the United States, we have also successfully used them in Turkey and France. Many of our early invertebrate learning demonstrations are available in the laboratory manual Invertebrate Learning: A Laboratory Manual and Source Book (Abramson, 1990). Experiments are described using habituation in protozoans and earthworms, classical conditioning in planarians, earthworms, and honey bees, and instrumental/operant conditioning of lever-pressing in the crab, leg position in locust, maze learning in ants, and discrimination learning in free-flying honey bees. Instructions on how to construct apparatuses are also available, as are variations in species and experimental design. In the years following publication of the laboratory manual, additional conditioning exercises were published for the earthworm, housefly, planarian, and honey bee (Abramson, 2004;Abramson et al., 1996;Abramson, Kirkpatrick, et al., 1999;Abramson et al., 2007). Photographs and descriptions of the procedures for many of these experiments can be found on the website http://psychology.okstate.edu/Psychology_Museum/Classroom_Experiments.html. In the housefly and honey bee experiments, harnessed flies or honey bees are classically conditioned to extend their proboscises to an odor followed by a feeding of sucrose. Defensive conditioning of earthworms was accomplished by pairing a floral odor with the odor of butanol. Butanol elicits contraction in the earthworm and after a number of odorbutanol pairings, the earthworm contracts to the floral odor. The planarian experiment demonstrates instrumental conditioning in which the planarian reverses its direction to terminate airpuffs. Conditioned and unconditioned stimuli are easily presented. If odors are used as stimuli, an odor cartridge is prepared by using a 20 cc plastic syringe. A piece of filter paper is impregnated with an odorant, such as rose oil, and secured to the plunger of the syringe with a thumbtack. To present the odor, simply depress the syringe. If sucrose is used as an unconditioned stimulus, a piece of filter paper is dipped into sugar solution; a microsyringe or eye dropper can also be used. In the instrumental experiment described earlier in which the direction of movement of a planarian is reversed by the application of airpuff, the airpuff was administered by a plastic syringe without odor. The training apparatus was a small plastic cutting board with grooves located along its perimeter. The grooves were filled with spring water, and the planarian glided within the grooves. The entire conditioning situation cost less than $5.00 USD. The invertebrate experiments are highly effective and use inexpensive equipment and readily available species. As with any live animal exercise, a few limitations should be mentioned. If honey bees are used, the instructor must have access to a colony and associated support material. 
It is also difficult to use honey bees during cold weather, and some students may be afraid of honey bees and/or allergic to insect venom. Earthworms and houseflies can be used throughout the year, but earthworms are limited in what they can do, and the media in which houseflies are reared can smell quite bad (Abramson et al., 1996). Planarians are interesting to work with but, like earthworms, are limited in what they can do. There is also the issue that some results related to classical conditioning of planarians are difficult to replicate (Nicolas et al., 2008). Other issues that an instructor should consider include unfamiliarity with the species, motivation to try the unconventional, and institutional restrictions on the use of animals in the classroom. The Fish Stick The rationale behind the development of the fish stick was to create an inexpensive operant conditioning situation for a popular vertebrate that can be used in place of rats. One of the limitations of using invertebrates is that there are few traditional lever-press operant-conditioning situations suitable for the classroom. Readers interested in the evolution of operant conditioning devices for honey bees and blow flies should consult Sokolowski and Abramson (2010a) and Sokolowski and Abramson (2010b), respectively. The fish stick is a simple hand held device for conditioning fish in the classroom or at home as an independent or class project. One end of a 30-cm-long plastic tube contains an LED and vibrator for discriminative stimuli and a feeding nipple in which a reinforcer or unconditioned stimulus consisting of Gerber Green Peas baby food flows to the fish. The other end of the feeding nipple is located inside of the plastic tube and connected with aquarium tubing to a 20 cc plastic syringe filled with the baby food. To administer the baby food, the experimenter depresses the syringe thus releasing a small amount of food at the appropriate time. Push buttons on top of the device turn on the LED or vibrator as needed. The device can be constructed for approximately $15.00 USD. To keep the cost down, the device is unautomated. Students observe the fish hitting the nipple and at the appropriate time administer the baby food by depressing the syringe. Background information, a detailed parts list, illustrations, construction schematic, circuit diagram, and sample data are available (Miskovsky et al., 2010). A YouTube video is available (https://youtu.be/FsonPCR6EZg). We have used the fish stick for demonstrating principles of operant conditioning including shaping, discrimination learning, and the effects of reinforcement schedules. Classical conditioning of approach behavior and of general activity can also be explored using the fish stick. Studies of habituation related to the initial presentation of the device can also be carried out. For some instructors, the lack of automation may be a disadvantage. In our experience, this was not the case. Students have an opportunity to experience the need for automation and are challenged to design an automated version. (Sokolowski et al., 2010). Salivary Conditioning in Humans The rationale behind human salivary conditioning was to expand the range of conditioning demonstrations from invertebrates and fish to humans. After several pairings of a conditioned stimulus with lemon powder applied to the tongue, the student begins to salivate to the conditioned stimulus. 
This demonstration was not unique to us and has been reported several times (Cogan & Cogan, 1984;Gibb, 1983;Weinstein, 1987). What is unique with our demonstration are two refinements. First, we present conditioned stimuli using a PowerPoint file. This allows the user maximum flexibility in selecting conditioned stimuli. We have used color, shape, sound (including the Pavlovian bell), and their combinations. One unique conditioned stimulus we have successfully used is to present a conditioned stimulus PowerPoint slide with the equation "7-1 = 6" paired with lemon powder. After several pairings, a conditioned-stimulus-only test trial is presented with the equation "4+2 =?" Students will salivate even though the number 6 is not presented. In addition to flexibility in the range of conditioned stimuli, our use of PowerPoint allows the instructor to accurately select timing intervals associated with conditioned stimulus duration, interstimulus interval, and intertrial interval, and students have a practical example of what these intervals are and their importance. PowerPoint can also be used to demonstrate more advanced features of conditioning including blocking, discrimination, higher order conditioning, overshadowing, and temporal conditioning. Control groups such as unpaired, conditioned stimulus only, and unconditioned stimulus only can easily be incorporated. The second unique feature of our version is the way salivation is measured. Rather than relying solely on self-report measures, students collect saliva from underneath the tongue using a pipette. The saliva from the pipette is dispensed it into a small dish for weighing. Prior to conducting the demonstration, the instructor should make sure that no student will experience an adverse reaction to sugar. Details of the demonstration can be found in Abramson et al. (2011). Prey Preferences in Snakes The previous exercises illustrated in this paper involve some aspect of learning. In this classroom exercise, prey preferences in snakes are studied to illustrate the relationship between predator and prey and the importance of sign stimuli in attraction. The exercise is also useful for teaching students the importance of gathering and analyzing quantitative data. Snakes, like invertebrates, have much to recommend them for classroom investigation. They are easily captured in the field or obtained from the same biological supply houses used to purchase invertebrates. Snakes can be handled easily and are relatively inexpensive to house and maintain. Snakes can be housed, for example, in plastic containers. Much information is also available on their natural history and behavior. Snakes rely on chemical cues to such an extent that the tongue is protruded to gather such cues. Prey preferences are easily observed and measured by recording changes in tongue flick rates. The greater the preference, the more tongue flicks. If greatly excited by the presence of a chemical stimulus, the snake may attack the source of the stimulus. The exercise can be done with a single species but is more interesting when several different species are used. Garter snakes, water snakes, rat snakes, kingsnakes, ringneck snakes, hognose snakes, and redbelly snakes can all be used. The exercise takes advantage of the fact that snakes consume a wide range of vertebrates and invertebrate prey. Prey extracts are prepared and presented to the subject on cotton swabs. The experiment begins by placing a snake in a test chamber. 
The chamber can be a glass aquarium or plastic storage container. After a 15-min adaptation period, the tip of the cotton swab is saturated with the prey extract and placed within 2 cm of the snout. The extract is presented for one minute, and the number of tongue flicks is recorded. Various prey extracts can be presented to the same snake. This exercise has been used for several years without incident. The vast majority of our students enjoyed working with the snakes. There are some students, however, who are afraid of snakes in much the same way that the occasional student is afraid of insects. In such cases, we encouraged the student to try the exercise. Often, such encouragement works -especially when the instructor works individually with the student. If encouragement and individual attention do not work, the student can assist in data collection and making extracts or simply not participate. It should be kept in mind that snakes bite, constrict, and can release pungent secretions. Therefore, when handling snakes, students should wear protective gloves. To reduce the possibility of a student being bitten, the instructor, rather than the student, can transfer the snakes from the home container to the test chambers. The student will never touch the snake yet can observe behavior, present stimuli, and record data. If a single species is used, the garter snake makes an excellent choice because of its wide availability. When the demonstration is over, the snakes are returned to the laboratory colony. We do not release them into the wild because of personal preference -we enjoy working with them. Although we have not done so, if the snakes must be removed from a university setting, a pet store might take the animals as a donation or perhaps purchase them. Details of the procedure, extract formulations, discussion questions, and suggested snake species and their prey are available in Place and Abramson (2006). For readers interested in modifying the demonstration for studies of snake learning, suggestions are available in Abramson and Place (2008). Project Petscope Project Petscope turns pet stores into animal behavior research centers . The rationale behind the development of this project was to provide animal behavior experiences to students not located near zoos. Pet stores carry a range of species appropriate for comparative studies, are ideal for ethological studies of various species including humans, and do not drain departmental resources. For the Petscope project to be effective, it is essential that a good working relationship exists between the instructor and the pet store owner/manager. Permission must be sought before students begin any project, and issues related to the possibility of students handling some of the animals be addressed. Other issues to be worked out include creating observation stations in front of the animal enclosures, establishing observation times that do not interfere with normal business operations, availability of first aid, the extent to which pet store staff can assist students, and the possibility of students manipulating the animals' environment either by feeding or adding enrichment devices. Obtaining a list from the pet store of animals that students can work with will help the instructor design projects. There are many projects that can be conducted at pet stores. One exercise is to have students create "Petscope cards." These cards are similar to the old Time-Life animal cards so familiar to an earlier generation. 
The cards contain both a library research component and an observational component. Once the instructor decides on the species, students obtain information on that species including classification (class, order, family, genus, and species), behavior, related species, range, physiology, and anatomy. A useful addition is to include local professors and other individuals who have worked with the species. This part of the card would include citations and research summaries. The observation portion of the card contains information gathered by the students. This information can include anatomical descriptions such as shape, color, length, feeding strategies, growth rate, and popularity. A sample ethogram designed to study play behavior of captive elephants is available as an example (Abramson & Carden, 1998). 6 Once the cards are completed, comparisons of the cards are made. Class discussions can be focused on importance and difficulty of classification and the role of evolution and ecology in shaping biological, anatomical, and behavioral processes. Correspondence in the Classroom Correspondence in the classroom is an activity where students interested in animal behavior write to scientists in order to increase their understanding of the field. The point of the activity is to get students to open a dialogue with an individual scientist whose work excites them. The letter-writing task can be presented to students as a structured activity in which a series of questions are asked, or it can be presented as a more involved and creative activity where students develop their own mini-survey. The letter writing task is suitable as an individual or group activity. We suggest that a set of core questions be asked that serve as a comparison and as a stimulus for discussion. These questions include: What is your main area of focus? What do you consider your most significant contribution? Whom do you consider to be your greatest influence? What is your prediction for the field? Would you recommend that I enter this field? What are the job prospects? Details, sample survey questions, and variations are provided in Abramson and Hershey (1999). Additional Material Published Since 2011 Since 2011, our laboratory continues to publish teaching related articles relevant to comparative psychology. The goal in creating and publishing these exercises is to increase interest in comparative psychology either as a formal course or as an independent study project. Comparative psychology continues to decline as demonstrated by few courses offered, scant and uninspiring coverage in introductory psychology textbooks, and declining student interest (Abramson, 2015a;Abramson, 2015b;Abramson, 2018). One way to combat the extinction of our field is to develop teaching activities. Our new activities are in several different areas, including the adaptation of a low-cost experimental controller, development of a mathematical model of learning, the history and philosophy of comparative psychology, "thought" papers, and learning demonstrations. Adapting the Propeller Microcontroller For Comparative Research The propeller (Parallax, Inc.; Rocklin, California) is a low-cost microcomputer that we have adapted to research and teaching. By using the propeller, a comparative and teaching laboratory can literally be placed in the palm of one's hand. In contrast to controllers costing thousands of dollars, a controller appropriate for teaching and research can be developed for approximately $150.00 USD. 
Information on the controller can be found in Varnon and Abramson (2013) and in a detailed monograph (Varnon & Abramson, 2018). Programs are also freely available that replicate the classic operant conditioning and classical conditioning teaching laboratories so familiar to a previous generation of students (http://cavarnon.com/experiment-controller). Recording Infrasound We have used the microcontroller to control a wide range of behavioral experiments, and programs are available to enable students to explore the classic teaching demonstrations. Most recently, we have used the controller to record infrasound from elephants (Bergren et al., 2019). Students can use the device to record infrasound at zoos as an independent project. Mathematical Model of the Learning Process For a number of years, our laboratory has collaborated with Dr. Igor Stepanov of the Institute for Experimental Medicine in St. Petersburg, Russia (Pavlov's institute), to develop a mathematical model of the learning process. The model is based on the application of the transfer function of the first order linear system in response to a stepwise input. Among other applications, we have used this model to detect subspecies differences in the maze performance of rodents (Stepanov & Abramson, 2008), pesticide effects in honey bees (De Stefano et al., 2014), and interpreting results of the California Verbal Learning scale in individuals suffering from Type 2 diabetes mellitus (Stepanov et al., 2011). Most recently, we have begun to incorporate the model into our comparative psychology course. We have done this by asking students to use a product that purports to influence memory and then to use the model to determine if it actually does so . The model has also been used for class demonstrations involving honey bee and human learning. Moreover, there is something to be said for having students exposed to mathematical models. Such models have much to recommend them including the ability of summarizing research findings and directing research. History Projects Our laboratory has developed a number of history related projects, such as using Google maps to visit historical sites in comparative psychology (Stevison et al., 2010) and to create historical calendars and baseballlike trading cards . One of our more popular history of comparative psychology projects is the development of a time-line highlighting the contributions of several comparative psychologists (https://comparativepsych.wixsite.com/mysite). An offshoot of this project has students develop comparative psychology stamps (Abramson & Long, 2012). These stamps are legal in the United States and QR codes direct the user to a website highlighting the contributions of the comparative psychologist of interest. The website can be found at https://comparativestamps.wixsite.com/comparativestamps. Unfortunately, the company that makes the stamps -Zazzle -no longer offers do-it-yourself postage stamps. However, the United States Postal Service has this option. One of the most important websites that my students have helped develop is the site devoted to Dr. Charles H. Turner . Dr. Turner was an African American comparative and biopsychologist about whom I have extensively written (e.g., Abramson, 2009). Despite his many contributions, he is seldom discussed. The website is https://psychology.okstate.edu/museum/turner/turnerbio.html. 
In addition to the creation of websites devoted to comparative psychology, our laboratory has spent time collaborating with other scientists to write articles highlighting the contributions of philosophers to comparative psychology. Our first attempt was to highlight the importance of an Aristotelian-Thomistic approach (Brown & Abramson, 2019). Another article, on the contributions of Arthur Schopenhauer, is under review. Testing Of Consumer Products Another exercise we have developed teaches students how to use principles of comparative psychology to evaluate consumer products (Kieson & Abramson, 2015). The rationale behind this exercise is two-fold. First, we wanted to develop an exercise to show students that comparative psychology is applicable to their daily lives. Many of the exercises challenge them to compare the effectiveness of, for example, pet products. How does a student know which pet food their animal prefers? The answer is to do a comparative experiment. In the course of doing the experiment, students learn how to design a choice experiment, the importance of subject variables, data analysis, and graphing, among many other skills. One of the more interesting exercises is for students to test the effectiveness of electronic insect/rodent repellents. These repellents purport to be effective by manipulating sound and/or disruptions in magnetic fields. If the student lives in a home with a backyard, a feeding station for insects can be established and, once established, the electronic repellent is activated and the results recorded. If the student lives in an apartment, insects can be collected and placed inside of a cage. Secondly, we wanted to develop an exercise that can be used for science fairs and for students in middle and high school. The goal of this exercise is for students to think comparatively. New Learning Exercises: Human And Planarian Learning In addition to the exercises discussed in the opening of this article, we have developed a maze exercise and a new planarian exercise for the study of learning. The maze exercise uses the wooden labyrinth to investigate, for example, gender differences in performance and, with the addition of physiological measures, changes in heart rate as the student negotiates the labyrinth (Baskin et al., 2013). We have also found that the labyrinth is an excellent tool to demonstrate locus of control in the classroom (Riley et al., 2017). The planarian exercise is rather unique. In contrast to the many classical conditioning/alpha conditioning/nonassociative learning demonstrations, this exercise uses shaping to train planarians to travel longer and longer distances to find water. We were able to train the animals to travel approximately 10 mm to find water. Animals not specifically trained are unable to find the water (Chicas-Mosier & Abramson). Extrasensory Perception (ESP) One of the more esoteric exercises we have developed is the use of comparative psychology to test telekinesis (Somers et al., 2020). While the phenomenon of telekinesis is elusive at best, what is not in dispute is that it captures the imagination of students. We use this imagination to stimulate a student's interest in comparative psychology. The exercise uses a "levels" approach, in which telekinesis is used to influence the movement of single-celled organisms and, if an effect is found, to move on to multicellular organisms. If an effect is found, then genetic and biochemical tools can be used to determine the molecular basis of ESP phenomena.
While we have not found telekinesis effects, the student discussions are enthusiastic, and this enthusiasm has been used to encourage students to learn more about comparative psychology. Increasing Comparative Psychology Around The World Another activity to increase interest in comparative psychology is to ask students how comparative psychology might be effective in developing countries. Unlike the other activities I have outlined, this activity requires students to think about comparative psychology on a global scale. Students select a country (or region), learn about that country, and then determine how aspects of comparative psychology can be applied to that country (or region). For example, students will discover that many developing countries are not familiar with therapeutic horse riding programs and/or the use of service animals. They will also discover that there are few, if any, comparative psychology courses offered at the major universities in that country. For many developing countries, such an article might be the first time educators have heard about the many benefits that comparative psychology offers. For those students who put in the effort, there is the possibility of coauthoring a publication that highlights the benefits of comparative psychology in that particular country. This exercise has produced several publications in country-specific journals (Abramson & Kitching, 2018; Abramson & Radi, 2019; Stauch et al., 2019) and has led to some fruitful discussions about establishing comparative programs in countries such as Egypt and South Africa. Videos Our laboratory has developed several videos highlighting some of our research. One of the better videos shows a rattlesnake trained to press a lever to turn off a heat lamp. Another interesting video highlights the importance of comparative psychology. One of these videos (2015) is available at https://youtu.be/26zKz0nbqNw. Discussion The overriding rationale behind each of the activities is to reverse the decline in comparative psychology. They have all been classroom tested and are effective in generating student interest. For instructors with limited access to the standard laboratory rat, invertebrates make excellent subjects to demonstrate hands-on conditioning principles. Habituation, sensitization, classical conditioning, and instrumental/operant conditioning can all be demonstrated with invertebrates. The planarian instrumental conditioning activity in which animals are trained to seek out water is especially interesting for students. If invertebrates are somehow prohibitive, salivary classical conditioning in humans is a good alternative. Snakes can be used to demonstrate principles of learning as well as predator-prey interactions. The labyrinth exercise is also an excellent activity for generating student interest. If it is not possible to use animals of any type at the instructor's home institution, Project Petscope may provide an alternative. Pet stores contain many species suitable for ethological investigations, including pet-human interactions. In addition to observational research, students learn about the importance of comparative investigations. Correspondence in the classroom, while not an active animal learning exercise, is important because it can help stimulate some students to become more interested in learning and behavior. Such an interest might lead an instructor to try some of the activities summarized in this article.
Since the publication of this exercise in 1999, I have received over 200 letters from students asking about comparative psychology. Another way that we have tried to encourage students to enter comparative psychology is the "psychmobile." The psychmobile is essentially a personal truck filled with many of the exercises discussed in this paper. The psychmobile visits schools at all levels of the educational system and science events such as the EPSCoR Women in Science program. The Laboratory of Comparative Psychology and Behavioral Biology also serves as a clearing house to disseminate these exercises and offer advice on how to establish comparative psychology programs. The overriding goal is to increase interest in comparative psychology. Graduate students (and advanced undergraduates) associated with the laboratory are trained to develop their own psychmobile programs with the expectation that they will encourage other students (and faculty) to appreciate what comparative psychology has to offer. Finally, one of the stumbling blocks in using animals in the classroom is a lack of expertise in the use of the activities. The author will gladly assist any faculty member or student in implementing the activities discussed in this article. As mentioned above, almost all of the activities summarized in this paper have been published previously, so additional details are available in those sources if needed.
6,895
2020-01-01T00:00:00.000
[ "Psychology", "Biology", "Education" ]
UKAEA capabilities to address the challenges on the path to delivering fusion power I. T. Chapman and A. W. Morris Fusion power could be one of very few sustainable options to replace fossil fuels as the world's primary energy source. Fusion offers the potential of predictable, safe power with no carbon emissions and fuel sources lasting for millions of years. However, it is notoriously difficult to achieve in a controlled, steady-state fashion. The most promising path is via magnetic confinement in a device called a tokamak. A magnetic confinement fusion (MCF) power plant requires many different science, technology and engineering challenges to be met simultaneously. This requires an integrated approach from the outset; advances are needed in individual areas but these only bring fusion electricity closer if the other challenges are resolved in harmony. The UK Atomic Energy Authority (UKAEA) has developed a wide range of skills to address many of the challenges and hosts the JET device, presently the only MCF facility capable of operating with both the fusion fuels, deuterium and tritium. Recently, several major new UKAEA facilities have been funded and some have started operation, notably a new spherical tokamak (MAST Upgrade), a major robotics facility (RACE), and a materials research facility (MRF). Most recently, work has started on Hydrogen-3 Advanced Technology (H3AT) for tritium technology and a group of Fusion Technology Facilities. This article is part of a discussion meeting issue 'Fusion energy using tokamaks: can development be accelerated?' Introduction Fusion power could be one of a very few sustainable options to replace fossil fuels as the world's primary energy source.
Fusion offers the potential of predictable power generators that have no carbon emissions, fuel sources lasting for millions of years and many natural safety features. Fusion is low in land-use, has high energy yield and suitably designed power plants can have very little long-lived radioactive waste and no proliferation issues. In short, it is a highly attractive energy source. However, fusion is notoriously difficult to achieve in a controlled, steady-state fashion on Earth. The fusion power comes from reactions between two light nuclei. The easiest reaction to initiate is between deuterium and tritium: d + t → 4 He (3.5 MeV) + n (14.1 MeV), where the neutron takes energy to the outside world. The fusion yield becomes significant in plasmas with temperatures in the 10-20 keV range (100-200 million kelvin). These plasmas need to be confined, kept hot and be sufficiently dense to provide fusion power densities on the order of MW m −3 . The alpha-particles provide most of the heating and the most promising confinement path is via magnetic confinement fusion (MCF), the JET [1] and ITER [2] 'tokamaks' 1 being the pre-eminent examples of this approach. Although the conditions for sufficient fusion power density have been reached [3,4] much remains to be done to turn scientific success into commercial electrical power. An MCF power plant requires many diverse interconnected systems and many different science, technology and engineering challenges to be met simultaneously. ITER and the coordinated European effort designing its successor, DEMO [5,6], have shown that this requires an integrated approach from the outset; advances are needed in individual areas but only bring fusion electricity closer if the other challenges are resolved in harmony. A global systems engineering approach will be used, all the way from the plasma to the turbines, via the blanket-a thermodynamically efficient neutron-to-heat convertor made from materials resilient to neutron damage. All must be buildable, highly reliable and maintainable, mostly robotically, and then endorsed by nuclear regulators and industrial and other stakeholders. This calls for a broad and comprehensive R&D programme combined with innovation and industrial techniques. The UK Atomic Energy Authority (UKAEA) (https://www.gov.uk/government/organisations/ukatomic-energy-authority) has a mission to make major contributions to the development of fusion power as a large-scale carbon-free commercial energy source. It is and will continue to be a major player in the global fusion enterprise, building on its long involvement in plasma research together with experience of operating JET, constructing the new MAST Upgrade device [7] (http://www.ccfe.ac.uk/mast_upgrade_project.aspx, http://www.ccfe.ac.uk/assets/ documents/other/MAST-U_RP_v4.0.pdf) and the more recent expansion into materials science and now wider fusion technology. UKAEA contributes in many areas of science and technology, has growing ties with many universities and increasingly acts as a link to industry, which will be a major contributor and stakeholder in the future. It acts as the hub of UK fusion research and a gateway to the wider communities. This paper focuses on the existing and imminent facilities at Culham and the ways in which they can be exploited by UK, other European and international researchers to address several of the key challenges. 
Challenges on the path to delivering fusion power Fusion power relies on the design of integrated solutions for DEMO and power plants, constrained by major technical challenges. This integrated design must simultaneously achieve: (i) the creation and sustainment of a controlled burning plasma over long timescales with fusion-born alpha-particles dominating the plasma heating; (ii) the controlled exhaust of heat and helium 'ash' from the burning plasma core; (iii) the development of (a) structural materials for the tokamak structures which have to sustain, for many years, large forces and pressures at high temperatures in the presence of high magnetic fields and exceptionally intense neutron fluxes, without generating unmanageable radioactive waste, and (b) functional materials resilient to neutron and gamma irradiation, e.g. for electrical and thermal insulators, tritium permeation barriers, diagnostic windows and breeding (e.g. lithium-containing ceramics); (iv) the development and design of components with these materials, notably the breeding blanket and plasma-facing components, which can survive in the demanding conditions within a fusion reactor; (v) the requisite high availability and efficiency of the machine and its systems to produce a viable cost of electricity; and (vi) the ability to breed and handle tritium fuel as well as de-tritiate components at end-of-life to minimize tritiated waste. These challenges are depicted in figure 1 (caption: the main challenges which must be overcome to produce an integrated fusion reactor design; the UKAEA portfolio of capabilities seeks to address each of these challenges). These and other constituent parts such as the high-field magnets, plasma and plant control systems, buildings and the systems to convert fusion power to electricity must be brought together in an integrated multi-disciplinary nuclear design satisfying regulation and safety requirements. Fusion is different from most other technologies in that a full test is only possible in a complete device, and the cost and timescale of each step mean that a succession of small-increment full physical prototypes is unrealistic. Making large steps leads to two additional challenges: (vii) development of extensive theory-based models and an advanced computing programme for optimization and then robust, low-uncertainty predictions of the plasma and materials performance; and (viii) comprehensive in silico design, digital prototypes and finally models of components and systems to support convincing qualification of the solutions. Solutions to the last challenge, in particular, can have much wider application to large-scale industrial activities where large steps can reduce development time and cost. Overview of UKAEA's contributions to the fusion research and development challenges The breadth and depth of experience and wide knowledge of the integrated fusion needs accumulated by UKAEA over the years, together with the design, construction and operation of major fusion facilities, has led to an organization capable of both specialist and integration contributions. UKAEA has developed a portfolio of facilities and capabilities which allow us, in partnership with EUROfusion, to address the challenges on the European roadmap to fusion electricity (https://www.euro-fusion.org/eurofusion/roadmap/). JET is a model for ITER [2] and is the best facility to mitigate risks ahead of ITER operation. MAST Upgrade [4,8], together with NSTX-U [9], are the world's largest 'spherical' tokamaks.
MAST Upgrade has unique features focused on the exhaust issue, challenge (ii), and will be developed and exploited with EUROfusion. MAST Upgrade will have a very extensive set of detailed measurements using, for example, advanced spectroscopy, atomic physics, lasers, neutrons and microwaves [5]. These facilities are part of a roadmap to the first demonstration reactor to produce fusion power (DEMO). Figure 2 shows this roadmap from present-day devices, JET and MAST Upgrade, through the first burning plasmas in ITER, to designing the first reactors, DEMO, and exploring the spherical tokamak as a possible way to drive down the cost of fusion power. (a) Creation and sustainment of a controlled burning plasma ITER is the flagship facility on the European roadmap to fusion energy and JET plays a critical role in the development of integrated plasma scenarios of operation which are needed for ITER and for reactors thereafter. ITER's primary goal is to demonstrate a power gain of 10 in the plasma (Q = 10). DEMO and power plants would aim for higher Q, but in general ignition (Q infinite) is not sought; rather, the aim is a controlled burn where the plasma is mainly heated by the fusion alpha-particles, augmented by a modest amount of auxiliary heating to allow the fusion power to be more accurately controlled. Predictions of plasma performance in ITER are mainly based on models developed from a large database of tokamak results in deuterium plasmas, studied in devices with carbon plasmafacing components and externally supplied heating, but these cannot yet capture all aspects of the conditions anticipated in ITER, e.g. ITER's mixture of high and low Z wall materials (tungsten and beryllium) change the boundary conditions on the core plasma; the transport of heat and particles changes with the fuel isotope (i.e. D and T); alpha-particle heating is determined nonlinearly by the temperature and pressure profiles; fast alpha-particles can, on the one hand, excite plasma instabilities (in particular Alfvén eigenmodes) and, on the other hand, can reduce turbulent transport. JET is the world's reference facility to prepare for ITER operation, with several unique aspects, including being the only machine of its size with high-performance, high energy plasmas, the only machine able to operate with tritium fuel and the only machine with an ITER-like mixture of wall materials, namely beryllium and tungsten (chosen to reduce tritium retention in the vessel walls-an important operational constraint). A major task is to mitigate the main risks facing ITER in its research programme, so JET is focused on developing ITER-relevant integrated plasma scenarios with both deuterium and tritium (the isotope dependence of performance is a key scientific question), developing techniques to moderate the effects of losing control of the plasma (disruptions), and a range of other topics. The transfer of the results and experience to ITER plasmas will make extensive use of progressively improved modelling (the mixture of physics mechanisms will change somewhat when moving from JET to ITER-purely empirical extrapolation is not sufficient). DEMO devices are likely to need advances compared with the reference ITER plasmas, for example high radiative losses from seeded impurities to spread the heat load on the plasmafacing components more widely. 
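As a small back-of-envelope illustration of the alpha-heating fraction implied by the plasma gain Q discussed above, the following sketch uses only the D-T energy split quoted earlier (3.5 MeV to the alpha particle out of 17.6 MeV per reaction) and assumes that auxiliary power is the only other heating channel; it neglects radiation, profile effects and all other losses.

```python
def alpha_heating_fraction(q_gain, e_alpha=3.5, e_total=17.6):
    """Fraction of plasma heating carried by fusion alphas for gain Q = P_fus / P_aux.

    Assumes the alpha particle keeps e_alpha of the e_total MeV released per
    D-T reaction and that auxiliary power is the only other heating source.
    """
    alpha_share = e_alpha / e_total      # ~1/5 of the fusion power stays in the plasma
    p_alpha = alpha_share * q_gain       # alpha heating in units of P_aux
    return p_alpha / (p_alpha + 1.0)

for q in (1, 5, 10, 20, 40):
    print(f"Q = {q:>2}: alpha heating fraction ~ {alpha_heating_fraction(q):.2f}")
```

At ITER's target gain of Q = 10 this simple estimate gives roughly two-thirds of the heating from alphas, consistent with the description of a controlled burn augmented by a modest amount of auxiliary power for control.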
If steady state is needed in a power plant, this will require further developments; non-inductive current drive needed for steady state can require large amounts of power, reducing the overall efficiency of a power plant. The new EU-Japan tokamak JT-60SA (http://www.jt60sa.org/pdfs/JT-60SA_Res_Plan.pdf) will have a strong focus on long pulse and steady state, and UKAEA is engaged in modelling its plasma scenarios. High-quality plasma modelling tools will underpin the design of high-performance plasmas on ITER and DEMO. JET will play an important role in their development. State-of-the-art models must describe: turbulent transport of heat and particles in the core and edge plasma; stability; fast particle physics; heating and current drive physics; strong radiative cooling by seeded impurities as a part of the exhaust solution; the exhaust plasma in the scrape-off layer and divertor (outside the region of nested magnetic surfaces); plasma-wall interactions; and other aspects of plasma dynamics. Transport and stability set the minimum size of a plasma which can generate net power: scaling from the best-substantiated operating scenarios leads to the dimension chosen for ITER, and since DEMO will have to generate substantially more power to produce net electricity, it needs to be somewhat larger, if based on the same plasma regimes. A deeper understanding of the transport processes that will occur in burning plasmas may allow alternative scenarios to be found which could allow more compact plasmas, including smaller major radius (i.e. lower aspect ratio, as explored on MAST Upgrade). Today, these are far from the maturity needed to be considered for DEMO and power plants, although several mechanisms for turbulence reduction have been seen in experiment and theory [10,11]. Some of the ideas being developed, including by other institutes and UK universities (for example, van Wyk et al. [12]), can be explored with MAST Upgrade. (i) Recent results Interestingly, the ITER-like metal wall on JET introduced operational constraints that initially resulted in reduced plasma performance compared to the previous carbon wall [13,14]. However, a combination of a divertor heat-handling technique together with central heating to expel impurities from the core enabled the performance in JET to be restored to previous levels [15,16]. Results from the first ITER-like wall campaign have shown a significant (approx. 20×) decrease in fuel retention compared with the previous carbon wall [17], and dust/particulate generation in the divertor is a factor of approximately 100× lower [18]. (b) Controlling the exhaust of heat and helium 'ash' Present tokamaks usually operate with a modest fraction of power radiated from the main plasma, to avoid performance degradation. This means that most of the exhaust power is channelled along field lines to the divertor targets. In burning plasmas, this leads to very high power densities on the materials, and in ITER and particularly DEMO, these can easily exceed the material limits [19]. Furthermore, these limits are expected to be reduced after neutron irradiation, which degrades the materials' mechanical and heat-conduction properties (https://www.gov.uk/government/organisations/uk-atomic-energy-authority). If the plasma at the material surface is too hot, sputtering of the material generates impurities which can degrade the core plasma, and in particular, erosion of the surface can lead to unacceptably short lifetimes or drive designs with worse thermal performance.
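An order-of-magnitude sketch of why unmitigated divertor loads exceed material limits is given below. Every numerical value here is an assumed, illustrative input, not a figure from the article: the power reaching the target, the major radius, the scrape-off-layer heat-flux width and the flux expansion are all placeholders chosen only to show the arithmetic.

```python
import math

# Illustrative, assumed inputs (not taken from the article):
p_target = 50e6        # power reaching one divertor target, W
major_radius = 9.0     # DEMO-scale major radius, m
lambda_q = 1e-3        # scrape-off-layer heat-flux width, m
flux_expansion = 10.0  # poloidal flux expansion at the target
material_limit = 10e6  # rough steady-state limit for actively cooled targets, W/m^2

# If the power lands in an annulus of width lambda_q * flux_expansion:
wetted_area = 2 * math.pi * major_radius * lambda_q * flux_expansion
q_target = p_target / wetted_area

print(f"wetted area ~ {wetted_area:.2f} m^2")
print(f"target load ~ {q_target/1e6:.0f} MW/m^2 (limit ~ {material_limit/1e6:.0f} MW/m^2)")
```

Even with generous assumptions the estimated load sits well above the limit, which is why the strategies described next, radiative spreading, geometric modification and detachment, are needed to enlarge the effective wetted area or dissipate the power before it reaches the target.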
To address this problem, various strategies are adopted: increased radiative losses from the main plasma to spread the load over the first wall [20], spreading of the heat load on the divertor by modifying magnetic geometry [21], and most importantly making 'detached' plasmas where the plasma is cooled by injecting deuterium and radiating impurities into the divertor [22]. A detached plasma can have very low power flux at the materials (but often large particle fluxes); the challenge is to ensure that it remains securely detached, without the detachment region and seed impurities degrading the core plasma, or even causing a disruption. The core plasma scenario may need to be adapted in other ways, for example, fast transients (such as edge localized modes, a type of plasma instability that are almost ubiquitous in ITERlike plasma scenarios) can severely shorten the lifetime of the plasma-facing components and can break through the detached region. Slow transients, such as variations in the fusion power, can also cause reattachment. MAST Upgrade is a uniquely flexible facility for studying the underlying physics of plasma exhaust and comparing different geometries for exhausting heat and particles from hot plasmas. It is particularly notable in its capability to operate with a super-X configuration optimized for fully exploring the characteristics of this divertor concept (http://www.ccfe.ac.uk/ mast_upgrade_project.aspx, http://www.ccfe.ac.uk/assets/documents/other/MAST-U_RP_v4. 0.pdf). The super-X has features well suited for spherical tokamaks and provides a bounding configuration for conventional aspect ratio devices such as the EU DEMO. It is important to emphasize that MAST Upgrade is primarily intended to be a very flexible test-bed to study exhaust physics in many divertor configurations, from conventional divertor (as in JET and ITER) to X-divertor, snowflake, super-X and even extended inner-leg configurations, in single and double null versions [23]. The design of exhaust solutions for DEMO will rely heavily on theory-based models, given the change in physics parameters from existing or planned devices, so MAST Upgrade will be used to confront and thus improve the models that allow the next steps to be taken, rather than being a prototype [7]. As one example of the new physics MAST Upgrade can address, we consider the control of plasma detachment: the extent of the radiative and detachment regions, their expansion/movement from the target towards the X-point region and hot core plasma are not well understood. They involve a detailed interaction of the atomic physics of ions and neutrals, and the local and non-local turbulent and classical transport of Maxwellian and non-Maxwellian electrons and multiple ion species. These studies will be supported by flexible plasma heating, fuelling and pumping to explore the parameter space and to act as control actuators, and a wide range of high-resolution diagnostics to explore the physics, test models in detail and provide advanced control 'observers'. (i) Recent results Observations of plasma filaments in the scrape-off layer (SOL) and divertor and new modelling are starting to reveal what sets the filament behaviour and its relation to the density SOL width [24][25][26]. Finding a way to change their behaviour to increase the width could provide a means of alleviating the exhaust problem. 
Modelling of detachment shows how the variation in magnetic field strength along the divertor can play a key role in detachment control and the operation window [21,27]. (c) Developing materials for fusion reactors The focus of the UKAEA programme is on structural and plasma-facing materials for DEMO [28], which are low activation ferritic steels [29], such as EUROFER, and tungsten (or alloys), respectively. Critical issues for the steels include the operating temperature range after irradiation, especially the ductile-to-brittle transition temperature [30] (a lower bound to operation under strain), which increases with irradiation, especially by 14 MeV neutrons which generate helium in the material, driving embrittlement, and phase changes which reduce the strength (setting the upper temperature limit). Widening the operating window at both ends is important to increase the lifetime and improve the thermodynamic efficiency of the fusion plant by allowing higher temperature operation [31]. At present, there is no high flux, high fluence source of a fusion-like spectrum of neutrons (this is the purpose of the proposed IFMIF [32,33] and IFMIF/DONES [34] facilities), so predictions and materials selection and development have to use advanced multiscale modelling, which is extremely challenging theoretically and computationally. However, much of the material in DEMO will be subjected to a fission-like neutron spectrum, so progress can be made there with materials test reactors and advanced analysis of the irradiated samples, developing theory and modelling interactively. Furthermore, ion irradiation can sometimes be used as a partial proxy for neutrons, when accompanied by suitable theory, and this allows accelerated collection of data. A major challenge is developing high-fidelity multiscale modelling, bridging from atomistic to macroscopic properties-engineering design needs the macroscopic properties of materials, including failure modes, of materials after irradiation. The properties of materials are also critically dependent on the manufacturing process, and since additive manufacture provides a highly promising path for affordable fabrication of complex structures (such as optimized heat transfer with very small coolant complex-path channels), the properties of these materials need to be predicted and tested. The UK universities and research organizations such as UKAEA work together at the forefront of advancing the ab initio understanding of radiation damage (including gas embrittlement) of fusion steels and tungsten, with exacting comparisons with experiments, now complemented by analysis facilities in the MRF. The MRF's role is to allow fission and fusion scientists to process and analyse samples too radioactive for university premises but not requiring the facilities of a nuclear licenced site. The MRF has a purpose-designed building, hot cells, processing equipment and a range of fine-scale mechanical, thermo-physical and electron microscopy characterization equipment. It will expand significantly in the coming years, with the emphasis on further mechanical and thermo-physical testing capability over a wide range of scales and temperatures, plus further hot cell capability, allowing larger scale testing. The MRF will be exploited to examine how radiation degrades materials and to analyse JET tungsten and beryllium tiles after exposure to deuterium and tritium, and often high heat and particle flux. 
Moving to larger scales, studies of the engineering properties will require the Materials Technology Laboratory (see below), which will operate in harmony with the MRF, leading to the design rules needed for components and bringing in industry expertise. (i) Recent result A spin-lattice dynamics simulation program, SPILADY [35], has been developed and made widely available. This is an important tool for modelling the critical effects of magnetism in fusion steels. A new scaling for the size distribution of defects in irradiated tungsten has been discovered, in work carried out in collaboration with co-workers from Finland, Oxford University and Argonne National Laboratory [36,37]. (d) Developing components to work inside the tokamak Components, such as divertor targets and other plasma-facing components which combine tungsten, CuCrZr or steel-cooling pipes and steel structures, need to operate with high reliability. Steel structures such as the blanket modules will have many welds, some of which will need to be cut and re-welded during maintenance. The performance and, in particular, the failure modes (hence lifetime) of many components are very challenging to predict since they combine the properties of the base materials, joints between dissimilar materials and modified regions such as welds. The approach to these challenges is phased. Initially joints and components will be developed with the combined loads without including irradiation effects, and in any case, the components need to work well before they are significantly irradiated. It is assumed that the best components will use the radiation-resilient materials identified or developed by the fusion community's materials science programmes and manufactured taking account, as far as possible, of expected effects of irradiation on, e.g. joints and welds. In accordance with this thesis, the new FTF is the UKAEA's vehicle. These bespoke facilities tailored to meet the needs of fusion will enable thermal, mechanical, hydraulic and electromagnetic tests on prototype components under the conditions experienced inside fusion reactors (without the nuclear effects at this stage). Comprised of three independent laboratories, the FTF offers a complete development life cycle for materials and components. The Materials Technology Laboratory (MTL) specializes in the development and qualification of small sample testing techniques to reduce costs and volumes of testing and offering 'in-service' examination. Exploiting the opportunity to bring cutting edge advances in testing techniques into new nuclear design codes, the MTL will develop multi-axial testing, fracture mechanics of brittle materials and true stress true strain analysis among other techniques. The MTL contains two load frames of 5 and 7.5 kN capable of operating at 700°C and 1000°C, respectively (so can reach the upper limits expected for steel operation), with operation in inert atmospheres and in vacuo with Digital Image Correlation measurement. Automated hardness testing, sample preparation and heat treatment and characterization capabilities are also provided. The JAMTL will enable the development of critical material joining and manufacturing technologies required to deliver fusion, such as the qualification of laser welding, developed in the EUROfusion remote maintenance project. 
Building on previous experience, powder and wire metallurgical advanced manufacturing (AM) methods are being developed to create novel cooling architectures in plasma-facing components such as divertors and functionally graded joints, supporting the development of a high-quality industrial supply chain for these applications. The development of fusion compatible non-destructive testing techniques and condition-monitoring sensors, in addition to manufacturing for maintenance, are key themes for JAMTL. These activities are supported by a test stand for small component testing (heat by induction to verify extremes (HIVE)), to provide fusion-relevant heat transfer that allows rapid prototyping of AM and sensor technologies [38]. The third member of the FTF is the Module Test Facility (MTF) that is planned to offer fusionrelevant testing environments for metre-scale components in static and time-varying magnetic fields to investigate the impact of induced forces and plasma disruptions [39]. A key feature of the MTF will be the high degree of instrumentation and data collection and handling capability that will allow the adoption of virtual twin philosophy for component engineering. This allows the validation of computer models of the component that may be used for lifetime studies and off normal event simulations, avoiding expensive prototype build. As well as developing techniques for manufacturing effective components, these facilities, and the MRF, will help provide lifetime estimates and failure modes, important both for component optimization and for determining the maintenance approach to maximize plant availability. (i) Recent result Advanced component analysis with image-based finite-element modelling has been used to determine and correct manufacturing flaws leading to better high heat flux components [40]. Additive manufacturing has been used to make high heat flux prototypes with narrow and optimized internal cooling channels that would not have been possible to make by conventional approaches [39]. (e) Achieving a high availability for the fusion plant As indicated above, a fusion plant comprises a wide range of science and technology, with many interactions and constraints, and DEMO(s) will provide the first tests. Achieving a consistent design from plasma to electricity grid is itself a major challenge. However, it is essential that the plant operates predictably and reliably (so each component and system needs to have extremely high reliability since there are so many), and that maintenance can be done rapidly minimizing down-time and, in a timely way, pre-empting component failures as far as possible. All maintenance of the tokamak structure during operation with DT plasmas (and highperformance DD plasmas) will need to be done remotely [41]. Therefore, remote maintenance needs to be designed in from the outset, and it has a major impact on design and architecture choices [42,43]. UKAEA has chosen to put a major focus on remote maintenance since it is so key to the holistic plant design [44]. A fusion reactor is perhaps the ultimate challenging environment for reliable operation and maintenance [21]: ∼500 K, vacuum, liquid metals, confined spaces and kGy/hour radiation. Remote maintenance (RM) will be a fusion power plant 'device defining driver' whether a power plant is on a similar scale to ITER or a way has been found to make a small modular reactor based on a spherical tokamak. 
RM needs to take into account the design, build, inspection, maintenance, operation and decommissioning of the power plant (and vice versa as above). For a fusion power plant, necessary RM components must be developed to the appropriate technology readiness level to demonstrate viability before the next (more expensive) phase of design and/or build. As well as using virtual engineering, mock-ups will need to be designed and fabricated, then qualified to enable regulation of power plants. To produce a robust and qualified remote maintenance system requires: augmented and virtual reality testing; advanced control systems for a neutron environment; cutting and joining radiation-damaged steel pipes and inspecting to ensure acceptable quality; manipulating large irradiated components to extremely tight tolerances, all to nuclear standards. The key technical risks for the RM of power plants centre around the movement of large 'flexible' loads such as groups of blanket modules which may together have a mass approaching 100 t, and rapid and reliable connection of many component service pipes to satisfy the requirements of a nuclear regulator. RACE provides a flexible facility which spans the development cycle for robotic maintenance solutions, from the in silico virtual design, to prototype testing, and operation of the JET remote maintenance system for two decades. The UK plays a substantial role in the design of remote maintenance systems for both ITER (where UK industry is involved in all contracts issued so far) and DEMO. There are strong synergies with other areas where human access is undesirable or impossible-for example, the target area of the European Spallation Source where UKAEA is working on the remote maintenance. (i) Recent result Following the theme of holistic design, a first view of the integrated remote maintenance of an EU DEMO has been created [45]. The handling of massive blanket modules requires innovative approaches (the 'crane' cannot be many times the mass of the blanket module-i.e. unlike the approach used for most crane systems), and a hybrid kinematic manipulator concept has been developed building on approaches used in other industries [44]. (f) Breeding and managing tritium Fusion plants must breed all their tritium, with some margin to cover decay during maintenance periods, tritium temporarily resident in materials and the tritium plant and not available for fuelling the plasma, and for starting up new fusion plants [46]. In addition, the site inventory will be tightly restricted by the regulator [47], so the amount of tritium outside the plasma at any time must be minimized and losses eliminated wherever possible. This means that very efficient low inventory fuelling systems are needed, the volume of the tritium plant must be minimized, there needs to be fast extraction of tritium from the breeding material and the amount of tritium retained in materials has to be minimized. Finally, the tritium inventory of items leaving the plant site must be kept to extremely low levels to simplify waste handling and minimize its cost [48]. The breeding occurs in the blanket surrounding the plasma which also converts the energy in the fusion neutron into bulk thermal energy, which is the primary heat source for the electricitygenerating turbines and, at the same time, shields the vacuum vessel so it can be made from more conventional steels; the blanket is an interesting multi-disciplinary project in itself. 
H3AT will offer the ability to pursue tritium-related R&D in several key areas that currently challenge fusion. Detritiation is one example relevant to ITER and DEMOs and is required at various points in the lifecycle [49]. In the fuel cycle, isotope separation and rebalancing is critical, particularly as the process time is a major contributor to the tritium inventory required to start a fusion power plant. H3AT will provide facilities to support studies in these areas together with tritium pumping and storage technologies. Recovering tritium from coolants and materials will be essential to minimize the active waste inventory of ITER and DEMO, and H3AT will provide facilities for R&D on tritium removal at low and high concentration from solid, liquid and gaseous materials. Preventing or minimizing tritium migration is obviously required along with the development of tritium removal techniques for different breeder blanket designs, and H3AT will offer facilities to investigate these areas. Tritium control, monitoring and accountancy are all essential for operation and licensing, and R&D programmes for technologies in these areas can be accommodated. (i) Recent result The idea of a small inner loop in the DEMO fuel cycle is likely to be key to an acceptably low tritium inventory. The concept [50] originated at the Karlsruhe Institute of Technology, and UKAEA has been collaborating on its further development [51]. (g) Developing an integrated design for fusion reactors As stressed above, an integrated approach from the outset is critical for holistic fusion reactor design. The main systems and features to integrate in an MCF reactor are:
- the plasma;
- superconducting magnets and their high-strength support structures;
- blanket (converts fast neutrons to heat and tritium and shields the vessel);
- divertor (for exhaust);
- heating and current drive systems for plasma production, sustainment and control;
- measurement systems (plasma and plant);
- tritium plant and fuelling system;
- balance of plant (the turbines, power conversion systems, cryoplant, power supplies);
- safety and waste; and
- the qualification process to satisfy regulators and investors.
Major strides in integration have been taken in recent years, especially in ITER and EUROfusion's DEMO design activity. There are many examples of unexpected issues emerging when integration is attempted. For instance, the number of toroidal field coils affects the viability of remote maintenance; first wall armour to protect from plasma heating can reduce the tritium breeding; the blanket operating temperature (thermodynamic efficiency) is constrained by steel properties and the coolant pumping power, and, in turn, constrains the fusion power from the plasma; short pulse length substantially reduces the recirculating power to sustain the plasma and may increase the overall efficiency compared with steady state. Realizing DEMO and the First-Of-A-Kind Fusion Power Plants requires a multi-disciplinary approach with the capabilities and facilities to address all of the challenges outlined in §2 simultaneously. This integrated design capability will need to bring together a top-level fusion power plant design capability, incorporating systems codes with cutting edge models for all aspects of the design, socio-economic assessments, commissioning, maintenance, operations, waste management and decommissioning, with a rapid prototyping and validation programme.
The prototypes, and later DEMO, can be used as test beds to subject the virtual models to representative and extreme scenarios to understand real-world performance, failure modes and through-life issues. UKAEA will work with a wide range of partners in industry and academia, both nationally and internationally, to move along this path to delivering fusion reactor designs. A factor not discussed much so far is the cost, but this will be critical in the end: the overall cost of electricity and also the capital cost of the plant, including the largest single investment, the tokamak itself. The holistic approach described in this paper applies to any concept, and it allows coherent exploration of alternative approaches which might lead to lower cost of the tokamak core, by uncovering or stimulating plasma and technology innovations that are not applicable to the ITER-like approach. To this end, UKAEA will, alongside its major contributions to the EU DEMO programme, work with collaborators to seek innovations and features that would allow smaller physical size and lower capital cost, focusing on the spherical tokamak and making a key next step with MAST Upgrade, but always taking a holistic view. Finally, it is also worth noting that successful delivery of fusion power will depend on a supply of highly trained, capable scientists and engineers. UKAEA has a strong training programme at all levels, from apprentices, to graduates, to post-graduate students and postdoctoral researchers. Conclusion and future perspectives Since 16 MW of fusion power was achieved in JET in 1997, the headline progress in fusion has appeared to the outside world to slow down. This belies substantial technical progress and greatly improved understanding of the science and technology in the field. However, it reflects the fact that fusion requires a burning plasma, where the fusion reaction provides products which sustain the reaction, before fusion on a commercial basis can be considered possible. ITER will provide that demonstration, and as such is critical to the success or failure of commercial fusion power. UKAEA will continue to contribute significantly to ensure ITER reaches its goals, in many technical and scientific areas and in providing expert advice to industry (many of ITER's needs can be satisfied by existing industrial capabilities). However, while ITER will show fusion is possible, it will not provide net fusion electricity. UKAEA's evolving portfolio of facilities will help to address some of the challenges in the transition to DEMO and power plants, but a concerted, multinational endeavour will be needed in parallel with ITER to address them all appropriately. For instance, the MRF is an important facility for testing and validating numerical models of small material samples irradiated by low-energy (usually fission-spectrum) sources. However, only with samples exposed to a fusion-relevant spectrum of very energetic neutrons at high fluence can materials be qualified for use in internal reactor components and the models really be validated, and this requires a major facility such as IFMIF/DONES, which is envisaged as a multinational collaboration. As well as a multinational collaboration to provide the requisite capability and facilities to enable fusion to be commercialized, a supply chain capable of designing and building fusion reactors must also be developed.
UKAEA plays a central role in enabling UK industry to deliver fusion-specific components and systems for ITER and will increasingly foster contributions from industrial partners and transfer knowledge to address the fusion challenges to the supply chain. The realization of fusion remains elusive, but its potential remains vast. ITER will be the first burning plasma and a DEMO designed on the same basic principles as ITER, while incorporating discoveries and innovations as far as possible, is the highest confidence path. Given the impact cost-competitive and reliable fusion power would have in meeting the world's demands for reliable low carbon energy, it is important to keep innovating and optimizing at all levels from materials to whole concepts (e.g. exploring spherical tokamaks) to bring down both the capital and the operating cost. Ultimately, the penetration of fusion power into the market may be driven by capital cost of reactor build more than the overall cost of electricity. Funding. This work has been part-funded by the RCUK Energy Programme (grant no. EP/P012450/1).
8,898.4
2019-02-04T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Explore the Factors that Influence Elderly Poverty

Introduction

Many countries have experienced the aging phenomenon. However, it seems to be an alarming trend for the world population, as this issue affects the development process of a country. Poverty among the elderly has been a global concern, as stipulated in the Madrid International Plan of Action on Aging 2002 (United Nations, 2002). People in sub-Saharan Africa are among the poorest in the world, not only in terms of real income but also in terms of access to social services. Many people regard 'absolute poverty' as no longer existent in a Western European context. The EU Member States agreed on using poverty indicators which are one-dimensional (monetary) and relative (based on a threshold defined in relation to the distribution of income within each country) (Eurostat, 2005). A poverty measure commonly used within the EU is to describe individuals as poor whose net equivalence income is less than 60 per cent of the median income in a given population (BMGS, 2005). Since the late 1980s/early 1990s, researchers and policymakers alike have increasingly acknowledged the multidimensional character of poverty. The key question for European, national, regional and local policymakers concerned with the alleviation of poverty in old age is how to reduce the risk of poverty in the long run. A general assessment of poverty in old age at the provincial level reveals that the majority of the aged are poor due to unemployment/retrenchment (Madzingira, 1997). Carter and May (2001) further clarify that functional illiteracy is higher among the chronically poor, and that other risk-reducing measures such as insurance and savings are important issues for poverty reduction. Issues such as the impact of unemployment and the prospect of early involuntary retirement should not be neglected. In the United Kingdom, the aged population is projected to increase rapidly and a significant minority of people of pensionable age depend fully on state-based financial assistance. Besides the absence of money, causes of poverty include intergenerational worklessness and economic dependency, family breakdown, serious personal debt, educational failure, and addiction to drugs and alcohol (McKee, 2009). The incidence of poverty among older persons is not only based on income; it also depends on factors such as health, education and labor market opportunities. Thus, poverty is certainly a multidimensional issue.
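As a minimal illustration of the EU-style relative poverty measure mentioned above (the incomes below are hypothetical and for illustration only, not data from the paper), the 60%-of-median threshold and the headcount rate can be computed as follows:

```python
# Minimal sketch of the relative poverty measure described in the text:
# a person is poor if their net equivalized income is below 60% of the median.
# The income list is a hypothetical illustration only.

incomes = [450, 800, 950, 1100, 1200, 1500, 1800, 2500, 3200, 4000]  # monthly, any currency

def relative_poverty_rate(incomes, threshold_share=0.6):
    ordered = sorted(incomes)
    n = len(ordered)
    median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    threshold = threshold_share * median
    poor = sum(1 for x in incomes if x < threshold)
    return threshold, poor / n

threshold, rate = relative_poverty_rate(incomes)
print(f"Poverty line = {threshold:.0f}, at-risk-of-poverty rate = {rate:.0%}")
```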
Global Perspective on Causes of Poverty

One route for investigating the causes of poverty is to examine the dimensions highlighted by poor people. The basic ingredients for a review of the causes of poverty are much the same across countries in the region: low income, unemployment, no or low levels of education, no proper financial planning, spending too much money on the children, lack of support from children, inadequate pension programs, and expensive health care. Some of the reasons could be insufficient income and assets to attain basic necessities. Other relevant variables such as education, health, housing, water, sanitation and labor market opportunities should also not be ignored.

(i) Government support: In many non-Western countries, there is considerable debate over how much government support should be provided for the care of the older population. The lack of a comprehensive social security system in most developing countries, including Malaysia, increases the vulnerability of the elderly to poverty, especially among older women and the self-employed (Caraher, 2003). In the United Kingdom, the aged population is projected to increase rapidly and a significant minority of people of pensionable age depend fully on state-based financial assistance. Aging Americans, like other age groups, are feeling the effects of challenges such as old-age poverty. Currently, 3.4 million seniors aged 65 and older live below the poverty line, and retirement income adequacy will decline due to insufficient social security benefits and less certain income from employer pensions (Munnell and Soto, 2005). Although the government has increased social security benefits to reduce elderly poverty, high medical costs can reduce the income available to meet other needs (Cawthorne, 2008). Echenberg (2009) identified four ways to reduce elderly poverty: income from investments made at an early age; income from work; income from government transfers, such as incentives to participate in the labor market and support for particular behaviors or activities; and non-monetary benefits such as affordable child care, social housing, and recyclable affordable used clothing. Canadian federal and provincial financial support programs provide financial support in nearly all provinces (Ruggeri et al., 1994) and have also ensured that no individual falls below the threshold (Sarlo, 2001). Feedback from retirees suggests that retiring Canadians have adequate financial resources, with the exception of those who retired involuntarily as a result of poor health (Alan et al., 2007). Unlike the U.S. system, which relies on the earnings-related pension component, Canada's system offers a guaranteed income in the form of Old Age Security (OAS), regardless of past participation in the labour force. Thus, the Canadian government has designed an effective old-age fund for its citizens (The Conference Board of Canada, 2016). Surprisingly, Australia has a higher rate of elderly poverty, and nearly 40 per cent of Australian seniors live in relative poverty. An OECD (2009) report notes that the increase in elderly poverty in Australia is due to an inefficient income-support payment program.
(ii) Financial planning: Causes of poverty in developing countries include the lack of sufficient income and resources to live a full life. Some analysts view poverty as the outcome of personal decisions, such as dropping out of school, having a child at an early age, becoming addicted to drugs or alcohol, or refusing to relocate for employment. Other analysts argue that poverty is a product of government programs that are not well structured. Meanwhile, limited financial resources, coupled with an inability to manage them, can lead to financial problems (Suwanrada, 2009). Today's older Malaysians are vulnerable to poverty due to forced retirement, lack of saving during younger years and limited social security coverage, coupled with changing family structures and lifestyles (Masud and Haron, 2014). The increased cost of living is another factor behind the need for good financial practices. Good financial planning and practices during younger years can help ensure financial security in old age, since one of the recommended financial goals is saving for old age (Garman and Fougue, 2004). Bardasi et al. (2002) highlight the importance of income and the effect of having low income on poverty in old age. In developed economic systems, those with high household income often consider themselves and their employers as the most important sources of retirement income, whereas households with lower incomes report the government and their families as most important (HSBC, 2008). In addition, retirement preparation among those employed influences income during old age. Moen et al. (2006) find that spouses' decision making in the form of retirement planning tends to be positively related to lower poverty in old age.

(iii) Socio-demographic factors: Research evidence points to the crucial importance of education: the higher the level of education, and thus the socio-economic status, of any individual (young or old), the less likely he or she is to be affected by poverty. These are the strongest determinants of health and quality of life in old age (Marmot & Wilkinson, 1999). In the case of Latin America and the Caribbean, the incidence of poverty among older persons is not only based on income; it also depends on factors such as health, education and labor market opportunities. Thus, poverty is certainly a multidimensional issue. Southeast Asian countries are sometimes grouped with the East Asian "developmental states", which maintained a low level of spending on welfare but used social policies in the overall pursuit of economic development (Kwan, 2005). The reason was the initial focus on contributory social insurance programs, household saving, and universal access to education. The policies of these countries have also been described as productivist, emphasizing investment in education and public health, while those lacking assets such as housing were likely to be most at risk of needing a social and economic safety net in old age. Widowhood is one of the trigger events with adverse impacts on the financial well-being of older persons (Emmerson and Muriel, 2008). Some have reported that there is no relationship between divorce and help given or received in old age (Pezzin and Schone, 1999). Researchers such as Johnson and Favreault (2004) argue that there is a significant loss of pension entitlements among women who had children and who experienced marital disruptions. In the same vein, other research has also shown that marital disruptions over the life course may have adverse consequences for social support and connectedness at older
ages.

(iv) Social support: While in the United States the government has provided substantial transfers to the elderly through social security and Medicare, some non-Western countries are trying to reinforce family support networks (Da Vanzo, 1994). In South Korea, families and close relatives provided the majority of financial support for the elderly. The government took advantage of this and did not prepare any extra measures to provide the elderly with pensions (Cook and Kim, 2010). The public pension scheme was only introduced in 1988, so retirees who left work in the mid-2000s had not accumulated pension entitlements in the new system. With modernization, tensions and gaps between generations will diminish the roles of the elderly in society and could lead to less responsibility among younger generations to support the elderly. Thus, the root causes of the high rate of elderly poverty in Korea are, first, the late preparation of a proper pension system for citizens; second, the lack of support from families; third, that the elderly do not save enough to support themselves sufficiently; and finally, that they have provided extensive support for their children (Lee, 2014). Nevertheless, despite an increased need for social support for older people, there is evidence that the size of social networks generally decreases with age as a result of the loss of close friends and family members (Wrzus et al., 2013). Moreover, whereas 50% of retirees finished their pension fund within 5 years, 70% of them finished their savings within 10 years, and 14% of them finished it within 3 years (NST, 2015). These facts are alarming, especially for those who still plan to rely fully on their pension fund savings for post-retirement income. The rapid increase in the aged population, together with longer life expectancy, means that well-planned personal financial planning is of utmost importance (Mohidin et al., 2013).
According to Kim (2003), most people are not afraid to retire, but they are not well prepared for retirement due to a lack of money. More than 90% of Malaysians do not prepare for retirement, and they do not take into account inflation rates and rising medical costs (Lai et al., 2009). The Government has always maintained that it is the responsibility of children to provide care and support for their aging parents (Chan, 2005). Traditionally, it has been the norm and cultural practice of all ethnic groups for children to repay their parents (Yaacob, 2000). There is a growing literature on the importance of culture in determining the extent to which children repay their parents (Ashraf et al., 2016), as well as a growing literature that examines the effects of gender-related cultural norms (Giuliano, 2014; Alesina et al., 2015). However, Malaysia has undergone a rapid demographic transition, with a continuing decline in fertility and increasing life expectancy over several decades. Since independence from British rule in 1957, the total fertility rate has declined from 6.1 per woman to 2.1 in 2010 (Department of Statistics Malaysia, 2014). The increasing number of older persons combined with diminishing family size will put more stress on traditional family support systems. Previous findings in Malaysia revealed that older Malaysians, especially those living in rural areas, largely depend on financial support from their children (Tengku Aizan & Jariah, 2010). Family support for the elderly may be eroding due to socio-demographic changes such as the trend towards delayed marriage and non-marriage, shrinking family size, out-migration of children, increased female labor force participation, and living in condominiums (Department of Statistics Malaysia, 2014). Thus, recent changes in the structure and functions of the family may also have profound effects on the perception and provision of social support for older people (Abd Samad, 2013). Pertaining to retirement financial planning, most working Malaysians depend on monthly contributions to the Employees Provident Fund (EPF), with an average contribution rate of around 23% of gross salary every month (i.e. employer 12% and employee 11%) going to their retirement savings with the EPF (Ong and Lee, 2001). Malaysia's EPF was established in 1951 and is the oldest provident fund (PF) scheme in the world (Thillainathan, 2004). Compared with other East Asian countries, Malaysia ranks high with respect to the overall contribution rate to the pension fund; however, with respect to retirement age, an increase to 60 has only recently been legislated.
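As a purely illustrative sketch of the arithmetic behind these contribution rates (all figures below are assumptions for illustration, not data from the study), one can project an EPF-style balance and ask how long it would last in retirement:

```python
# Illustrative only: projects a retirement balance from a 23% EPF-style contribution
# rate (employer 12% + employee 11%) and asks how many years it lasts in retirement.
# Salary, dividend rate, career length and drawdown rate are assumed values.

monthly_salary = 3000.0     # assumed flat monthly salary (any currency)
contribution_rate = 0.23    # employer 12% + employee 11%, as described above
annual_dividend = 0.05      # assumed average annual return credited to the account
working_years = 30          # assumed contribution history

balance = 0.0
for _ in range(working_years):
    balance = balance * (1 + annual_dividend) + 12 * monthly_salary * contribution_rate

# Assume the retiree withdraws 70% of the final salary each year, with no further returns.
annual_drawdown = 0.7 * 12 * monthly_salary
print(f"Balance at retirement: {balance:,.0f}")
print(f"Years the balance lasts: {balance / annual_drawdown:.1f}")

# Under these optimistic assumptions the fund lasts roughly two decades; with career
# gaps, pre-retirement withdrawals or lump-sum spending the horizon shrinks sharply,
# which is consistent with the exhaustion statistics quoted in the text.
```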
Nowadays, most Malaysians do not have enough savings when they retire, and they use up their pension fund within a few years. Statistics show that Malaysia's pension fund has a high total contribution rate of 23%, compared with 12.4% in the United States, 23% in Japan and 23.8% in the UK; nevertheless, Malaysian retirees still do not have enough savings. Statistics further show that Malaysia's per capita income of $10,432 is far lower than that of the United States ($51,749), the United Kingdom ($39,093) and Japan ($46,720), whereas Singapore, a close neighbour of Malaysia, has a higher per capita income of $51,709 together with the highest pension contribution rate, a total of 38% (Holzmann, 2015). Malaysian elderly are living longer, which increases their share of the total population. Taking into consideration a normal retirement age of approximately 55 years, there is an urgent need to establish an adequate system that ensures sufficient savings at retirement. This raises the questions: What are the causes of elderly poverty? In what direction should people plan for future retirement so that they can live comfortably in old age? Therefore, the purpose of the study is to explore the causes of poverty in old age and to recommend actions to reduce it.

Proposed framework and hypothesis development

The absolute concept of poverty refers to one's inability to obtain the minimum necessities to maintain physical efficiency or to fulfill basic human needs (Jamilah, 1994). According to Deaton and Paxson (1998), poverty is a syndrome affecting people in situations characterized by malnutrition and poor health standards, low income, unemployment, unsafe housing, lack of education, inability to acquire modern necessities, insecure jobs, and a very negative outlook on life. The Global Age Watch Index (2015) provides a good working framework to review the measures of vulnerability for older people. The framework identifies four domains of well-being for older persons: 1) financial security, using indicators on the pension income beneficiaries ratio, older people's incomes or consumption relative to the rest of society, and poverty risk among older people; 2) health status, using healthy life expectancy at 60 and psychological well-being as indicators of physical health and mental well-being; 3) employment and education among older people as a proxy for their coping attributes, given that lacking these attributes makes them more vulnerable; and 4) enabling environments, using indicators pertinent to age-friendly attributes. Figure 1 shows the causes of poverty at old age. Due to insufficient retirement funds, many elderly persons are confronted with serious financial problems (Gardyn, 2000). A survey of the elderly in the public service shows that the elderly in Malaysia anticipate financial constraint to be the major challenge upon retirement (Merriam and Muhamad, 2000). Many of the elderly still need to work at retirement age as they did not have careful financial planning in earlier years. Based on this, the research proposes as follows.

Proposition 1: There is a relationship between financial planning (income and saving) and elderly poverty.
Older people might be poorer simply because they are less educated than the younger generations. Due to their lower education level, the elderly have fewer working capabilities and limited sources of income (Mat & Taha, 2003). Therefore, this research proposes as follows.

Proposition 2: There is a relationship between socio-demographic factors (employment status and education) and elderly poverty.

The government's ability to manage a social security system is important in determining the system best suited to a country. Many countries have had serious problems managing their social security systems, which consequently lead to elderly poverty (OECD; Yaacob and Nurdin, 2000). Thus, the research proposes as follows.

Proposition 3: There is a relationship between government support (retirement age and social security) and elderly poverty.

Most Malaysian elderly depend on their adult children to support them, but those whose children do not have enough money to support them have serious problems managing their living costs (Merriam and Mohamad, 2000). Although allowances from children significantly reduce elderly poverty, it is not clear whether this practice will indeed last in the long run. Thus, the research proposes as follows.

Proposition 4: There is a relationship between social support (family, community) and elderly poverty.

Future Directions for Retirement Plan

Old-age financial protection has become a key focus of policy interest and research efforts in South-East Asia, including Malaysia. In developed countries, the combination of strong social security systems, well-developed capital markets and small households contributes to higher living standards for the elderly. Due to demographic, social and economic changes, there is a need for an effective system of income provision for the elderly. Reliance on traditional means of family support combined with individual savings may not reduce poverty in old age. To prevent such a scenario, action needs to be taken to ensure that all workers are covered by a system that offers a minimum guaranteed income through periodical payments. However, any such reforms are dependent upon the political will of the government to address such concerns. The government should provide assistance through effective public policy to protect the welfare of the elderly, involve private entities through corporate social responsibility, encourage both the formal and informal sectors to promote voluntary savings, ensure the adequacy of savings to support post-retirement living, and create awareness of the benefits of saving among the young through early education in schools and parental guidance (Suhaimi, 2013). Indeed, most developed countries have introduced policies and organizational practices that target older workers, including reducing incentives for workers to take early retirement, encouraging later and flexible retirement, passing legislation to counter age discrimination, and helping older workers find and keep jobs (The Conference Board of Canada, 2016). As the number of the elderly grows annually, their demand for healthcare also increases. Hence, in the future, the Malaysian government should build more hospitals and clinics, especially in rural areas, to cater for the huge healthcare demand from the elderly.
Lifelong learning can make an important contribution to poverty reduction. Individuals engaged in lifelong learning are more likely to improve their livelihoods through better employment opportunities, higher income, a broader understanding of financial markets, better health and healthier behaviors, access to health services, and knowledge of health conditions, among others. Therefore, the government must work hand in hand with the private sector to maintain the skills of the current workforce, upgrade the skills of those with the greatest needs to increase their employability, and allow adults to re-skill to find employment in other areas (Sabates, 2008).

It is important to have sufficient financial knowledge, as it helps in understanding one's own financial status. Those who have financial knowledge can prepare sound financial plans for the future (Joo & Grable, 2005) and thus can avoid poverty in old age. Thus, the government should establish a government-linked institution to create awareness throughout the country and provide professional advice to citizens. In addition, the government should make it compulsory for higher education learners to take financial planning as a required subject or co-curricular activity. Besides that, both public and private companies should provide training on financial planning for working people. This practice will not only help them to prepare better financial plans for the future but will also indirectly support the country's economic growth.

Children in Asia are generally positive and responsible towards their elderly parents. These elderly parents are cared for not only financially, but also in the form of food preparation, purchase of daily necessities, housekeeping, doing laundry and transportation to visit relatives, hospitals or clinics (Chor & DaVanzo, 1999). In many other regions, such as Singapore, China, the United States and Canada, adult children are required by law to support their elderly parents. Legislation of parental support explicitly states that it is the responsibility of the family, rather than the government, to care for elderly parents. This legislation of parental support and its practices should also be applied in other Asian countries like Malaysia, where the culture still values the close-knit family. That will help the elderly to have a comfortable life with people around them. In addition, every community in the region must play a role in ensuring that the elderly are well taken care of.
Figure 1: Framework of causes of poverty at old age (Cook and Pincus, 2014)

Available evidence suggests that social support makes an important contribution to health (Kendler et al., 2005) and that a lack of social support may have negative effects on physical and mental health among general populations (Lakey & Cronin, 2008). Singapore is expected to experience rapid aging of its population in the next two decades. Therefore, old-age income security is increasingly becoming an important economic, social and political issue. Citizen concerns are growing across a range of social issues, including relative poverty; access, equity and affordability in health care; and retirement income provision. Both in Singapore and Malaysia, social protection relies primarily on personal savings and family support networks, and government support is channelled towards public provision of health and education services (Cook and Pincus, 2014).
5,257
2017-01-30T00:00:00.000
[ "Sociology", "Economics" ]
Impact of mandatory IFRS adoption on economic growth: the moderating role of Covid-19 crisis in developing countries

Research Question: Does the Covid-19 crisis significantly moderate the relationship between mandatory International Financial Reporting Standards (IFRS) adoption and economic growth in developing countries, especially in the MENA (Middle East and North Africa) region and SSA (Sub-Saharan Africa) countries? Motivation: Two sources of motivation are behind this study. First, research on the impact of mandatory IFRS adoption on macroeconomic indicators such as economic growth is still scarce. Second, studying the impact of mandatory IFRS adoption on economic growth before and during the Covid-19 crisis allows a better understanding of this relationship in times of crisis. Idea: This article aims to investigate the moderating role of the Covid-19 crisis in the relationship between mandatory IFRS adoption and economic growth in developing countries. Tools: The study was conducted based on panel data from 30 developing countries (15 MENA countries and 15 SSA countries) during the period 2017-2020. Collected data were analysed using the Generalized Least Squares (EGLS/weighted cross-section) with fixed effects estimation technique. Findings: The main results of the study show that mandatory IFRS adoption has a positive impact on economic growth for the full sample, and that this positive impact is reduced during the Covid-19 crisis. Contribution: The study results are very useful to policymakers and regulators in developing countries, especially in crisis periods.

Introduction

Undoubtedly, improving economic growth is considered the main objective of economists, economic researchers and policymakers. Robust economic growth can be a major driver of poverty alleviation (Thorbecke, 2013; Anand et al., 2014; etc.), unemployment reduction (Sadiku et al., 2015; Bayrak & Tatli, 2018; Xesibe & Nyasha, 2020; Hjazeen et al., 2021; Alabed et al., 2022; etc.), and so on. Nowadays, many developing countries have made efforts to promote economic growth. Among these efforts, the adoption of International Financial Reporting Standards (IFRS) is largely recommended by international organizations such as the World Bank (WB) and the International Monetary Fund (IMF) in their several Reports on the Observance of Standards and Codes (ROSC) on Accounting and Auditing (AA). In its literature review on the economic impact of IFRS adoption, Nurunnabi (2021) indicates that the adoption of a high-quality set of harmonized accounting standards (IFRS) can improve trade and foreign direct investment (FDI) and reduce information asymmetries. During the last two decades, the adoption of IFRS has shown a remarkable trend (Elhamma, 2014; Bengtsson, 2021), especially after the decision of the European Union (EU) in 2002 (Regulation No. 1606/2002) to require IFRS for all listed companies on EU stock markets from 2005 onwards. According to a report by the IFRS Foundation in 2018, 144 or 87% of jurisdictions around the world require IFRS for all or most companies (IFRS Foundation, 2018). To better understand the macroeconomic consequences of mandatory IFRS adoption, researchers have been interested in several topics such as the impact of IFRS adoption on FDI inflows (Márquez-Ramos, 2008; DeFond et al., 2011; Gordon et al., 2012; Akisik, 2014; Yousefinejad et al.,
2018; Musah et al., 2020; Siriopoulos et al., 2021; Elhamma, 2023; etc.), on economic growth (Oppong & Aga, 2019; Akisik et al., 2020; Owusu et al., 2022; etc.), and so on. However, accounting and economic empirical studies on the relationship between mandatory IFRS adoption and macroeconomic indicators in the Covid-19 crisis period are absent. According to Rinaldi et al. (2020: 180), "while accounting and accountability research has developed and become more sophisticated, the challenges posed by the Covid-19 pandemic provides both a challenge and need for research in this area to become more impactful". In this context, this study aims to examine the moderating role of the Covid-19 crisis in the relationship between mandatory IFRS adoption and economic growth in developing countries, especially in the Middle East and North Africa (MENA) region and Sub-Saharan Africa (SSA). The key question in this research is therefore: Does the Covid-19 crisis significantly moderate the relationship between mandatory IFRS adoption and economic growth in developing countries, especially in MENA and SSA? MENA is a large region that is characterized by substantial petroleum and natural gas reserves. According to the January 1, 2009 issue of the Oil and Gas Journal, MENA countries hold 60% of the world's oil reserves and 45% of the world's natural gas reserves. Currently, seven (Algeria, Iran, Iraq, Kuwait, Libya, Saudi Arabia and the United Arab Emirates) of the thirteen countries of the Organization of the Petroleum Exporting Countries (OPEC) are part of the MENA region. SSA is composed of countries located in the area that lies south of the Sahara. The United Nations Development Programme (UNDP) applies the "sub-Saharan" classification to 46 of Africa's 54 countries. Our objective in this research is to explain the evolution of the relationship between mandatory IFRS adoption and economic growth before and during the Covid-19 crisis. In order to achieve this objective, panel data from 30 developing countries (15 MENA and 15 SSA countries) spanning four years over the period 2017-2020 were used. Therefore, the full sample consists of 120 observations. Two main results should be highlighted. First, mandatory IFRS adoption has a positive and significant impact on economic growth for the full sample. Second, the interaction between mandatory IFRS adoption and the Covid-19 crisis has a negative and significant impact on economic growth for all selected developing countries. This implies that the Covid-19 crisis significantly decreases the positive impact of mandatory IFRS adoption on economic growth. This research makes both theoretical and practical contributions. Theoretically, the study results add knowledge to the existing economic and accounting literature on the relationship between mandatory IFRS adoption and macroeconomic indicators, especially in crisis periods, by providing empirical evidence on the moderating effect of the Covid-19 crisis on the relationship between mandatory IFRS adoption and economic growth in developing countries. To the author's knowledge, this is the first research study to investigate the impact of mandatory IFRS adoption on economic growth using the Covid-19 crisis as a moderator. Practically, this research work is very useful to policymakers and regulators in developing countries.
The remainder of the paper is structured as follows. In the second section, we provide the literature review and hypotheses development. In the third section, we present our methodological choices. In the fourth section, we report the empirical results. Finally, in the fifth section, we provide a summary and conclusion. Literature review and hypotheses development 2.1 Impact of mandatory IFRS adoption on economic growth Research studies on the link between IFRS adoption and macroeconomic indicators such as economic growth are very limited (Elhamma, 2023). On the one hand, accounting scholars are more interested in issues at the firm level (Owusu et al., 2017; Mameche & Masood, 2021). On the other hand, economic researchers investigating the drivers of economic growth have paid little attention to mandatory IFRS adoption (Oppong & Aga, 2019). Akisik (2013) examined the relationship between accounting regulation, financial development, and economic growth in fifty-one developed and emerging market economies over the period 1997-2009. The study results showed a positive relationship between economic growth and high-quality (international) accounting standards. In the same vein, Özcan (2016), using a sample of 41 countries that had adopted IFRS and 29 countries that had not yet adopted IFRS over the period 2005-2015, studied the relationship between IFRS adoption and countries' economic growth. The results showed that IFRS adoption significantly increased countries' economic growth. Based on a sample of 28 EU countries and data from 2005 to 2014, Oppong and Aga (2019: 792) demonstrated that "IFRS adoption improves the economic growth and that IFRS adoption matters for developing economies than developed ones". On the basis of these results, the authors recommended enforcing the adoption of IFRS, especially in developing economies. Recently, Akisik et al. (2020), using data from 41 African countries over the period 1997-2017, reported two main results. First, the relationship between IFRS adoption and economic growth is statistically insignificant. Second, the interaction between FDI and IFRS adoption (FDI x IFRS) has a significant and positive effect on economic growth, suggesting that while the adoption of IFRS "alone may not be beneficial for economic growth, the use of IFRS appears to enhance the positive impact of FDI on economic growth" (Akisik et al., 2020: 11). More recently, Owusu et al. (2022) investigated the relationship between IFRS adoption and economic growth in 78 developing economies between 1996 and 2013 and examined the role of country-level institutional quality in this relationship. The study results show that countries that adopt IFRS experience better economic growth than non-adopting countries. The findings also demonstrate that good institutions significantly moderate the relationship between IFRS adoption and economic growth. Thus, we can formulate the following hypothesis: Hypothesis H1. Mandatory IFRS adoption has a positive and significant impact on economic growth.
IFRS and Covid-19 crisis From an agency theory perspective, IFRS adoption is intended primarily to reduce the information asymmetry between managers and shareholders; it is therefore considered a mechanism that disciplines managers and reduces agency conflicts between the two parties. The agency relationship is defined by Jensen and Meckling (1976: 308) as "a contract under which one or more persons (the principal[s]) engage another person (the agent) to perform some service on their behalf which involves delegating some decision-making authority to the agent". According to this theory, financial reporting information can be used by investors as an important mechanism to reduce information asymmetry problems between firms (managers) and their investors. The adoption of IFRS limits the discretionary accounting choices available to managers and therefore reduces their opportunistic behaviour (Cuijpers & Buijnik, 2005). However, a question arises: does the adoption of IFRS play the same role in times of crisis such as the Covid-19 crisis? According to the World Health Organization (2022), since the end of 2019 the Covid-19 pandemic has resulted in more than 600 million confirmed cases and about 6 million deaths globally (https://covid19.who.int). This health crisis, which quickly turned into an economic and social crisis, has raised several concerns about financial accounting, and several accounting scholars have issued calls for research through special issues and academic conferences (Rinaldi et al., 2020). The year 2020 witnessed the economic crisis caused by the coronavirus (Covid-19) pandemic. The high degree of uncertainty caused by the Covid-19 crisis (lockdown measures, for example) is certain to trigger updates to management's operating plans. In this situation, most entities face difficulties in estimating certain financial statement items under IFRS. For instance, "management uses projections of future cash flows when testing for impairment of non-financial assets like goodwill; recognizing and measuring the impairment of financial assets; assessing whether the going concern basis of preparation is appropriate; and measuring the fair value of financial assets" (Tokar & Kumar, 2020: 2). Faced with this difficult situation, the International Organization of Securities Commissions (IOSCO) issued a statement on 29 May 2020 acknowledging that it is possible to use imperfect information in the preparation of financial statements. According to IOSCO, "we remind issuers of their responsibility to use the best available information in making well-reasoned and supported judgments". In addition, IOSCO insists that non-GAAP financial measures, also called Alternative Performance Measures (APMs), should be presented consistently from period to period, adequately defined, and used to supplement financial information (IOSCO, 2020). In this context, we can formulate the following hypothesis: Hypothesis H2. The Covid-19 crisis unfavourably moderates the relationship between mandatory IFRS adoption and economic growth. Sample of the study and data The empirical analysis covers 30 developing countries over the period 2017-2020. The sample is composed of: • 15 MENA countries: Algeria, Bahrain, Egypt, Iran, Iraq, Jordan, Kuwait, Lebanon, Mauritania, Morocco, Oman, Qatar, Saudi Arabia, Tunisia and the United Arab Emirates (UAE).
• 15 sub-Saharan African countries: Burkina Faso, Burundi, Cameroon, Côte d'Ivoire, Gambia, Ghana, Kenya, Mali, Niger, Nigeria, South Africa, Tanzania, Togo, Zambia and Zimbabwe. In this research, we use secondary data, obtained mainly from the World Development Indicators (WDI) database published by the World Bank, the Worldwide Governance Indicators (WGI), Deloitte's IAS Plus, and the IFRS Foundation website. Measurement of variables and source of data In this research, seven variables are used: economic growth (dependent variable), IFRS adoption (independent variable), Covid-19 crisis (moderator variable), and foreign direct investment (FDI), inflation, trade openness and unemployment (control variables). Measurement and sources of data are reported in Table 1. The scope of the present research is not limited to the impact of mandatory IFRS adoption on economic growth; it also explores the moderating effect of the Covid-19 crisis on the relationship between mandatory IFRS adoption and economic growth. Equation 2 can therefore be written as: (Eq. 2) lnGdpCap_it = β0 + β1 IFRS_it + β2 Covid19_it + β3 (Covid19_it × IFRS_it) + β4 lnFdi_it + β5 Inflat_it + β6 lnUneRat_it + β7 lnOpenes_it + ε_it, where "lnGdpCap" represents the natural logarithm of GDP per capita (current USD); "IFRS" represents mandatory IFRS adoption; "Covid19" represents the Covid-19 crisis; "lnFdi" represents the natural logarithm of FDI net inflows (current USD); "Inflat" represents annual inflation (consumer prices, annual %); "lnUneRat" represents the natural logarithm of the annual unemployment rate; "lnOpenes" represents the natural logarithm of trade openness; "i" denotes the country subscript; "t" is the time period; "βi" are the coefficients of the different variables; and "ε_it" is the error term. To address heteroscedasticity and serial correlation in our models, the Generalized Least Squares (EGLS) method with fixed effects was employed (Hausman, 1978). Descriptive statistics and correlations Descriptive statistics for the main variables, including mean, standard deviation, minimum and maximum values, are summarized in Table 2. In panel A, for the full sample that includes MENA and SSA countries, lnGdpCap has a mean of 8.07 and IFRS has a mean of 1.19. Comparing panels B and C for MENA and SSA, lnGdpCap for MENA (9.03) is higher than that for SSA (7.10). The comparison of lnFdi, Inflat and lnOpenes across these countries also yields interesting results. lnFdi for MENA countries is higher than for SSA countries (2 vs. 1.93). MENA countries also fare better in terms of inflation control (5.54% vs. 17.95%) and trade openness (3.67 vs. 3.24). In terms of IFRS adoption and unemployment, the two groups resemble each other.
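Before turning to the regression results, Eq. 2 can be illustrated with a minimal estimation sketch. This is not the authors' code: the file name, column names and the choice of estimator are assumptions for illustration only, and the paper's weighted cross-section EGLS (an EViews procedure) is approximated here by a standard country fixed-effects panel regression with clustered standard errors using Python's linearmodels package.

import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical input: one row per country-year with the variables of Table 1.
df = pd.read_csv("panel.csv")
df = df.set_index(["country", "year"])               # entity/time index expected by linearmodels
df["IFRS_x_Covid19"] = df["IFRS"] * df["Covid19"]    # interaction term of Eq. 2

exog = df[["IFRS", "Covid19", "IFRS_x_Covid19", "lnFdi", "Inflat", "lnUneRat", "lnOpenes"]]
model = PanelOLS(df["lnGdpCap"], exog, entity_effects=True)    # country fixed effects
result = model.fit(cov_type="clustered", cluster_entity=True)  # robust to within-country correlation
print(result.summary)

The coefficient on IFRS corresponds to β1 and the coefficient on the interaction to β3 in Eq. 2; the cross-section weighting used in the paper would require an additional GLS weighting step that is not shown here.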
Regression analysis In this sub-section, we test the two hypotheses of the present research. First, we investigate the impact of mandatory IFRS adoption on economic growth (H1); second, we examine the moderating role of the Covid-19 crisis in the relationship between mandatory IFRS adoption and economic growth (H2). Table 4 shows the EGLS regression results. As shown in Table 4, and in line with prior research, the results show that the impact of mandatory IFRS adoption on economic growth is positive and statistically significant for the full sample (β = 0.0387, p < 0.05 in model 1; β = 0.0960, p < 0.01 in model 2). These results support the first research hypothesis (H1), according to which mandatory IFRS adoption positively and significantly affects economic growth in developing countries, implying that countries that have adopted IFRS experience better economic growth than non-adopting countries. Our results confirm those obtained in earlier studies (Akisik, 2013; Özcan, 2016; Oppong and Aga, 2019; Owusu et al., 2022; etc.). However, the MENA region and the SSA countries do not show the same results. Mandatory IFRS adoption has a positive and significant impact on economic growth in the MENA region (β = 0.1268, p < 0.01 in model 3; β = 0.1257, p < 0.01 in model 4), whereas this relationship is not significant in SSA countries (models 5 and 6). These results can be explained by the fact that, among the 15 SSA countries in our sample, 6 are French-speaking African countries belonging to the OHADA (Organization for the Harmonization of Business Law in Africa) system (Burkina Faso, Cameroon, Côte d'Ivoire, Mali, Niger and Togo). These countries adopted IFRS only recently, on January 1, 2019, and such a recent adoption is unlikely to have a significant impact on economic growth in the very short run. The interaction between mandatory IFRS adoption and the Covid-19 crisis (IFRS*Covid19) has a significant negative impact on economic growth (β = -0.0649, p < 0.01 in model 2). This implies that the Covid-19 crisis unfavourably moderates the relationship between mandatory IFRS adoption and economic growth; in particular, the crisis reduces the positive impact of IFRS adoption on economic growth. These results support the second research hypothesis (H2). This relationship is significant for both groups of countries (MENA and SSA), implying that the Covid-19 crisis reduces the impact of IFRS adoption on economic growth in both regions. For panel A, which includes the full sample, it is noteworthy that lnUneRat (β = -0.1198, p < 0.01 in model 1; β = -0.0466, p < 0.05 in model 2) has a significant negative impact on lnGdpCap, implying that low unemployment is crucial for improving the economic growth of developing countries. This result confirms those obtained by other researchers in other contexts. Li and Liu (2012) found that unemployment has a negative impact on economic growth in China. Soylu et al. (2017) examined the relationship between economic progress and unemployment in Eastern European countries over the period 1992-2014; their findings show an adverse long-run (cointegrating) relationship between unemployment and economic progress.
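To make the moderation result concrete, the marginal effect of IFRS adoption implied by Eq. 2 is β1 + β3·Covid19. A back-of-the-envelope reading of the model 2 coefficients reported above (not an additional estimate from the paper, and assuming Covid19 is coded 1 for 2020 observations and 0 otherwise) gives ∂lnGdpCap_it / ∂IFRS_it = β1 + β3·Covid19_it ≈ 0.0960 when Covid19_it = 0 and 0.0960 - 0.0649 = 0.0311 when Covid19_it = 1, so the estimated growth premium associated with mandatory IFRS adoption remains positive during the crisis year but is roughly two-thirds smaller.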
Summary and conclusion This investigation was conducted to provide empirical evidence on the moderating effect of the Covid-19 crisis on the relationship between mandatory IFRS adoption and economic growth in developing countries. Panel data from 30 MENA and SSA countries covering a period of 4 years, from 2017 to 2020, were analysed using the Generalized Least Squares (EGLS) with fixed effects estimation technique. Our study found two main results: • First, mandatory IFRS adoption has a positive and significant impact on economic growth for the full sample. • Second, the interaction between mandatory IFRS adoption and the Covid-19 crisis has a negative and significant impact on economic growth, implying that the Covid-19 crisis significantly reduces the positive impact of mandatory IFRS adoption on economic growth. This study makes important contributions to both the accounting and the economics literature. It is among the first to examine the effects of mandatory IFRS adoption on economic growth in times of crisis. To the author's knowledge, this is the first study to investigate the impact of the Covid-19 crisis on the relationship between mandatory IFRS adoption and economic growth in developing countries. In addition, our results have important managerial implications for policymakers and regulators: mandatory IFRS adoption improves economic growth, but this positive impact is significantly reduced in times of crisis (the Covid-19 crisis). Finally, we present the study's limitations alongside directions for further research. Two main limitations should be acknowledged. First, this empirical study focused on only two regions of the world (MENA and SSA). Therefore, to better understand the relationship between mandatory IFRS adoption and economic growth during the Covid-19 crisis in developing countries, future research may study other regions in Asia, Latin America, etc. Second, this study did not account for countries that have adapted their local accounting standards to IFRS (for example, Algeria, Egypt, Tunisia, etc.); these countries are considered "non-adopters of IFRS" in this research. Future studies may test the impact of this adaptation on economic growth during the Covid-19 crisis. Despite these limitations, the present study's results remain very useful to both academics and policymakers in developing countries.
4,454.8
2023-09-30T00:00:00.000
[ "Economics" ]
A mobile learning framework for higher education in resource constrained environments It is well documented that learning oppourtunities afforded by mobile technology (m-learning) holds great potential to enhance technology-enhanced learning in countries and communities with low socio-economic conditions where web-based e-learning has failed because of limited infrastructure and resources. Despite the potential for m-learning, its actual uptake has been low. The extant literature in this sphere provides some theoretical insight, with evidence of limited on-the-ground practical studies that often do not progress beyond the pilot phase. Failure to embed sustainable learning opportunities has been attributed to the absence of a contextual framework suitable for the heterogeneous nature of many developing countries. This paper thus presents an m-learning framework that considers the sociocultural and socio-economic contexts of low-income economies. The framework is based on a range of studies conducted over four years, including the outcome of two empirical studies conducted in a Nigerian university. Documenting the research underpinning the design provides practitioners and policymakers with a framework for a potentially sustainable strategy for long-term mainstream m-learning integration in higher education in low-income countries. Introduction It is well documented that the required infrastructure and cost associated with establishing and implementing technology-enhanced learning (TEL), particularly e-learning -that is, education delivery and/or learning over networked computing devices has impacted some higher education institution's ability to provide such learning opportunities (Eltahir, 2019;Hadullo et al., 2018;Kigotho, 2018;Olutola & Olatoye, 2015). This is especially so in countries classified as low and lower-middle-income economies (LMICs) (World Bank Group, 2018). At a more fundamental level, some of these countries' educational systems are challenged by inadequate funding, leading to crowded classrooms and limited facilities and resources for effective teaching and learning (Ewiss, 2020;Kuchah, 2018;Suresh & Kumaravelu, 2017). It has also been widely recognised that despite these circumstances, the availability of mobile technology platforms in all countries regardless of economic status provides an opportunity for improved educational systems and TEL opportunities in the form of mobile learning (m-learning) (Briggs, 2014;Lamptey, 2020;Traxler & Vosloo, 2014). Nevertheless, m-learning, a form of TEL using portable handheld mobile devices (e.g., phones and tablets) to facilitate and enrich learning regardless of circumstance, time, place and context, has not been widely adopted (Bikanga Ada, 2018;Okai-Ugbaje et al., 2017). Barriers to adoption reportedly include the absence of policies to drive implementation; commitment by institutional leadership and educators' attitude; resources, knowledge and skill, including the pedagogical knowledge to support m-learning; and contextual theories (Farley et al., 2015;Lamptey & Boateng, 2017). The need for the adoption of m-learning, which appears to be a more practical solution to the realities of LMICs due to the deep penetration of mobile technology in the region and the prospects of mobile devices to enhance and enrich learning cannot be overemphasised (Lamptey, 2020;Mohammadi et al., 2020). 
The coronavirus pandemic has buttressed this need as the suspension of face-to-face teaching and learning to curtail the spread of the virus led to increased uptake of synchronous and asynchronous online teaching and learning globally (Dhawan, 2020;Li & Lalani, 2020;Pokhrel & Chhetri, 2021). In many LMICs, despite pockets of remote learning opportunities, including broadcast education via television and radio (Vegas, 2020;World Bank Group, 2020a), the physical shutdown of institutions halted the majority of students' education (Olisah, 2020;Thomas, 2020). For many, their education only resumed when they were able to return to face-to-face classes. The gap in education was a result of poor infrastructure, inadequate funding and a lack of requisite skills to conduct e-learning (Lawal, 2020;Zalat et al., 2021). The troubling result is that the COVID-19 pandemic further weakened an already struggling educational system (World Bank Group, 2020b), the long-term impact of which is yet to be determined. Against this backdrop, this paper attempts to provide a pathway for the seamless integration of m-learning into existing educational practices within the contexts of LMICs, to enhance and enrich teaching and learning in higher education. Such opportunities may provide a reasonable compromise or substitute to continuing the delivery of education in the event of circumstances warranting the shutdown of face-to-face delivery. The pathway presented is an m-learning framework that aligns theory with practice in using mobile devices to facilitate teaching and learning. The framework considers the pedagogical, socioeconomic, sociotechnical and sociocultural contexts of LMICs. These considerations are important because successful technology adoption not only relies on the availability of the required infrastructure but most importantly, a contextual understanding of the ground realities to ensure technology interventions are fruitful . The research presented herewith brings together the findings of a variety of studies conducted over a four-year period, culminating in the creation of the framework. This paper concentrates on demonstrating the shift from the current realities facing higher education in LMICs to a potentially sustainable technology-enhanced and student-centred learning practice. Following this introduction, the paper presents a review to demonstrate the gap in literature warranting the creation of a contextspecific m-learning framework for LMICs. Then, the conceptual framework that provided the theoretical basis for the proposed framework leading to the resulting framework is explained. The subsequent sections present the study's methodology, reflections on the findings and further research directions for researchers wishing to extend this work. Literature review This section is comprised of three parts. The first provides an overview of m-learning adoption and implementation. This is followed by a critical review and analysis of existing m-learning models and frameworks focused on pedagogies and the learning environment. Finally, a conceptual framework informed by the critical review findings is presented. M-learning adoption and implementation To encourage m-learning adoption and implementation, prior studies have attempted to define its attributes in the form of theoretical models and frameworks, as these denote analytical principles, concepts and ideas that explain phenomena, events or behaviour with a structure, outline or plan (Nilsen, 2015). 
Hsu and Ching (2015) reviewed 17 m-learning models and frameworks to find the pattern in m-learning research. The models and frameworks were grouped into categories based on the emphasis of each study. The findings showed a growing interest in the pedagogical aspect of m-learning and advocation for m-learning adoption. The review also revealed that most models and frameworks are derived from the context of educationally advanced countries. More recent studies have continued to show these trends. For example, Romero-Rodríguez et al. (2020), in a review of 19 m-learning studies from the context of higher education, most of which were published between 2019 and 2020, also found an increasing trend in the pedagogical aspects of m-learning. Further, the majority of studies were theoretical, as only four of the 19 studies were based on practical concepts. A narrower examination of m-learning studies from the contexts of LMICs showed that m-learning research is gaining momentum in the region (Kaliisa & Picard, 2017;Lamptey & Boateng, 2017). However, again, the vast majority of studies are abstract rather than practical and not underpinned by theory despite the importance of theory to guide educational interventions (Okai-Ugbaje et al., 2017). Lamptey and Boateng (2017) attribute the poor theoretical underpinnings to the absence of theories that consider low-income countries' pedagogical and socio-economic contexts. Review and analysis of m-learning models and frameworks This section presents a critical review and analysis of m-learning theories, focusing on m-learning models and frameworks on pedagogies and the learning environment. In addition to targeting studies focused on m-learning pedagogies, only those with the tangible outcome of a model or framework were considered. For an in-depth analysis, the underpinning theoretical and/or pedagogical approaches of each model or framework, target audience, relationship with other learning approaches (traditional face-to-face, d-learning, and e-learning) and whether the models/frameworks were evaluated or validated were noted. A summary of the findings is presented in Table 1. The literature review findings are consistent with claims that there are limited m-learning models/frameworks from and in the context of developing countries (Hsu & Ching, 2015;Lamptey & Boateng, 2017;Romero-Rodríguez et al., 2020). While the overall number of m-learning models/frameworks is relatively low, the analysis has also shown that a comprehensive framework that is grounded in empirical investigation and considers the pedagogical and socio-economic contexts of higher education in LMICs is missing. Although Irugalbandara and Fernando's (2019) work provides some perspective, its applicability appears limited, given it is designed to push vocational knowledge and the content delivered is at primary and junior secondary education levels. Arguably, any of the models or frameworks from the context of educationally advanced countries may apply to learners in LMICs. However, caution should be taken in importing pedagogical solutions from educationally advanced to lessadvanced countries due to differences in educational opportunities in the regions (Apiola & Tedre, 2012;Okai-Ugbaje, 2021). A good example is that many LMICs have a deficit in resources required to provide adequate pedagogical support to both faculty and students. 
Thus, the most practised and culturally familiar pedagogy is the didactic lecture, whereby students are mainly passive learners (Kuchah, 2018;Okai-Ugbaje, 2021). This stands in sharp contrast to the more student-centred pedagogies in advanced countries. Therefore, what works in educationally advanced countries may not be suitable in LMICs. This evidence suggests that an m-learning framework, in and from the context of higher education in LMICs, is necessary. Conceptual framework The analysis in Table 1 provides rich insight into the pedagogical aspects of m-learning. However, the works of Koole (2009) and Ng and Nicholas (2013) stand out because of their approaches to theorising m-learning, explained below. Additionally, while each is unique, both frameworks have the strengths of other frameworks. On this basis, they provided a strong starting point for creating the conceptual framework. This study acknowledges the apparent contradiction of beginning with studies from the context of educationally advanced countries given the earlier argument. However, not giving them due consideration appeared short-sighted and negated the importance of building on existing knowledge. Koole's FRAME model (2009) considered the pedagogical, technocentric and interactive aspects of the mobile device for learning referred to as the learner, device, and social aspects respectively. A comparison of both theoretical approaches shows the core aspects of Koole's framework are evident in Ng and Nicholas's framework (2013). The learner aspect of the FRAME model considers the students' prior knowledge and how that forms the basis for new knowledge, emphasising learning theories and how they affect the learner. Ng and Nicholas (2013) share this view, emphasising the pedagogical attributes of learning with handheld devices and the interpersonal relationship between students and educators; they posit that mobile devices not only can bridge formal and informal learning but also support seamless and long-term learning goals. The device aspect of the FRAME model represents the hardware and software characteristics of the mobile device and its usability for learning. While Ng and Nicholas (2013) share this foundation, they also emphasise other technological peripherals such as wireless access points, mobile networks and technical support from IT personnel as crucial for sustainable and seamless m-learning. Finally, the social aspect -the processes of social interaction and cooperation between stakeholders is essential for the sustainability of any m-learning initiative. These approaches towards theorising m-learning provide useful insight to this research, aimed at aligning theory with practice in ways that integrate the social, device and learner aspects of m-learning with the contextual realities of LMICs. Accordingly, Fig. 1, informed by both works, shows how the three aspects (social, learner and device) interconnect for a viable m-learning solution. It also highlights the importance of interaction between and among various stakeholder groups for successful m-learning. Creating the m-learning contextual framework for LMICs Drawing inspiration from the conceptual framework presented in Fig. 1 and the takeaways from the analysed research outlined in Table 1, creating a contextual framework specific for LMICs is explored in the following section. 
Since context is critical, it begins with an overview of the pedagogical situation in many LMICs, then discusses how two learning theories provide the basis for a shift from current to potentially more effective pedagogical practices. The section concludes with a discussion of how exploiting local opportunities and mobile device attributes with effective stakeholder interaction could help manage some of the socioeconomic and sociotechnical challenges. The most practised pedagogies in many higher institutions of learning in LMICs are the teacher-centred approach and traditional didactic face-to-face delivery, where educators are revered, and intellectual interchanges between students and educators are not widely practised (Apiola & Tedre, 2012;Damon et al., 2016;Muianga et al., 2018). Instead, students are passive recipients who memorise rather than conceptualise the content received (Okai-Ugbaje, 2021). In developing a framework for LMICs, it is essential not to ignore these realities but instead integrate technology to ensure synergy between existing practices and proposed solutions. As stated in the introduction, many higher learning institutions in LMICs have a limited basic educational infrastructure. This includes inadequately equipped computer laboratories and students' limited ownership of personal computers (Akin, 2013;Damon et al., 2016;Eze et al., 2018). Acknowledging that m-learning cannot eradicate the need for these resources, the wide penetration of mobile technology and mobile device ownership by the vast majority of people in these countries (Silver, 2019), including higher education students, provide the possible avenue for m-leaning to be a viable alternative for web-based e-learning. Pedagogical considerations To ensure synergy between existing practices and the successful integration of m-learning, it is beneficial to draw upon theories that view learning as a collaborative, engaging and motivating process. Arguably, these are essential attributes for meaningful learning regardless of the learning mode or delivery. While virtually all forms of learning can be made to foster collaboration and enhance students' engagement, Laurillard (2007) argues that the intrinsic nature of mobile devices makes m-learning motivating because of the degree of ownership and control and opportunity to communicate with peers anytime, anywhere. This facilitates collaborative learning in ways otherwise difficult to achieve, and such collaboration could make learning fun. Further, for TEL to be both worthwhile and enjoyable, the learning design should be facilitated by principles and theories that ensure learning is situated, personal and encourages a high level of interaction with the learning context and content (Kearsley & Shneiderman, 1998). Two established theories that potentially provide these attributes, in addition to offering the appropriate level of challenge to stimulate meaningful learning and interaction, are the social constructivist theory (social constructivism) and the theory of optimal experience (Flow). Although both theories were developed decades before advances in educational technologies, they remain relevant as seminal theories used to effectively address the learning needs of twenty-first-century learners (Lockey et al., 2020;Singh et al., 2020). This is also evident in Table 1. Although a purposeful selection of studies, it shows that all studies underpinned by theory share a connection to seminal theories like constructivism. 
This is unsurprising given constructivism's focus on studentcentred learning -that is, a learning approach that actively engages students in the learning process through 'student-student, student-content, student-instructor, and student-outside resources interactions' and mobile devices' potential as a learning tool to support the constructivist approach (Ozdamli, 2012, p. 929). 'refine their knowledge through arguments, structured controversy and reciprocal teaching and learning' (Baharom, 2011, p. 6), leading to a shared understanding of the content. Theory of optimal experience (flow) The inherent nature of the mobile device to keep users engaged warrants the inclusion of a theory that centres on intrinsic motivation to make learning enjoyable. The theory of optimal experience, called 'Flow', potentially offers such an outcome. Flow was coined by Csikszentmihalyi (1975), who defined it as 'the holistic sensation that people feel when they act with total involvement' (p. 36). The experience is often characterised by a deep concentration on and engagement in the activity with an intense sense of control, interest and enjoyment, which results in the individual losing track of time (Schmidt, 2010). The three conditions leading to Flow are: the activity has clear goals; there is a balance between challenge and skill; and immediate feedback is available (Csikszentmihalyi, 1990). Experiences characterised by such conditions have become known as the 'Flow state', denoting optimal experience in which the activity becomes worth doing for its own sake (Csikszentmihalyi, 1990). Flow in the educational context is associated with persistence in learning (Park et al., 2010), which can be influenced by intrinsic and extrinsic factors (Chang et al., 2018). Intrinsic factors include the learner's personality and learning preferences, as well as the instructional design and how content is presented. Extrinsic factors include support and encouragement from peers and educators (Chang et al., 2018), which may be influenced through collaboration within and outside the learning environment. The application of Flow to m-learning could lead to active participation in the construction of knowledge and skill development (Power, 2013). Applying the principles of social constructivism and Flow in designing m-learning opportunities has the potential to gradually bridge the cultural power distance between educators and students and provide students with a sense of control over their learning. According to Culbertson et al. (2015), another method for increasing student control is presenting material that leaves students feeling that learning is effortless. This is achievable when material is presented in a way that matches the students' skill, it is viewed as understandable, and learning is likely to be effortless, leading to a sense of control to meet the demands of the course. Further, opportunities for immediate feedback may help maintain students' focus and increase their interest in the learning activity (Park et al., 2010). Amineh and Asl (2015) and Ozdamli (2012) argue that these considerations provide the avenue for personalised, self-directed, lifelong learning. The integration of such an m-learning design as a blended approach to complement existing teaching and learning practices is one way to potentially introduce or strengthen technology-enhanced and student-centred learning in LMICs where such practices are either lacking or weak. 
Mobile devices In addition to pedagogical considerations, it is also important to consider how attributes of the mobile device impact m-learning design and integration. The connectivity and functionality of the mobile device and constant advancement in technology have transformed portable handheld devices (phones and tablets) from basic communication gadgets to service delivery platforms with tangible benefits and tremendous educational potential (Iqbal & Bhatti, 2020). Such advances make it possible to successfully expand educational opportunities and provide affordable solutions to educational problems, even to the world's poorest nations, by leveraging the devices people already own (West & Vosloo, 2013). This is especially important in LMICs because m-learning initiatives in which participants are provided with devices are unlikely to be widespread due to the economic situations of low-income countries. According to a 2019 Pew Research Centre report, mobile technology and smartphone ownership are increasing globally. However, while 83% of the surveyed population in emerging economies (nine countries) have mobile phones, only 45% own smartphones (Silver, 2019). Despite the relatively low level of smartphone ownership, most devices today have multimedia capabilities in addition to basic functionalities such as calling and texting. These device attributes provide the basis for m-learning. Moreover, studies have shown that the majority of higher education students in LMICs own phones with internet capability (Kaliisa & Picard, 2017;Lamptey & Boateng, 2017). Notably, some mobile phone manufacturers that target countries with low purchasing power are said to meet specific contextual needs in addition to being low-cost. For example, mobile devices (including smartphones) targeted at countries with unstable electricity supply are made to have longer battery life (Nsehe, 2017), with claims that some can run for up to five days before requiring a re-charge for an average phone user (Nkem-Gbemudu, 2016). These design features provide the avenue for m-learning to thrive. Moreover, using personal devices have been reported to increase students' motivation to engage in m-learning and address their learning needs and desires, as they feel empowered 'to make their own decisions facilitated by their own device' (Bikanga Ada, 2018, p. 7), in addition to the ubiquitousness of mobile devices that is making learning and collaboration possible anytime, anywhere. Stakeholders As noted by Ng and Nicholas (2013), a key ingredient to the successful implementation of m-learning is involving stakeholders, such as leadership, management and IT personnel, in addition to students and educators. The role of these stakeholders, particularly management, is essential given effective leadership influences technology adoption by students and staff (Hauge & Norenes, 2015). Moreover, leadership that encourages productive relationships between educators and technical staff is vital, as the ability of the IT department and educators to successfully work together is crucial for achieving success in teaching and learning with technology (Salmon & Angood, 2013). The involvement of these stakeholders improves the quality of m-learning (Adedoja, 2016), and such alliances help educators develop the required skills for m-learning to work effectively (Handal, 2015). 
Further, Seong and Ho (2012) assert that even in the absence of advanced technologies, collective social capacity influenced by management has the potential to create an environment for communication, dialogue and collaboration among staff. This can foster sustainable policies and stimulate suitable approaches that take into account local capacity, culture and way of life suitable to the context. Accordingly, available resources used in a meaningful way have the potential for successful implementation. These considerations are especially important in LMICs where technological infrastructure is still evolving and to bridge the gap between policies and ideas that are too ambitious and those likely to succeed. Overview To evaluate and determine the practical application of the contextual framework, the authors conducted two separate investigations in a Nigerian university. Accounts of both studies are detailed in peer reviewed articles. The first, an exploratory study involved stakeholder groups in the conceptual framework (management, educators, students and IT personnel). The goal of the study was to determine the willingness and readiness of the stakeholders to use m-learning. Given the apparent difference in pedagogical practices between the existing teacher-centred approach at the university, and the technology-enhanced and student-centred learning attainable via m-learning, the study included a focus on the pedagogical readiness of educators and students to engage in m-learning. Interestingly, the findings showed that although all four stakeholder groups showed strong willingness and readiness for the implementation of m-learning, and students and educators were keen to engage in m-learning, some educators in spite of their support were concerned about losing control of the classroom if students became more independent learners. The empirical data and its analysis is reported in Okai-Ugbaje et al. (2020a). The authors concluded that the success of m-learning in practice would largely depend on the positive attitude of educators to the m-learning pedagogy. Following the outcome of the exploratory study, an informal workshop was conducted in order to understand the concerns of educators regarding implementation including misconceptions about student-centred learning. The workshop helped concerned educators to see that enabling a democratic learning environment following the principles of social constructivism and Flow did not necessarily mean relinquishing control of the classroom, but rather provided students with the oppourtunity to be actively engaged participants in their learning journey. Following the workshop, two educators volunteered to trial m-learning, resulting in an experimental study of the practical implementation of m-learning (trial and intervention are used interchangeably when referring to the experimental study). The intervention had two goals. The first was to determine the impact of an m-learning design underpinned by the principles of social constructivism and Flow on the traditional face-to-face delivery and the learning experience of students, as well as the ability of educators to step back a little and let students take some control of their learning. The second goal was to determine the practicality of a cost-effective and sustainable m-learning delivery in an educational setting with limited educational technology resources. 
The intervention was undertaken using the blended learning approach in which m-learning was used as a complement rather than a standalone approach to augment the existing traditional delivery. Detailed accounts of how the studies were conducted, including how data were collected and analysed are reported in Okai-Ugbaje et al. (2020b). Research participants Participants for both studies were students and staff from a Nigerian university. The exploratory study included 566 student participants and 21 staff members comprising 14 academics, four IT personnel and three senior management staff members. Participants for the second study included two academics and 208 students. Methodology The methodology for both studies was a mixed-method design. It was chosen because a combination of both quantitative and qualitative data collection methods was considered most suitable to achieve the research goals. In the exploratory study, students' responses were collected via survey and the other stakeholders' views were collected via semi-structured interviews. In trailing the practical implementation of m-learning, the second study collected data via survey and observation of students. The m-learning component was delivered using lecture videos created by the course lecturers hosted over a public cloud platform. Students were required to watch the videos then collaborate on WhatsApp chat platforms created for the intervention, before the face-to-face class sessions. The WhatsApp platforms and the classroom served as data collection sites where the researcher was an observer. Further, students' experiences from the intervention and thoughts about the trial were collected via survey after the study. The quantitative data from the exploratory and experimental studies were analysed using SPSS, and the qualitative data (semi-structured interviews) were analysed using the NVivo software. Reflections on the findings and discussion This section reports on additional insights gained at the end of the project. Specifically, it shows how the contextual framework provides the pathway for a transition from current realities to what is attainable. Thus, this paper calls for m-learning research to go beyond trials and focus on mainstream integration, whereby m-learning is considered one of the main vehicles of higher education rather than an additional channel or enabler. This philosophical shift is particularly important in order to realise the unique potential of m-learning as a 'catalyst for pedagogical change' (Cochrane, 2014, p. 30). Building on the findings and experiences gathered from the exploratory and experimental studies, Fig. 2 provides a snapshot of current sociocultural elements, as well as socio-economic and sociotechnical factors that impact the integration of TEL in higher education in many LMICs. As noted earlier, this research does not claim that m-learning is the silver bullet to the challenges of TEL in LMICs. However, it does argue that a well-considered, contextually appropriate m-learning approach holds the potential to be a viable and effective alternative to web-based e-learning, where teacher-centred and traditional face-to-face approaches with limited technology use are the norms. Similarly, the contextual framework (Fig. 2) also shows how applying the principles of social constructivism and Flow potentially provide the pathway for a shift from current pedagogical practices. 
Social constructivism advocates for dynamic interaction between educators and students and between the learning context and content, thereby promoting active learning. In other words, the learner can construct knowledge based on their active involvement in the learning process made possible through social interaction in a democratic learning environment enabled by the educator (Ozdamli, 2012). Thus, the teacher becomes a mentor or facilitator rather than delivering a didactic lecture where students are passive learners. Conversely, the principles of Flow advocate that educators provide clear learning goals and objectives so that students know expected learning outcomes (Chang et al., 2018;Power, 2013). In addition, the design parameters of Flow require learning to be at the appropriate skill level and content to be brief and concise to make learning engaging and fun. During the m-learning trial, the courses were presented in content grouped together via video (in approximately 15-minute chunks) and each lecture video was made to achieve a specific objective (Okai-Ugbaje et al., 2020b). This was especially important because of the relatively small screens of mobile devices. The combination of these principles to deliver some parts of the course provided the avenue for gradual and seamless integration of m-learning as a means of technology-enhanced and student-centred learning approach. Our study showed that the application of these principles saw students benefit from and value m-learning. For example, most students reported that content was easier to comprehend because the lecture videos made it possible for them to study at their own pace and convenience, with the opportunity to rewind as often as needed. Fostering a deeper understanding subsequently led students to ask more relevant and specific questions (Okai-Ugbaje et al., 2020b). In line with the findings of the trial, a growing body of evidence (Cho et al., 2015;Culbertson et al., 2015;dos Santos et al., 2018) shows that students who engage in collaborative learning environments underpinned by Flow exhibit a deeper understanding of the subject and better learning outcomes. TEL integration goes beyond educators and students alone, and as a potentially sustainable mainstream m-learning approach, it requires institutional change management (Salmon & Angood, 2013). While a common success factor for IT interventions in developed countries is key stakeholders' involvement in the project, the culture of many LMICs and power distance between the leaders and people require more than stakeholder involvement. The leadership's conviction on the intervention and the importance of their role in influencing change is necessary for achieving desired results (Gregor et al., 2014). This parallels Pollack and Algeo's (2015) argument that successful change management requires the organisation's leadership and management to own and subsequently align the necessary change as required. Given the socio-economic and sociotechnical circumstances of many LMICs, including limited fixed broadband internet connectivity, which makes access to the internet predominantly through mobile broadband (International Telecommunication Union, 2017), m-learning may be facilitated through collaboration with local mobile network providers. 
The findings of the exploratory study suggest that collaboration with mobile service providers mediated through institutional management may boost alliances with the university's IT department (IT personnel), including technical support to the university community and large data plans for students and staff at subsidised rates (Okai-Ugbaje et al., 2020a). Considering LMCIs' limited infrastructure capability, cloud-based m-learning was proposed and used for the trial. Cloud-based m-learning combines the benefits of cloud computing and m-learning, offering m-learning content delivery that is ubiquitous, convenient and low-cost, as it does not need any infrastructure purchases, installation, configuration and maintenance (Badidi, 2016;Masud & Huang, 2013;Simmon, 2018). It also eliminated obstacles, such as limited memory and processing power of mobile devices, often reported as limitations of traditional m-learning (Badidi, 2016;Masud & Huang, 2013). It further ensured continuity in learning even when students moved across multiple mobile devices. The outcome of the intervention strongly suggests that cloud-based m-learning, alongside device attributes such as personalisation and ubiquitousness, and strong stakeholder interaction that builds trust and promotes a mutual working relationship between administrative management, IT personnel and educators, could help to effectively manage the socio-economic and sociotechnical factors enumerated in Fig. 2. Further, as shown in the contextual framework, applying the principles of social constructivism and Flow as pedagogical underpinnings may help overcome current sociocultural factors that encourage teacher-centred pedagogical practices. For example, in adopting social constructivism, educators become mentors and role models rather than didactic teachers. In doing so, they create a democratic learning environment that encourages collaborative learning. These may trigger a deeper understanding of the content, leading to intellectual interchange between students and educators, and other learner-centred pedagogical practices. Likewise, in adopting Flow, educators provide clear learning goals and opportunities for feedback, which may trigger deeper engagement and enjoyment of learning. Conclusion, limitations and further research The role of governments in educational systems is clearly important. However, current realities suggest that the governments of many LMICs who decades into the twenty-first century are still unable to provide basic needs like stable electricity, clean water, good sanitary conditions and other social amenities for the vast majority of their population; may not be able to provide robust educational systems as seen in educationally advanced climes anytime soon. Ironically, these limitations stand in sharp contrast to the deep penetration of mobile technology and wide ownership of mobile devices by the vast majority of the people in these countries, including students. This ironic situation can be capitalised upon, as it provides the avenue for m-leaning to be a viable alternative for web-based e-learning. Therefore, the opportunity arises for higher education policymakers and leaders to leverage mobile technology to design effective m-learning to facilitate TEL. In doing so, LMICs have the opportunity to leap-frog a generation of educational technologies used in the developed world and adopt m-learning directly, as it is more feasible. 
We argue that the feasibility of m-learning holds great potential to improve learning conditions and expand the reach of higher education to a large population of LMICs currently deprived of such opportunities. This need has been accelerated by the realities of the COVID-19 pandemic when educational institutions are frantically looking for alternative means to traditional classrooms and face-to-face teaching. While m-learning has been widely acknowledged as a possible alternative, its implementation in many LMICs is still low despite the potential for success. Were m-learning a part of the educational system in many LMICs, the physical shutdown of institutions due to the pandemic may not have meant a halt in education, as was the case in many instances, until the resumption of face-to-face classes. Instead, a rapid and remote shift to m-learning may have been possible, as seen in countries where e-learning was an integral component of the educational system before the pandemic. Against this backdrop, this paper has presented a potential pathway for effective mainstream integration of m-learning in higher education in LMICs now and beyond the pandemic. While this study has attempted to provide a contextual m-learning framework to guide sustainable m-learning in LMICs, it has some limitations. First, the study participants were drawn from only one university and included only internal stakeholders (students, educators, IT personnel and management). However, the study's focus is a necessary first step in such an investigation, as it provides the basis for wider stakeholder participation. Future studies will do well to include external stakeholders, including mobile service providers. Second, this study only considered m-learning for teaching and supporting student learning. Further research on the role of m-learning in assessing and evaluating students' performance is necessary for complete integration.
7,866.6
2022-05-24T00:00:00.000
[ "Education", "Computer Science" ]
On new general versions of Hermite–Hadamard type integral inequalities via fractional integral operators with Mittag-Leffler kernel The main motivation of this study is to bring together the field of inequalities with fractional integral operators, which are the focus of attention among fractional integral operators with their features and frequency of use. For this purpose, after introducing some basic concepts, a new variant of Hermite–Hadamard (HH-) inequality is obtained for s-convex functions in the second sense. Then, an integral equation, which is important for the main findings, is proved. With the help of this integral equation that includes fractional integral operators with Mittag-Leffler kernel, many HH-type integral inequalities are derived for the functions whose absolute values of the second derivatives are s-convex and s-concave. Some classical inequalities and hypothesis conditions, such as Hölder’s inequality and Young’s inequality, are taken into account in the proof of the findings. Introduction Mathematics has basically started its adventure as a theoretical field with the efforts of researchers for centuries, and has continuously aimed to formulate events and phenomena in various fields such as physics, engineering, modeling, and mathematical biology into a form that can be calculated. Not content with this, it has always been looking for more effective and original solutions to problems. Fractional analysis is also one of the important tools that serve mathematics to find solutions to real world problems. In fact, recent studies have shown that fractional analysis serves this purpose more than classical analysis. The basic working principle of fractional analysis is to introduce new fractional derivatives and integral operators and to analyze the advantages of these operators with the help of real world problem solutions, modeling studies, and comparisons. New fractional derivatives and related integral operators are a quest to gain momentum to frac-tional analysis and to gain the most effective operators to the literature. This search is a dynamic process, and different features of kernel structures, time memory effect, and the desire to reach general forms are factors that differentiate fractional operators in this dynamic process. We will now take a look at some of the basic concepts of fractional analysis and build the basis for our work. Definition 1 (see [1]) Let ϑ ∈ L[ϕ 1 , ϕ 2 ]. The Riemann-Liouville integrals J ζ ϕ 1 + ϑ and J ζ ϕ 2 -ϑ of order ζ > 0 with ϕ 1 , ϕ 2 ≥ 0 are defined by The Riemann-Liouville fractional integral operator is a very useful operator and has been applied to many problems by researchers in both mathematical analysis and applied mathematics (see [2][3][4]). For many years, Caputo derivative and Riemann-Liouville integrals have been the best known operators in fractional analysis. Recently, the development of new fractional operators has accelerated and comparisons have been made by taking these operators as reference. We will now proceed with the definition of a new fractional integral operator that contains the kernel of the Riemann-Liouville integral operator. Definition 2 (see [5]) The fractional integral related to the new fractional derivative with nonlocal kernel of a mapping ϑ ∈ H 1 (ϕ 1 , ϕ 2 ) is defined as follows: In [6], the authors gave the right-hand side of integral operator as follows: Here, (ζ ) is the gamma function. 
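The display formulas for Definitions 1 and 2 do not appear to have survived extraction. As a reference sketch (recalled from the standard literature in the paper's notation, so the exact statement in the source may differ slightly), the Riemann-Liouville integrals are

J^{\zeta}_{\varphi_1+}\vartheta(x) = \frac{1}{\Gamma(\zeta)}\int_{\varphi_1}^{x}(x-t)^{\zeta-1}\vartheta(t)\,dt, \quad x>\varphi_1, \qquad
J^{\zeta}_{\varphi_2-}\vartheta(x) = \frac{1}{\Gamma(\zeta)}\int_{x}^{\varphi_2}(t-x)^{\zeta-1}\vartheta(t)\,dt, \quad x<\varphi_2,

and the left- and right-sided Atangana-Baleanu fractional integrals with nonlocal kernel are

{}^{AB}_{\varphi_1}I^{\zeta}_{x}\{\vartheta(x)\} = \frac{1-\zeta}{B(\zeta)}\vartheta(x) + \frac{\zeta}{B(\zeta)\Gamma(\zeta)}\int_{\varphi_1}^{x}\vartheta(t)(x-t)^{\zeta-1}\,dt, \qquad
{}^{AB}_{x}I^{\zeta}_{\varphi_2}\{\vartheta(x)\} = \frac{1-\zeta}{B(\zeta)}\vartheta(x) + \frac{\zeta}{B(\zeta)\Gamma(\zeta)}\int_{x}^{\varphi_2}\vartheta(t)(t-x)^{\zeta-1}\,dt.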
Since B(ζ) > 0, which is called the normalization function, the fractional Atangana-Baleanu integral of a positive function is positive. It should be noted that, when the order ζ → 1, we recapture the standard integral. Also, the original function is recovered whenever the fractional order ζ → 0. This interesting integral operator owes its strong kernel to its associated fractional derivative operator. The Atangana-Baleanu fractional derivative is a nonsingular and nonlocal operator whose kernel contains the Mittag-Leffler function. This operator is described in the Caputo sense and in the Riemann-Liouville sense as $$ {}^{ABC}D^{\zeta}_{\phi_1+}\vartheta(x)=\frac{B(\zeta)}{1-\zeta}\int_{\phi_1}^{x}\vartheta'(t)\,E_{\zeta}\!\left(-\frac{\zeta(x-t)^{\zeta}}{1-\zeta}\right)dt, \qquad {}^{ABR}D^{\zeta}_{\phi_1+}\vartheta(x)=\frac{B(\zeta)}{1-\zeta}\,\frac{d}{dx}\int_{\phi_1}^{x}\vartheta(t)\,E_{\zeta}\!\left(-\frac{\zeta(x-t)^{\zeta}}{1-\zeta}\right)dt, $$ where $E_{\zeta}$ denotes the Mittag-Leffler function. After giving some basic information and concepts about fractional analysis, which is one of the foundations of this study, we continue by recalling some basic concepts on convex functions and inequalities. Analytical and geometric inequalities are a topic that researchers focus on, both theoretically and practically. Especially in recent times, with the effect of convex analysis on the theory, new inequalities and their applications have expanded the field. The contribution of different types of convex functions to the literature is supported by the inequalities proved based on them. The concept of convexity, which has a special position among functions with the aesthetics of its algebraic structure, its geometrical properties and the richness of its application areas, attracts the interest of researchers in many disciplines such as physics, engineering, economics, and approximation theory, as well as in mathematics. Motivated by this interest, many new types of convex functions have been introduced, and the concept of convexity has been carried to different spaces with multidimensional versions. The divergent and convergent aspects of each new type of convex function have been identified, enriching the field of convex analysis. Now let us refresh our memory by recalling the convex function, the s-convex function in the second sense, and the HH-inequality. In [21], Orlicz gave the definition of s-convexity in the second sense as follows: a function $\vartheta:[0,\infty)\to\mathbb{R}$ is said to be s-convex in the second sense, for a fixed $s\in(0,1]$, if $\vartheta(\lambda x+(1-\lambda)y)\le \lambda^{s}\vartheta(x)+(1-\lambda)^{s}\vartheta(y)$ for all $x,y\in[0,\infty)$ and $\lambda\in[0,1]$. Obviously, in the case s = 1 this definition reduces to the standard concept of convexity. The famous HH-inequality, which is built on convex functions and has many modifications, generalizations, and iterations, provides lower and upper bounds for the mean value in the Cauchy sense. Assume that $\vartheta: I\subseteq\mathbb{R}\to\mathbb{R}$ is a convex mapping and $\phi_1,\phi_2\in I$ with $\phi_1<\phi_2$; then the HH-inequality for convex mappings can be presented as follows (see [20]): $$\vartheta\!\left(\frac{\phi_1+\phi_2}{2}\right)\le\frac{1}{\phi_2-\phi_1}\int_{\phi_1}^{\phi_2}\vartheta(x)\,dx\le\frac{\vartheta(\phi_1)+\vartheta(\phi_2)}{2}.$$ In [22], a new variant of the HH-inequality for s-convex mappings in the second sense was established by Dragomir and Fitzpatrick: if $\vartheta:[0,\infty)\to[0,\infty)$ is s-convex in the second sense with $s\in(0,1]$ and $\phi_1,\phi_2\in[0,\infty)$ with $\phi_1<\phi_2$, then one has the following: $$2^{s-1}\,\vartheta\!\left(\frac{\phi_1+\phi_2}{2}\right)\le\frac{1}{\phi_2-\phi_1}\int_{\phi_1}^{\phi_2}\vartheta(x)\,dx\le\frac{\vartheta(\phi_1)+\vartheta(\phi_2)}{s+1}. \quad (1.5)$$ Here, we must note that $k=\frac{1}{s+1}$ is the best possible constant in (1.5). To provide more details on different kinds of convex functions and on generalizations, new variants and different forms of this important double inequality, we refer the reader to the related papers. This study is organized as follows. First, the basic concepts to be used in the study are defined, and the scientific infrastructure required for the proofs of the findings is established. In the main findings section, a new generalization of the HH-inequality, which involves Atangana-Baleanu integral operators for s-convex functions in the second sense, is obtained.
Then, by giving an integral identity for twice-differentiable functions, new HH-type inequalities are proved, with the help of this identity, for functions whose second derivatives' absolute values are s-convex in the second sense. Also, a similar inequality is obtained for s-concave functions. New results by Atangana-Baleanu fractional integral operators We start this section by giving the following inequalities, which contain versions of the HH-inequality for s-convex mappings in the second sense via the new fractional integral operators defined by Atangana and Baleanu. We continue this section by giving an equality containing second-order derivatives for Atangana-Baleanu integral operators. Proof By using integration by parts, we can get If we use integration by parts again, we can write By using the change of variable, we get the term for Atangana-Baleanu integral operators (ϕ 2 − ϕ 1 ) ζ +2 2(ζ + 1)B(ζ ) Γ(ζ ) -2(ζ + 1)( 1 2 ) ζ ϑ( ϕ 1 +ϕ 2 2 ) (ϕ 1 − ϕ 2 ) 2 (2.7) By a calculation similar to (2.7), we get (2.8). If we add (2.7) and (2.8), and after this step multiply the resulting equality by $\frac{1}{\phi_2-\phi_1}$, we complete the proof of the equality in (2.6). Now, we are going to produce generalizations of the HH-type inequalities for Atangana-Baleanu fractional integral operators by using the new integral identity and s-convexity. Throughout the study, we denote the following terms with F: Theorem 3 Let ϕ 1 < ϕ 2 , ϕ 1 , ϕ 2 ∈ I° and ϑ : where ζ ∈ (0, 1]. Proof By using the equality in (2.6) and the s-convexity of $|\vartheta''|$, we have Afterwards, by carrying out the necessary calculations, we complete the proof of the inequality in (2.9). Corollary 1 In Theorem 3, if we choose s = 1, we have the following inequality: Corollary 3 In Theorem 4, if we choose s = 1, we have the following inequality: Proof When we use Hölder's inequality from a different point of view, we can write If we apply the s-convexity of $|\vartheta''|^{q}$ and calculate the above integrals, we get the desired result. Corollary 5 In Theorem 5, if we choose s = 1, we have the following inequality: Proof By using Hölder's inequality in a different way, we can write If we use the s-convexity of $|\vartheta''|^{q}$ above, we have By making the necessary integral calculations, the proof is completed. Proof By using Lemma 1, we have By using Young's inequality in the form $xy \le \frac{1}{p}x^{p} + \frac{1}{q}y^{q}$, we get By using the s-convexity of $|\vartheta''|^{q}$ and by simple calculations, we obtain the result. Conclusion We see that the main idea for most of the studies in the field of inequalities is to generalize, to reveal new bounds, and to create findings that will allow different applications. In this direction, sometimes the features of the function, sometimes new methods, and sometimes new operators are used, and these choices add original value to the studies. In this context, in this paper, which reflects fractional analysis into inequality theory, the main motivation is to obtain new integral inequalities for s-convex and s-concave functions that involve Atangana-Baleanu fractional integral operators. First, a general form of the HH-inequality for Atangana-Baleanu fractional integral operators has been obtained. Then, using a newly established integral identity, various HH-type inequalities have been derived. The special cases of these inequalities, which are presented in general forms, have been taken into consideration.
2,343.6
2021-11-18T00:00:00.000
[ "Mathematics" ]
Detecting Selected Network Covert Channels Using Machine Learning Network covert channels break a computer’s security policy to establish a stealthy communication. They are a threat being increasingly used by malicious software. Most previous studies on detecting network covert channels using Machine Learning (ML) were tested with a dataset that was created using one single covert channel tool and also are ineffective at classifying covert channels into patterns. In this paper, selected ML methods are applied to detect popular network covert channels. The capacity of detecting and classifying covert channels with high precision is demonstrated. A dataset was created from nine standard covert channel tools and the covert channels are then accordingly classified into patterns and labelled. Half of the generated dataset is used to train three different ML algorithms. The remaining half is used to verify the algorithms’ performance. The tested ML algorithms are Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) and Deep Neural Networks (DNN). The k-NN model demonstrated the highest precision rate at 98% detection of a given covert channel and with a low false positive rate of 1%. The fundamental deficiencies on the existent wardens are two. Firstly, the need of human intervention to insert rules that are capable of identifying ambiguities on the network traffic. Secondly, the warden is limited in the sense that it can only detect threats that are already known. A Network Anomaly Detection System (NADS) circumvents the mentioned deficiencies by encountering unusual patterns in the network traffic that are non-compliant with expected normal behavior. NADS can be statistically-based, classification-based, clustering-based or information theory-based. Currently, the anomalies can fall into three groups: collective, contextual and point. Point anomalies refer to any deviation of particular data from a normal pattern of a dataset. When anomalies occur in a particular context, they are designated as a contextual group. Collective anomalies are the correlation of similar anomalies within an entire dataset [54], [36]. The main limitations of NADS are: 1) The inherent need to define the notion of the traffic's "normality". Actually, an object is considered as anomalous if its rate of deviation within the defined profile of normal is adequately high. 2) NADS usually requires human intervention for analyzing, interpreting and acting on the generated alert by the infected systems. 3) The variation detection of network traffic with known anomalies by NADS has not been dealt with in depth. Indeed, one of the most powerful and cheapest tactics for a cyber-attacker to evade security countermeasure is to develop new variants from existing covert channel techniques. The related academic literature on detecting NCC with ML has mainly two limitations. Firstly, the proposed detection schemes were largely tested on very limited covert channel techniques. Secondly, little attention has been given on classifying covert channels into patterns. The primary contributions of this paper are: (1) analyzing eleven popular NCC tools and classifying them accordingly into patterns [28]; (2) designing a proof of concept for the detection and classification of NCCs using three different ML algorithms; (3) comparative evaluation of the used ML algorithms; and (4) identification and discussion of the results and indicating possible future research directions. We present our analysis of related work in Section II. 
In subsequent sections (Sections III-IV), we describe the detection scheme and discuss the experimental measurements. Our findings are discussed in Section V and the conclusion is given in Section VI. II. FUNDAMENTALS & RELATED WORK The traditional approach to detecting NCCs (i.e. signature-based, behavior-based, heuristic-based) relies on the manual definition of signatures. This approach often fails to detect novel threats. To circumvent this problem, ML is widely used [14]; it is capable of modelling the normal behaviour of network traffic and consequently can detect any unexpected behavior within the network traffic without (or with minor) human intervention. ML aims at making computer systems adapt their actions so that these actions get more accurate [58]. ML techniques are generally grouped into two categories: supervised and unsupervised. The supervised category learns from a set of labelled data and encodes that learning into a model to predict an attribute for new data. On the other hand, unsupervised ML is used to find patterns within data without a specified target variable [24]. Supervised ML, which is also called classification, is characterized by the use of a labelled dataset. Classification is defined by the creation of a model by training on the labelled dataset. The created model is then used to predict the label of a dataset whose label is unknown. We distinguish three families of datasets as follows [1]: 1) Synthetic: Created to fulfill special requirements in relation to real data circumstances. 2) Benchmark: Generated in a simulated environment along with network devices. 3) Real life: Prepared by collecting network traffic during a certain period of time. The identification and detection of covert channels are distinct. Identification aims to determine a shared resource that could be utilized as a covert carrier, while detection examines the event flow in order to reveal a covert channel in operation. To minimize any negative impact on performance, the detection mechanism should be implemented before the elimination mechanism. There are mainly three strategies to detect network covert channels: signature-based, anomaly-based and specification-based [19]. Signature-based detection requires the creation of a baseline of signatures that need regular maintenance and updates; signatures are typically the patterns that a warden should monitor to detect covert channels. The anomaly-based detection approach identifies any deviation from the normal traffic. Lastly, the specification-based approach intends to match the predefined specifications of a protocol to verify any misuse or attacks. The capabilities of the listed detection approaches are limited to known NCCs. These limitations can be circumvented by the use of ML, which refers to the study of automatic techniques for learning to make accurate predictions based on past observations [45]. There are three types of Intrusion Detection System (IDS) using ML: single, hybrid and ensemble. When an IDS uses an individual ML algorithm it is called single, while a hybrid IDS uses several algorithms. An ensemble IDS, in turn, employs a combination of several weak ML algorithms [27]. Various ML algorithms have been proposed in the covert-channel-related literature. For instance, a Hidden Markov Model (HMM) was used to detect covert communication on the TCP stack [30].
Its main pitfall, however, is that the algorithm is limited in detecting covert communication in applications that use tunneling. Gilbert and Bhattacharya [50] suggested a twofold detection system that features both covert channel profiling and anomaly detection. The genetic algorithm was first used in an IDS in 1995, applying a hybrid approach of multiple agents and genetic programming in order to detect anomalies [13], [33]. Some enhanced ML techniques include the Intelligent Heuristic Algorithm (IHA) based on Naive Bayes classifiers to detect covert channels in IPv6 [41]. Salih et al. [22] improved the detection rate and reached an accuracy of 94% with a very low false negative rate by using enhanced C4.5 decision trees. C4.5 was also applied to detect protocol switching covert channels (PSCC) in [55]. The main inconvenience of the supervised method is that it requires labelled information for efficient learning. Additionally, it can hardly deal with the relationship between consecutive variations of learning inputs without additional preprocessing. There is a considerable number of works that have used Support Vector Machines (SVM) to classify network anomalies [24], [16], [2], [4], [10], [11], [12], [20], [23], [35]. Compared with other ML algorithms, SVM offers faster processing and supports both supervised and unsupervised learning [26], [17], [6]. For instance, SVM was used in a passive warden to detect TCP anomalies within the TCP ISN and IP ID fields [25] or for IPv4 network anomaly detection [29], [8]. SVM can be used to classify patterns based on statistical learning techniques for regression and categorization [23], [4]. This algorithm aims to find the optimal separating hyperplane in a higher-dimensional feature space by using a kernel function. On the other hand, it has been demonstrated that k-NN is one of the simplest ML algorithms [31]. Firstly, it divides the entire dataset into training and testing data points. Secondly, it evaluates the distance from all training points to the testing points. The point with the lowest distance is named the nearest neighbor. Tsai et al. [34] suggested a hybrid method based on triangle areas using the k-NN approach to detect attacks. They extract a number of cluster centers, where each cluster center constitutes one specific type of attack. Then, the triangle area is calculated from two randomly chosen clusters and one data point from the dataset. Finally, the constituted triangle represents one new feature for measuring similar attacks. The k-NN classifier is then used with these triangle-area features to detect intrusions. Most studies on the detection of storage covert channels were tested with a single popular tool (e.g. Covert-TCP) [47], [48], [51], [49], [42], [52], [53], with captured traffic [45], [46], [44], [48], or with a tool developed specifically for the purpose of the research work [47]. The authors of this paper propose a detection concept that uses ML with 3 different algorithms and that is based not on their own purpose-built techniques but on popular tools instead. III. DETECTION APPROACH The proposed approach is based on three steps: (1) generating the datasets containing network traffic, (2) training and feature extraction, and (3) testing the models with different tools. A.
Generating the datasets In order to train the ML algorithms, datasets were created from a mixture of real-life and benchmark network packets [1] (the latter generated in a lab environment). For the benchmark dataset we collected a set of tools as listed in Tab. II, and covert traffic was produced. Then, the PCAP files were labelled according to the type of pattern the covert packets belonged to. Wendzel et al. introduced a pattern-based classification of covert channels, in which 109 covert channels were categorized into 11 distinct patterns based on their similarities [28]. For example, the pattern P7 represents covert channels that encode data into a reserved or unused field. To train the ML models, large labelled datasets (supervised) that represent each type of pattern were used. As shown in Tab. I, this research work focuses only on the patterns P0, P1, P5 and P7, plus a 'Non' label for traffic with patterns different from the above. Labels are saved in a CSV file, so that each packet of the PCAP file is numbered and labelled accordingly. B. Training and feature extraction As we use supervised ML, our training process requires a pair of files (PCAP and CSV) and uses a Pcap2scikit class built on scikit-learn (the Python ML library). Each time the model is to be trained, the script checks for a previous training model and adds its data to the new data. If there is no previous data, then the model is newly created (Fig. 1). At the same time, features are extracted from each packet. The features are extracted when preprocessing the packet data. For example, the TTL field can be preprocessed to determine whether the packet is involved in TTL value modulation. TTL values in packets sent by each source address are compared to previous TTL values in packets sent by the same source. If the TTL value has changed, then the feature is the percentage of packets that have previously had modified TTLs out of the total packets sent from the same source address. Both ML algorithms (k-NN and SVM) have a similar training process, and their models are stored in a single file. The cross-validation is executed using utility methods provided by SKLEARN. For the DNN we used TENSORFLOW, which does not store models in a single file; instead, a directory is used to store several metadata and graph files. Since TENSORFLOW does not provide convenient methods for cross-validation or predictions, we have created a method to make these possible on both TENSORFLOW and SKLEARN. C. Testing ML models are also called classifiers and aim at learning a correspondence between classes and inputs [38]. Classifiers are widely used to detect general network anomalies and also covert channels. By generating knowledge from the normal packets, the classifiers treat any activity that differs from the normal packet attributes as covert. Therefore, novel covert channel techniques can be detected with minimum effort (as they also deviate from normal packets). Selecting the appropriate classifier is a challenging task and is generally based on the accuracy of the prediction. In this paper only supervised ML algorithms were used (k-NN, SVM and DNN) for the following reasons: • k-NN is characterized as one of the most straightforward instance-based learning algorithms [32]. • SVM belongs to the newest supervised ML techniques; it is pertinent for a large number of features and is very useful in insolvency analysis (when data are not regular) [37].
• DNN is capable of learning features automatically at any level of abstraction by mapping inputs to outputs directly from data with negligible human-crafted features [39], [40]. The detection process requires a pair of (PCAP, CSV) files and starts by first extracting the features from the packets, similarly to the training process. Secondly, it loads the model from a pkl file. Thirdly, it calculates the accuracy. Lastly, it creates both the metrics and the confusion matrix. An output file is then produced which contains metric values such as the number of packets and the prediction rate (whether a packet is normal or covert), as well as the classification of a given covert packet into covert channel patterns. To train the ML models we used large labelled datasets (supervised) that represent each type of pattern (Tab. II). The following 20 features are extracted from preprocessing the packet data: D. Metrics The ML model is first evaluated using 3-fold cross-validation via the cross_val_score function in scikit-learn. E. Model fitting and persistence Once the model has been fitted to the data, it is stored in the models directory. The scores from cross-validation and the label encoder are also added. The original input PCAP and CSV files are saved to the model's directory. The model must be saved to disk so that it can be reloaded and used for testing (prediction). The training data is saved to be combined with additional training data in the future. IV. EXPERIMENTS A. Experimental setup 1) The dataset: The covert dataset was created using ten popular tools and the normal dataset was taken from http://mawi.wide.ad.jp. Afterwards, they were classified into four patterns. Network packets were generated with each tool and were systematically labelled as belonging to one of the four patterns described in Tab. I. The global dataset is made up of a consolidation of all packets created and labelled. The distribution of normal and covert network packets (dataset) for the classification of training and testing is summarized in Tab. IV. 2) Evaluation methodology: To measure the performance of NCC detection, the confusion matrix (Tab. X) and the accuracy, false positive rate, detection rate and precision (Tab. XI) were calculated using the standard metrics derived from the confusion matrix. 2) k-NN: After having tested many values of k, k = 4 was identified as the best value. As shown in Tab. VI, the rates of detection, accuracy, precision and false positives with SVM are 89%, 90%, 96% and 1%, respectively. 3) DNN: The detection rate is high (92%) with a precision of 85%; the accuracy rate is 67% and the false positive rate is 4%. V. DISCUSSION This section provides a comparison between the different classifiers based on their metric performance: accuracy, detection rate, precision, TP, TN, FP and FN. This comparison provides a basis of evaluation in order to identify the best ML algorithm to detect NCCs. Tab. X shows the performance on the training dataset. k-NN performs best, with the highest rates of detection, precision, accuracy and TP and the lowest rates of FN and FP. Tab. IV provides the detection capabilities for the different patterns over the training dataset. The results reveal that k-NN is capable of classifying NCCs into patterns with high accuracy and precision. The measurement results of k-NN on the testing dataset demonstrate a difference in the capability to classify NCCs into patterns (Tab. IX). While NP0157, P1 and P0 allowed for the highest classification accuracy and precision rates, P5 and P7 resulted in the lowest values.
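The training and persistence workflow described above can be illustrated with a minimal scikit-learn sketch. This is not the authors' code: the feature matrix, label names and file paths below are illustrative stand-ins, and only the choices explicitly mentioned in the text (20 features per packet, k = 4 for k-NN, 3-fold cross-validation, pkl persistence) are carried over.

import os
import joblib
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

# Stand-in for the per-packet feature matrix (20 features) and pattern labels.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = rng.choice(["Non", "P0", "P1", "P5", "P7"], size=1000)

le = LabelEncoder()                                  # the label encoder is persisted with the model
y_enc = le.fit_transform(y)

models = {"knn": KNeighborsClassifier(n_neighbors=4),   # k = 4, as selected in the experiments
          "svm": SVC(kernel="rbf")}

os.makedirs("models", exist_ok=True)
for name, clf in models.items():
    scores = cross_val_score(clf, X, y_enc, cv=3)        # 3-fold cross-validation
    clf.fit(X, y_enc)                                     # fit on the full training data
    joblib.dump({"model": clf, "label_encoder": le, "cv_scores": scores},
                os.path.join("models", name + ".pkl"))    # reloaded later for prediction
    print(name, "mean CV accuracy:", scores.mean())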
On average, compared with DNN and SVM, k-NN provides the best accuracy and precision rates and lowest FP value. DNN has the highest detection and FP rates. The results indicate that k-NN has a significant level of difference with DNN and SVM over both training and testing datasets. Therefore, a reliable conclusion can be drawn that k-NN perform the best to detect and classify NCC by a considerable margin. Our work is limited in different ways. First, the dataset of the study was restricted to some selected NCC tools. Second, we obtained the training data from the tools and we believe that this is a problem for real world applications of machine learning algorithms. Third, the work was limited to using 3 machine learning algorithms (i.e. SVM, k-NN, DNN) VI. CONCLUSION The rapid growth of computer networks has driven forward the need to acquire security policy that ensure confidentiality, integrity and availability of information. This has led cyberattackers to find ways to break security policy and infiltrate or exfiltrate information using network covert channel techniques. Detection mechanisms to detect covert channels are based on identifying any deviation of nonstandard or abnormal behavior. In this paper, selected ML methods were applied to detect popular network covert channels. The capacity of not only detecting, but classifying covert channels with high precision is also demonstrated. A dataset was created from eleven standard covert channel tools and the covert channels are then accordingly classified into patterns and labelled. Half of the generated dataset is used to train three different ML algorithms. The remaining half is used to verify the algorithms precision. The tested ML algorithms are Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) and Deep Neural Networks (DNN). The k-NN model demonstrated the highest precision rate at 98% detection of a given covert channel and with a low false positive rate of 1%. DNN has the highest rate of FP and SVM has the lowest precision with testing dataset. The findings of the research results suggest several areas of future work. Firstly, the possibility of additional covert channel patterns to be tested through the detection scheme. Secondly, further investigation with other ML algorithms could be considered. Most importantly however, research is needed to study how the discussed classifiers impact legitimate network communications while detecting and classifying covert channels on a large scale.
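As a complement to the comparison above, the metrics on which the classifiers are ranked (accuracy, detection rate, precision and false positive rate) can be computed directly from the confusion matrix. The short sketch below uses illustrative prediction vectors rather than the paper's data.

import numpy as np
from sklearn.metrics import confusion_matrix

# 0 = normal packet, 1 = covert packet (example predictions, not the paper's data)
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 0, 1, 0, 1, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
detection = tp / (tp + fn)          # detection rate, i.e. recall on the covert class
fpr       = fp / (fp + tn)          # false positive rate
print(f"accuracy={accuracy:.2f} precision={precision:.2f} detection={detection:.2f} fpr={fpr:.2f}")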
4,352.6
2019-07-01T00:00:00.000
[ "Computer Science" ]
A few thoughts on $\theta$ and the electric dipole moments I highlight a few thoughts on the contribution to the dipole moments from the so-called $\theta$ parameter. The dipole moments are known to be generated by $\theta$. In fact, the renowned strong $\cal{CP}$ problem was formulated as a result of non-observation of the dipole moments. What is less known is that there is another parameter of the theory, $\theta_{QED}$, which also becomes a physical and observable parameter of the system when some conditions are met. This claim should be contrasted with the conventional (and very naive) viewpoint that $\theta_{\rm QED}$ is unphysical and unobservable. A specific manifestation of this phenomenon is the so-called Witten effect, when the magnetic monopole becomes a dyon with induced electric charge $e'=-e \frac{\theta_{QED}}{2\pi}$. We argue that similar arguments suggest that the magnetic dipole moment $\mu$ of any microscopic configuration in the background of $\theta_{QED}$ generates an electric dipole moment $\langle d_{\rm ind} \rangle$ proportional to $\theta_{QED}$, i.e. $\langle d_{\rm ind}\rangle= - \frac{\theta_{\rm QED} \cdot \alpha}{\pi} \mu$. We also argue that many $\cal{CP}$-odd correlations, such as $\langle \vec{B}_{\rm ext} \cdot\vec{E}\rangle = -\frac{\alpha\theta_{\rm QED}}{\pi}\vec{B}^2_{\rm ext}$, will be generated in the background of an external magnetic field $\vec{B}_{\rm ext}$ as a result of the same physics. I. INTRODUCTION AND MOTIVATION The leitmotiv of the present work is related to the fundamental parameter θ in Quantum Chromodynamics (QCD), as well as the axion field related to this parameter. The θ parameter was originally introduced in the 70s. Although the θ term can be represented as a total derivative and does not change the equations of motion, it is known that this parameter is a fundamental physical parameter of the system at the non-perturbative level. It is known that θ ≠ 0 introduces P and CP violation in QCD, which is best captured by the renowned strong CP problem. In particular, what is most important for the present notes is that the θ parameter generates the neutron (and proton) dipole moment, which is known to be very small, d_n ≲ 10^{-26} e · cm; see, e.g., the review in Physics Today [1]. It can be translated into the upper limit θ ≲ 10^{-10}. The strong CP problem is formulated as follows: why is the parameter θ so small in a strongly coupled gauge theory? The proton electric dipole moment d_p, similar to the neutron dipole moment d_n, will also be generated as a result of non-vanishing θ. In particular, a future measurement of d_p at the level d_p ∼ 10^{-29} e · cm would be translated into a much better upper limit θ ≲ 10^{-13}. On the other hand, one may also discuss a similar theta term in QED. It is normally assumed that the θ_QED parameter in abelian Maxwell electrodynamics is unphysical and can always be removed from the system.
The arguments are based on the observation that the θ QED term does not change the equation of motion, which is also correct for non-abelian QCD.However, in contrast with QCD when π 3 [SU (3)] = Z, the topological mapping for the abelian gauge group π 3 [U (1)] = 0 is trivial.This justifies the widely accepted view that θ QED does not modify the equation of motions (which is correct) and does not affect any physical observables and can be safely removed from the theory (which is incorrect as we argue below).We emphasize here that the claim is not that θ QED vanishes.Instead, the (naive) claim is that the physics cannot depend on θ QED irrespective to its value.While these arguments are indeed correct for a trivial vacuum background when the theory is defined on an infinitely large 3+1 dimensional Minkowski space-time, it has been known for quite sometime that the θ QED parameter is in fact a physical parameter of the system when the theory is formulated on a non-simply connected, compact manifold with non-trivial π 1 [U (1)] = Z, when the gauge cannot be uniquely fixed, see the original references [15,16] and review [17].Such a construction can be achieved, for example, by putting a system into a back-ground of the magnetic field or defining a system on a compact manifold with non-trivial topology.In what follows we treat θ QED as a new fundamental (unknown) parameter of the theory. Roughly speaking, the phenomena, in all respects, are very similar to the Aharonov-Bohm and Aharonov Casher effects when the system is highly sensitive to pure gauge (but topologically nontrivial) configurations. In such circumstances the system cannot be fully described by a single ground state 1 .Instead, there are multiple degenerate states which are classified by a topological index.The physics related to pure gauge configurations describing the topological sectors is highly nontrivial.In particular, the gauge cannot be fixed and defined uniquely in such systems.This is precisely a deep reason why θ QED parameter enters the physical observables in the axion Maxwell electrodynamics in full agreement with very generic arguments [15][16][17].Precisely these contributions lead to the explicit θ QED -dependent effects, which cannot be formulated in terms of conventional propagating degrees of freedom (propagating photons with two physical polarizations). The possible physical effects from θ QED have also been discussed previously [19,20] in the spirit of the present notes.We refer to our paper [21] with explicit and detail computations of different observable effects (such as induced dipole moment, induced current on a ring, generating the potential difference on the plates, etc) when the system is defined on a nontrivial manifold, or placed in the background of the magnetic field. It is important to emphasize that some effects can be proportional to θ QED , as opposed to θQED as commonly assumed or discussed for perturbative computations.Precisely this feature has the important applications when some observables are proportional to the static time-independent θ QED , and, in general, do not vanish even when θQED ≡ 0, see below. 
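To make the statement about the equations of motion explicit, it is useful to recall the standard form of axion (Maxwell) electrodynamics; the coefficient is fixed by the combination αθ/π appearing in the correlations quoted in the abstract, while the overall signs are convention dependent. With the θ term included, the Ampère-Maxwell law becomes
$$ \vec{\nabla}\times\vec{B}-\frac{\partial\vec{E}}{\partial t}=\vec{j}_{\rm el}+\frac{\alpha}{\pi}\left(\dot{\theta}_{\rm QED}\,\vec{B}+\vec{\nabla}\theta_{\rm QED}\times\vec{E}\right), $$
so that a strictly constant θ_QED drops out of the equations of motion, while a time-dependent θ_QED(t) drives a non-dissipating current directed along the magnetic field.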
1 We refer to [18] with physical explanation (in contrast with very mathematical papers mentioned above) of why the gauge cannot be uniquely fixed in such circumstances.In paper [18] the socalled "modular operator" has been introduced into the theory.The exp(iθ) parameter in QCD is the eigenvalue of the large gauge transformation opeartor, while exp(iθ QED ) is the eigenvalue of the modular operator from [18].This analogy explicitly shows why θ QED becomes a physically observable parameter in some circumstances. II. AXION θ FIELD AND VARIETY OF TOPOLOGICAL PHENOMENA Our starting point is the demonstration that the θ QED indeed does not enter the equations of motion.As a direct consequence of this observation, the corresponding Feynman diagrams at any perturbation order will produce vanishing result for any physical observable at constant θ QED .Indeed, which shows that θQED and not θ QED itself enters the equations of motion.In our analysis we ignored spatial derivatives ∂ i θ QED as they are small for non-relativistic axions.This anomalous current (1) points along magnetic field in contrast with ordinary E&M , where the current is always orthogonal to B. Most of the recent proposals [9][10][11][12][13][14] to detect the dark matter axions are precisely based on this extra current (1) when θ is identified with propagating axion field oscillating with frequency m a . We would like to make a few comments on the unusual features of this current.First of all, the generation of the very same non-dissipating current (1) in the presence of θ has been very active area of research in recent years.However, it is with drastically different scale of order Λ QCD instead of m a .The main driving force for this activity stems from the ongoing experimental results at RHIC (relativistic heavy ion collider) and the LHC (Large Hadron Collider), which can be interpreted as the observation of such anomalous current (1). The basic idea for such an interpretation can explained as follows.It has been suggested by [22,23] that the socalled θ ind -domain can be formed in heavy ion collisions as a result of some non-equilibrium dynamics.This induced θ ind plays the same role as fundamental θ and leads to a number of P and CP odd effects, such as chiral magnetic effect, chiral vortical effect, and charge separation effect, to name just a few.This field of research initiated in [24] became a hot topic in recent years as a result of many interesting theoretical and experimental advances, see recent review papers [25,26] on the subject. In particular, the charge separation effect mentioned above can be viewed as a result of generating of the in-duced electric field in the background of the external magnetic field B ext and θ QED = 0.This induced electric field E ind separates the electric charges, which represents the charge separation effect.Then formula (2) essentially implies that the electric field locally emerges in every location where magnetic field is present in the background of the θ QED = 0. The effect of separation of charges can be interpreted as a generation of the electric dipole moment in such unusual background.Indeed, for a table-top type experiments it has been argued in [21] that in the presence of the θ QED the electric and magnetic dipole moments of a topologically nontrivial configuration (such as a ring or torus) are intimately related: which obviously resembles the Witten's effect [27] when the magnetic monopole becomes the dion with electric charge e ′ = −(eθ QED /2π). 
To support this interpretation we represent the magnetic dipole moment m ind as a superposition of two magnetic charges g and −g at distance L 3 apart, where L 3 can be viewed as the size of the compact manifold in construction [21] along the third direction2 .As the magnetic charge g is quantized, g = 2π e , formula (3) can be rewritten as This configuration becomes an electric dipole moment d ind with the electric charges e ′ = −(eθ QED /2π) which precisely coincides with the Witten's expression for e ′ = −(eθ QED /2π) in terms of the θ QED according to [27].This construction is justified as long as magnetic monopole size is much smaller than the size of the entire configuration L 3 such that the topological sectors from monopole and anti-monopole do not overlap and cannot untwist themselves.The orientation of the axis L 3 also plays a role as it defines the L 1 L 2 plane with non-trivial mapping determined by π 1 [U (1)] = Z, see below.If our arguments on justification of this formula are correct it can be applied to all fundamental particles including electrons, neutrons, and protons because the typical scale L 3 ∼ m −1 e ∼ 10 −11 cm, while magnetic monopole itself be assumed to be much smaller in size.In this case the expression (3) derived in terms of the path integral in [21] assumes the form where µ is the magnetic moment of any configuration, including the elementary particles: µ e , µ p , µ n .As emphasized in [21,28] the corresponding expression can be represented in terms of the boundary terms, which normally emerge for all topological effects. The observed upper limit for d e < 10 −29 e • cm implies that θ QED < 10 −16 .We do not have a good explanation of why this parameter is so small.This question is not addressed in the present work.It is very possible that a different axion field must be introduced into the theory which drives θ QED to zero, similar to conventional axion resolution of the strong CP problem [2][3][4][5][6][7][8]. The equation similar to (5), relating the electric and magnetic dipole moments of the elementary particles was also derived in [29,30] where it has been argued that for time-dependent axion background the electric dipole moment of the electron d e will be generated 3 , and it must be proportional to the magnetic moment of the electron µ e and the axion field θ(t).The absolute value for the axion field θ 0 ≈ 3.7 • 10 −19 was fixed by assuming the axions saturate the dark matter density today.While the relation (5) and the one derived in [29,30] look identically the same (in the static limit m a → 0 and proper normalization) the starting points are dramatically different: we begin with canonically defined fundamental unknown constant θ QED = 0 while computations of [29,30] are based on assumption of time dependent axion fluctuating field saturating the DM density today, which obviously implies a different normalization for θ.Still, both expressions identically coincide in the static m a → 0 limit. The identical expressions with precisely the same coefficients (for time dependent [29,30] and time independent (5) formulae) in static limit m a → 0 relating the electric dipole and magnetic dipole moments strongly suggest that the time dependent expression [29,30] can be smoothly extrapolated to (5) with constant θ QED .This limiting procedure can be viewed as a slow adiabatic process when θ ∝ m a → 0 and the θ becomes the timeindependent parameter, θ → θ QED when the same normalization is implemented 4 . 
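Collecting the relations invoked in this discussion (both already quoted in the abstract), and adding a rough numerical consistency check which is not from the original text:
$$ e'=-\,e\,\frac{\theta_{\rm QED}}{2\pi}, \qquad \langle d_{\rm ind}\rangle=-\,\frac{\theta_{\rm QED}\,\alpha}{\pi}\,\mu . $$
Taking μ to be the electron magnetic moment, μ_e ≈ e/(2m_e) ≈ 1.9 × 10^{-11} e · cm, gives |⟨d_ind⟩| ≈ 4 × 10^{-14} θ_QED e · cm, so the experimental bound d_e < 10^{-29} e · cm indeed translates into θ_QED ≲ 10^{-16}, as stated above.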
We want to present one more argument suggesting that the constant θ QED may produce physical effects including the generating of the electric dipole moment.Indeed the S θ term in QED in the background of the uniform static magnetic field along z direction can be rewritten as follows where 2πκ The expression on the right hand side is still a total divergence, and does not change the equation of motion. In fact, the expression in the brackets is identically the same as the θ term in 2d Schwinger model, where it is known to be a physical parameter of the system as a result of nontrivial mapping π 1 [U (1)] = Z, see e.g.[34] for a short overview of the θ term in 2d Schwinger model in the given context 5 . The expression (6) shows once again that θ QED parameter in 4d Maxwell theory becomes the physical parameter of the system in the background of the magnetic field 6 .In such circumstances the electric field will be 4 A different approach on computation of the time dependent dipole moment due to the fluctuating θ parameter was developed recently in [33].The corresponding expression given in [33] approaches a finite non-vanishing constant value if one takes the consecutive limits t → ∞ and after that the static limit ma → 0 by representing e/(2m) = µ in terms of the magnetic moment of a fermion.In this form it strongly resembles the expression derived in [29,30]. 5In this exactly solvable 2d Schwinger model one can explicitly see why the gauge cannot be uniquely fixed, and, as the consequence of this ambiguity, the θ becomes observable parameter of the system.The same 2d Schwinger model also teaches us how this physics can be formulated in terms of the so-called Kogut-Susskind ghost [35] which is the direct analog of the Veneziano ghost in 4d QCD. 6The parameter κ which classifies our states is arbitrary real number.It measures the magnetic physical flux, which not necessary assumes the integer values. induced along the magnetic field in the region of space where the magnetic field is present according to (2).This relation explains why the electric dipole moment of any configuration becomes related to the magnetic dipole moment of the same configuration as equation ( 5) states. The topological arguments for special case (6) when the external magnetic field is present in the system suggest that the corresponding configurations cannot "unwind" as the uniform static magnetic field B z enforces the system to become effectively two-dimensional, when the θ QED parameter is obviously a physical parameter, similar to analogous analysis in the well-known 2d Schwinger model, see footnote 5. The practical implication of this claim is that there are some θ QED -dependent contributions to the dipole moments of the particles.While the θ QED does not produce any physically measurable effects for QED with trivial topology, or in vacuum, we expect that in many cases as discussed in [21] and in present work the physics becomes sensitive to the θ QED which is normally "undetectable" in a typical scattering experiment based on perturbative analysis of QED.We want to list below several CP odd correlations which will be generated in the presence of θ QED , and which could be experimentally studied by a variety of instruments. 
The generation of the induced electric field (2) unambiguously implies that the following CP odd correlation will be generated Another CP odd correlation which can be also studied is as follows: where one should average over entire ensemble of particles with magnetic moments µ i , which are present in the region of a non-vanishing magnetic field B ext .The induced electric field (2) will coherently accelerate the charged particles along B ext direction such that particles will assume on average non-vanishing momentum p i along B ext .As a result of this coherent behaviour the following CP odd correlation for entire ensemble of particles is expected to occur One should add that the dual picture when the external magnetic field B ext is replaced by external electric field E ext also holds.For example, instead of (2) the magnetic field will be induced in the presence of the strong external electric field E ext , as e.g. in the proposal [36] to measure the proton EDM when the E ext is directed along the radial component, such that the correlation similar to (7) will be also generated III. CONCLUSION AND FUTURE DIRECTIONS The topic of the present notes on the dipole moments of the particles and antiparticles in the presence of the θ QED is largely motivated by the recent experimental advances in the field, see [36,37].There are many other CP odd phenomena which accompany the generation of the dipole moments.All the relations discussed in the present notes, including (5) or (7) are topological in nature and related to impossibility to uniquely describe the gauge fields over entire system, as overviewed in the Introduction. Essentially the main claim is that the θ QED should be treated as a new fundamental parameter of the theory when the system is formulated on a topologically nontrivial manifold, and in particular, in the background of a magnetic field which enforces a non-trivial topology, as argued in this work.I believe that the very non-trivial relations such as (5) or (7) which apparently emerge in the system at nonvanishing θ and θ QED is just the tip of the iceberg of much deeper physics rooted to the topological features of the gauge theories. In particular, the θ dependent portion of the vacuum energy could be the source of the Dark Energy today (at θ = 0) in the de Sitter expanding space as argued in [38,39].Furthermore, these highly non-trivial topological phenomena in strongly coupled gauge theories can be tested in the QED tabletop experiments where the very same gauge configurations which lead to the relation similar to (5) or (7) may generate an additional Casimir Forces, as well as many other effects as discussed in [28,34,40].What is even more important is that many of these effects in axion electrodynamics can be in principle measured, see [41][42][43][44][45] with specific suggestions and proposals.I finish on this optimistic note. 
ACKNOWLEDGEMENTS These notes appeared as a result of discussions with Dima Budker and Yannis Semertzidis during the conference "Axions across boundaries between Particle Physics, Astrophysics, Cosmology and forefront Detection Technologies" which took place at the Galileo Galilei Institute in Florence, June 2023.I am thankful to them for their insisting to write some notes on the dipole moments of the particles and their relations to the fundamental parameters of the theory, the θ and the θ QED .I am also thankful to participants of the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) workshop on "quantum sensors and new physics" for their questions during my presentation.Specifically, I am thankful to Yevgeny Stadnik for the long discussions on topics related to refs [29-33].These notes had been largely completed during the MIAPbP workshop in August 2023.Therefore, I am thankful to the MIAPbP for the organization of this workshop.The MIAPbP is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy-EXC-2094 -390783311.This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.
4,739
2023-09-06T00:00:00.000
[ "Physics" ]
Anomalous Symmetry Fractionalization and Surface Topological Order In addition to possessing fractional statistics, anyon excitations of a 2D topologically ordered state can realize symmetry in distinct ways , leading to a variety of symmetry enriched topological (SET) phases. While the symmetry fractionalization must be consistent with the fusion and braiding rules of the anyons, not all ostensibly consistent symmetry fractionalizations can be realized in 2D systems. Instead, certain `anomalous' SETs can only occur on the surface of a 3D symmetry protected topological (SPT) phase. In this paper we describe a procedure for determining whether an SET of a discrete, onsite, unitary symmetry group $G$ is anomalous or not. The basic idea is to gauge the symmetry and expose the anomaly as an obstruction to a consistent topological theory combining both the original anyons and the gauge fluxes. Utilizing a result of Etingof, Nikshych, and Ostrik, we point out that a class of obstructions are captured by the fourth cohomology group $H^4( G, \,U(1))$, which also precisely labels the set of 3D SPT phases, with symmetry group $G$. We thus establish a general bulk-boundary correspondence between the anomalous SET and the 3d bulk SPT whose surface termination realizes it. We illustrate this idea using the chiral spin liquid ($U(1)_2$) topological order with a reduced symmetry $\mathbb{Z}_2 \times \mathbb{Z}_2 \subset SO(3)$, which can act on the semion quasiparticle in an anomalous way. We construct exactly solved 3d SPT models realizing the anomalous surface terminations, and demonstrate that they are non-trivial by computing three loop braiding statistics. Possible extensions to anti-unitary symmetries are also discussed. I. INTRODUCTION Recently it has been realized that gapped phases can be distinguished on the basis of symmetry even when that symmetry is unbroken. Short range entangled phases of this form -dubbed 'symmetry protected topological' (SPTs) -have been classified using group cohomology 1,2 . However, in two and higher dimensions there are also long range entangled phases supporting fractionalized excitations, and it is an interesting problem to classify these phases in the presence of a symmetry [3][4][5][6][7][8][9] . In two dimensions, one approach is to distinguish such 'symmetry enriched topological' (SET) phases on the basis of symmetry representation fractionalization on the anyons 3,5 . (Note: this applies when the symmetry doesn't permute the anyons). For example, a Z 2 spin liquid with the gauge charge excitations carrying spin 1/2 represents a different SET phase from the one where the gauge charge carries no spin. Indeed, in two dimensions, all possible ways of assigning fractional symmetry quantum numbers to anyons which are compatible with their fusion and braiding rules can be enumerated 5,9 . However, it is not clear that just because a certain assignment of fractional symmetry representations is compatible with all the fusion and braiding structure, it must necessarily be realizable by a two dimensional Hamiltonian. Previous works have put forward several putative SETs whose assignments are in fact anomalous, and incompatible with any 2D symmetric physical realization 10-17 . In these examples, time reversal symmetry is involved and the anomaly is usually exposed by showing that the SET must be chiral when realized in 2D, which necessarily breaks time reversal symmetry unless realized on the surface of a 3D SPT. 
In this paper, we focus on SETs with unitary discrete symmetries and discuss a general way to detect anomalies in them. We start with the simplest example of this type which we refer to as the 'projective semion' theory, where the only quasiparticle in the theory -a semion -carries a projective representation of the Z 2 × Z 2 symmetry of the system. While such a symmetry representation is compatible with the fusion and braiding structure of the semions, we show that such a theory is anomalous and can never be realized purely in 2D. With unitary symmetry, one cannot use chiral edge states to identify the anomaly; instead, we need a new strategy. The anomaly in the projective semion theory is exposed when we try to gauge the Z 2 × Z 2 symmetry. If an SET can be realized in a 2D symmetric model, this gauging process should result in a larger topological theory with more quasiparticles, including the gauge charges and gauge fluxes of the Z 2 × Z 2 symmetry. However, as we are going to show, such an extension fails for the projective semion theory because braiding and fusion rules cannot be consistently defined for the gauge fluxes and the semion. This indicates that the projective semion theory is not possible in 2D. On the other hand, we show that the projective semion theory can be realized at the surface of a 3D SPT with Z 2 × Z 2 symmetry, where the symmetry acts anomalously. We discuss in detail the relation between the SPT order in the bulk and the projective semion on the surface. This projective semion theory represents the simplest example of an anomalous SET theory with unitary and discrete symmetries. Indeed, the method we discuss in the paper can be used to detect anomalies in all SETs with discrete unitary symmetries and provides a direct link between the symmetry anomaly on the surface and the SPT order in the 3D bulk. The mathematics underlying this method of anomaly detection, developed by Etingof et al. 18 , studies the problem of G-extensions of tensor categories. We use it in this paper to discuss the general case after analyzing in detail the simple example of a projective semion. Satisfyingly, this theory includes a class of obstructions to defining a consistent 2D topological order with both anyons and symmetry fluxes, which are captured by the fourth cohomology group H 4 (G, U (1)) of the symmetry group. The same mathematical object is believed to classify all three dimensional SPT phases with unitary symmetry groups 1,2 . Thus the 'anomalous' 2D SETs have a natural relation to 3D SPTs, via the surface topological order. Finally, we discuss cases involving time reversal symmetry, and how similar criteria can potentially be applied there as well. The paper is structured as follows: in section II, we introduce the projective semion model and show that gauging the Z 2 × Z 2 symmetry leads to an inconsistency in the braiding and fusion rules of the symmetry fluxes; in section III we present a solvable 3D lattice model that realizes an SPT with a projective semion surface state; in section IV we give a non-linear sigma model representation of the 3D SPT, and discuss the relation between the SPT order in the bulk and the anomaly on the surface; in section V, we discuss generalizations of this method to all SETs with discrete unitary symmetries and propose our conjecture of a general formula that predicts the SPT order in the bulk given the SET state realized on the surface.
We close with discussion on future directions including the incorporation of time reversal symmetry into this formalism. II. ANOMALY OF THE PROJECTIVE SEMION MODEL IN 2D A. The 'projective semion' model The 'projective semion' model we consider is a variant of the Kalmeyer-Laughlin chiral spin liquid (CSL) 19 . The Kalmeyer-Laughlin CSL can of course be realized in 2D with the explicit construction given in 19 . However, by slightly modifying the way spin rotation symmetry acts on the semion, we obtain an anomalous theory. First, recall the setup for the Kalmeyer Laughlin CSL. The degrees of freedom are spin-1/2's on a lattice. We will not be concerned with the precise form of the Hamiltonian, but only note that it is spin rotation (SO(3)) invariant. Thus we can think of the CSL as an SO(3) symmetry enriched topological (SET) phase. The chiral topological order is the same as the ν = 1/2 bosonic fractional quantum Hall state which can be described by the K = 2 Chern-Simons gauge theory There is one non-trivial quasiparticle, a semion s which induces a phase factor of −1 when going around another semion. Two semions fuse into the a trivial quasiparticle, which we denote as I. Moreover each semion carries a spin-1/2 under the SO(3) symmetry. The CSL is then a nontrivial SET because s carries a projective representation of SO (3). 20 The precise definition of projective representation is given in appendix A. Such an SET theory is of course not anomalous. In order to get an anomalous theory, reduce the symmetry to the discrete subgroup of 180 degree rotations about the x, y, and z axes, which form a Z 2 × Z 2 subgroup of SO(3). We denote the group elements as g x , g y and g z . The CSL is of course also an SET of this reduced symmetry group. Each semion carries half charge for all three Z 2 transformations, because 360 degree rotation of a spin-1/2 always results in a phase factor of −1. Moreover, the three Z 2 transformations anti-commute with each other and can be represented as However, now there are other possible SETs, because the semion can now carry extra half integral charges of the Z 2 symmetries. For example, the semion can carry integral charge under g x and g y but fractional charge under g z . Indeed, we can have three variants of the CSL which we call the 'projective semion' models where the Z 2 × Z 2 on the semions can be represented as Projective Semion X: g x = iσ x , g y = σ y , g z = σ z Projective Semion Y: g x = σ x , g y = iσ y , g z = σ z Projective Semion Z: g x = σ x , g y = σ y , g z = iσ z The addition of half charge to the CSL seems completely harmless. First notice that the symmetry representation in the projective semion theories is compatible with the fusion rule of the semion. Two semions fuse into a bosonic particle which is non-fractionalized. Correspondingly, it can be easily checked that having two copies of the same projective representation listed in Eq. 3 gives rise to a trivial representation of Z 2 ×Z 2 with integral Z 2 charges and commuting g x , g y , g z . On the other hand, topological theories with the semion carrying half charges (in ν = 1/2 fractional quantum Hall states) and spin-1/2's (in CSL) have both been identified in explicit models in 2D. However, as we are going to show in the next section, the projective semion theories are anomalous and not possible in purely 2D systems with Z 2 × Z 2 symmetry. B. 
B. Projective fusion rule of gauge fluxes

The anomaly is exposed when we try to gauge the Z 2 × Z 2 symmetry in the projective semion model and introduce gauge fluxes Ω x , Ω y and Ω z . The non-trivial projective action of Z 2 × Z 2 on the semion over-constrains the fusion rules of the fluxes, leading to an inconsistency, as we show in this section. In particular, we will show that the gauge fluxes obey a 'projective' fusion rule, closing only up to a semion, which is determined from the way the symmetry acts on the semion. First, note that each Ω actually contains two sectors which differ from each other by a semion. One might try to label one of the sectors as the 'vacuum' flux sector Ω and the other as the 'semion' flux sector sΩ, although there is no absolute meaning to which one is which and only the difference between the two sectors matters. Interesting things happen when we consider the fusion rules of the fluxes. Normally, one would expect for example Ω i × Ω i = I (I denotes the vacuum) and Ω x × Ω y = Ω z due to the structure of the symmetry group. However, due to the existence of two sectors we might actually get Ω i × Ω i = s and Ω x × Ω y = sΩ z . That is, the gauge fluxes must fuse in the expected way only up to an additional semion. These 'projective' fusion rules can be determined from the action of the symmetry on the semion. Consider for example the Projective Semion X state. One important observation is that bringing a gauge flux around the semion is equivalent to acting with the corresponding symmetry locally on the semion, as shown in Fig. 1 (a). Because the semion carries a half charge of g x , bringing two Ω x 's around the semion gives rise to a −1 phase factor, which can be reproduced by bringing another semion around the semion. Therefore, if we imagine fusing the two Ω x fluxes before bringing them around the semion - a distinction which should not change the global phase factor - we are led to the conclusion that two Ω x 's fuse into a semion, Ω x × Ω x = s. Similarly, we can work out the fusion products of the remaining pairs of fluxes from the corresponding fractional charges carried by the semion. The fusion rules involving the sΩ fluxes can be correspondingly obtained by adding s to both sides. For example, sΩ x × Ω x = I.

FIG. 1. Fusion rule of gauge flux from symmetry action on the semion: (a) bringing a gauge flux (Ω x ) around a semion is equivalent to acting with the corresponding symmetry (g x ) locally on the semion; (b) bringing two Ω x gauge fluxes around the semion gives rise to a −1 phase factor, because the semion carries half of the corresponding g x charge. This −1 sign can be reproduced by bringing another semion around the original semion.

In this way, we have established a 'projective' fusion rule for the Z 2 × Z 2 gauge fluxes in the Projective Semion X state. The fusion rule is 'projective', with a semion coefficient, and can be compactly expressed as a mapping ω from two group elements to the semion or the vacuum,

ω(g x , g x ) = s, ω(g y , g y ) = I, ω(g z , g z ) = I,
ω(g x , g y ) = s, ω(g y , g x ) = I, ω(g y , g z ) = I,
ω(g z , g y ) = s, ω(g z , g x ) = s, ω(g x , g z ) = I, (6)

such that Ω g × Ω h = ω(g, h)Ω gh . The mapping ω obeys certain relations. First, consider the fusion of three gauge fluxes Ω g , Ω h and Ω k . The result should not depend on the order of fusion. We can choose to fuse Ω g , Ω h together first and then fuse with Ω k , or we can choose to fuse Ω h , Ω k together first and then fuse with Ω g . The equivalence between these two procedures leads to the following relation among the semion coefficients: ω(g, h)ω(gh, k) = ω(h, k)ω(g, hk). It is straightforward to check that this relation is satisfied by the ω given in Eq.
6 and we are going to use this relation in our discussion of the next section. For the other two projective semion states, similar fusion rules can be derived, also with coefficients taking semion values. In fact, in an SET phase (anomalous or not) with discrete unitary symmetries which do not change anyon types, it is generally true that when the symmetry is gauged the gauge fluxes satisfy a projective fusion rule with coefficient in the abelian anyons. We will discuss more about the general situation in section V. Note that the fusion product of two Ω's is order dependent. For example, Ω x × Ω y = sΩ z while Ω y × Ω x = Ω z , which is very different from the usual fusion rules we see in a topological theory. This is because the Ω's discussed here are not really quasi-particles, but rather the end points of symmetry defect lines introduced in the original SET. Because the Ω's are all attached to defect lines, they are actually confined. However, we can still define fusion between them and because of the existence of defect lines their fusion is not commutative. If the SET is not anomalous, the Ω's should first form what is called a 'fusion category', whose properties are discuss in for example Ref. 9 and 21. The fusion product between objects in a fusion category can be non-commutative, but it does have to be associative, and the Pentagon relation 22 illustrated in Fig. 3 must still be satisfied. Moreover, in a non-anomalous SET theory a gauge field can be introduced and the Ω's can be promoted to deconfined quasiparticles. However, as we shall see in the following, the Ω's in the projective semion theory do not admit consistent fusion rules which satisfy the pentagon equation, not to mention the promotion to real quasi-particles with extra braiding structures. It is worth emphasizing at this point that a projective fusion rule for the fluxes is not in itself an indication of a surface anomaly. Indeed, a projective fusion rule for the gauge fluxes exists in many SETs; for example, if we considered the toric code instead of the semion topological order, we could make say the Z 2 gauge charge carry the sort of fractional Z 2 × Z 2 charges that make the semion theory anomalous, but not have any problem realizing it in a symmetric way in 2D. To expose the anomaly in the projective semion model, we have to do more work. C. Anomaly in the statistics of gauge fluxes We are now ready to see the anomaly from the statistics of the gauge fluxes. In a topological theory with anyonic excitations, the anyon statistics is described by two sets of data: the braiding statistics of exchanging anyon a with anyon b and the fusion statistics in the associativity of fusing anyons a, b and c. The fusion statistics is the phase difference between two processes: the one which fuses a with b first before fusing with c and the one which fuses b, c first before fusing with a. For example, in the semion theory discussed here, the only nontrivial braiding statistics happen when two semions are exchanged and this process is shown diagrammatically in Fig. 2 (a). The exchange of two semions leads to a phase factor of i. Denote the braiding statistics as R ω,ω , where ω = I, s. The only nontrivial fusion statistics happen when three semions are fused together in different orders as shown in Fig. 2 (b). The two orders of fusion differ by a phase factor of −1. Denote the fusion statistics as F ω,ω,ω , where ω = I, s. 
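The data entering the gauging argument below - the table of semion coefficients in Eq. 6 and the relation ω(g, h)ω(gh, k) = ω(h, k)ω(g, hk) - can be generated and checked mechanically from the symmetry action on the semion, using the rule that ω(g, h) = s exactly when u(g)u(h) = −u(gh). The following sketch does this for the Projective Semion X state; it is illustrative only, the group Z 2 × Z 2 is written additively, and the semion coefficient is treated as ±1 under fusion.

```python
import numpy as np
from itertools import product

# Pauli matrices and the Projective Semion X action on the semion.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Z2 x Z2 written additively; (1,0) ~ g_x, (0,1) ~ g_y, (1,1) ~ g_z.
G = [(0, 0), (1, 0), (0, 1), (1, 1)]
mult = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
u = {(0, 0): np.eye(2, dtype=complex), (1, 0): 1j * sx, (0, 1): sy, (1, 1): sz}

def omega(g, h):
    """Semion coefficient of Omega_g x Omega_h = omega(g,h) Omega_gh:
    's' when u(g)u(h) = -u(gh), 'I' when u(g)u(h) = +u(gh)."""
    sign = np.trace(u[g] @ u[h] @ np.linalg.inv(u[mult(g, h)])).real / 2
    return "s" if np.isclose(sign, -1.0) else "I"

names = {(1, 0): "gx", (0, 1): "gy", (1, 1): "gz"}
for g, h in product(names, repeat=2):          # reproduces the table in Eq. 6
    print(f"omega({names[g]},{names[h]}) = {omega(g, h)}")

# Check omega(g,h) omega(gh,k) = omega(h,k) omega(g,hk) for all triples,
# treating the semion as -1 and the vacuum as +1 under fusion.
val = lambda w: -1 if w == "s" else 1
assert all(val(omega(g, h)) * val(omega(mult(g, h), k))
           == val(omega(h, k)) * val(omega(g, mult(h, k)))
           for g, h, k in product(G, repeat=3))
print("the relation omega(g,h) omega(gh,k) = omega(h,k) omega(g,hk) holds")
```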
Similarly, if the symmetry in the projective semion theory can be consistently gauged, then we should be able to define for the gauge fluxes not only the projective fusion rules but also the braiding and fusion statistics involved with exchanging two fluxes or fusing three of them in different orders. These statistics cannot be chosen arbitrarily. In fact, they have to satisfy certain consistency conditions, one of which is called the Pentagon equation, as shown in Fig. 3. The Pentagon equation relates different orders of fusing four gauge fluxes, Ω f , Ω g , Ω h , and Ω k . For example, in the figure on the top left corner of Fig. 3, Ω f and Ω g are fused together first, then with Ω h and finally with Ω k . In moving from this figure to the next figure through step 1, we have changed the order (the associativity) in the fusion of the first three fluxes. Such a step is related to a phase factor given by the fusion statistics of the first three gauge fluxes, similar to the phase factor shown for three semions in Fig. 2 (b). However, when we go through all configurations in the pentagon and are back to the original configuration, the total phase factor gained should be equal to 1. This is a fundamental requirement for the consistency of the fusion statistics of any anyon theory. For the gauge fluxes, there is a direct way to check whether this Pentagon equation is satisfied by using the projective fusion rules of the fluxes. By repeated use of the projective fusion rule, the fusion result of each figure can be reduced to (in fact sophisticated steps are involved in the derivation of the following results, which we explain in the next section)

((ω(f, g)ω(f g, h))ω(f gh, k))Ω f ghk
((ω(g, h)ω(f, gh))ω(f gh, k))Ω f ghk
((ω(g, h)ω(gh, k))ω(f, ghk))Ω f ghk
((ω(h, k)ω(g, hk))ω(f, ghk))Ω f ghk
((ω(f, g)ω(h, k))ω(f g, hk))Ω f ghk (11)

The meaning of this equation is as follows: each line gives an equivalent representation of the fusion product of the four gauge fluxes Ω f,g,h,k , corresponding to one of the five figures in the pentagon diagram. Comparing these, we can see that the difference between the configurations can be characterized by the difference in the semion coefficient, whose braiding and fusion statistics are already known. Therefore, we can derive the phase factor change at each step of the Pentagon using the statistics of the semions. At step 1, there is no extra phase factor involved due to the relation ω(f, g)ω(f g, h) = ω(g, h)ω(f, gh) satisfied by the semion coefficients. At step 2, we first need to change the order of fusion such that ω(f, gh) is fused with ω(f gh, k) first before fusing with ω(g, h); then using the relation ω(f, gh)ω(f gh, k) = ω(gh, k)ω(f, ghk) we relate the second configuration to the third one; finally, we change the order of fusion in the third configuration such that ω(g, h) and ω(gh, k) are fused first and then with ω(f, ghk). The total phase factor involved in step 2 is then the product of the fusion statistics (F factors) of the semion coefficients associated with these two changes of fusion order (Eq. 12). Similarly, we find that there are extra phase factors at steps 4 and 5. At step 5, the total phase factor is again a product of such F factors (Eq. 13). Step 4 is a bit more special because it involves the exchange of two semion coefficients: first we change the order of fusion in the fourth configuration such that ω(g, hk) and ω(f, ghk) are fused first and then with ω(h, k); then use the relation ω(g, hk)ω(f, ghk) = ω(f, g)ω(f g, hk) to relate the fourth configuration to the fifth; change the order of fusion again such that ω(h, k) and ω(f, g) are fused first before fusing with ω(f g, hk); finally, exchange ω(h, k) and ω(f, g), which results in a phase factor of R ω(h,k),ω(f,g) .
Therefore, the total phase factor involved is Putting all these phase factors together, we should get 1 for all possible choices of f, g, h, k if a consistent anyon theory can be defined for the gauge fluxes. If not, then there is obstruction to gauging the symmetry, indicating that the SET state is anomalous. Indeed, for the Projective Semion X state defined above, one can directly check that the total phase factor for the four gauge fluxes Ω z , Ω y , Ω z , Ω y is not 1. Therefore, the Projective semion X state is anomalous. Similarly, we can see that the Projective semion Y and Z states are also anomalous. It is worthwhile to emphasize a slight subtlety at this point: in order to conclude that we have an obstruction, i.e. in order to conclude that there is no possible way to write down consistent fusion and braiding rules for the gauge fluxes and seminon, we should in principle check that the pentagon equation cannot be satisfied for any choice of defect F-matrices. It turns out that in general the F-matrices of 3 defects are uniquely up to a phase by pentagon equations involving 3 defects and an anyon; in our specific model, these can be chosen (up to said phase) to be equal to 1, something that we implicitly made use of in the above analysis. Now, if the 4-defect pentagon equation fails to be satisifed, we still have the freedom to redefine the 3-defect F matrix by a phase, in the hopes of satisfying the 4-defect pentagon equation. That this is impossible for our projective semion theory is the consequence of a group cohomology calculation, leading to an obstruction in H 4 (G, U (1)), as we discuss in detail in the next section. D. Group cohomology structure of the anomaly There is actually more structure to the phase factor calculated above than we have discussed. In order to explain this structure, we will make use of group cohomology theory, reviewed in appendix A. Discussion in this section is more formal and mathematical, but given the classification of SPT phases with group cohomology, the identification of the group cohomology structure in the anomaly of the projective semion state allows us to make direct connection to the SPT order in the 3D bulk when we try to realize the projective semion state on the surface of a 3D system. This mathematical structure was identified by Etingof et. al. 18 and we briefly explain the main idea in this section. This general structure applies not only to the projective semion example, but to all SETs with discrete unitary symmetires as well. First, one can easily see from Eq. 8, using the definition of projective representation, that the gauge fluxes form a projective representation of the symmetry group Z 2 × Z 2 with semion coefficients. Because the semion has a Z 2 fusion rule, the possible projective fusion rules of the gauge fluxes can be classified by where the trivial element corresponds to the CSL state and the nontrivial elements corresond to the projective semion X,Y,Z states respectively. It is generally true that possible projective fusion rules, hence possible symmetry fractionalization patterns, in an SET theory are classified by H 2 (G, A) where G is the symmetry group and A denotes the set of abelian anyons 5,9 . Notice that here the coefficients of the projective fusion rule are valued only in the abelian sector of the topological theory, which ensures that the symmetry fractionalization pattern is consistent with the fusion and braiding rules of the topological theory. 
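The explicit check described at the beginning of this subsection - that for the Projective Semion X state the total pentagon phase for the fluxes Ω z , Ω y , Ω z , Ω y is not 1 - can be reproduced numerically. The sketch below assembles the phase from the step-2, step-4 and step-5 factors as they were described verbally above (steps 1 and 3 are pure cocycle relations and carry no phase); the placement of inverses and the orientation of the pentagon are our convention choices and are not taken from Eqs. 12-14, but since every F factor here is ±1 such choices can at most complex-conjugate the answer, which does not affect whether it equals 1.

```python
import numpy as np

# Group Z2 x Z2, written additively; x = (1,0), y = (0,1), z = (1,1).
m = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# Projective Semion X action on the semion, used to generate the semion
# coefficients of the flux fusion rule (0 = vacuum I, 1 = semion s).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
u = {(0, 0): np.eye(2, dtype=complex), (1, 0): 1j * sx, (0, 1): sy, (1, 1): sz}

def w(g, h):
    sign = np.trace(u[g] @ u[h] @ np.linalg.inv(u[m(g, h)])).real / 2
    return 1 if np.isclose(sign, -1.0) else 0

# Braiding and fusion statistics of the semion quoted in the text:
# exchanging two semions gives i, reassociating three semions gives -1.
R = lambda a, b: 1j if (a, b) == (1, 1) else 1.0
F = lambda a, b, c: -1.0 if (a, b, c) == (1, 1, 1) else 1.0

def nu(f, g, h, k):
    """Total phase accumulated around the pentagon for Omega_f,g,h,k,
    assembled from the step-2, step-4 and step-5 factors described above.
    Inverse placements are a convention choice; all F values are +-1 here."""
    gh, hk, fg = m(g, h), m(h, k), m(f, g)
    fgh, ghk = m(fg, h), m(g, hk)
    step2 = F(w(g, h), w(f, gh), w(fgh, k)) * F(w(g, h), w(gh, k), w(f, ghk))
    step4 = F(w(h, k), w(g, hk), w(f, ghk)) * F(w(h, k), w(f, g), w(fg, hk)) \
            * R(w(h, k), w(f, g))
    step5 = F(w(f, g), w(h, k), w(fg, hk)) * F(w(f, g), w(fg, h), w(fgh, k))
    return step2 * step4 * step5

x, y, z = (1, 0), (0, 1), (1, 1)
print("nu(z, y, z, y) =", nu(z, y, z, y))   # a phase different from 1
```

With these conventions the script prints a phase of i for the fluxes Ω z , Ω y , Ω z , Ω y (a different convention can at most replace it by its complex conjugate), in line with the statement above that the pentagon equation cannot be satisfied for the Projective Semion X state.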
In more general situations, the anyons can be permuted by the symme-try in the system. Correspondingly, the g action on the coefficients A can be nontrivial, but the H 2 classification as discussed in appendix A still applies. With the symmetry fractionalization information encoded in ω ∈ H 2 (G, A), we can then move on to determine whether the SET theory is anomalous or not. That is, if we gauge the G symmetry, whether we can obtain a consistent extended topological theory involving both the gauge charge, gauge fluxes and the original anyons. This is a highly nontrivial process, but fortuitously a mathematical framework has been developed in 18, concerning group extensions of braided categories, which is precisely equipped to deal with these issues. Specifically, this mathematical framework allows us to draw the following important conclusions: 1. In order to determine whether an SET theory is anomalous or not, we only need to look at the fusion and braiding statistics of the original anyons and all the gauge fluxes. If a consistent topological theory can be defined for the original anyons and all the gauge fluxes, then gauge charges can always be incorporated without obstruction. Therefore, to detect an anomaly in a putative SET theory, we need to look for possible ways to define consistent fusion and braiding statistics for all the gauge fluxes which we denote as Ω and the original anyons which we denote as α. We already know the fusion and braiding statistics of α which are given by R α,α and F α,α,α . (For simplicity of notation we use the same label for the set of anyons in α and Ω. Also we suppress the complication in notation due to nonabelian anyons as long as it does not cause confusion.) What we need to find is the fusion and braiding statistics of Ω and those involving both Ω and α: R α,Ω , R Ω,Ω , F α,α,Ω , F α,Ω,Ω and F Ω,Ω,Ω (and those with permuted indices). The strategy is to bootstrap from the known data (R α,α and F α,α,α ) and solve for the unknowns using the consistency equations they have to satisfy. The consistency equation comes in two types: the pentagon equation involving the fusion statistics F and the hexagon equation involving both the braiding and the fusion statistics R and F . (For a detailed discussion of these equations see Ref. 22.) There is one pentagon equation for every combination of 4 quasiparticles (including both α and Ω) and there is one hexagon equation for 3 quasiparticles (including both α and Ω). Of course the pentagon and hexagon equations involving only α are all satisfied. The next step is to include one Ω and try to solve for the R α,Ω and F α,α,Ω that appear in those equations and so on. If the equations have no solutions, then we have detected an anomaly. Of course, this would be a lengthy process to follow by brute force. Luckily, it has been shown in Ref. 18 that there are only two steps at which the equations might have no solutions. 2. The first such step results in an obstruction in H 3 (G, A), which, when non-zero, signals a lack of any solutions. However, this obstruction appears only when the group G acts non-trivially on the anyons by permu-tation, and essentially corresponds to the fact that, when it is non-zero, it is impossible to define an associative fusion product (nevermind requiring the associativity constraint to satisfy the pentagon equation). Since we are only concerned in this paper with a trivial permutation by the group on the set of anyons, we can safely ignore this obstruction. 3. 
When the first type of obstruction vanishes, there is still a probability that we run into a second type of obstruction when we try to satisfy the pentagon equation of four Ω's, as is the case with the projective semion example. Indeed, the combination of Eq. 12,13,14 together gives the total phase factor up to which the pentagon equation is satisfied: for f, g, h, k ∈ G. It was proved in Ref. 18 that the ν(f, g, h, k) data forms a four-cocycle of group G with U (1) coefficients (G acts trivially on U (1)). Moreover, if we follow the bootstrap steps closely, we see that ν(f, g, h, k) does not all have to be identically equal to 1 in order for the pentagon equation to have solutions. This is because the F Ω,Ω,Ω involved in each step of the pentagon equation (which we took to be the phase difference between different lines in Eq. 11) is actually defined only up to an arbitrary phase factor β(Ω, Ω, Ω). Therefore, if the ν(f, g, h, k) is different from 1 but takes the form: then it can be gauged away. In other words, ν(f, g, h, k) has some gauge -or co-boundary -degrees of freedom. Therefore, this type of obstruction is classified by H 4 (G, U (1)), which corresponds exactly to the classification of 3D SPT phases with G symmetry. The explanation in this section provides a very brief physical interpretation of some of the results in Ref. 18 (see also 9). Even though this is far from a clear derivation of the result, we extract the most important formula -Eq. 15 -for the identification of the second type of anomaly in an SET theory. For the projective semion theory, we calculated this ν(f, g, h, k), confirmed that it is a four cocycle and moreover checked numerically that it is indeed nontrivial. An analytical proof of this fact is also given in appendix B. We have also checked that ν(f, g, h, k) is a trivial four cocycle for the CSL state. The second thing we want to point out through this discussion is the connection between the SET anomaly and the SPT phases. They are both classified by H 4 (G, U (1)) and hence one might expect a close relation between the two, which we will explore further in the remaining sections. III. REALIZATION OF THE PROJECTIVE SEMION MODEL ON 3D SURFACE In the previous section, we established that a system with excitations that have semionic statistics and transform under a Z 2 × Z 2 global symmetry projectively according to Eq. 3 is inconsistent in 2D. In this section, we will demonstrate that such a system can be realized at the 2D boundary of a 3D SPT phase. We will do this by presenting an exactly solvable model. In section IV A we will elaborate on this 3D phase by discussing the relation between the projective semion state and O(5) NLSM description of the SPT bulk. We also present a more physical way to detect the anomaly by threading crossed fluxes in the 3D bulk in section IV B. Our model is constructed by adding a global symmetry to a Walker-Wang 23 -type lattice model such that its surface admits semionic excitations transforming projectively. Using the Walker-Wang construction, Von Keyserlingk et al. 24 described a 3D loop gas model with deconfined semions living on its surface, and no deconfined excitations in the bulk. In the bulk this model is therefore trivial, while its surfaces are topologically ordered. 
To make the surface semions transform projectively under a global Z 2 × Z 2 symmetry, we will exploit the fact that the ground state of this model is a superposition of closed loop configurations (with relative amplitudes ±1, ±i), and that excitations with semionic statistics arise when open strings end on the surface. We will decorate these strings with the equivalent of Haldane chains (for the appropriate projective representation). With this decoration the ground state is necessarily invariant under the global symmetry, as it consists only of closed loops. The end of an open string, however, is also the end of a Haldane chain and hence carries a projective representation. Therefore the semions on the surface, which also occur at the ends of such open strings, transform projectively under the Z 2 × Z 2 symmetry. We Defining the vertex operator is: where * V is the set of edges entering the vertex V . This favours configurations in which the number of spin-down edges at each vertex must be even. The ground state is consequently a loop gas (Fig. 4): edges on which n i = 1 (blue in the Figure) form closed loops if B V = 1 at each vertex. The plaquette term B P simultaneously flips τ z on all edges around a plaquette, and assigns a configurationdependent phase to the result: where ∂P is the set of all edges bordering P . Exactly as in the 3D Toric code 25,26 model, the product over τ x ensures that the ground state is a superposition of closed loops. Their relative coeficients are determined by the second, third, and fourth terms in Eq. (20), which are phase factors that depend only on {n i } on edges on or adjacent to the plaquette P . We will choose the phase factors Θ(P ), Φ O,O , Φ U,U such that the coefficients of different loop configurations in the ground state of the loop gas are given (up to an overall global phase) by where Z CS {L} is the (Euclidean) partition function of a 3D U (1) 2 Chern-Simons theory. Specifically, every linking of loops incurs an extra factor of −1, while every counterclockwise (clockwise) twist in a loop induces a factor of i (−i). There is also a phase factor of (−1) N L , where N L is the number of loops. To obtain the coefficient of (−1) N L , we choose: where * P denotes the set of all edges entering the plaquette P . Essentially, this term ensures that if i∈∂P τ x i changes the number of loops in a particular loop configuration, the action of B P on this configuration is negative (discounting, for the moment, the effect of Φ). However, since 6 edges meet at a vertex, at vertices where n i = 1 on four or more edges, there can be ambiguity in determining the number of loops in a given configuration. This can be resolved by "point-splitting" each vertex in the cubic lattice into three trivalent vertices, following Walker and Wang 23 (Fig. 5 b). The L-pairs in Eq. (20) (shown in green in the Figure) are chosen such that in the ground state each loop configuration appears with a relative phase factor of (−1) N L , where N L is the number of closed loops on this point-split lattice: the factors of (−1) ninj explicitly cancel the factors of (i) 2 that can occur if two edges that are in * P on the cubic lattice, but are not in * P after we have point-split the vertices, have n i = 1. To obtain the phases appropriate to Z CS {L} , we choose: To define this term, we must fix a particular 2D projection of the cubic lattice. 
The two edges O (for "over") and U (for "under") are edges that are project into the plaquette from above and below, respectively, in this projection. O and U are neighbouring edges in ∂P , as shown in Fig. 5. Φ ensures that if flipping the spins in ∂P introduces a clockwise (counter-clockwise) twist in a loop, the resulting configuration will appear in the ground state with a relative phase of i (−i). (It is also responsible for the extra factor of (−1) that arises if the action of B P links a pair of previously unlinked loops). Though not immediately apparent, it can be shown that with this choice of Θ, Φ, [B Pi , B Pj ] = 0 for all pairs of plaquettes. Hence the entire spectrum of our model Hamiltonian can be determined exactly. One might guess that this spectrum contains two distinct types of defects: pairs of "vertex" defects, for which A V1 = A V2 = −1 at the affected vertices, correspond to adding an open string to the loop gas connecting vertices V 1 and V 2 ; and "plaquette" defects, in which the eigenvalues of the plaquette term are B P = −1 along a closed string of plaquettes. 27 This guess is correct if we choose Θ = Φ = 1; however, for the choices given above it turns out that in the bulk of our model, a pair of vertex defects is always connected by a line of plaquette defects. 24 Hence open strings in the bulk cost a finite energy per unit length, and vertex defects are confined. On the surface, however, vertex defects are deconfined: if V 1 and V 2 are vertices on the boundary of our 3D system, there exist operators that create states in which the eigenvalues obey A V1 = A V2 = −1, but B P = 1 everywhere. The corresponding excited states will consist of superpositions of closed loops, together with an open string connecting the vertices V 1 and V 2 . Further, these vertex defects are semions: exchanging them multiplies the excited-state wave function by a phase factor of ±i (Fig. 6). Both the bulk and surface spetrum are derived in detail in Ref. 24. B. Decorating the model with a global symmetry We next show how to decorate this model such that the surface semion transforms projectively under a global Z 2 × Z 2 symmetry. Though in practise the model is an SPT only if the semion transforms projectively according to Eq. (3), we first discuss the case where the semion carries spin-1/2, as in the Kalmeyer-Laughlin spin liquid 19 , in which the loops in our loop gas ground state are true spin-1/2 Haldane chains. To obtain an SPT, we will replace the full SU(2) symmetry with a Z 2 × Z 2 symmetry, and take our spins to transform via one of the possibilities listed in Eq. (3). We begin by enlarging our Hilbert space by adding six auxillary degrees of freedom at each vertex (one associated with each edge). Each auxillary degree of freedom can be either a spin 1/2 (transforming projectively under Z 2 × Z 2 ) or a spinless particle b (transforming in the singlet representation of Z 2 × Z 2 ). We will also impose the constraint that, within this enlarged Hilbert space, only states with an even number of spins at each vertex are allowed. This ensures that the vertex degree of freedom transforms in a linear representation of Z 2 × Z 2 . Within this enlarged Hilbert space, it is convenient to define the modified edge variables where ↑, ↓ represent the physical spin states, b are the spinless states, and i, j index the auxillary variables associated with the two ends of the edge. 
|1 describes an edge in which a hard-core boson occurs in conjunction with two spin degrees of freedom, which combine to form a singlet. |0 is an edge with no boson, and where the auxillary degrees of freedom are both spinless. We can express our Hamiltonian in terms of the decorated spin operators: These act like Pauli spin operators within the Hilbert space (24): by construction, (τ x ) 2 = (τ z ) 2 = 1, and τ xτ z = −τ zτ x . Importantly, they act only on the set of auxillary vertex variables associated with the edge in question, so that decorated spin operators acting on adjacent edges commute. Note also thatτ x conserves the total spin, since it replaces a singlet-bonded pair of spin-1/2's at adjacent vertices with a pair of spin 0 particles. The Hamiltonian is: whereà V ,B P are as in Eq.'s (20,19), with τ x,z i replaced byτ x,z i , and n i byñ i = 1 2 (1 −τ z i ). H 0 is a potential term favouring the edge states |0 and |1 : Let us now discuss the spectrum of this new Hamiltonian. By construction, any configuration in which all loops are closed will have lowest energy if all edges are in one of the two states |0 , |1 . This is possible when τ z = 1 about each vertex: in this case each hard-core boson can have an associated spin 1/2 at the vertex without violating the constraint; the possibilities are shown in Fig.7(a). At vertices where τ z = −1, it is not possible for all edges to be in one of the two states |0 , |1 without violating the constraint, as shown in Fig.7(b). Hence whenever a string ends, the constraint that the total number of spins at each vertex must be even ensures that an unpaired spin 1/2 remains at or near the vertex. Moving this unpaired spin away from the vertex in question produces a series of edges along which H 0 is not minimized (Fig. 8). In other words, the unpaired spin is confined by a linear energy penalty to be close to the violated vertex. Therefore an unpaired spin 1/2 is bound (by a linear confining potential) to each end-point of an open string. Defects in the original semion model involving only plaquette violations are unaffected by the decoration. This includes closed vortex loops and the open flux loop connecting a pair of confined charges. In particular, semions on the surface remain deconfined. Because they are also bound by a confining potential to a spin-1/2, these deconfined surface semions transform projectively under the global symmetry, exactly as in the Kalmeyer-Laughlin spin liquid. We note in passing that in the presence of open strings, the plaquette term as defined here will be violated at the string endpoints, since there are necessarily edges that are neither in state |0 or state |1 . This affects the energy of the semions at the end-points of these open strings, but not their statistics or transformations under symmetry. The construction given so far realizes a system with a Kalmeyer-Laughlin spin liquid state on its surface, by associating an open Haldane chain with the semion excitation. Our main objective, however, is to obtain surface states that are not possible in 2 dimensions. We can do this with a very minor modification of the construction given above: it suffices to take the spin 1/2 degrees of freedom to transform under Z 2 × Z 2 according to Eq. 3. In this case our spins are not really spins since we have assumed that SU(2) spin rotation invariance is broken; however a spin singlet will still transform as a net singlet under symmetry. 
Hence the operatorσ x still interchanges two different trivial representations, and hence does not violate the symmetry. In Appendix C, we describe how to generalize this construction to obtain other models with surface anyons that transform projectively under an appropriate global symmetry group. IV. THE 3D SPT We now turn our attention to understanding the 3D SPT suggested by the model constructed in the previous section. We will first describe it using an O(5) NLSM; we then describe a way to insert fluxes of the global symmetry that reveal the surface anomaly. An alternative approach to constructing SPTs is to use non-linear sigma models with a theta term. In particular, Ref.10 and 28 argue that a wide class of 3d SPTs can be described by the following action: Ref.28 argues that the following action of G realizes a non-trivial G-SPT: Z A 2 : n 1,2 → −n 1,2 , n a → n a (a = 3, 4, 5) Z B 2 : n 1 → n 1 , n a → −n a (a = 2, 3, 4, 5) We will see how to construct one of our anomalous semion surface states as a symmetric termination of this 3D SPT. Before we do this, let us first consider an alternative action of this symmetry group, which actually corresponds to a trivial 3D SPT. For clarity, we will denote this group byZ A 2 ×Z B 2 . Its action is: Z A 2 : n 1,2 → −n 1,2 , n a → n a (a = 3, 4, 5) Z B 2 : n 2,3 → −n 2,3 , n a → −n a (a = 1, 4, 5) Now, this is simply the subgroup of 180 degree rotations inside the SO(3) that rotates n 1 , n 2 , n 3 , and since we know that there is no non-trivial 3D SPT of SO(3) in 3D, the correspondingZ A 2 ×Z B 2 SPT must be trivial as well. But let us examine its surface anyway. To do this, consider the effective action for the complex field n 4 + in 5 on the 2D surface, after having integrated out n 1,2,3 . According to the arguments of Ref.10 and 28, the theta term ensures that a single vortex of n 4 + in 5 carries a spin 1/2. Now, we can add fluctuations and proliferate doubled vortices bound to single charges -i.e. we think of this as a system with U (1) charge conservation symmetry that acts on n 4 + in 5 , and drive it to a ν = 1/2 bosonic Laughlin state, while maintaining the SO(3) symmetry that acts on n 1 , n 2 , n 3 . We have then constructed a chiral spin liquid at the surface -the non-trivial quasiparticle is a semion, and because it descends from a single vortex, it carries spin 1/2. The chiral spin liquid is of course a perfectly valid 2D symmetry enriched phase, which we expect since the bulk SO(3) SPT is trivial. Having understood this trivial case, we turn to G = Z A 2 × Z B 2 defined above, which corresponds to the nontrivial SPT. The analysis is in fact completely the same, with the only difference being that Z B 2 differs fromZ B 2 by the transformation n 4,5 → −n 4,5 , i.e. a π rotation of the U (1): n 4 + in 5 → −n 4 − in 5 . Once we drive the system into the ν = 1/2 bosonic Laughlin state, this extra π rotation causes Z B 2 to pick up an extra phase factor, equal to π times the charge of the semion (which is 1/2), i.e. e iπ/2 = i. In other words, the semion now carries an extra half charge of Z B 2 , compared to the case of the chiral spin liquid. This is precisely our anomalous surface state. The other anomalous surface states can be constructed by changing the roles of the generators of Z 2 × Z 2 . B. 
Crossed Z2 fluxes argument Although the irreperable inconsistency in the pentagon equations for the fluxes already implies that our semion surface state cannot be symmetrically realized in 2D, it is nice to have a more physically tangible manifestation of this anomaly. Indeed, one such manifestation is the following: if we put such a putative surface state on a T 2 torus geometry, then inserting a flux of Z A 2 in one cycle and a flux of Z B 2 in the other results in a half charge of Z B 2 appearing on the torus surface, as shown in Fig. 9. Clearly this cannot happen in a symmetric model in 2D, where the degrees of freedom all form ordinary linear representations of the symmetry group. Another half charge appears along the flux line that pierces the bulk, well separated from the surface. Although we believe that this linked fluxes argument can be phrased entirely in 2D, we will instead demonstrated it in a 3D context, where the T 2 forms a surface of a 3D bulk. It is actually easiest to see this in the decorated domain wall picture of Ref. 29. Take the following geometry as shown in Fig. 10: an xy plane with a hole (taken to be the unit disk centered at the origin), and then thickened in the z direction, with periodic boundary conditions imposed in the z direction. Just to be specific, let's take z in the interval [0, 1] and periodically identify z = 0 and z = 1. This defines a 3D bulk whose surface is a 2D torus: namely, the unit circle in the xy plane times the [0, 1] interval in the z direction. This is equivalent to the thickened torus geometry mentioned initially, except that the other surface of the torus is out at infinity. Now put a flux of Z A 2 through the z cycle. This just means we insert a defect surface (domain wall) of Z A 2 on the xy plane, at z = 0. According to the arguments of Ref. 29, this domain wall is just a 2D SPT (in the xy plane) of Z B 2 . So inserting a flux of Z B 2 through the hole -which is just the second cycle of our torus -brings a half charge of Z B 2 to the surface, as desired. Alternatively, we can consider the same geometry in the sigma model framework. Once again, put a flux of Z A 2 through the z cycle. Then the trick is to integrate out n 1 to get a 2D O(4) sigma model with theta term for n 2 , n 3 , n 4 , n 5 . The original action of Z B 2 , namely n i → −n i , descends to this O(4) sigma model, and we recover the sigma model for the non-trivial 2D SPT of Z B 2 , as in eq. 46 of Ref. 28. V. SUMMARY In this paper, we studied in detail a symmetry enriched topological state -the projective semion state -which cannot be realized in 2D symmetric models even though the fractional symmetry action on the anyons is consistent with all the fusion and braiding rules of the chiral semion topological order. The anomaly is exposed when we try to gauge the Z 2 × Z 2 symmetry and fail to find a solution for the fusion statistics of the gauge fluxes which satisfies the pentagon equation. On the other hand, we demonstrate that the projective semion state can be realized on the surface of a 3D system and that there is a close connection between the surface SET order and the symmetry protected topological order in the bulk. The projective semion state is the simplest example of an anomalous SET state with discrete unitary symmetries. Our discussion in section II illustrates a general procedure for detecting anomalies in a large class of such SET states. In particular, we want to emphasize two points: 1. 
The fractional symmetry action on the anyons gives rise to a projective fusion rule of the gauge fluxes once the symmetry is gauged, with the abelian anyon in the original topological theory as the coefficient. Thus, not only can an anomalous SET be unambiguously identified, it can also be related to a particular 3D SPT phase when it is of the H 4 type. VI. DISCUSSION AND FUTURE DIRECTIONS Thus far we have focused on discrete unitary symmetry groups G and identifying whether a given SET is anomalous. However we mention the following two extensions which we conjecture to be true: 1. Anomalous SET with H 4 type obstruction (vanishing H 3 type obstruction) can be realized on the surface of a 3D SPT phase characterized by the same four cocycle ν ∈ H 4 (G, U (1)). 2. Eq. 15 applies to SETs with time reversal symmetry as well. That is, there should be a constructive procedure to realize a particular anomalous SET, given an element of H 4 , which is the surface topological order of the corresponding 3D bosonic SPT phase, at least for discrete unitary G. (A 3D SPT with no consistent surface topological order was recently discussed 30 , although that involved bulk fermions and a combination of continuous symmetry and time reversal). For time reversal symmetry, although it is currently not known how to introduce time reversal fluxes at the same level as for a unitary symmetry, we conjecture that the same formula Eq. 15 works in detecting the H 4 type obstruction. This comes from the observation of the following example: the Z 2 gauge theory with both the gauge charge e and gauge flux m transforming as T 2 = −1 (sometimes called eTmT). It is believed that this state is anomalous and appears on the surface of the 3D SPT with time reversal symmetry 10 which is described by the cohomology classification, in particular the nontrivial element in H 4 (Z T 2 , U (1)) , where the time reversal symmetry acts nontrivially on the U (1) coefficients by taking complex conjugation. Suppose that we can define in some notion a time reversal gauge flux. Then the projective fusion rule would be where f is the bound particle of e and m. Equivalently, we can write This reflects the fractional symmetry action where T 2 = −1 on e and m because f has a −1 braiding statistics with both e and m. Now we can use this information to calculate ν(f, g, h, k) in Eq. 15. The Z 2 gauge theory has trivial F . Therefore, Eq. 15 reduces to ν(f, g, h, k) = R ω(h,k),ω(f,g) (30) which is nontrivial only when f = g = h = k = T and ν(T, T, T, T ) = −1. This is exactly the nontrivial 4 cocycle of time reversal 2 . Even though at this moment, we are not sure what gauging time reversal means in general, recent work indicates that this notion can be formalized 31,32 . From this particular example, we expect that the procedure and result discussed in Ref.18 might be generalized to treat anti-unitary symmetries as well. VII. ACKNOWLEDGMENT When we were writing up the paper, we learned of other works on anomalous SET's with unitary discrete symmetries 32,33 . We are very grateful to helpful discussions with Meng Cheng, Senthil Todadri, Ryan Thorngren, Alexei Kitaev, and Netanel Lindner. XC is supported by the Miller Institute for Basic Research in Science at UC Berkeley, AV is supported by NSF DMR 0645691. Appendix A: Projective representation and group cohomology The following definition works only for unitary symmetries because we are not dealing with time reversal symmetry in this paper. 
Suppose that we have one projective representation u 1 (g) with factor system ω 1 (g 1 , g 2 ) of class ω 1 and another u 2 (g) with factor system ω 2 (g 1 , g 2 ) of class ω 2 , obviously u 1 (g)⊗u 2 (g) is a projective presentation with factor system ω 1 (g 1 , g 2 )ω 2 (g 1 , g 2 ). The corresponding class ω can be written as a sum ω 1 + ω 2 . Under such an addition rule, the equivalence classes of factor systems form an Abelian group, which is called the second cohomology group of G and is denoted as H 2 (G, U (1)). The identity element 1 ∈ H 2 (G, U (1)) is the class that corresponds to the linear representation of the group. The above discussion on the factor system of a projective representation can be generalized which gives rise to a cohomology theory of groups. For a group G, let M be a G-module, which is an abelian group (with multiplication operation) on which G acts compatibly with the multiplication operation (i.e.the abelian group structure) on M: For example, M can be the U (1) group and a a U (1) phase. The multiplication operation ab is then the usual multiplication of the U (1) phases. The group action is trivial g · a = a for unitary symmetries considered here. Or M can be a Z 2 group and a is the semion or the vacuum sector in the K = 2 Chern-Simons theory. The multiplication ab is then the fusion between anyons. The group action g · a = b encodes how the anyon sectors get permuted under the symmetry, which is trivial for the projective semion example discussed in this paper but can be nontrivial in general. Let ω n (g 1 , ..., g n ) be a function of n group elements whose value is in the G-module M . In other words, ω n : G n → M . Let C n (G, M ) = {ω n } be the space of all such functions. Note that C n (G, M ) is an Abelian group under the function multiplication ω n (g 1 , ..., g n ) = ω n (g 1 , ..., g n )ω n (g 1 , ..., g n ). We define a map d n from C n (G, U (1)) to C n+1 (G, U (1)): (d n ω n )(g 1 , ..., g n+1 ) = g 1 · ω n (g 2 , ..., g n+1 )ω (−1) n+1 When n = 1, we find that ω 1 (g) satisfies Therefore, the 1st cocycles of a group with U (1) coefficient are the one dimensional representations of the group. Moreover, we can check that the consistency and equivalence conditions (Eq. A2 and A3) of factor systems of projective representations are exactly the cocycle and coboundary conditions of 2nd cohomology group. Therefore, 2nd cocycles of a group with U (1) coefficient are the factor systems of the projective representations of the group. Similarly, we can check that if we use the semion / vacuum sector as a Z 2 coefficient, then the projective fusion rule of the gauge fluxes discussed in section II B is a 2nd cocycle of the symmetry group with Z 2 coefficient. our auxillary Hilbert space must contain an object transforming in r a for every a. For notational simplicity, we will call this object r a from now on. In order for our construction to work, we must impose the condition that our choice of r a is consistent with the fusion rules of the anyon model. Specifically, we will require that r 0 is the trivial representation, and that conjugate anyon types transform under conjugate representations in the symmetry group: We also require that a × b = c ⇒ r a × r b = r c × linear reps (C2) (For non-abelian anyons, c may contain more than one anyon type. However, we will require the projective part of the symmetry action to be the same for all possible products of fusion.) 
The next step is to construct analogues of the states |0 and |1 for this more general case. Because the edges are oriented, it is natural to favour configurations in which the edge starting at vertex V 1 and ending at vertex V 2 , with anyon label a, has auxillary variables in r a at V 1 and in r a at V 2 . The analogue of the two states |1 and |0 above are states in which an edge with anyon label a has these two auxillary variables combining to the trivial representation (i.e., in which r a and r a are in a "singlet" state): We may favour such configurations energetically by introducing a potential term H 0 = e=edges n a=1 |ã e ã e | (C4) We next impose the constraint that only linear representations are allowed at vertices. In the ground state, the tensor product of the representations associated with the anyon types that meet at each vertex contains the identity, so Eq. (C2) ensures that this constraint is compatible with minimizing H 0 on each edge. At vertices where the net anyon flux is not zero, however, this constraint forces us to include at least one edge on which H 0 is violated. Let the anyon types of the 3 edges (all oriented into the vertex) be a, b and c, and let them be fused at the vertex to create a fourth anyon d. Let us choose the edge with label a to be an excited state of H 0 . Then edges b and c contribute r b × r c to the vertex; hence (up to a tensor product with linear representations, which is not important for our purposes) the auxillary variable associated with a must carry a representation of r b × r c . It follows that the edge a has two auxillary variables, which together transform in the representation r a × r b × r c . (Recall that the other end of our a-labelled edge had better carry r a , to avoid having edges at adjacent vertices that are also in excited states). But we have required that (up to a tensor product with linear representations), r a × r b × r c = r d . Hence a vertex with net anyon flux d necessarily has the correct projective component of its symmetry transformation. (In cases where this leaves the representation under which d transforms ambiguous, it is possible to add additional potential terms to remove this ambiguity). Evidently, with this construction it is possible to define operators that mix the |ã variables in a manner consistent with the symmetry, since for all states |ã the edge carries a trivial representation of the symmetry. We may thus as above construct the plaquette term of the Walker-Wang Hamiltonian in theã variables to obtain a model in which anyons are confined in the bulk, and transform in the desired projective representations on the surface.
Transience/Recurrence and Growth Rates for Diffusion Processes in Time-Dependent Domains Let $\mathcal{K}\subset R^d$, $d\ge2$, be a smooth, bounded domain satisfying $0\in\mathcal{K}$, and let $f(t),\ t\ge0$, be a smooth, continuous, nondecreasing function satisfying $f(0)>1$. Define $D_t=f(t)\mathcal{K}\subset R^d$. Consider a diffusion process corresponding to the generator $\frac12\Delta+b(x)\nabla$ in the time-dependent domain $D_t$ with normal reflection at the time-dependent boundary. Consider also the one-dimensional diffusion process corresponding to the generator $\frac12\frac{d^2}{dx^2}+B(x)\frac d{dx}$ on the time-dependent domain $(1,f(t))$ with reflection at the boundary. We give precise conditions for transience/recurrence of the one-dimensional process in terms of the growth rates of $B(x)$ and $f(t)$. In the recurrent case, we also investigate positive recurrence, and in the transient case, we also consider the asymptotic growth rate of the process. Using the one-dimensional results, we give conditions for transience/recurrence of the multi-dimensional process in terms of the growth rates of $B^+(r)$, $B^-(r)$ and $f(t)$, where $B^+(r)=\max_{|x|=r}b(x)\cdot\frac x{|x|}$ and $B^-(r)=\min_{|x|=r}b(x)\cdot\frac x{|x|}$. Introduction and Statement of Results Let K ⊂ R d , d ≥ 2, be a bounded domain with C 3 -boundary satisfying 0 ∈ K, and let f (t), t ≥ 0, be a continuous, nondecreasing C 3 -function It is known that one can define a Brownian motion X(t) with normal reflection at the boundary in the time-dependent domain {(x, t) : x ∈ D t , t ≥ 0}. More precisely, one has for 0 ≤ s < t, X(t) = x + W (t) − W (s) + t s 1 ∂Du (X(u))n(u, X(u))dL u , where W (·) is a Brownian motion, n(u, x) is the unit inward normal to D u at x ∈ ∂D u and L u is the local time up to time u of X(·) at the time-dependent boundary. See [1]. The process X(t) is recurrent if, with probability one, X(t) ∈ K at arbitrarily large times t, and is transient if, with probability zero, X(t) ∈ K at arbitrarily large times t. As with non-degenerate diffusion processes in unrestricted space, transience is equivalent to lim t→∞ |X(t)| = ∞ with probability one. It is simple to see that the definitions are independent of the starting point and the starting time of the process. In a recent paper [2], it was shown that for d ≥ 3, if f d (t) dt = ∞, and an additional technical condition is fulfilled, then the process is recurrent. The additional technical condition is that either K is a ball, or that ∞ 0 (f ′ ) 2 (t)dt < ∞. In particular, this result indicates that if for sufficiently large t, f (t) = ct a , for some c > 0, then the process is transient if a > 1 d and recurrent if a ≤ 1 d . The paper [2] also studies the analogous problem for simple, symmetric random walk in growing domains. In this paper we study the transience/recurrence dichotomy in the case that the Brownian motion is replaced by a diffusion process; namely, Brownian motion with a locally bounded drift b(x). That is, the generator of the process when it is away from the boundary is 1 2 ∆ + b(x)∇ instead of 1 2 ∆. Using the Cameron-Martin-Girsanov change-of-measure formula, or alternatively in the case of a Lipschitz drift, by a direct construction as in [1], one can show that the diffusion process in the time-dependent domain can be defined. We will show how the strength of the radial component, b(x) · x |x| , of the drift, and the growth rate of the domain-via f (t)-affect the transience/recurrence dichotomy. 
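Before turning to the precise statements, it may help to see the one-dimensional model that drives the analysis in concrete form. The following sketch simulates the reflected diffusion dX = B(X) dt + dW on the time-dependent interval (1, f(t)), with B(x) = b x^γ and f(t) = c (log t)^{1/(1+γ)} as in the regime treated below, using an Euler scheme in which reflection is imposed crudely by projecting back onto [1, f(t)] at each step. It is a numerical illustration only: the step size, time horizon, the floor placed on f(t) at small times, and the projection scheme are ad hoc choices and play no role in the proofs.

```python
import numpy as np

rng = np.random.default_rng(0)

def last_visit_to_one(b, gamma=0.0, c=1.0, T=5e4, dt=0.05):
    """Euler scheme for dX = b X^gamma dt + dW on the growing interval
    (1, f(t)) with f(t) = c (log t)^(1/(1+gamma)), reflection imposed by
    projecting onto [1, f(t)].  Returns the last simulated time at which
    the path touched 1, the final position, and the final right endpoint."""
    n = int(T / dt)
    noise = np.sqrt(dt) * rng.standard_normal(n)
    x, t, last = 1.0, np.e, 0.0
    for i in range(n):
        t += dt
        # floor keeps the interval nonempty at small times
        f_t = max(2.0, c * np.log(t) ** (1.0 / (1.0 + gamma)))
        x += b * x ** gamma * dt + noise[i]
        x = min(max(x, 1.0), f_t)            # crude reflection by projection
        if x == 1.0:
            last = t
    return last, x, f_t

# For gamma = 0 the dichotomy of Theorem 1 below reads 2*b*c < 1 (recurrent)
# versus 2*b*c > 1 (transient).
for b in (0.3, 3.0):
    last, x_end, f_end = last_visit_to_one(b)
    print(f"b = {b}: last visit to 1 near t = {last:.0f}, "
          f"X(T) = {x_end:.2f}, f(T) = {f_end:.2f}")
```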
In fact, we will prove a transience/recurrence dichotomy for a one-dimensional process. Our result for the multi-dimensional case will follow readily from the one-dimensional result along with results in [2]. Let f (t) be as in the first paragraph. Consider the diffusion process corresponding to the gen- where B is locally bounded, in the time-dependent domain (1, f (t)) with reflection at the endpoint x = 1 (for all times) and at the endpoint f (t) at time t. If B(x) = k x , the process is a Bessel process. When this process is considered on the space (1, ∞) with reflection at 1, it is recurrent for k ≤ 1 2 and transient for k > 1 2 . In particular, it is the radial part of a d-dimensional Brownian motion when k = d− 1 2 . The result of [2] noted above can presumably be slightly modified to show that for k > 1 2 , the process on the time dependent domain (1, f (t)) with reflection at the endpoints is transient or recurrent according to whether f 2k+1 (t) dt = ∞. In this paper we considers drifts that are on a larger order than 1 x . We will prove the following theorem concerning transience/recurrence. If 2bc 1+γ 1 + γ < 1, or 2bc 1+γ 1 + γ = 1 and γ ≥ − 1 2 , then the process is recurrent. ii. Assume that B(x) ≥ bx γ , for sufficiently large x, 1+γ , for sufficiently large t. If then the process is transient. Using Theorem 1, we will prove the following result for the multi-dimensional process. Theorem 2. Consider the diffusion process corresponding to the generator Let γ > −1 and b, c > 0. Also assume either that K is a ball or that then the process is recurrent. ii. Assume that 1+γ , for sufficiently large t. then the process is transient. 1+γ , for all large t, where C > 0 and γ > −1, In the recurrent case, it is natural to consider positive recurrence, which we define as follows: the one-dimensional process above is positive recurrent if starting from x > 1, the expected value of the first hitting time of 1 is finite, while the multi-dimensional process defined above is positive recurrent if starting from a point x ∈K, the expected value of the first hitting time ofK is finite. It is simple to see that this definition is independent of the starting point and the starting time of the process. We have the following theorem regarding positive recurrence of the one-dimensional process. Remark. The proof of Theorem 3 relies heavily on the estimates in the proof of part (i) of Theorem 1. We suspect that in the borderline cases, when 2bc 1+γ 1+γ = 1, the process is never positive recurrent. However, the estimates in the proof of part (ii) of Theorem 1 don't go quite far enough to prove this. In the transient case, it is natural to consider the asymptotic growth rate of the process. It is known that the process X(t) corresponding to the generator 1 2 d 2 dx 2 + bx γ d dx on [1, ∞) with reflection at 1 grows a.s. on the order t 1 1−γ if γ ∈ (−1, 1). (In fact, the solutionsx(t) to the differential The process grows a.s. exponentially if γ = 1, and explodes a.s. if γ > 1 [5]. From this it is clear that the one-dimensional process X(t) with B(x) = bx γ on the time-dependent domain (1, f (t)) satisfies X(t) = f (t) for arbitrarily large t a.s., and consequently, 1+γ < 1, then the process is recurrent and thus lim inf t→∞ X(t) = 1.) We restrict to γ ∈ (−1, 1) for technical reasons, but we suspect that the following result also holds for γ ≥ 1. Theorem 4. Consider the diffusion process corresponding to the generator , with reflection at both the fixed endpoint and the time-dependent one. 
Let γ ∈ (−1, 1) and b, c > 0. Assume that for sufficiently large x, t, We now consider the asymptotic growth behavior in the case that B(x) = x γ , γ ∈ (−1, 1), and that f (t) is on a larger order than (log t) we will assume that f (t) = (log t) l , with l > 1 1+γ , or that f (t) = t l , with l ∈ (0, 1 1−γ ). (We have dispensed with the coefficients b and c because here they no longer play a role at the level of asymptotic behavior we investigate.) Theorem 5. Consider the diffusion process corresponding to the generator , with reflection at both the fixed endpoint and the time-dependent one. Let γ ∈ (−1, 1). Assume that i. Assume that for t ≥ 2, ii. Assume that In particular (in light of (1.3)), Asymptotic growth behavior in the spirit of Theorems 4 and 5 for the multi-dimensional case can be gleaned just as Theorem 2 was gleaned from Theorem 1. In section 2 we prove several auxiliary results which will be needed for the proofs. The proofs of Theorem 1-5 are given in sections 3-7 respectively. Throughout the paper, the following notation will be employed: Let X(t) denote a canonical, continuous real-valued path, and let T α = inf{t ≥ 0 : X(t) = α}. Let Let P bx γ ;Ref←:β x and E bx γ ;Ref←:β x denote probabilities and expectations for the diffusion process corresponding to L bx γ on [1, β], starting from x ∈ [1, β], with reflection at β and stopped at 1, and let P bx γ ;Ref→:α x and E bx γ ;Ref→:α x denote probabilities and expectations for the diffusion process corresponding to L bx γ on [α, ∞), starting from x ∈ [α, ∞), with reflection at α. We note that this latter diffusion is explosive if γ > 1, but we will only be considering it until time T β for some β > α. We will sometimes work with a constant drift, which we will denote by D (instead of bx γ with γ = 0), in which case D will replace bx γ in all of the above notation. Auxiliary Results In this section we prove four propositions. The first three of them are used explicitly in the proof of Theorem 1, and implicitly in many of the other theorems, since many of the calculations in the proof of Theorem 1 are used in the proofs of the other theorems. Proposition 4 is used only for the proof of (1.5) in Theorem 5. Proposition 4. (2.6) E bx γ ;Ref→:α x exp(λτ β ) ≤ 2, for x ∈ [α, β] and λ ≤λ, Proof. The proof is similar to that of Proposition 1. By the Feynman-Kac formula, when λ is less than the principal eigenvalue for the operator L bx γ on (α, β) with the Neumann boundary condition at α and the Dirichlet boundary condition at β, the function u λ (x) ≡ E bx γ ;Ref→:α is smaller than the principal eigenvalue and u λ ≤ u. We look for such a It follows readily that if , it is clear thatλ in the statement of the proposition is smaller than the right hand side of (2.7). Thus, u λ (x) ≤ u(x) ≤ 2, for λ ≤λ. Proof of Theorem 1 We will denote probabilities for the process staring from 1 at time 0 by P 1 . denote the standard filtration on real-valued continuous paths X(t). By standard comparison results and the fact that the transience/recurrence dichotomy is not affected by a bounded change in the drift over a compact set, we may assume that Proof of (i). The conditional version of the Borel-Cantelli lemma [3] shows that if then P 1 (A j i.o.) = 1, and thus the process is recurrent. Thus, to show recurrence, it suffices to show (3.2). Since up to time t j , the largest the process can be is f (t j ), and since up to We estimate the right hand side of (3.3). 
Let σ For any l j ∈ N, Also, it follows by the strong Markov property that Since Lφ = 0, it follows by standard probabilistic potential theory [6, chapter 5] that Applying L'Hôpital's rule shows that Using (3.9) along with the facts that f (x) = c(log x) 1 1+γ and t j = e j , it for sufficiently large j, for constants K 1 , K 2 > 0. From (3.10) and (3.11), it follows that (3.5) will hold if we define l j ∈ N by since then the general term, 1 − P will be on the order at least 1 j log j . With l j chosen as above, we now analyze P and show that (3.6) holds. By the strong Markov property, σ (j) , and the two IID sequences are independent of one another. By Markov's inequality, for any λ > 0. By Proposition 1, whereλ(·, ·) is as in (2.2). Using the fact that f (t j ) = cj 1 1+γ , it is easy to check that there exists aλ 0 > 0 such that By comparison, It is easy to check that if one substitutes and β = f (t j+1 ) = c(log(j + 1)) 1 1+γ in the expression on the right hand side of (2.5) in Proposition 2, the resulting expression is bounded in j. Letting M > 1 be an upper bound, it follows that Noting that t j+1 − t j = e j+1 − e j ≥ e j , and choosing λ = Recalling l j from (3.12), we conclude from (3.19) that 1+γ j log 2M , for sufficiently large j. Recalling that D j is equal to a positive constant, if γ ≥ 0, and that D j is on the order j γ 1+γ , if γ < 0, it follows that the right hand side of (3.20) is summable in j if 2bc 1+γ 1+γ < 1, or if 2bc 1+γ 1+γ = 1 and γ ≥ − 1 2 . Thus (3.6) holds for this range of b, c and γ. This completes the proof of (i). For j ≥ j 1 , let B j be the event that the process hits 1 sometime between the first time it hits f (j) and the first time it hits f (j + 1): then by the Borel-Cantelli lemma it will follow that P 1 (B j i.o.) = 0, and consequently the process is transient. To prove (3.21), we need to use different methods depending on whether γ ≤ 0 or γ > 0. We begin with the case γ ≤ 0. To consider whether or not the event B j occurs, we first wait until time T f (j) . Of course, necessarily, is not accessible to the process before time j. Since we may have T f (j) < j + 1, the point f (j + 1) may not be accessible to the process at time T f (j) , however, if we wait one unit of time, then after that, the point f (j + 1) certainly will be accessible, since T f (j) + 1 ≥ j + 1. Let Now if in that one unit of time, the process never got to the level f (j) − M j , then by comparison, the probability of B j occurring is ) (because after this one unit of time the process will be at a position greater than or equal to f (j) − M j ). By comparison with the process that is reflected at the fixed point f (j), the probability that the process got to the level f (j) − M j in that one unit of time is bounded from above by P . From these considerations, we conclude that For ǫ ∈ (0, 1) to be chosen later sufficiently small, choose M j = ǫf (j). Recall With such a choice of ǫ, it follows from (3.23) and (3.24) that as above. By comparison, we have where D j is equal to the minimum of the original drift on the interval [f (j)− M j , f (j)]; that is, By Markov's inequality, we have for λ > 0, If γ < 0, then lim j→∞ D j = 0 and M j → ∞, and it follows from (3.28) for some K > 0. If γ = 0, then D j = b, for all j, and we have from (3.28), as j → ∞. Since M j = ǫc(log j) 1 1+γ , it follows from (3.29) and (3.30) that for all choices of λ > 0 in the case γ < 0, and for sufficiently large λ in the case γ = 0. 
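(The "standard probabilistic potential theory" step invoked above can be made concrete: for the generator L = (1/2) d^2/dx^2 + b x^γ d/dx, the scale-type function φ with φ′(s) = exp(−2b s^{1+γ}/(1+γ)) satisfies Lφ = 0, and two-sided hitting probabilities are ratios of increments of φ. The following is a small numerical sketch of this fact, using quadrature in place of the closed-form asymptotics actually used in the proof; it is an illustration, not the paper's computation.)

```python
import numpy as np
from scipy.integrate import quad

def phi(x, b, gamma):
    """Scale function for L = (1/2) d^2/dx^2 + b x^gamma d/dx:
    phi'(s) = exp(-2*b*s**(1+gamma)/(1+gamma)), so L phi = 0."""
    val, _ = quad(lambda s: np.exp(-2.0 * b * s ** (1.0 + gamma) / (1.0 + gamma)), 0.0, x)
    return val

def prob_hit_low_before_high(x, low, high, b, gamma):
    """P_x(T_low < T_high) for the diffusion without reflection, low < x < high."""
    return (phi(high, b, gamma) - phi(x, b, gamma)) / (phi(high, b, gamma) - phi(low, b, gamma))

# Example: constant drift b = 0.5 (gamma = 0), start at x = 3, boundaries 1 and 10
print(prob_hit_low_before_high(3.0, 1.0, 10.0, b=0.5, gamma=0.0))
```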
Thus, we conclude from (3.31) and (3.27) that We now turn to the case that γ > 0. Let ζ j+1 = inf{t ≥ j + 1 : X(t) ≥ f (j)}. Since the process cannot reach f (j + 1) before time j + 1, it follows Since the right hand endpoint of the domain is larger than or equal to f (t j+1 ) at all times t ≥ ζ j+1 , it follows by comparison that ). Thus, similar to (3.8) we have As in (3.24), but with ǫ = 0, we have From (3.34), (3.35) and the fact that 2bc 1+γ 1+γ > 1, it follows that For any s j , we have the estimate Here is the explanation for the above estimate. To check whether or not the event C j occurs, one waits until time T f (j) , at which time the process has first and C j does not occur. Otherwise, one watches the process between time T f (j) and time j + 1. If the process hit 1 in this time interval, whose length is no more than 1, then C j occurs. (Note that during this interval of time, the right hand boundary for reflection is always at least f (j).) Otherwise, C j has not yet occurred, but one continues to watch the process after time j + 1 until the first time the process is again greater than or equal to f (j). If the process reaches 1 in this interval, then C j occurs, while if not, then we conclude that C j did not occur. (Note that if X(j + 1) ≥ f . Letting (3.40) it follows from (3.38) with λ = b 2 2 , (3.39) and the fact that γ > 0 that We now estimate P bx γ ;Ref←:f (j) f (j) (T 1 ≤ s j + 1), the first term on the right hand side of (3.37), where s j has now been defined in (3.40). Note that by the strong Markov property, i=2 be independent random variables with X i distributed as T 1 under P D i ;Ref←:2 2 , where (3.42) We will use the generic P and E for calculating probabilities and expecta- i=2 X i ≤ s j + 1). Proof of Theorem 2 First we prove Theorem 2 in the case that K is a ball. The part of the |x| depends not only on the radial component r = |x| of x, but also on the spherical component x |x| . Let B + (r) = max |x|=r b(x) · x |x| and B − (r) = min |x|=r b(x) · x |x| . Then by comparison, if the multi-dimensional process with radial drift B + (|x|) · x |x| is recurrent, so is the one with drift b(x), and if the multi-dimensional process with radial drift B − (|x|) · x |x| is transient, so is the one with drift b(x). In the case of a radial drift B(|x|) · x |x| , with K a ball, so that D t = f (t)K is a ball, the question of transience/recurrence is equivalent to the question of transience/recurrence considered in Theorem 1 with drift B(x) + d−1 2x and with D t = 1, rad(K) f (t) , where rad(K) is the radius of K. Thus, if B(r) ≡ B + (r) and f (t) satisfy the inequalities (1.1) in part (i) of Theorem 2 with 2bc 1+γ 1+γ < 1, then the multi-dimensional process is recurrent, while if B(r) ≡ B − (r) and f (t) satisfy the inequalities (1.2) in part (ii) of Theorem 2 with 2bc 1+γ 1+γ > 1, then the multi-dimensional process is transient. (Of course, since K is a ball, rad ± (K) appearing in Theorem 1 are equal to rad(K).) Now consider the case that B(r) ≡ B + (r) and f (t) satisfy the inequalities (1.1) in part (i) of Theorem 2 with 2bc 1+γ 1+γ = 1. To show recurrence, we need to show recurrence for the one dimensional case when B(x) = bx γ + d−1 2x , for large x, and f (t) = c(log t) 1 1+γ , for large t, with 2bc 1+γ 1+γ = 1. Thus, the function φ appearing in (3.7) must be replaced by (Here C is the appropriate constant. 
In (3.7) we integrated over s starting from 0 for convenience in order to prevent such a constant from entering, however in the present case we can't do this because of the term d−1 s .) In place of (3.9), we will now have This causes the term j − γ 1+γ on the right hand side of (3.11) to be replaced by j − γ+d−1 1+γ , which in turn causes l j in (3.12) to be changed to log j exp( 2bc 1+γ 1+γ j)]. Finally, this causes the term on the right hand side of (3.20) to be changed to exp(− Recalling that D j is equal to a positive constant, if γ ≥ 0, and D j is on the order j γ 1+γ , if γ < 0, we conclude that if 2bc 1+γ 1+γ = 1, then the above expression is summable in j if d = 2 and γ ≥ 0. This proves recurrence when 2bc 1+γ 1+γ = 1, d = 2 and γ ≥ 0. We now extend from the radial case to the case of general K. In [2], the proof of a condition for transience was first given for the radial case. The extension to the case of general K, which appears as step III in the proof of Theorem 1.15 in that paper, followed by Lemma 2.1 in that paper. This lemma implies that if one considers two such processes, one corresponding to K 1 and one corresponding to K 2 , where K 1 is a ball and K 2 ⊃K 1 , then the process corresponding to K 2 is transient if the one corresponding to K 1 is transient. Lemma 2.1 goes through just as well when the Brownian motion is replaced by our Brownian motion with drift. This extends our proof of transience to the case of general K. In [2], the proof of the condition for recurrence also was first given in the radial case. The extension to the general case, which is more involved than in the case of transience, and which requires the additional condition ∞ 0 (f ′ ) 2 (t)dt < ∞, appears in step V in the proof of Theorem 1.15 in that paper. The analysis in that step also go through when Brownian motion is replaced by our Brownian motion with drift. This extends the proof of recurrence to the case of general K. Proof of Theorem 3 We will prove the theorem for the one-dimensional case. The proof for the multi-dimensional case follows from the proof of the one-dimensional case, similar to the way the proof of Theorem 2 follows from the proof of Theorem 1. Let P 2 and E 2 denote probabilities and expectations for the process starting from x = 2 at time 0. Let t j = e j as in the proof of part (i) of Theorem 1. We have Recall the definition of j 0 and of A j+1 from the beginning of the proof of part (i) of Theorem 1. From (3.3) we have for j ≥ j 0 + 1, If we show that then it will certainly follow from (5.1) and (5.2) that E 2 T 1 < ∞, proving positive recurrence. In order to prove (5.3), it suffices from (3.4) to prove that for some choice of positive integers {l j } ∞ j=j 0 , From (3.8), (3.11) and the fact that lim y→∞ (1− 1 y ) yg(y) = 0, if lim y→∞ g(y) = ∞, it follows that (5.4) holds if we choose With this choice of l j , we have from (3.19), As in the proof of Theorem 1, we can assume that b and f satisfy (3.1). We will first show that The proof of (6.1) is just a small variant of the proof of recurrence in Theorem 1; that is, part (i) of Theorem 1. As in that proof, let t j = e j . Recalling the definition of j 0 appearing at the very beginning of the proof of part (i) of Theorem 1, it follows from (3.1) that f (t j ) = cj 1 1+γ , for j ≥ j 0 . 
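(For reference, the one-dimensional reduction used above in the proof of Theorem 2 rests on the standard fact, obtained from Itô's formula applied to r = |X(t)|, that the radial part of a d-dimensional diffusion with radial drift B(|x|)·x/|x| is itself a diffusion; the following is my own restatement of that comparison step, not a quotation from the paper:

L_{\mathrm{rad}} \;=\; \frac{1}{2}\,\frac{d^{2}}{dr^{2}} \;+\; \Bigl(B(r) + \frac{d-1}{2r}\Bigr)\frac{d}{dr}, \qquad r = |x|,

so that the scale-type integrand of (3.7) picks up the factor s^{-(d-1)}:

\varphi'(s) \;\propto\; s^{-(d-1)}\exp\Bigl(-\frac{2b}{1+\gamma}\,s^{1+\gamma}\Bigr),

which is exactly the modification of φ described above and the source of the extra d − 1 in the exponent j^{−(γ+d−1)/(1+γ)}.)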
In that proof, for j ≥ j 0 , A j+1 was defined as the event that the process hits 1 at For the present proof, we define instead, for each ρ ∈ (0, 1), the event A (ρ) j+1 that the process X(t) satisfies X(t) ≤ ρf (t j ) for some t ∈ [t j , t j+1 ]. We mimic the proof of Theorem 1-i up through (3.9), using A (ρ) j+1 in place of A j+1 , replacing the stopping time T 1 by the stopping time T ρf (t j ) , and replacing φ(1) by φ(ρf (t j )). Instead of (3.10), we obtain Instead of (3.11), we have for sufficiently large j, for constants K 1 , K 2 > 0. From (6.2) and (6.3), it follows that (3.5) with T 1 replaced by T ρf (t j ) will hold if we define l j ∈ N by since then the general term, 1 − P bx γ ;Ref←: be on the order at least 1 j log j . We now continue to mimic the proof of Theorem 1-i, starting from the paragraph after (3.12) and up through (3.19). We then insert the present l j from (6.4) in (3.19) to obtain (6.5) 1+γ (1−ρ 1+γ )j log 2M , for sufficiently large j. Recalling that D j is equal to a positive constant, if γ ≥ 0, and that D j is on the order j γ 1+γ , if γ < 0, it follows that the right hand side of (6.5) is Analogous to the proof of Theorem 1, we conclude then that P 1 (A (ρ) j i.o.) = 1 for ρ as above. From the definition of A (ρ) j and the fact that f is increasing, we conclude that (6.1) holds. To complete the proof of Theorem 4, we will prove that For this direction, we will need some new ingredients. Recalling again the definition of j 0 appearing at the very beginning of the proof of part (i) of Theorem 1, it follows from (3.1) that f (t) = c(log t) 1 1+γ for t ≥ e j 0 . Let τ 1 = inf{t ≥ e j 0 : X(t) = f (t)}, and for j ≥ 2, let τ j = inf{t ≥ τ j−1 + 1 : By the remarks in the paragraph preceding Theorem 4, it follows that τ j < ∞ a.s. [P 1 ], for all j. By construction, we have (6.7) τ j > j, for all j ≥ 1. (We have suppressed the dependence of B j on ǫ and ρ.) It follows from (6.8) that on the event B j one has X(t) ≥ (1 − 2ǫ)ρf (t), for all t ∈ [τ j , τ j+1 ]. Thus, for any N , on the event ∩ ∞ j=N B j , one has lim inf t→∞ We will complete the proof of (6.6) by showing that (6.10) lim 1+γ and all sufficiently small ǫ (depending on ρ). We write (6.11) where ∩ M −1 i=M B i denotes the entire probability space. Let (Note that C j depends on the random variable τ j .) Let P bx γ (1−ǫ)f (τ j ) denote probabilities for the diffusion process corresponding to L bx γ without reflection, starting from (1 − ǫ)f (τ j ). Noting that if τ j+1 ≤ s(τ j ), then , it follows by the strong Markov property and comparison that (6.12) Also, In order to get a lower bound on P 1 (B j |∩ j−1 i=M B i , τ j ), we will bound P 1 (C c j |τ j ) and P bx γ from above, and we will calculate the asymptotic behavior of P bx γ (1−ǫ)f (τ j ) (T ρ(1−ǫ)f (τ j ) > T f (s(τ j )) ). We start with P 1 (C c j |τ j ). Let P BM 0 denote probabilities for a standard Brownian motion starting from 0, and letT x = min(T x , T −x ), for x > 0. By the strong Markov property and comparison we clearly have (6.14) . Thus from (6.14) we obtain . We now turn to and T f (s(τ j )) refer to the hitting times for the Y process. (Note that we have been using the generic T a for the hitting time of a for any process, the process in question being inferred from the probability measure which appears with it.) 
Thus, for any t > 0, (6.16) For ease of notation, in the analysis below, we let L 1 = ρ(1 − ǫ)f (τ j ), Using Brownian scaling for the first inequality and symmetry for the second one, we have (6.18) As is well-known, there exist κ, λ > 0 such that P BM 0 (T 1 ≥ t) ≤ κe −λt , for all t ≥ 0. Thus, from (6.16)-(6.18), choosing t = s(τ j ) − τ j − 1, we conclude that (6.19) We now calculate the asymptotic behavior of ). Similar to (3.8), we have (6.20) In light of (6.9) and (3.9), it follows from (6.20) that , as τ j → ∞. Proof of Theorem 5 Proof of (i). The proof is almost exactly the same as the proof of Theorem 4 starting from (6.6), using (log t) l instead of (log t) 1 1+γ (and with b = c = 1). Let B j be the event that i. (We have suppressed the dependence of B j on ǫ and q.) It follows from (7.2) that on the event B j one has Thus, for any N , on the event ∩ ∞ j=N B j , one has Therefore, the proof of (1.4) will be completed when we show that for some ǫ ∈ (0, 1) and all q > q 0 . We write where ∩ M −1 i=M B i denotes the entire probability space. Let (Note that C j depends on the random variable τ j .) Let P x γ f (τ j )−ǫτ q j denote probabilities for the diffusion process corresponding to L x γ without reflection, starting from f (τ j ) − ǫτ q j . Noting that if τ j+1 ≤ s(τ j ), then X(τ j+1 ) = f (τ j+1 ) ≤ f (s(τ j )), it follows by the strong Markov property and comparison that (7.5) Also, In order to get a lower bound on P 1 (B j |∩ j−1 i=M B i , τ j ), we will bound P 1 (C c j |τ j ) and P x γ from above, and we will calculate the asymptotic behavior of P j > T f (s(τ j )) ). We start with P 1 (C c j |τ j ). We mimic the paragraph containing (6.14), the only change being that ǫf (τ j ) is replaced by ǫτ q j . Thus, similar to (6.15), we obtain (7.7) We now turn to P x γ . We mimic the paragraph following (6.15), the only changes being that ( is replaced by f (τ j ) − τ q j and b is set to 1. Similar to (6.19), we obtain, (7.8) where L 3 = f (s(τ j )) and We now calculate the asymptotic behavior of ). Similar to (3.8), we have . We now make the assumption, as in the statement of the theorem, that q > q 0 . Thus, lγ + q > 0. Using this in (7.10), along with (3.9) (with b = 1) and (7.9), and recalling that f (τ j ) = τ l j and that, from (7.1), f (s(τ j )) = τ l j + ǫτ q j , we conclude that From (7.5)-(7.8) and (7.11), we have for large τ j . From (7.8) and (7.1), we have L 3 − L 1 = f (s(τ j )) − f (τ j ) + τ q j = (1 + ǫ)τ q j . Thus, for large τ j , If 1 − q − l > 0, then we can complete the proof just like we completed the proof of Theorem 4 and conclude that (7.3) holds, and thus that (1.4) holds. Note that in order to come to this conclusion, we have needed to assume that q > q 0 = max(0, −lγ) and that 1 − q − l > 0; that is, we need max(0, −lγ) < 1 − l and q ∈ (max(0, −lγ), 1 − l). A fundamental assumption in the theorem is that l ∈ (0, 1 1−γ ). For these values of l, the above inequality always holds. Thus, (7.3) holds for those q which are larger than q 0 and sufficiently close to q 0 . Consequently, (1.4) holds for all q which are larger than q 0 and sufficiently close to q 0 . However, if (1.4) holds for some q, then clearly it also holds for all larger q. Thus, (1.4) holds for all q > q 0 . We now turn to the proof of (1.5). We have γ ∈ (−1, 0] and q 0 = −γl ∈ [0, l). Let t j = j k , for j ≥ 1 and some k > 1 to be fixed later. 
Since up to time t_j, the largest the process can be is f(t_j), and since up to time t_{j+1} the time-dependent domain is contained in [1, f(t_{j+1})], it follows by comparison that (7.13) holds. Clearly, (7.14) corresponds to the L_{x^γ} diffusion with reflection at both f(t_j) − M t_j^{q_0} and f(t_{j+1}). We estimate the right hand side of (7.14). We have (7.15); thus, where P^{x^γ}_{f(t_j)} corresponds to the L_{x^γ} diffusion without reflection, (7.16) holds, similar to (3.8), where φ is as in (3.7) with b = 1. We now choose k so that kl(1 + γ) > 1.
8,293.6
2015-05-18T00:00:00.000
[ "Mathematics" ]
Spectrochemical analysis of liquid biopsy harnessed to multivariate analysis towards breast cancer screening Mortality due to breast cancer could be reduced via screening programs where preliminary clinical tests employed in an asymptomatic well-population with the objective of identifying cancer biomarkers could allow earlier referral of women with altered results for deeper clinical analysis and treatment. The introduction of well-population screening using new and less-invasive technologies as a strategy for earlier detection of breast cancer is thus highly desirable. Herein, spectrochemical analyses harnessed to multivariate classification techniques are used as a bio-analytical tool for a Breast Cancer Screening Program using liquid biopsy in the form of blood plasma samples collected from 476 patients recruited over a 2-year period. This methodology is based on acquiring and analysing the spectrochemical fingerprint of plasma samples by attenuated total reflection Fourier-transform infrared spectroscopy; derived spectra reflect intrinsic biochemical composition, generating information on nucleic acids, carbohydrates, lipids and proteins. Excellent results in terms of sensitivity (94%) and specificity (91%) were obtained using this method in comparison with traditional mammography (88–93% and 85–94%, respectively). Additional advantages such as better disease prognosis thus allowing a more effective treatment, lower associated morbidity, fewer false-positive and false-negative results, lower-cost, and higher analytical frequency make this method attractive for translation to the clinical setting. Scientific RepoRtS | (2020) 10:12818 | https://doi.org/10.1038/s41598-020-69800-7 www.nature.com/scientificreports/ pre-cancerous lesions. A significant degree of transformation in such lesions found in this phase would allow determination of their clinical significance and implementation of effective treatment to improve the patient's prognosis. Such a screening test that diagnoses early disease needs to be acceptable to patients and available at a reasonable cost 5 . Mammography is the recommended method for routine screening of breast cancer worldwide 6 . This technique performed with an x-ray machine is described as a radiological examination for evaluation of the breasts. It can be used for checking breast cancer-like lesions in apparently healthy woman by finding nodules or calcifications. Exposure to this radiation rarely causes cancer, unless performed with a high periodic frequency whereby risk will increase. Besides being considered painful, relatively expensive, and a source of much discomfort and even embarrassment to patients, its sensitivity varies from 88 to 93%, while its specificity varies from 85 to 94% 6 . Such statistical metrics demonstrate the proportion of women with breast cancer who will present a positive mammogram signalling disease presence, and the rate of women without breast cancer who will have a normal mammography, respectively 6 . Some breast cancer screening tests also include breast self-examination (BSE), clinical examination of breasts (CBE), nuclear magnetic resonance (NMR), and ultrasonography. However, the time from initial patient examination until diagnosis can be too lengthy; about 70% of breast cancer cases lead to complete removal of the breast(s). Many examinations are required to identify the presence of neoplasm: mammogram, breast exam, biopsy, magnetic resonance imaging (MRI) and ultrasound. 
Infrared (IR) spectroscopy is a vibrational technique capable of analysing biomolecules, such as nucleic acids (asymmetric PO2− in DNA and RNA at ~1,225 cm−1), carbohydrates (C-O stretching at ~1,155 cm−1), proteins (amide II at ~1,550 cm−1 and amide I at ~1,660 cm−1) and lipids (C=O stretching at ~1,750 cm−1), which exhibit characteristic features in the IR region 7. Attenuated total reflection Fourier-transform IR (ATR-FTIR) spectroscopy has been used to analyse several biofluids due to its fast spectral acquisition, minimal sample preparation and sample volume requirements, and its non-destructive nature 8. Research in this area is progressing steadily, with excellent diagnostic results compared to traditional methods having been obtained for various types of cancer such as ovarian 9, cervical 10 and prostate 11, and additionally for the diagnosis of neurodegenerative diseases such as Alzheimer's 12. Herein, we present the results of using ATR-FTIR spectroscopy together with chemometrics for classification of patients with breast cancer in a large-scale screening program using blood biopsies. Results The FTIR spectral data in the fingerprint region (900-1,800 cm−1) were pre-processed by Savitzky-Golay smoothing (window of 7 points, 2nd order polynomial fitting) followed by AWLS baseline correction and normalization to the Amide I peak (1,650 cm−1). The raw and pre-processed spectral data are shown in Fig. 1, where visual overlaps between breast cancer and healthy control spectra are present throughout the whole spectral region, indicating the need for chemometric techniques to distinguish samples in such complex matrices. The pre-processed spectral data underwent chemometric analysis by several classification techniques (Table 1). Amongst the classification techniques tested, SPA-SVM presented the best classification performance, with an accuracy of 92.9% (94% sensitivity and 91% specificity) for detecting breast cancer samples based on an external test set (15% of samples, n = 71 patients). ~70% of samples (n = 334 patients) were used for model construction and another 15% for internal validation (n = 71 patients). Overall classification performance represented by the F-Score and G-Score values was good (93%), indicating equal performance with or without considering imbalanced data. Figure 2 shows the receiver operating characteristic (ROC) curve for all models. The best ROC curve (area under the curve [AUC] = 0.929) was found for SPA-SVM, indicating an excellent predictive performance. PCA-SVM (AUC = 0.886) and GA-SVM (AUC = 0.871) were, respectively, the second and third best classification algorithms, demonstrating a good classification performance. The spectral variables selected by the best classification model (SPA-SVM) are shown in Fig. 3; these wavenumbers (including a feature at 1742 cm−1) were responsible for class differentiation using SPA-SVM. The tentative biochemical assignments of these variables based on Movasaghi et al. 13 are shown in Table 2. Discussion Breast cancer accounts for approximately 15% of all female cancer deaths and has a 5-year survival rate ranging from approximately 40% in low-income countries to ≥ 80% in developed countries 14. Its incidence is continually increasing worldwide. This is partly due to a change in the distribution of risk factors: e.g., in developed countries such as the UK, there have been significant increases in women giving birth later in life and in the number of women childless by age 45 years.
In addition, there has been an increasing adoption of Westernized lifestyles in developing countries 14 , which may be a risk factor for breast cancer. Mammography-based breast cancer screening is a common practice for early detection of breast cancers, where its efficiency has been demonstrated in randomized controlled trials and observational studies; hence, most organizations that issue recommendations endorse regular mammography as an important part of preventive care 15 . However, although mammography-based breast cancer screening is associated with reduced morbidity and mortality, the majority of women who undergo screening will not develop breast cancer in their lifetime 15 . In addition to the low risk of cumulative exposure to radiation over time and the great discomfort or shame associated with mammography-based screening, false positive results may lead to additional tests and investigations potentially causing psychological distress and anxiety. Conversely, negative results (i.e., where no signs of abnormality are found in the screening) may falsely reassure women when cancer is actually present 14 . Moreover, mammography-based screening may also not benefit all women who are diagnosed with breast cancer, since it may lead to harm in women who undergo further biopsy for abnormalities that may not be breast cancer 15 . For these reasons, less invasive and more accurate breast cancer screening strategies are urgently needed. www.nature.com/scientificreports/ Herein, ATR-FTIR spectroscopy in conjunction with chemometric techniques was used to detect breast cancer in a total cohort of 476 patients recruited over 2 years for an early-stage breast cancer screening program in Natal, Brazil. Breast cancer detection among normal samples was successfully performed based on the blood plasma spectra with 93% accuracy (94% sensitivity, 91% specificity, AUC = 0.929) in an external (blind) cohort of 71 patients using the SPA-SVM algorithm. Sixteen spectral features were responsible for class differentiation in the fingerprint region (Table 2). These are predominantly associated with phosphodiesters (P-O vibrations), polysaccharides (C-O stretching), proteins (CH 3 bending, Amide III, Amide I band), nucleic acids (C=O stretching and C-C ring breathing mode), and lipids (C=O stretching and (C=C) cis ). C-O vibrations in carbohydrates, P-O vibrations in phosphodiesters, and proteins vibrations; these have been previously associated with breast cancer in serum 15,16 . Serum applications for breast cancer detection have been performed using IR spectroscopy by Backhaus et al. 15 , where 98% sensitivity and 95% specificity (using cluster analysis) and 92% sensitivity and 100% specificity [using artificial neural networks (ANN)] was obtained in a study carried out with 196 patients. Likewise, Elmi et al. 16 detected breast cancer in serum-based IR spectroscopy with 76% sensitivity and 72% specificity for breast cancer cases using principal component analysis linear discriminant analysis (PCA-LDA) in a study with 86 samples (43 breast cancer, 43 healthy controls). The results reported herein are higher taking into consideration the large number of patients, where the sensitivity and specificity are found to be > 90%; being comparable to results obtained by more sophisticated methods such as using quantum cascade laser IR imaging, where sensitivity and specificity has been reported at 94% and 86%, respectively, using a random forest classifier 17 . 
However, there are no studies reporting breast cancer screening based on plasma samples using IR spectroscopy for a big cohort of samples. Herein, 476 patients were studied resulting in a diagnostic accuracy, sensitivity and specificity above 90% for cancer detection. Samples. In this study, we evaluated two groups of women. The first, Breast Cancer (BC), refers to a group of women diagnosed with breast cancer, with or without neoadjuvant treatment, and were collected by professionals trained at the Liga Contra o Câncer Hospital (Natal/RN, Brazil), during a period of 2 years. The second, Healthy Controls (HC), refers to a group of women with no previous or current diagnosis of breast cancer, collected at the Prontoclínica Dr. Paulo Gurgel (Natal/RN, Brazil), during the same time period. In both groups, patients were > 18 years old, and family history related to some type of cancer was not taken into account. The Institutional Ethics Committee for Human Research of the Hospital Universitário Onofre Lopes (HUOL), of the Federal University of Rio Grande do Norte (UFRN), Brazil, approved this study (Ethical Approval Number-44113115.1.1001.5292) and informed consent was obtained from all subjects. Also, all the methods carried out in this study were by the approved guidelines. Samples from both groups were obtained after the reading of a Free Informed Consent Form and signature of the patients. Vacutainer tubes BD with 5 mL EDTA were used with disposable vacuum syringes. Thereafter, they were centrifuged for 10 min, and frozen at approximately − 20 °C until the time of analysis. A total of 476 samples were obtained. AtR-ftiR spectroscopy. The samples were removed from the freezer 15 min before analysis to allow thawing. Samples were randomized and, to minimize temporal or instrumental effects, a similar number of samples from both groups were measured on each day. The absorption spectra were obtained using an attenuated total reflection Fourier-transform infrared (ATR-FTIR) spectrometer model IRAffinity-1S (Shimadzu Corp., Kyoto, Japan). The spectra were obtained in the range between 600 and 4,000 cm −1 , with 32 co-added scans and 4 cm −1 spectral resolution (2 cm −1 data spacing). The ATR crystal was cleaned with alcohol (70% v/v) and acetone (P.A.) for each new sample and before setting the new background. A 10-μL staken performed. This procedure was repeated in triplicate. The measurement time for each sample was approximately 5 min. Three spectra collected per sample were first averaged and the following pre-processing was applied to the dataset: truncation to the biofingerprint region (900-1800 cm −1 with 468 wavenumber data points), Savitzky-Golay (SG) smoothing to remove random noise (window = 15 points, 2nd order polynomial fitting), automatic weighted least squares baseline correction, and normalization to the Amide I peak (1,650 cm −1 ). Data analysis. The spectral data import, pre-processing and construction of multivariate classification models were performed using the MATLAB R2014b environment version 8.4 (MathWorks, Inc., Natick, USA) with the PLS-Toolbox version 7.9.3 (Eigenvector Research, Inc., Manson, USA) and laboratory-made routines. All spectra were organized into a data matrix, where samples were represented as rows and the wavenumbers as columns. The samples were divided into three different subsets by the Kennard-Stone (KS) sample selection algorithm 18 : training (70%), validation (15%) and test (15%) sets. 
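A condensed sketch of this acquisition-to-model workflow (truncation to the biofingerprint region, Savitzky–Golay smoothing, baseline correction, Amide I normalization, Kennard–Stone splitting, SVM fitting) is given below. It is an illustration only: the baseline step uses a simple polynomial fit in place of automatic weighted least squares, the Kennard–Stone routine is a generic implementation rather than the toolbox version used by the authors, and the array shapes and data are invented.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def preprocess(spectra, wn):
    """spectra: (n_samples, n_points) averaged absorbance spectra; wn: wavenumbers (cm-1)."""
    keep = (wn >= 900) & (wn <= 1800)                        # truncate to the biofingerprint region
    spectra, wn = spectra[:, keep], wn[keep]
    spectra = savgol_filter(spectra, 15, 2, axis=1)          # SG smoothing, window 15, 2nd order
    for i, s in enumerate(spectra):                          # crude polynomial baseline (stand-in for AWLS)
        spectra[i] = s - np.polyval(np.polyfit(wn, s, 3), wn)
    amide1 = np.argmin(np.abs(wn - 1650))                    # normalize to the Amide I peak (~1650 cm-1)
    return spectra / spectra[:, [amide1]], wn

def kennard_stone(X, n_select):
    """Indices of n_select representative samples (max-min Euclidean distance criterion)."""
    d = cdist(X, X)
    sel = list(np.unravel_index(np.argmax(d), d.shape))      # start with the two most distant samples
    rest = [k for k in range(len(X)) if k not in sel]
    while len(sel) < n_select:
        pick = rest[int(np.argmax(d[np.ix_(rest, sel)].min(axis=1)))]
        sel.append(pick)
        rest.remove(pick)
    return np.array(sel)

# toy stand-in for the 476-sample cohort
rng = np.random.default_rng(0)
wn = np.arange(600.0, 4000.0, 2.0)
X_raw = rng.normal(1.0, 0.1, size=(120, wn.size))
y = rng.integers(0, 2, size=120)                             # 0 = healthy control, 1 = breast cancer

X, wn_fp = preprocess(X_raw, wn)
train = kennard_stone(X, int(0.70 * len(X)))                 # 70% training; remainder split below
rest = np.setdiff1d(np.arange(len(X)), train)
val, test = rest[: len(rest) // 2], rest[len(rest) // 2:]
model = SVC(kernel="rbf", gamma=1.0).fit(X[train], y[train])
print("validation accuracy:", model.score(X[val], y[val]))
```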
The training set was used to build the classification models, while the validation set to optimize and evaluate its internal performance. Finally, the test set was used to evaluate the model classification performance towards external samples. PCA is a feature extraction method widely used for data reduction 19 . It decomposes the pre-processed spectral data into a small number of principal components (PCs) containing scores (variance on sample direction) and loadings (variance on wavenumber direction). The PCA scores are used to assess similarities/dissimilarities between the samples, while the PCA loadings to investigate potential spectral markers. SPA is a forward feature Scientific RepoRtS | (2020) 10:12818 | https://doi.org/10.1038/s41598-020-69800-7 www.nature.com/scientificreports/ selection method 20 . Its purpose is to select wavenumbers whose information content is minimally redundant in order to solve co-linearity problems. The model starts with one wavenumber, then incorporates a new one at each iteration until it reaches a specified number of wavenumbers. SPA does not modify the original data space as PCA does. In SPA, the projections are used only for variable selection purposes. Thus, the relationship between the spectral variables is preserved. On the other hand, the GA uses a combination of selection, recombination and mutation to select a set of variables 21 . The GA aims to reduce the original data in a few number of wavenumbers following a natural evolutionary process based on Darwin's theory where the best set of wavenumbers, in this case considered as a chromosome, is selected according to a fitness function. The GA routine was carried out during 100 generations with 200 chromosomes each where mutation and crossover probabilities were set to 10% and 60%, respectively. The best solution in GA, in terms of fitness value, is obtained after three realizations starting from different random initial populations. Similarly to SPA, GA also does not modify the original data space as PCA does. The SPA/GA fitness is calculated as the inverse of the cost function G , which is defined as follows 24 : where N V is the number of validation samples and g n is defined as: where the numerator is the squared Mahalanobis distance between object x n of class index I(n) and the sample mean m I(n) of its true class; and the denominator is the squared Mahalanobis distance between object x n and the centre of the closest wrong class. The advantages of these variable reduction methods (PCA, SPA and GA) prior discriminant analysis lie in the fact that they efficiently remove co-linearity in the dataset, thus preserving only non-redundant information; they solve dimensionality problems for LDA and QDA; and they speed-up the computational time for SVM. LDA and QDA are discriminant analysis classifiers based on a Mahalanobis distance calculation between the samples; where the main difference between them is that LDA assumes classes having similar variance structures, hence, using a pooled covariance matrix, while QDA assumes classes having different variance structures therefor using the variance-covariance matrix of each class individually for calculation 22 . The LDA classification score for sample i of class k ( L ik ) is calculated for a given class sample in a non-Bayesian form by the following equation 22,25 : where x i is a vector with the input variables for sample i ; x k is the mean of class k ; and C pooled is the pooled covariance matrix between the classes. 
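Both the SPA/GA cost function defined above and the non-Bayesian LDA score reduce to Mahalanobis-distance computations. The sketch below assumes a pooled covariance estimate and hypothetical array names; it mirrors the definitions in the text but is not the authors' routine.

```python
import numpy as np

def mahalanobis2(x, mean, cov_inv):
    """Squared Mahalanobis distance of vector x to a class mean."""
    d = x - mean
    return float(d @ cov_inv @ d)

def lda_score(x, class_mean, pooled_cov_inv):
    """Non-Bayesian LDA score L_ik: squared Mahalanobis distance to the class-k mean."""
    return mahalanobis2(x, class_mean, pooled_cov_inv)

def cost_G(X_val, y_val, class_means, pooled_cov_inv):
    """G = (1/N_V) * sum_n g_n, where g_n is the squared Mahalanobis distance to the true-class
    mean divided by the squared distance to the nearest wrong-class mean; SPA/GA fitness = 1/G."""
    g = []
    for x, yi in zip(X_val, y_val):
        num = mahalanobis2(x, class_means[yi], pooled_cov_inv)
        den = min(mahalanobis2(x, m, pooled_cov_inv) for c, m in class_means.items() if c != yi)
        g.append(num / den)
    return float(np.mean(g))
```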
The QDA classification score for sample i of class k ( Q ik ) is estimated using the variance-covariance for each class k ( C k ) in a non-Bayesian form as follows 22,25 : SVM is a powerful supervised classification method that nonlinearly transform the input sample space into a feature space using a kernel function that maximizes the margins of separation between the sample groups, and then it constructs a linear hyperplane that discriminates the samples from different groups in this feature space 23 . In this study, a radial basis function (RBF) kernel was utilized. The RBF is calculated as follows 26 : where x i and z j are sample measurements vectors, and γ is a tuning parameter that controls the RBF width. In the RBF kernel function, the γ parameter was set to 1. The SVM classification rule is obtained by the following equation 26 : where N SV is the number of support vectors; α i is the Lagrange multiplier; y i is the class membership (± 1); k x i , z j is the kernel function; and b is the bias parameter. These SVM parameters were obtained and optimized via an external validation set. Quality performance. The statistical parameters for the evaluation of the classification models were: accuracy (AC), sensitivity (SENS), specificity (SPEC), Youden's Index (YOU), positive predictive value (PPV), negative predictive value (NPV), F-Score and G-Score. AC is related to the percentage of correct classification achieved by the model. SENS measures the proportion of positive results that are correctly identified while SPEC measures the proportion of negative results that are correctly identified. In this study, when we have a case-control patients approach, sensitivity can be understood as the probability to find a positive result when the disease is present, while specificity can be understood as the probability to find a negative result when the disease is not present. Youden's index (YOU) evaluates the classifier's ability to avoid failure. The PPV measures the proportion (2) g n = r 2 x n , m I(n) min I(m)� =I(n) r 2 X n , m I(m) Scientific RepoRtS | (2020) 10:12818 | https://doi.org/10.1038/s41598-020-69800-7 www.nature.com/scientificreports/ of positives that are correctly assigned (its value varies between 0 and 1); the NPV measures the proportion of negatives that are correctly assigned (its value varies between 0 and 1); the F-score represents the weighted average of the precision and sensitivity; and the G-score accounts for the model precision and sensitivity without the influence of positive and negative class sizes 27 . These parameters are calculated based on the equations shown in Table 3. In addition, a receiver operating characteristics (ROC) curve was generated to all models. The area under curve (AUC) value was calculated to evaluate how well the model can distinguish the samples between the different classes analysed.
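These figures of merit can all be computed directly from a binary confusion matrix. The sketch below uses a hypothetical 71-sample test-set confusion matrix chosen to be approximately consistent with the reported 94% sensitivity, 91% specificity and 92.9% accuracy; the F-score and G-score are taken here as the harmonic and geometric means of precision and sensitivity, matching the descriptions above.

```python
def classification_metrics(tp, fn, tn, fp):
    sens = tp / (tp + fn)                  # sensitivity (true positive rate)
    spec = tn / (tn + fp)                  # specificity (true negative rate)
    acc = (tp + tn) / (tp + fn + tn + fp)  # accuracy
    ppv = tp / (tp + fp)                   # positive predictive value (precision)
    npv = tn / (tn + fn)                   # negative predictive value
    youden = sens + spec - 1               # Youden's index
    f_score = 2 * ppv * sens / (ppv + sens)
    g_score = (ppv * sens) ** 0.5
    return dict(AC=acc, SENS=sens, SPEC=spec, YOU=youden, PPV=ppv, NPV=npv, F=f_score, G=g_score)

# hypothetical test set of 71 patients (35 cancer, 36 controls)
print(classification_metrics(tp=33, fn=2, tn=33, fp=3))
```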
4,348.4
2020-07-30T00:00:00.000
[ "Medicine", "Chemistry" ]
Excitation thresholds of field-aligned irregularities and associated ionospheric hysteresis at very high latitudes observed using SPEAR-induced HF radar backscatter On 10 October 2006 the SPEAR high power radar facility was operated in a power-stepping mode where both CUTLASS radars were detecting backscatter from the SPEAR-induced field-aligned irregularities (FAIs). The effective radiated power of SPEAR was varied from 1–10 MW. The aim of the experiment was to investigate the power thresholds for excitation ( Pt ) and collapse ( Pc) of artificiallyinduced FAIs in the ionosphere over Svalbard. It was demonstrated that FAI could be excited by a SPEAR ERP of only 1 MW, representing only 1/30th of SPEAR’s total capability, and that once created the irregularities could be maintained for even lower powers. The experiment also demonstrated that the very high latitude ionosphere exhibits hysteresis, where the down-going part of the power cycle provided a higher density of irregularities than for the equivalent part of the up-going cycle. Although this second result is similar to that observed previously by CUTLASS in conjunction with the Tromsø heater, the same is not true for the equivalent incoherent scatter measurements. The EISCAT Svalbard Radar (ESR) failed to detect any hysteresis in the plasma parameters over Svalbard in stark contract with the measurements made using the Tromsø UHF. Introduction The artificial modification of the Earth's ionosphere was first noted in the 1930s when newly built broadcast radio stations began operations.Radio signals propagating through the region of the ionosphere modified by the high power radio waves were influenced through a cross modulation effect (Tellegen, 1933;Bailey and Martyn, 1934).The first purpose-built high power transmitters (or heaters), intended for ionospheric studies, appeared in the 1970s and the first reported ionospheric modification (heating) experiments were performed at Platteville, Colorado.These experiments revealed that the high power "pump" wave led to the generation of field-aligned electron density irregularities (FAIs;Fialer, 1974;Minkoff et al., 1974).The FAIs are excited as a result of the coupling of the pump wave to a type of electrostatic plasma waves known as upper hybrid (UH) waves which occur in a narrow altitude region around the upper hybrid height.These UH waves cause an enhancement in the electron temperature, hence the term "heating".After several tens of seconds the electron density in the active region increases due to the temperature dependence of the recombination.However, in the upper F-region (about ∼200 km) other processes can also lead to density depletions (Gurevich, 1978;Ashrafi et al., 2006).Detailed discussions on the excitation of FAIs and other heater-induced phenomena are given in the reviews by Robinson (1989) and Stubbe (1996).Once they exist, the heater-induced FAI provide intense and coherent targets for HF radars such as CUTLASS, which receive backscatter from field-aligned electron density irregularities (e.g.Robinson, 1997).Current theory suggests that the FAI are generated by the thermal parametric instability (TPI).There are two regimes of the TPI which relate to soft and hard excitation, depending on whether initial density perturbations are infinitesimal or finite respectively (e.g., Istomin and Leyser, 1997).The former regime corresponds to the thermal oscillating twostream instability (TOTSI; e.g.Grach et al., 1978;Dysthe et al., 1983) while the latter regime is the resonance 
instability reported by Vaskov and Gurevich (1977).The nonlinear stage of the soft regime resembles the resonance instability.The TOTSI is a well-known theory for the creation and rapid growth of FAIs and one which readily explains the radar observations (e.g.Robinson, 1997). P (dB) Initially, the TOTSI causes a linear conversion of electromagnetic pump wave energy into upper hybrid waves (Vaskov and Gurevich, 1977;Dysthe et al., 1983;Robinson, 1988).This coupling requires the presence of plasma density gradients (pre-existing FAIs) and leads to an increase in the FAI amplitude.Once this amplitude exceeds a threshold value (typically after a few milliseconds) the interaction becomes nonlinear and the irregularity amplitude increases explosively.A simple heuristic (or fitting) model (Dysthe et al., 1983;Robinson, 1989) leads to a relationship between the pump power, P , and the level of anomalous absorption, , experienced by the pump wave, given by where P 1 and P 2 , respectively, are the required power thresholds for the initial and explosive stages of FAI growth, 0 is the level of anomalous absorption before the heater was activated and a is a factor relating to the field parallel scale length of the FAIs (see Robinson, 1989).The two terms in Eq. ( 1) represent the initial and explosive instabilities.Since anomalous absorption ∝ n 2 , where n is the FAI amplitude, then irregularity saturation as governed by Eq. ( 1) is illustrated as the solid curve in Fig. 1 in which is plotted as a function of P .Hysteresis effects in the ionospheric plasma are a consequence of a thermal parametric instability such as the TOTSI (e.g.Grach et al., 1978).The existence of hysteresis was confirmed by the ionospheric modification experiments at Gorkii in Russia reported by Erukimov et al. (1978) and subsequently by Stubbe et al. (1982) and Jones et al. (1983) using the heating facility at Tromsø.A hysteresis effect occurs in the generation of FAI because a threshold power, P t , (shown in Fig. 1) required for the onset of FAI growth is higher than the critical pump power, P c , at which the FAI can no longer be sustained and, hence, collapse.The effective threshold power is given by where P t is larger than both P 1 and P 2 .So, once the heater power P > P t then the FAI form explosively and saturation is rapid.This is demonstrated in Fig. 1 by the path ABCD. If the pump power is then steadily reduced, the FAI do not collapse until P < P c , therefore following path DEFA.This paper concerns the excitation and hysteresis of FAI induced by the SPEAR (Wright et al., 2000;Robinson et al., 2006) high power heating facility, located on Spitsbergen.Since SPEAR is the most northerly heating facility in the world, these observations have never been possible before at these very high latitudes (75 • N geomagnetic, L ∼ 15).Similar experiments have been undertaken at lower latitudes.In particular, this study will compare observations over SPEAR with those in the vicinity of the EISCAT Heater near Tromsø, Norway (66 • N geomagnetic, L ∼ 6.6), and presented by Wright et al. (2006).It will be shown that FAIs can be artificially excited using very low effective radiated powers (ERPs) from SPEAR.Hysteresis effects are clearly observed in the CUTLASS radar backscatter from both Tromsø and SPEAR.However, stark differences are apparent in the incoherent scatter measurements provided by the EISCAT UHF radars. 
The SPEAR high power facility SPEAR (Space Plasma Exploration by Active Radar; Wright et al., 2000;Robinson et al., 2006) is a facility capable of radiating high power radio waves in the high frequency (HF) band in the range 4-6 MHz.It was deployed on the island of Spitsbergen (in the Svalbard archipelago) at a geographic latitude of 78 is collocated with the EISCAT Svalbard Radar (ESR).First operations occurred in 2003.SPEAR utilises direct digital synthesis (DDS) technology to be able to point its beam in any direction essentially instantaneously.It currently employs a 4×6 array of rhombically broadened dipole antennas and possesses a frequency-dependent beam width which is approximately 14 • × 21 • wide with a gain of 21 dBi (Wright et al., 2000).During the experiment described in this paper, SPEAR was operated at a frequency of 4.45 MHz with O-mode polarisation and stepped its power output up to a maximum ERP of 10 MW.The SPEAR beam was directed along the magnetic field. O-mode polarised radio waves radiated by SPEAR are capable of interacting with the ionospheric plasma and can couple to upper hybrid waves, as described above, to excite FAI.When the ionosphere is underdense, the radio waves radiated by SPEAR can also interact with plasma deeper in the magnetosphere.In addition, SPEAR operates not only as an ionospheric heating facility but also possesses a receiving system and can thus also function as a radar.First results and more detailed description of various SPEAR experiments are presented by Robinson et al. (2006). Þykkvibaer Along the radar beams the range is marked every 10 range gates. The CUTLASS radars The global SuperDARN radar network (Greenwald et al., 1995) currently consists of 19 HF coherent radars, 12 of which operate in the Northern Hemisphere.Two of the Northern Hemisphere SuperDARN radars comprise the CUTLASS (Co-operative UK Twin Located Auroral Sounding System) system.CUTLASS is a frequency agile bistatic HF radar system (e.g.Milan et al., 1997) from FAIs in the ionosphere.There is an aspect-angle dependence for scattering, which requires that the radio wave k vector is close to orthogonal to the magnetic field.The experiments described here utilise the SPEAR high power HF Heating facility which can generate artificial field-aligned irregularities as described earlier and thus provide a region of backscatter in the CUTLASS fields of view (e.g.Robinson et al., 1997) when backscatter may not already be present.This effect is illustrated schematically in Fig. 2. The detection of artificial backscatter by HF radar then provides a powerful way of diagnosing plasma processes (e.g.Robinson et al., 1997) and observing geophysical phenomena (e.g.Yeoman et al., 1997). During the experiments relevant to this paper, the CUT-LASS radars operated in "stereo" mode where, by utilising some of the radar's spare duty cycle, each radar was able to simultaneously sound over two different scan patterns.These are shown on the maps illustrated in Fig. 
3.In their standard operational mode the radars would sweep over 16 beams with a dwell time of 3 or 7 s on each beam and a range resolution of 45 km.During this experiment, however, the radars were sounding over reduced regions, the blue and red fields of view indicating the two scan patterns.Only data for the red fields of view (centred over SPEAR) are considered in this paper.The first range sounded on both the Hankasalmi and Þykkvibaer radars was 1485 km.SPEAR lies at approximately 1800 km from the Hankasalmi radar and about 1900 km from Þykkvibaer.The dwell (integration) time on each radar beam was 1 s for Hankasalmi and 2 s for Þykkvibaer.Each radar scanned over 5 beams in the red field of view and employed a three frequency scan mode during this experiment.Thus the final time resolution of the radar data for a single beam and a narrow frequency band (12.3-12.6MHz for all the data shown here) is 15 s for Hankasalmi and 30 s for Þykkvibaer.The high backscatter powers that are characteristic of artificially generated irregularities make it possible to integrate data over such short dwell times since the signal to noise levels are high.Only data from beam 9 of the Hankasalmi radar and beam 6 of Þykkvibaer have been employed as these beams overlie the location of SPEAR. The EISCAT Svalbard Radar (ESR) The EISCAT Svalbard radar system (ESR; Wannberg et al., 1997) is collocated with SPEAR and was used to detect SPEAR-enhanced incoherent radar backscatter from which plasma parameters, such as electron concentration and ion and electron temperatures, may be derived.The ESR operates at frequencies close to 500 MHz and is therefore sensitive to 30 cm wavelength plasma waves.On 10 October 2006, the ESR ran an experimental mode that used the steerable 32 m dish (pointed field-aligned, i.e. with a geographic azimuth of 182.1 • and a geographic elevation of 81.6 • ) to collect ion line data using long pulses.Although the ion line spectra were obtained on two channels with transmitter frequencies of 499.9 and 500.3MHz, only data from the 499.9MHz channel have been presented in this paper.The height-discriminated ion line data, which had a temporal resolution of 5 s, were obtained over an altitude range of 86-481 km with a resolution of approximately 28 km. Observations On 10 October 2006 SPEAR was operated such that its output power was increased stepwise up to an ERP of 10 MW and then stepped back down.The power, with respect to 10 MW, was varied from −10 dB to 0 dB and back to −10 dB in 2 dB steps with a dwell of 1 min at each power level.This cycle was repeated several times.A gap of four minutes was left between cycles to prevent the preconditioning of the ionosphere from influencing subsequent measurements. 
A complete cycle, including off period, thus lasted 15 min.Figure 4 shows range-time-intensity (RTI) plots of the received backscatter power at Þykkvibaer (upper panel) and Hankasalmi (lower panel) during the experiment.The data presented are for the radar beams which overlie SPEAR and the ESR.At the beginning of the interval shown SPEAR was operated a few times at full power for a few minutes on then off in order to identify the SPEAR-induced backscatter in the CUTLASS radars in real time.Then, at 11:19 UT the first of seven consecutive power-stepping cycles commenced.The backscatter were detected by both CUTLASS radars simultaneously although the SPEAR-induced scatter at Hankasalmi, which appear as the patches of higher intensity in the plots, were embedded in a region of weak natural scatter.Hence, the rest of this paper will focus on the Þykkvibaer data where interpreting the weakest returns is more straightforward.Reproduced in the top panel of Fig. 5 are the Þykkvibaer data during one of the cycles, selected since it is one of the clearest and simplest examples.The vertical coloured stripe immediately before 12:43 UT is most likely caused by noise or interference in the radar data.The lower panel shows the SPEAR output ERP relative to the maximum power of 10 MW.The patch of scatter can clearly be seen to intensify as the SPEAR output increases and then fall again as expected.However, with respect to the peak SPEAR output the observed shape and intensity of the patch of artificial scatter is clearly asymmetric.This is indicative of ionospheric hysteresis, where the backscatter intensity is higher for the power-down cycle than for the power-up cycle at the same SPEAR output power.Essentially, SPEAR has preconditioned the ionosphere prior to the down-going part of the power cycle.The two vertical dashed lines are placed in an attempt to indicate the cut-offs of the scatter observed by the radar.P t is related to the power threshold for the explosive excitation of the SPEAR-induced irregularities.It can be seen that P t occurs at about 8 dB below the maximum (equivalent to an ERP of about 1.6 MW).It seems that the threshold power to prevent collapse of the irregularities (P c ) was below the minimum power (∼1 MW) employed during this experiment.It should be noted that the CUTLASS radars may be desensitised to very weak backscatter returns as a result of absorption losses along the ∼4000 km radio wave path between the radar and target and back again.If observed locally, the power thresholds might actually be smaller. Discussion Wright et al. ( 2006) reported results from a series of powerstepping experiments which were undertaken in 1997 using the Tromsø heating facility in conjunction with the EISCAT mainland UHF radar and CUTLASS.The programme was designed to investigate power thresholds for the creation of FAIs and the associated ionospheric hysteresis.The findings of Wright et al. (2006) will be used for comparison with the observations presented here.Following the commissioning phase of SPEAR, it was decided to run similar experiments to those using the Tromsø heater in order to assess the nature of FAI formation at very high latitudes. 
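(For reference, the power-stepping cycle described in the Observations section is easy to reconstruct numerically; the short sketch below converts the −10 dB to 0 dB ladder into ERP values and confirms the 15-min cycle length. It is an illustration of the schedule only, not operational SPEAR code.)

```python
# SPEAR power-stepping cycle: -10 dB to 0 dB and back in 2 dB steps (relative to 10 MW ERP),
# 1 min dwell per level, followed by a 4 min off period between cycles.
max_erp_mw = 10.0
steps_db = list(range(-10, 1, 2)) + list(range(-2, -11, -2))   # up then down, 11 dwells in total
erp_mw = [max_erp_mw * 10 ** (db / 10.0) for db in steps_db]
cycle_min = len(steps_db) * 1 + 4
print([round(p, 2) for p in erp_mw])   # [1.0, 1.58, 2.51, 3.98, 6.31, 10.0, 6.31, ..., 1.0]
print(cycle_min)                       # 15, matching the quoted complete cycle length
```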
The power-stepping experiments which took place in Tromsø in 1997 demonstrated that artificial FAI could be excited with relatively low heater output powers and that hysteresis played a role in maintaining them.However, the Tromsø heater had a maximum ERP of 155 MW during those experiments and was configured such that the smallest power increment available was 3.9 MW.Wright et al. (2006) could only conclude that the power thresholds P t and P c were less than 3.9 MW.In 2006, SPEAR operated such that it employed a finer step-size during the experiment.The results presented here strongly indicate that the threshold for creating the artificial FAI (P t ) lies in the range 1.0-1.6MW.Once created, the FAI can be maintained with much weaker powers.Indeed the true value of P c was still outside of the resolution of this experiment.It is only possible to say that P c < 1 MW.The marked difference between P t and P c is caused by the irregularity density function given in Fig. 1 output ERP (kW) at a given altitude, R (km), above the heater is given by (Robinson, 1989) The heater interaction altitude (at which the FAI are excited), R, was determined to be 158 km for this experiment (see below).Hence the threshold ERP, P t , of 1 MW equates to a threshold electric field, E t , of 50 mV/m.This is slightly greater than predicted thresholds (e.g.Istomin and Leyser, 1997). When SPEAR was constructed concerns were raised that the maximum ERP achievable, being only ∼10% of the heater at Tromsø, might not be adequate to artificially excite FAI.Not only was that shown not to be the case by Robinson et al. (2006) but the results presented here also indicate that SPEAR ERPs of only 1/30th of its nominal maximum capability can excite the FAIs.Thus SPEAR offers an opportunity to study the formation of FAIs on fine power scales.In addition, forthcoming experiments will attempt to ascertain the true value of P c . Understanding the observed ionospheric hysteresis utilising SPEAR is far less straightforward.To examine the observed hysteresis, the CUTLASS radar backscatter powers from the centre of each heated patch (the patches move slightly within the radar field of view as a result of natural ionospheric variability along the radar radio wave path) have been separated into up-and down-going parts of the cycle and plotted as a function of SPEAR ERP. Figure 6 shows the average effect over all seven cycles.This clearly demonstrates that ionospheric hysteresis is influencing the CUT-LASS radar measurements since the power at a given ERP is higher on the down-going part of the cycle as a result of the preconditioning of the ionosphere over SPEAR.These observations are very similar to those presented by Wright et al. (2006) following the Tromsø experiments except that for the current study the difference in the up-and down-going power levels is significantly smaller.The largest difference in powers observed in Fig. 
6 is 5 dB whereas at Tromsø that difference was nearer 20 dB at equivalent ERPs.Since the two experiments were conducted at similar points in the solar cycle then perhaps it is likely that the preconditioning inflicted on the ionosphere by the Tromsø heater was much greater (due to the order of magnitude difference in maximum ERP of the two facilities) than that caused by SPEAR.For comparison with the setup employed for this study, the experiments at Tromsø utilised a heater operated with an Omode polarisation at a frequency of 4.544 MHz with its beam pointing along the field line.The maximum effective radiated power (ERP) on these occasions was 155 MW and the interaction (or UH) height was identified by the UHF radar to be in the altitude range 180-200 km (Wright et al., 2006). The backscatter detected by the CUTLASS radars indicate that a hysteresis effect is observed and that, perhaps, the same mechanism is responsible for exciting the artificial FAIs both over Svalbard and over the Norwegian mainland.However, incoherent scatter radar measurements taken within the heated volumes over SPEAR and over the Tromsø heater are very different.Figure 7 is reproduced from Wright et al. (2006) and shows a typical example of hysteresis identified in ionospheric electron temperatures within the heated volume observed over Tromsø using the EISCAT UHF radar, over a height range which encompasses that where the artificial FAI are generated (the upper-hybrid height, as identified using the ion-line overshoot effect; e.g.Rietveld et al., 2000).Similar hysteresis was observed in changes in electron density induced by the heating effect.In contrast with these measurements, Fig. 8 illustrates the modified electron density (upper panel) and electron temperatures (lower panel) as observed by the ESR within the heated volume over SPEAR.The plotted data represent measurements averaged over all seven SPEAR cycles at the interaction height (i.e. the approximate height at which the FAI were being excited), identified by the ESR to be 158 km.The changes in electron temperature and density plotted in Figs.7 and 8 are relative to the measured values immediately before each heater cycle commenced.The data are again separated into those recorded during the up-going and down-going parts of the SPEAR power cycle.The averaged data are representative of the measurements over each individual cycle and demonstrate that no hysteresis was observed. The reason why the ESR measurements are so strikingly different to those at Tromsø (despite the fact that the CUT-LASS radar scatter suggest that the FAI are exhibiting hysteresis effects) is unclear.It is true to say that incoherent scatter radars detect signals from very different structures to those to which the coherent radars are sensitive.Also, the ESR transmit frequency is only half that of the mainland UHF radar.However, it seems unlikely that either of these points can explain the differences. 
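A minimal sketch of the kind of cycle-averaged, branch-separated analysis described above (Figs. 6 and 8) is given below: the measured quantity is referenced to its pre-cycle value, split into up-going and down-going parts of the power ramp, and averaged over all cycles at each ERP step. The array names, the phase flag and the binning are hypothetical and are not taken from the published analysis.

```python
import numpy as np

def hysteresis_branches(erp_mw, signal, phase, baseline):
    """
    Average a measured quantity (e.g. backscatter power in dB, or delta-Te)
    over all heater cycles, separately for the up-going and down-going parts
    of the SPEAR power ramp, relative to the pre-cycle baseline.

    erp_mw   : 1-D array of SPEAR ERP values, one per sample
    signal   : 1-D array of the measured quantity at those samples
    phase    : 1-D array of strings, "up" or "down", labelling the ramp phase
    baseline : value measured immediately before the cycle commenced
    """
    erp = np.asarray(erp_mw, dtype=float)
    sig = np.asarray(signal, dtype=float) - baseline   # change relative to pre-cycle value
    ph = np.asarray(phase)
    out = {}
    for branch in ("up", "down"):
        mask = ph == branch
        steps = np.unique(np.round(erp[mask], 1))      # bin by ERP step
        out[branch] = {float(s): float(sig[mask][np.round(erp[mask], 1) == s].mean())
                       for s in steps}
    return out

# Hysteresis shows up as out["down"][s] > out["up"][s] at the same ERP step s;
# the absence of such a difference is what the ESR data discussed above display.
```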
Other experiments undertaken utilising the ESR and SPEAR have also noted contrasts between the measurements taken at the two locations (e.g.Robinson et al., 2006).A relevant example of this is apparent in the time evolution of the parametric decay instability (PDI).At Tromsø, when the pump wave is activated, the radio waves interact linearly with Langmuir wave modes which exist close to the reflection height of the pump wave.This initially leads to a large signal intensification observed in the ion line spectra observed by the EISCAT UHF radar.Shortly afterwards the thermal oscillating two-stream instability (TOTSI) is believed to explosively excite FAI at the upper hybrid (UH) height (below the pump wave reflection height).The FAIs then act to absorb the pump wave energy before it can reach the reflection height and thus the PDI signal is quenched.This is often called the overshoot effect.However, during SPEAR experiments on Svalbard it has been noted (Robinson et al., 2006) that the PDI signal sometimes persists for long periods and even indefinitely.However, the CUTLASS radar measurements indicate the presence of irregularities.Perhaps this indicates that the density of the artificial FAI has not saturated or is, indeed, small.In either case, the "anomalous" action of the PDI over Svalbard may be consistent with the lack of observed hysteresis in the ESR measurements. The saturated magnitude of heater-induced density striations cannot exceed a few percent (Gurevich et al., 1995) and, if the UH damping is linear, the saturated effective amplitude of the trapped UH waves should roughly be proportional to the pump amplitude.As a result, T e /T e0 ∼ E 2/5 0 ∼ P 1/5 pump , where T e /T e0 is the ratio of the change in electron temperature to the unperturbed electron temperature, E 0 is the electric field of the pump wave at the UH height and P pump is the pump power.However, below 200 km inelastic losses become significant in limiting T e , especially at low heater ERP.Thus, a 30 km difference in altitudes between the ESR and Tromsø UHF radar observations (as is the case in this comparison) could lead to very different observed behaviour in the electron temperatures.It seems unlikely that FAIs at 158 km and low ERP would have reached saturation in magnitude.In addition, T e could be so small that it might not be detected by the ESR. Conclusion The SPEAR high power radar performed a power stepping experiment to investigate the properties of the ionospheric plasma over Spitsbergen, Norway.The high power radio waves excited artificial field-aligned irregularities (FAIs) which were then monitored by both CUTLASS radars and the EISCAT Svalbard Radar (ESR).It has been demonstrated that SPEAR is capable of exciting FAIs with an effective radiated power of only 1 MW, representing only 1/30th of the facility's nominal maximum output.Once the FAI are excited they are easily maintained and the threshold for collapse of the FAI was found to be less than 1 MW but beyond the range of the experiment undertaken.The determination of this parameter will be the subject of a future experiment. 
CUTLASS measurements indicated clear hysteresis in the artificially excited FAIs, in that the received backscatter power from these intense ionospheric targets was higher for the power-down part of the cycle than for power-up. This agrees with observations previously made utilising the EISCAT heating facility at Tromsø. However, incoherent scatter radar measurements made with the ESR and the UHF radar at Tromsø appear radically different. Over Svalbard, no hysteresis was observed and the measured electron densities and electron temperatures appeared chaotic within the heated volume. This is in stark contrast with observations over Tromsø. Although this effect has been noted in other types of experiment, the reason for the difference is, as yet, unclear and is the subject of further investigation. It has been postulated that this might be the result of different physical processes dominating the ionospheric interactions in the two regions, which underlie very different magnetospheric morphologies.

Fig. 1. A theoretical curve showing the relationship between anomalous absorption and heater power (solid curve). The dashed lines show the different paths that lead to the hysteresis effect.

Fig. 2. A schematic representation of field-aligned irregularity generation by a high power heater and a half-hop radio path of an HF radar which receives backscatter from these structures.

Fig. 3. A map illustrating the beam patterns employed by the CUTLASS radars during this experiment. The two fields of view of the radars are made possible by operating each radar in its stereo mode. Along the radar beams the range is marked every 10 range gates.

Fig. 7. The temperature change of the electrons in the ionosphere modified by the Tromsø heater, normalised and plotted as a function of the heater output power. The up- and down-going parts of the heater cycle are overlaid to illustrate the hysteresis effect (reproduced from Wright et al., 2006).
Jet physics in hadronic collisions : theoretical remarks I review some of the lessons that are emerging from the analysis of jet production at the LHC. Introduction Jets are the most direct and frequent signal of hard interactions in hadronic collisions (for a recent review, see [1]).They reflect the dynamics of the quarks and gluons produced in large-momentum-transfer (Q 2 ) processes.Jets may arise from the decay of heavier particles, and from higher-order radiative processes.In the former case, they can be used as discovery probes, or as means to measure the properties of their parent particles (e.g. the top quark mass).In the latter case, they directly probe the QCD dynamics (e.g.higher-order corrections or parton distribution functions (PDFs)), and they determine final states that can act as important backgrounds to searches or measurements. We should therefore look at jet physics in hadronic collisions from three perspectives: 1. Opportunities.Fully exploit jet probes to: • do precision measurements (PDFs, m top , α s , hard diffraction, ...); • probe the existence of BSM phenomena. • Question: how much can we push, and trust, the precision of these predictions? 2. Challenges.Verify and improve the suitability of our calculations under different and difficult dynamical regimes: • high orders in perturbation theory (PT): e.g.loop corrections, or processes proportional to large powers of α s , as in multijet production; • resummation: e.g. in processes characterized by very different hard scales, say µ 1 µ 2 , where the expansion parameter α S log(µ 1 /µ 2 ) is large.Typical examples here include production of multiple soft jets accompanying the creation of very heavy objects (t t+multijets), production of b quarks in high-E T jets (µ 1 ∼ E T and µ 2 ∼ m b ), etc. • Question: how reliable are our approximations? What is the best theoretical approach to a given observable (e.g.shower or fixed-order)? As shown in the various contributions to the plenary and paralel sessions of this Conference, the amount of measurements emerging from the analysis of data at the Tevatron, HERA and LHC is immense.Rather than reviewing all of this material, I shall focus here on a few specific examples, to give an overall assessment of the status of theoretical predictions, and highlight outstanding challenges and opportunities that emerge from the data. I would like to dedicate this contribution to the memory of Professor Kunitaka Kondo (1935Kondo ( -2011)).A gentle and generous man, who greatly contributed to the Tevatron program, to the understanding of jets in p p collisions, and to their use as analysis tools and probes of BSM phenomena, and who organized the last Hadron Collider conference to take place in Japan [2]. Inclusive jet cross section The E T distribution of inclusive jets is a direct probe of PDFs and of α S .Its behaviour at the largest values of E T , furthermore, allows to test the point-like nature of quarks.The robustness of such test, on the other hand, relies on the precision with which PDFs are known [3], stressing once more the relevance of the direct PDF determination, particularly in the large-x region.In this respect, measurements at large rapidity (y) are particularly important, since at large y one can probe the large-x behaviour of PDFs in a region of Q 2 where the absence of possible new physics effects has already been verified, and these do not influence the interpretation of the results in terms of PDFs. 
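To see quantitatively why large-rapidity jets probe large x, one can use the standard leading-order 2 → 2 kinematics. The short sketch below is textbook material rather than anything specific to the analyses discussed here; the chosen E_T, rapidities and beam energy are only examples.

```python
import math

def parton_x(pt_gev, y3, y4, sqrt_s_gev):
    """Leading-order momentum fractions probed by a dijet of transverse
    momentum pt and rapidities y3, y4 (massless 2 -> 2 kinematics)."""
    x1 = pt_gev / sqrt_s_gev * (math.exp(y3) + math.exp(y4))
    x2 = pt_gev / sqrt_s_gev * (math.exp(-y3) + math.exp(-y4))
    return x1, x2

# A 100 GeV dijet at central rapidity vs. one boosted forward, at sqrt(S) = 7 TeV:
print(parton_x(100.0, 0.0, 0.0, 7000.0))   # x1 = x2 ~ 0.029
print(parton_x(100.0, 3.0, 3.0, 7000.0))   # x1 ~ 0.57, x2 ~ 0.0014
```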
Next-to-leading order (NLO) parton-level QCD predictions for inclusive jet cross sections at hadron colliders have been available for a long time [4][5][6].To compare these predictions with data, the jet is defined as the sum of the one or two partons that can be clustered into an individual jet.This relies on the fact that, for inclusive quantities such as a jet cross section, the shower evolution, and thus the full-order resummation of collinear and soft emissions, modifies the predictions only by terms that are formally beyond NLO.The parton-level jet energy is then + [ E( ) -E( ) ] Figure 1.Schematic representation of the corrections needed to compare a fixed-order NLO calculation of jet rates with data: , where E NLO is the energy of the 1 or 2 partons in the NLO final state, E MB is the contribution from the fragments of the protons, and ∆ had E is the energy shift before and after the hadronization of the partons emitted during the shower evolution.corrected for two non-perturbative (non-PT) effects, which cannot be accounted for by the pure parton-level result: the presence of the underlying event (namely energy deposited inside the jet by the fragments of the colliding protons), and the hadronization corrections (partonic energy can be dragged in or out of the jet cone when partons are brought together to form hadrons).This is shown schematically in fig. 1. As an example, fig. 2 shows the results published by CMS, from the first 7 TeV run [8].Overall the agreement, within the quoted uncertainties, is excellent.Few remarks are nevertheless in order. The large theoretical systematics at small E T are dominated by non-PT effects; for central rapidities, the agreement of the data with the central value of the prediction is however excellent, and this therefore suggests that this systematics is likely overestimated.The data could therefore be used to constrain more tightly the impact of non-PT physics.The agreement however deteriorates at larger y, where it is only by fully exploting the non-PT systematics that data agree with theory within 1σ.Thus the question is whether these systematics are correlated or not across the y range.If they are, then there are indications of a discrepancy that cannot be attributed to non-PT physics. At large E T , on the other hand, the data tend to be on the low side of the predictions.Different PDFs are quite compatible with each other, and what dominates the theory systematics is the spread of any individual PDF set.There are therefore indications that one could use these data to improve the PDF knowledge.The discrepancy becomes bigger and bigger at large y, where however also the experimental systematics, driven by the jet energy scale, grows.The question is, once again, how much of this uncertainty is correlated between the central and forward y regions. The trend is different with a larger jet cone radius, as shown in fig.3, where higher-statistics CMS data [9], analyzed with a radius of 0.7, are on the high side of the theory curves.Since E T slopes at large y are much steeper than at central y, they are much more sensitive to small changes in the jet energy.It is therefore suggestive to conclude that higher-order perturbative corrections due to multiple gluon emission in the shower have a non-negligible nu- merical impact on the jet energy, in spite of their being formally of higher order. 
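Stepping back to the correction scheme sketched in Fig. 1: in practice it amounts to a bin-by-bin non-perturbative correction applied to the parton-level NLO spectrum before comparing with data. The sketch below uses hypothetical correction factors and cross sections purely for illustration; the actual factors are determined by the experiments from Monte Carlo studies.

```python
import numpy as np

# Parton-level NLO cross section in bins of jet E_T (placeholder numbers, pb).
et_bins_gev  = np.array([50., 100., 200., 400., 800.])
sigma_nlo_pb = np.array([1.2e4, 9.5e2, 3.1e1, 4.2e-1, 1.5e-3])

# Bin-by-bin non-perturbative factors: the underlying event adds energy to the
# jet, hadronization drags energy into or out of the cone.  The values below
# are placeholders; only their qualitative E_T dependence is typical.
c_underlying_event = np.array([1.10, 1.05, 1.02, 1.01, 1.00])
c_hadronization    = np.array([0.92, 0.96, 0.98, 0.99, 1.00])

sigma_corrected_pb = sigma_nlo_pb * c_underlying_event * c_hadronization
for et, s in zip(et_bins_gev, sigma_corrected_pb):
    print(f"E_T = {et:5.0f} GeV : corrected NLO = {s:10.3e} pb")
```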
This conclusion is consistent with the results of an ATLAS study [10], in which the data are compared to fixed-order NLO predictions from [11] and to the results of a complete NLO+shower calculation [12], done with POWHEG and either Pythia [13] or Herwig [14] for the shower evolution.In the case of Pythia, POWHEG's agreement with data is significantly better than the fixedorder NLO result, particularly at large E T and large y.Furthermore, noticeable differences appear in general between Pythia and Herwig showers, and different tunes of the underlying event. So, while the overall agreement of data and theory is certainly satisfactory, I believe that few things need to be better understood in order to reliably use high-E T and high-y jet data to improve PDFs. Jet cross section ratios The availability of LHC data at different energies opens a new opportunity for precise measurements with jets.In fact, a large fraction of the experimental and theoretical systematics in the jet cross section measurements have some degree of correlation at different energies, and may therefore cancel in the measurement of cross section ratios.Historically, jet production at different energies have been compared as a function of the x T = E T /E beam variable.This reduces the theoretical systematics related to PDFs, since these are probed at the same x values.A direct comparison of E T spectra, on the other hand, enjoys a correlated theoretical scale uncertainty (the underlying hard process is the same at the two energies), as well as more closely correlated experimental systematics (e.g.jet energy scale).The larger PDF systematics of this ratio, compared to x T ratios, is also desirable, since it gives potential sensitivity to improve our knowledge of the PDFs themselves!Several examples of the remarkable theoretical precision that can be achieved through cross section ratios at different energies, are given in [15].For example, that study finds that: Notice that the scale uncertainty is much smaller than the PDF uncertainty, giving the opportunity to improve the PDF knowledge with a direct comparison with data. A first experimental exploration of E T cross section ratios, using data from the short 2011 run at against parton level NLO and against the NLO+shower result of POWHEG, using different choices of parton showers and underlying event tunes.The comparison with the results of fig. 4 is quite telling: on one side, for a fixed PDF the spread of theoretical predictions is much smaller in the ratio than in the 7 TeV cross section, while the PDF systematics remains quite large.Notice also the very small experimental uncertainty, which blows up only at the largest E T values, dominated by the poor statistics at 2.76 TeV.Reference [16] also includes a first study of the impact of the inclusion of these data in PDF fits, showing that they can indeed shift the central values, and reduce the overall uncertainty, of the gluon density over a wide range of x.The much larger statistics available today at both 7 and 8 TeV, and the more accurate luminosity determination, will certainly offer a very powerful tool to carry these studies even further. Multijets Final states with many hard jets are very interesting: on one side they pose a challenge to the theoretical calculations, which require the evaluation of huge numbers of complex Feynman diagrams, and possibly the resummation of large logarithms emerging from the existence of very disparate scales (e.g. 
the individual jet E T versus the transverse energy of the overall multijet system).On the other, multijet final states (possibly accompanied by other objects like gauge bosons or top quarks) are the dominant background to many searches of phenomena beyond the Standard Model.Their study is therefore a crucial component of the LHC physics programme. Available analyses from ATLAS and CMS are still limited to the low-statistics 2010 run, but provide already very valuable information.Figure 6 shows AT-LAS's measurements of the inclusive multijet [17] and W+multijet [18] rates, compared to the predictions of several theoretical approaches.In the multijet case, the comparison is done with leading-order matrix element calculations, merged with shower evolution (Sherpa [19], and Alpgen [20] plus Herwig and Pythia), and with a pure parton shower approach (Pythia).In the W+jets case, there is also the comparison with parton-level NLO results [21]. The leading-order matrix element plus shower results and the parton-level NLO results agree well with data, and the occasional minor discrepancies are consistent with the theoretical systematics, and can in principle be "tuned away", for example, by adapting the scale choice or the details of the merging algorithms.The pure showerevolution results from Pythia well with the multijet rates, but fall short of predicting the rates for W+multijets.This apparently contradictory conclusion has a possible explanation, graphically outlined in fig. 7.In the case of multijets, Pythia starts the shower evolution from a "seed" parton level configuration with two hard, recoiling, partons.Additional jets are emitted from both initial and final states and, to the extent that they are softer than the leading jets, the shower approximation appears to work well.In the case of W+jets, however, the partonic configuration that seeds the shower has a single jet recoiling against the W. The emission of softer jets from the initial states and from the final-state parton, will dominantly lead to configurations in which the W maintains a large transverse momentum.This evolution entirely misses, however, the important region of phase space in which the W boson is soft with respect to the rest of the event, as shown in inset (c) of the figure.The more jets we require in the final state, the larger must be the initial p T (W), to allow the recoiling jet and the initial state to have enough energy to radiate the required jets.This reduces even more the accessible phase-space for the W, giving an overall rate that drops more and more at larger jet multiplicity. 
One more observation is worthwhile.Even though the shower-only approach gives a good description of inclusive multijet rates, there is no guarantee that kinematical correlations among the jets are equally well predicted.Consider for example fig.8, taken again from [17].It describes the fraction of events with three or more jets, as a function of E lead T , the E T of the leading jet.The three plots correspond to different E T thresholds for the definition of the jets, E T > 60, 80 and 110 GeV.Pythia seems less accurate in the region of smaller E lead T .There appears to be a change in the agreement pattern for E lead T > 300, 400 and 500 GeV, respectively.Given that the total transverse energy of the jets, H T , must be larger than 2 × E lead T , we see that the best agreement is obtained when H T ∼ > 0.1×E min T ∼ α S × E min T , in other words when the E T of the radiated jet is indeed soft with respect to the Q scale of the event.Notice also that Alpgen and Sherpa reproduce these distributions quite well over the full range of E lead T . Conclusions and outlook Jet physics in hadron collisions is as healthy as ever.The multitude of tests and probes explored by the Tevatron and LHC experiments gives us great confidence that the theory tools we have available are good, and much work is in place to improve them even further.The leadingorder matrix element generators, merged with shower evolution, are very successfull in describing complex higherorder topologies and rates.NLO calculations are making great progress: they are now available for high multiplicity final states, beyond the most optimistic hopes of only few years ago.New techniques are emerging, which I could not review here, to fully automatize the calculation of the NLO cross sections and their parton showers evolution [22], extending also the possibility of merging together samples of different multiplicity [23][24][25], maintaining the NLO accuracy.With NLO results close to becoming a straightforward enterprise, limited only by CPU power, NNLO results start appearing.For example, after the end of the Workshop, a major new achievement was reported, namely the calculation of the NNLO inclusive E T jet cross section, limited to the gg initial state [26]. The calculation shows a significant reduction of the scale uncertainty, with a mild increase in rate, at the 20% level relative to NLO. A new era of quantitative and precise applications of hard QCD at the LHC is opening, much like the quantitative tests that came from LEP and HERA.The full benefits of precision jet physics at the LHC are yet to be explored.But since new physics, if any is there, is hiding well, our safest bet today to pull it out of nasty SM backgrounds is to invest in a full-scale campaign of improvement in the precision with which we can measure/predict hard processes at the LHC. Figure 2 . Figure 2. CMS measurement [8] of inclusive jet E T spectra, in different rapidity ranges, at √ S = 7 TeV.Jets are defined by the anti-k T algorithm [7], with R = 0.5.The data points are normalized to a parton-level NLO calculation, corrected for the effecs of hadronization and underlying event.The theoretical systematics includes both scale and PDF uncertainty. Figure 3 . Figure 3. Same as fig.2, but with a cone radius R = 0.7, and for a larger statistics analysis of CMS data [9]. Figure 4 . Figure 4. 
ATLAS results [10] for jet production in the forward y region, normalized to parton-level QCD predictions (scaled by non-PT effects). The data are compared to predictions from various NLO+shower results, using different shower models and underlying-event tunes.

The comparison of data taken at √S = 2.76 TeV with data taken at √S = 7 TeV has been documented recently by ATLAS [16]. Figure 5 shows the cross-section ratios, as a function of jet E_T and for different y windows, compared to parton-level NLO and to the NLO+shower predictions of POWHEG.

Figure 5. ATLAS results [16] for the ratio of jet E_T distributions at √S = 2.76 and at 7 TeV. The ratio is normalized to NLO QCD plus non-PT corrections, and compared to the predictions of various NLO+shower calculations.

Figure 7. Schematic picture of the shower generation of multijet (a) and W+multijet (b) final states, starting from the leading-order processes included in shower Monte Carlo programs. In the case of W+jets, only configurations with a large value of p_T(W) can be generated (lower graph of (b)), while the important phase-space region where p_T(W) is small (c) is neglected.

Figure 8. ATLAS measurement [17] of ≥ 3-jet fractions, as a function of the leading jet E_T, and for different jet thresholds.
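Returning to the cross-section ratios discussed in connection with Fig. 5, the sketch below illustrates why correlated systematics drop out of the 2.76/7 TeV ratio: a scale variation that shifts both spectra by the same relative amount cancels, while an energy-dependent (uncorrelated) effect such as a PDF difference survives and can therefore be constrained. All numbers are placeholders.

```python
import numpy as np

# Hypothetical jet cross sections (pb) in matching E_T bins at the two energies.
sigma_7tev   = np.array([9.5e2, 3.1e1, 4.2e-1])
sigma_276tev = np.array([2.1e2, 3.9e0, 1.8e-2])

# A fully correlated scale variation shifts both spectra by the same factor ...
scale_shift   = 1.08
ratio_nominal = sigma_276tev / sigma_7tev
ratio_shifted = (scale_shift * sigma_276tev) / (scale_shift * sigma_7tev)
print(np.allclose(ratio_nominal, ratio_shifted))   # True: the shift cancels in the ratio

# ... whereas an uncorrelated, energy-dependent shift (mimicking a PDF effect)
# does not cancel, which is why the ratio retains sensitivity to the PDFs.
pdf_shift_276 = 1.05
print((pdf_shift_276 * sigma_276tev) / sigma_7tev / ratio_nominal)  # ~1.05 in each bin
```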
The Distribution of Charged Amino Acid Residues and the Ca2+ Permeability of Nicotinic Acetylcholine Receptors: A Predictive Model Nicotinic acetylcholine receptors (nAChRs) are cation-selective ligand-gated ion channels exhibiting variable Ca2+ permeability depending on their subunit composition. The Ca2+ permeability is a crucial functional parameter to understand the physiological role of nAChRs, in particular considering their ability to modulate Ca2+-dependent processes such as neurotransmitter release. The rings of extracellular and intracellular charged amino acid residues adjacent to the pore-lining TM2 transmembrane segment have been shown to play a key role in the cation selectivity of these receptor channels, but to date a quantitative relationship between these structural determinants and the Ca2+ permeability of nAChRs is lacking. In the last years the Ca2+ permeability of several nAChR subtypes has been experimentally evaluated, in terms of fractional Ca2+ current (Pf, i.e., the percentage of the total current carried by Ca2+ ions). In the present study, the available Pf-values of nAChRs are used to build a simplified modular model describing the contribution of the charged residues in defined regions flanking TM2 to the selectivity filter controlling Ca2+ influx. This model allows to predict the currently unknown Pf-values of existing nAChRs, as well as the hypothetical Ca2+ permeability of subunit combinations not able to assemble into functional receptors. In particular, basing on the amino acid sequences, a Pf > 50% would be associated with homomeric nAChRs composed by different α subunits, excluding α7, α9, and α10. Furthermore, according to the model, human α7β2 receptors should have Pf-values ranging from 3.6% (4:1 ratio) to 0.1% (1:4 ratio), much lower than the 11.4% of homomeric α7 nAChR. These results help to understand the evolution and the function of the large diversity of the nicotinic receptor family. INTRODUCTION Ca 2+ ions are able to permeate through the cation-selective pentameric nicotinic acetylcholine receptors (nAChRs; Katz and Miledi, 1969;Bregestovski et al., 1979;Eusebi et al., 1980; for review see Fucile, 2004). This Ca 2+ entry pathway plays relevant physiopathological roles, for instance positively modulating neurotransmitter release in neuronal presynaptic terminals (Wonnacott, 1997) or damaging the muscle endplates in patients with slow-channel congenital myasthenic syndromes (Engel et al., 2003). For these reasons the quantification of the amount of Ca 2+ flowing through a particular nAChR subtype is a relevant goal to understand its functional role and the physiological consequences of its activation. The very first methodological approach to quantitatively determine the ion channel Ca 2+ permeability was based on the measurement of the reversal potential shift upon changes in [Ca 2+ ] o , and then calculating the P Ca /P Na ratio using the Goldman-Hodgkin-Katz constant field assumptions (Lewis, 1979). A second experimental approach was later introduced by Erwin Neher and his group in the early 90's, basing on the possibility to simultaneously record transmembrane currents with the Patch-Clamp techniques and the [Ca 2+ ] i changes with fluorescence microscopy (Zhou and Neher, 1993;Neher, 1995), and leading to the direct measurement of the fractional Ca 2+ current, usually indicated as Pf and representing the percentage of the total current carried by Ca 2+ ions. 
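Operationally, the fractional Ca2+ current is the ratio of the Ca2+-carried charge to the total charge moved during an agonist application. The sketch below assumes the Ca2+ charge has already been obtained from the calibrated fluorescence signal, as in the Neher-type protocol cited above; the variable names, the example numbers, and the omitted calibration step are all placeholders, not the actual experimental pipeline.

```python
import numpy as np

def fractional_ca_current(current_pA, time_ms, q_ca_pC):
    """
    Pf (%) = 100 * Q_Ca / Q_total.

    current_pA : membrane current trace during the agonist response (pA)
    time_ms    : matching time base (ms)
    q_ca_pC    : Ca2+-carried charge (pC), obtained from the calibrated
                 fluorescence change (calibration not shown here)
    """
    q_total_pC = abs(np.trapz(current_pA, time_ms)) / 1000.0  # pA*ms -> pC
    return 100.0 * q_ca_pC / q_total_pC

# Example: a 2 nA mean inward current lasting 500 ms gives Q_total = 1000 pC,
# so a Ca2+ charge of 28 pC corresponds to Pf ~ 2.8 %.
t = np.linspace(0.0, 500.0, 501)
i = np.full_like(t, -2000.0)   # pA
print(fractional_ca_current(i, t, q_ca_pC=28.0))
```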
A careful comparison of the two methods (Burnashev et al., 1995) indicated that the second one was more advisable, being independent from the constant field assumptions, not always respected in real ion channels. In the last two decades the Ca 2+ permeability of several nAChRs has been characterized in terms of Pf, using the Neher's methodological approach (see references in Table 2 and in Fucile, 2004). These studies highlighted a large variability of Ca 2+ permeability, with Pfvalues ranging from 1.5% (human α4β4; Lax et al., 2002) to 22% (rat α9α10; Fucile et al., 2006b), and raise questions about the physiological meaning and the structural determinants of this large spectrum. In particular, seminal studies have clearly indicated a fundamental role of charged amino acid rings flanking the transmembrane region TM2, for single-channel conductance (Imoto et al., 1988), cation selectivity (Corringer et al., 1999), and Ca 2+ permeation (Bertrand et al., 1993). It is widely known and accepted that these highly conserved charged residues, when mutated, may profoundly alter the ion selectivity filter of nAChRs, leading to significant changes of ion permeability (Bertrand et al., 1993;Corringer et al., 1999). Despite this long established knowledge, the possibility to quantitatively relate the Ca 2+ permeability to the presence of charged residues in key positions is lacking. Though in theory it is possible to build a molecular model to simulate the channel energy profile and numerically calculate the Ca 2+ /Na + flow ratio for each distinct nAChR subtype (Song and Corry, 2009), this approach would be extremely onerous. In this study the sum of the electrical charges present in different regions of the channels have been considered as modular components of the selectivity filter and quantitatively analyzed, with the aim to understand the contribution of each region to the Ca 2+ influx through nAChRs, and to build a quantitative model able to predict Pf -values from the amino acid sequences of nAChR subunits. METHODS The amino acid sequences of all nicotinic subunits considered in this study have been obtained by the Uniprot website, and all the Uniprot accession numbers are given in Table 1. For each pentameric subunit assembly considered, the sum of electrical charges arising from negative (aspartate, glutamate) and positive (arginine, histidine, lysine) residues has been calculated in different portions of the channel. In particular each histidine was considered contributing +0.1 in the extracellular solution (pH 7.4) or +0.4 in the intracellular one (pH 7.0). The channel sections considered in the study were: (1) the extracellular region, composed by both N-and C-terminal side of the five subunits; (2) the intracellular region between TM3 and TM4 transmembrane regions; (3) the {−5 ′ −4 ′ } positions in the intracellular side of TM2 (where 0 ′ is the conserved lysine residue immediately preceding TM2, according to Miller, 1989); (4) the {−1 ′ } position; (5) the {19 ′ } position in the extracellular side of TM2; (6) the {20 ′ } position; (7) the {21 ′ -24 ′ } positions; (8) the {25 ′ -29 ′ } positions. The fractional Ca 2+ current (Pf, i.e., the percentage of the total current carried by Ca 2+ ions) values were obtained from previous measurements made in our laboratory, in the same (or very similar) experimental conditions (for details see references in Table 2). 
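A minimal sketch of the charge-counting rule just described: aspartate and glutamate count −1, lysine and arginine +1, and histidine +0.1 or +0.4 depending on whether the region faces the extracellular (pH 7.4) or intracellular (pH 7.0) solution. The example sequence below is a placeholder, not an actual subunit region.

```python
def region_charge(residues, side="extracellular"):
    """
    Sum of electrical charges (natural units, nuec) carried by the residues
    of one TM2-flanking region of a single subunit.

    residues : string of one-letter amino acid codes for the region
    side     : "extracellular" (pH 7.4, His = +0.1) or
               "intracellular" (pH 7.0, His = +0.4)
    """
    his = 0.1 if side == "extracellular" else 0.4
    charge_of = {"D": -1.0, "E": -1.0, "K": +1.0, "R": +1.0, "H": his}
    return sum(charge_of.get(aa, 0.0) for aa in residues.upper())

def pentamer_charge(subunit_regions, side="extracellular"):
    """Total charge of one ring/region summed over the five subunits."""
    return sum(region_charge(seq, side) for seq in subunit_regions)

# Hypothetical example: a homopentamer carrying a glutamate at position {20'}
print(pentamer_charge(["E"] * 5))   # -5.0 nuec
```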
When considering experiments in which the transfection of two subunits could result in the coexpression of two alternative stoichiometry, the mean charge distribution has been used. All the information used to build the model are reported in Table 1 (subunits) and in Table 2 (assembled receptors). Fitting Procedure To evaluate the contribution of charged amino acids placed in different region of nicotinic subunits to the Ca 2+ permeability of distinct nAChRs, a very simple methodological approach has been followed. In the presence of a single energy barrier G, if all other parameters are fixed (membrane potentials, temperature), and if the flowing ions are present only at the outside of the membrane at fixed concentration, the flowing current will be proportional to e − G/RT (Hille, 2001). In these conditions, where the term C Ca is constant when ion concentrations, potential and temperature are fixed, and G Ca is the energy barrier value for Ca 2+ ions. The same relation is valid for Na + fluxes: Negative amino acid residues are reported in red, positive ones in green. H, human; M, mouse; R, rat; C, chicken. In brackets the positions of amino acid residues according to Miller (1989). With a strong oversimplification, the term G Ca -G Na has been substituted by a weighted sum of the electric charges ( q i ) associated to the negative (glutamate, aspartate) and positive (lysine, arginine, histidine) amino acids present in 1, 2, 3, or 5 (n) different regions of the nicotinic subunits. Furthermore, the constant ratio C Na /C Ca has been substituted by a single constant const. Thus, P f data have been fitted with the following equation: For each fit the R 2 -value has been reported, and the predicted Pf -values have been compared with the real ones. The const and k i -values obtained by fitting the charge distribution in the five out the eight regions were then used to predict the Pf -values of several pentameric subunit combination giving rise to unreal or existing nicotinic receptors. The natural unit of electrical charge (nuec) has been used as charge unit throughout the paper. RESULTS The first aim of this study was to verify if the Ca 2+ permeability of 16 different nAChRs (measured in terms of fractional Ca 2+ current, Pf ) correlated with the local distribution of the electrical charges associated to the amino acid sequences of their subunits ( Table 1). The first region analyzed was the extracellular domain composed by both the large N-terminal and the short C-terminal segments. A net negative charge characterizes the extracellular domains of all nAChRs, with a clear segregation of muscle receptors, exhibiting a stronger negativity comprised between −50 and −70, while in all other nAChRs the charge sum was comprised between −10 and −40 ( Table 2 and Figure 1A). By contrast, in the intracellular regions composed by the five segments between transmembrane helices TM3 and TM4, there was a large prevalence of positive charges ( Table 2 and Figure 1B), likely contributing to interaction sites with intracellular negative molecules. Linear regressions were used to estimate the correlation between observed Pf and charge values, and the resulting coefficients and statistics indicated H, human; M, mouse; R, rat; C, chicken. In brackets the positions of amino acid residues according to Miller (1989). Pf, fractional Ca 2+ current. Charge values are expressed in natural unit of electric charge (nuec). for both intracellular and extracellular regions the absence of any correlation. 
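The screening step described above can be reproduced along the following lines: regress the measured Pf values on the summed charge of a candidate region and inspect the slope and R². The arrays below are placeholders standing in for the Table 2 values.

```python
import numpy as np
from scipy import stats

# Placeholder data: total extracellular-domain charge (nuec) and measured Pf (%)
# for a handful of receptors; the real values are listed in Table 2.
extracellular_charge = np.array([-55.0, -62.0, -18.0, -25.0, -33.0, -12.0])
pf_percent           = np.array([  4.0,   2.0,  11.4,  22.0,   2.9,   6.6])

fit = stats.linregress(extracellular_charge, pf_percent)
print(f"slope = {fit.slope:.3f}, r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.2f}")
# A flat slope / small r^2, as found for the whole extracellular and intracellular
# domains, is the criterion used here to drop a region from further analysis.
```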
These portions of the receptor-channel were no longer considered for further analysis in the present study. Then the charges near the TM2 helices were analyzed to look for significant correlations with Ca 2+ permeability. Six distinct locations were chosen, forming two intracellular rings, i.e., The second aim of this study was to quantitatively estimate the contribution of the charges distributed in different receptor regions to the nAChR Ca 2+ permeability, to have a better understanding of the structural determinants of this relevant physiological parameter, and to try to build a simple model allowing to predict the effect of different subunit compositions or mutations on the Pf -value. Thus, a progressive multidimensional fitting procedure was adopted (for details see Section Methods), in which the sigmoidal function described by Equation (5) was used to best fit the 16 known Pf -values, starting using as independent variable the sum of electrical charges in position {20 ′ }, where it was observed the most significant correlation. Successive fit procedures used the electrical charges present in two or more positions as independent variables, progressively choosing the positions giving the higher R 2 -values. In Figure 2 the sigmoidal curves best fitting the Pf -values are plotted using as abscissa the weighted sum of the charges in different receptor regions, where the k i constant were derived from the fitting procedure. For each fit the comparison between observed and predicted Pf -values is reported. As expected from the linear regression analysis, the best monodimensional fit was obtained using the position {20 ′ } data as independent variable (Figure 2A). However, this charge distribution alone is not able to explain large differences in Ca 2+ permeability. When adding a second independent variable, the best result was obtained with position {−5 ′ −4 ′ }, differently from what could be expected from linear analysis. This effect may be due to the similar pattern of chargereceptor association exhibited by both position {20 ′ } and position {25 ′ 29 ′ } (Figures 1F,H), not really helping to discriminate between different receptors. By contrast, the introduction of the data of position {−5 ′ −4 ′ } clearly helped to discriminate between highly Ca 2+ permeable nAChRs (e.g., α7 vs. α9α10 nAChRs; Figure 2B). The best fit result in terms of statistical significance was obtained with three independent variables, i.e., the charge distributions of position {20 ′ }, position {−5 ′ −4 ′ } Table 2 were plotted against the weighted sum of the electrical charges q i present in different pore regions, as indicated, with the multiplicative k i -values allowed to vary in order to best fit the data, according to Equation Frontiers in Molecular Neuroscience | www.frontiersin.org based on all five independent regions was made, providing the best correlation between observed and predicted Pf -values ( Figure 2D; R 2 = 0.95). The resulting k i constants values, reported in Table 3, were then used to weight the relative contribution of each position and to build a simple modular model linking the electrical charges present in the examined amino acid rings to a Pf -value. In this way it was possible to predict the hypothetical Pf -values of several combinations of human subunits which are not able to assemble into functional nAChRs in living cells (in red in Table 3), but also of existing receptors whose Ca 2+ permeability has not yet experimentally measured (in green in Table 3). 
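The progressive multidimensional fit can be sketched as follows. Since Equation (5) is not rendered in the text above, the logistic form Pf = 100/(1 + const·exp(Σ_i k_i q_i)) used below, the placeholder data and the starting values are all assumptions; only the forward-selection logic (add, at each step, the region that most improves R²) follows the description given here.

```python
import numpy as np
from scipy.optimize import curve_fit

def pf_model(q, const, *k):
    """Assumed form of Eq. (5): Pf(%) = 100 / (1 + const * exp(sum_i k_i * q_i));
    q has shape (n_regions, n_receptors), charges in nuec."""
    return 100.0 / (1.0 + const * np.exp(np.tensordot(np.asarray(k), q, axes=1)))

def fit_r2(q, pf_obs):
    """Fit the assumed model and return (R^2, fitted parameters)."""
    n = q.shape[0]
    popt, _ = curve_fit(pf_model, q, pf_obs, p0=[1.0] + [0.0] * n, maxfev=20000)
    resid = pf_obs - pf_model(q, *popt)
    ss_tot = (pf_obs - pf_obs.mean()) @ (pf_obs - pf_obs.mean())
    return 1.0 - (resid @ resid) / ss_tot, popt

def forward_selection(regions, pf_obs):
    """regions: dict {label: charge array over receptors}.  Adds one region per
    step, keeping at each step the region that maximises R^2 of the fit."""
    selected, history = [], []
    while len(selected) < len(regions):
        best = max((lab for lab in regions if lab not in selected),
                   key=lambda lab: fit_r2(
                       np.vstack([regions[l] for l in selected + [lab]]), pf_obs)[0])
        selected.append(best)
        history.append((best,
                        fit_r2(np.vstack([regions[l] for l in selected]), pf_obs)[0]))
    return history  # e.g. [("{20'}", 0.6), ("{-5'-4'}", 0.8), ...]
```

Once the constants const and k_i are fixed by the final five-region fit, predicting the Pf of any pentamer reduces to evaluating the same model on that pentamer's charge sums, which is how the entries of Table 3 are generated.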
The proposed model indicates very high Pf -values (>50%) for homomeric nAChRs constituted by α subunits (from α1 to α6), all lacking the five negative charges in position {25 ′ 29 ′ } in the α7 subunit, which determine for this last receptor the reduction to 10.4%. Interestingly, if the charges in position {−1 ′ } are removed, as for α7 E237A , the Pf -value is 0.0, exactly as expected from the direct measure of Ca 2+ permeability reported for this mutated nAChR (Bertrand et al., 1993). The model has been applied to the α7β2 nAChRs (Wu et al., 2016), which may be expressed with different subunit ratios. The higher the number of β2 subunits, the lower the Ca 2+ permeability, due to the increasing positivity in position {20 ′ }. Other heteromeric combination are reported, with higher values for combinations with α5 subunit. Thus, the proposed model suggests a dramatically high Ca 2+ permeability for non-existing homomeric nAChRs likely eliminated during evolution, describes the effect on Ca 2+ permeability of known mutations, and allows to reasonably predict the Pf -values of physiologically relevant heteromeric nAChRs. DISCUSSION The main results of this study are represented by an improved picture of the functional relations between the distribution of electrical charges and Ca 2+ fluxes in nAChRs and the availability of a new simple model to predict Pf -values associated to any (possible or impossible) pentameric combination of nicotinic subunits. Relating known Pf -values to the charge spatial design of the corresponding nAChRs provided several observations. First, the Ca 2+ permeability of these receptors does not depend on the total amount of charge present in the large Non-existing homomeric nAChRs are indicated in red. Potential heteromeric nAChRs in green. Values for homomeric α7 and α7 E237A nAChRs are reported for comparison, in black. Heterom. Ratio In brackets the positions of amino acid residues according to Miller (1989). Pf, fractional Ca 2+ current. Charge values are expressed in natural unit of electric charge (nuec). K i -values (expressed in nuec -1 ) indicate the multiplicative constants obtained by the fitting procedure described in the text, and used to predict the Pf-values. extracellular or intracellular domains. All extracellular domains have an overall negative charge, with muscle nAChRs exhibiting the most negative values in this region. The extracellular overall negative charge may be functionally relevant for agonist binding, assembly, posttranslational modification, and interactions with extracellular molecular apparatus, in particular for muscle nAChRs which cooperate with a large number of molecular components at the endplate (Witzemann et al., 2013), but there is no evidence of a role in shaping Ca 2+ permeability. This finding becomes relevant when considering that Ca 2+ permeability of nAChRs can be modulated by extracellularly applied drugs, such as verapamil or salbutamol (Piccari et al., 2011), which in theory could act in the large extracellular nAChR vestibule. Analogously, though all intracellular domains formed by the five TM3-TM4 segments exhibit overall positive charges, hosting interactions sites for cytoplasmic factors (Stokes et al., 2015), there is no correlation between Pf -values and the amount of electrical charges present at this location. 
By contrast, significant relationships between Ca 2+ permeability and charged residues are evident when considering the regions closer to TM2: in particular the best linear correlation is observed in the extracellular ring, in position {20 ′ }, where increasing negativity is significantly linked to higher Pf -values. Interestingly, although a similar relation is evident also in residues slightly farther in respect to TM2, in position {25 ′ 29 ′ }, in the model arising from the fitting procedure this two regions appear to differently contribute to the Ca 2+ permeability: negative residues in position {20 ′ }enhance it, while those in position {25 ′ 29 ′ } reduce it. The same kind of inverse relation is present also in the cytoplasmic ring (position {−5 ′ −4 ′ }). These results confirm for nAChRs the observation that negatively charged sites may either facilitate Ca 2+ flow, reducing the electrostatic energy profile for divalent cations, or counteract it, in particular if the negative residues represent a high affinity Ca 2+ binding site in which Ca 2+ itself may produce a long lasting electrical repulsion for other incoming divalent cations (Yang et al., 1993). Other charge distributions are extremely conserved: in particular, both at the intermediate ring and at position {21 ′ 24 ′ }) only muscle nAChRs exhibit a slight less negative charge than all other receptors. In the intermediate ring a reduction of the negative charge is associated to reduced Pf -values, as strongly confirmed by α7 E237A homomeric nAChR: this mutation has been shown to abolish Ca 2+ permeability (Bertrand et al., 1993) and consistently the present model reports only for this mutated nAChRs a zero Pf -value. The α5 subunit presents a unique feature: it is the only nicotinic subunit to host a negative residue in position {19 ′ } and a positive one in position {25 ′ 29 ′ }, both conferring high Ca 2+ permeability to α5 * nAChRs. A hypothetical α5 homomeric nAChR would exhibit, according to this model, the highest calculated Pf -value, and the replacement by an α5 subunit of FIGURE 3 | Spatial distribution of charged amino acid residues in the proximity of the channel pore, for different nAChRs. For the indicated nAChRs the distribution of positive (green) and negative (red) amino acid residues is reported for each distinct TM2-flanking regions. Up-and down-wards arrows indicate rings in which higher negativity is associated with higher or lower Ca 2+ permeability, respectively. In brackets the residues present if an α5 subunit replace an α4 subunit, yielding a strong increase in Ca 2+ permeability. For each nAChR measured and predicted Pf-values are reported (in brackets for α5-contaning nAChRs). any other α subunit in a heteromeric nAChR always enhance Ca 2+ permeability. This finding is of relevant physiological interest, given the role of α5 * nAChRs in modulating the selfadministration of nicotine and the overall use of tobacco (Fowler et al., 2011). The polymorphic variant α5 D398N , associated with a higher incidence of smoking and lung cancer (Bierut et al., 2008), does not affect the Ca 2+ permeability of these receptors, but appears to be differently modulated by Ca 2+ in the cytoplasmic domain (Sciaccaluga et al., 2015). Given the high Ca 2+ permeability, an attractive hypothesis is that Ca 2+ may modulate the same α5 * nAChRs which it flows through, with the α5 D398N subunit dysfunctional in this process, leading to less nicotine aversion (Frahm et al., 2011) and increased nicotine self-administration. 
All these observations confirm and summarize several previous reports concerning the role of charged amino acids in some of the regions considered in this study (Bertrand et al., 1993;Corringer et al., 1999;Tapia et al., 2007;Sciaccaluga et al., 2015), and allow a comprehensive description of the different selectivity filters exhibited by distinct nAChRs. In summary, it is possible to group nAChRs in three functional categories: Na + -selective; Ca 2+ -selective; non-selective between Na + and Ca 2+ . The last ones, in fixed conditions, allow the flow of Ca 2+ and Na + ions with the same probability, and should exhibit, in 2 mM [Ca 2+ ] i and 140 mM [Na + ] i , a Pf -value of 2.8%, due to the relative abundance of the two ions. Indeed, many heteromeric nAChRs exhibit Pf around this value (see Table 2). Ca 2+ -selective nAChRs favor Ca 2+ over Na + due to a complex organization of the charges placed in strategic positions: (i) an increased negativity in the immediate intra-and extra-cellular proximity of the channel pore (positions {−1 ′ }, {19 ′ }, and {20 ′ }), with a mechanism similar to the selectivity filter of voltage-gated Ca 2+ channels (Yang et al., 1993); (ii) a decreased negativity in farther intra-and extra-cellular positions ({−5 ′ −4 ′ } and {25 ′ 29 ′ }). The vice-versa is true for Na + -selective nAChRs, as for instance α7E237A or α4β4. In Figure 3 the spatial distributions of charged amino acid residues has been shown for several paradigmatic nAChRs, with upward arrows indicating positions in which higher negativity implies higher Ca 2+ permeability. The analysis of the predicted Pf -values reported in Table 3 leads to interesting observations. First, the surprisingly high Ca 2+ permeability of all nAChRs ideally formed by non-existing homomeric combinations of different α subunits. All these values are much higher than the Pf observed and predicted for homomeric α7 nAChR, suggesting that the exclusion of these receptors could be due to evolutionary processes opposing excessive Ca 2+ entry and excitotoxicity. In contrast with the usual association "α7-high Ca 2+ permeability, " this subunit appears to be the α subunit less able to create the conditions for large Ca 2+ fluxes through nAChRs, and exactly for this reason it could have been allowed by evolution to form homomeric receptors. This hypothesis is supported also by model predictions in which α7 substitutes for a different α subunit in heteromeric nAChRs, as for α4(β2) 2 (α7) 2 vs. (α4β2) 2 α4 (Pf -values of 1.9 vs. 4.0%, respectively). In this view, the insertion in heteromeric neuronal nAChRs of β2 or β4 subunits together with α2, α3, α4, or α6 appears a protective process, adding positive lysine residues instead of negative ones in the extracellular ring (position {20 ′ }). The same role of "Ca 2+ limiter" is played by β2 subunit in heteromeric nAChRs containing α7 subunits: decreasing the ratio α7:β2 considerably lowers the Pf -values of the corresponding nAChRs, suggesting distinct physiological roles for different subunit combinations forming these recently described heteromeric receptors (Wu et al., 2016). The predictive model presented in this study is based on a strongly oversimplified interpretation of the distribution of charged amino acid residues in selected regions of the receptorchannels, and presents clear limitations. 
First, it is well-known that the nAChR Ca 2+ permeability depend also on uncharged residues (Di Castro et al., 2007), and the geometry of the pore, highly relevant for ion-ion and ion-amino acid interactions, has been weakly taken into account, with a literature-based but still arbitrary regional partition. A deeper analysis should use molecular dynamics techniques to numerically build, for each individual nAChR, a proper model taking into account all the interactions able to modulate Ca 2+ fluxes, in an extremely complex picture. Furthermore, in the present study the overall extracellular distribution of charged amino acids did not appear to affect Ca 2+ fluxes through nAChRs, but a single acid residue in the extracellular domain has been shown to strongly decrease the Ca 2+ permeability of α7 nAChR (rat aspartate 44, corresponding to human aspartate 42; Colón-Sáez and Yakel, 2014), clearly indicating the a more complex analysis would be necessary. Despite all these known limitations, the present simple approach led to a surprisingly good description of the relation between charge distribution and Pf -values, strongly indicating that the amount of negative charges in strategic intracellular and extracellular TM2-flanking regions represents the main determinant of nAChR Ca 2+ permeability. This study might be useful as a starting point to build future more sophisticated models, to plan future experimental studies on poorly described heteromeric nAChRs (see Table 3), and to give hints to evaluate the Ca 2+ permeability of other receptorchannels, as for instance the highly relevant insect nAChRs, representing the major target for several insecticides (Dupuis et al., 2012) and never described in terms of Pf. Furthermore, in the future this approach could be useful to analyze other classes of ligand-gated channels, such as ionotropic glutamate receptors, whose Ca 2+ permeability play a major role in synaptic plasticity and in neurodegenerative processes. The knowledge of the molecular mechanisms regulating Ca 2+ entry through ion channels represents the first step to understand how to modulate it, in particular when and where the excessive accumulation of intracellular Ca 2+ endangers cell health and survival. AUTHOR CONTRIBUTIONS SF designed, wrote and approved this study, and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Critical phenomena in gravitational collapse of Husain-Martinez-Nunez scalar field We construct analytical models to study the critical phenomena in gravitational collapse of the Husain-Martinez-Nunez massless scalar field. We first use the cut-and-paste technique to match the conformally flat solution ($c=0$ ) onto an outgoing Vaidya solution. To guarantee the continuity of the metric and the extrinsic curvature, we prove that the two solutions must be joined at a null hypersurface and the metric function in Vaidya spacetime must satisfy some constraints. We find that the mass of the black hole in the resulting spacetime takes the form $M\propto (p-p^*)^\gamma$, where the critical exponent $\gamma$ is equal to $0.5$. For the case $c\neq 0$, we show that the scalar field must be joined onto two pieces of Vaidya spacetimes to avoid a naked singularity. We also derive the power-law mass formula with $\gamma=0.5$. Compared with previous analytical models constructed from a different scalar field with continuous self-similarity, we obtain the same value of $\gamma$. However, we show that the solution with $c\neq 0$ is not self-similar. Therefore, we provide a rare example that a scalar field without self-similarity also possesses the features of critical collapse. Introduction Gravitational collapse is the main reason of various galactic structures and it remains one of the most interesting and fundamental problems in general relativity. The end state of gravitational collapse could be a black hole, naked singularity or flat spacetime. In a seminal work by Choptuik [1], some intriguing and universal properties concerning the formation of black holes from massless scalar fields were found. This is called the critical phenomenon. Particularly, near the threshold, the black hole mass can always be expressed in the form of power-law: for p > p * , where p is a parameter of the initial data to the threshold of black hole formation. Numerical simulations have shown that that the critical exponent γ is equal to 0.5 for solutions with continuous self-similarity (CSS) and γ ≈ 0.37 for solutions with discrete self-similarity (DSS). Details about the critical phenomenon can be learned in [2,3]. In addition to numerical calculation, analytical models were also built to explore the critical phenomena. Patrick R. Brady [4] studied an exact one parameter family of scalar field solutions which exhibit critical behaviours when black hole forms. J. Soda and K. Hirata [5] analytically studied the collapse of continuous self-similar scalar field in higher dimensional spacetimes and found a general formula for the critical exponents which agrees with the exponent γ = 0.5 for n = 4. A. Wang et al [6] constructed an analytical model by pasting the BONT model (a massless scalar field) with the Vaidya model. They demonstrated that the black hole mass obeys the power law with γ = 0.5. A. Wang et al [7] also analytically studied the gravitational collapse of a massless scalar field with conformal flatness. They showed that the mass of the 2 Husain-Martinez-Nunez (HMN) spacetime The Husain-Martinez-Nunez spacetime [15] satisfies the Einstein-scalar field equations The spherically symmetric solution is given by 1 where α = ± √ 3 2 . From the Ricci scalar we see that the curvature singularities are located at r = 2c (timelike singularity) and t = −1/a(spacelike singularity). 
Using Θ± to label the expansion of null geodesics, we have where √ h = R 2 sin θ = (at + 1)r 2 1 − 2c r 1−α sin θ, and λ± is the affine parameter of the null geodesics. The tangent to the null geodesic with affine parameter is We can get The apparent horizon satisfies Θ+ = 0, Θ− < 0, which is located at The detailed analyses about the HMN sapcetime can be found in [18]. We shall focus on the case −∞ ≤ t ≤ − 1 a and a < 0 because it corresponds to a black hole solution. When a = 0 and α = √ 3 2 , the apparent horizon in the HMN spacetime has "S-curve" shape. However, this does not make differences in our results. Therefore, in this paper, we just study the case α = − 2 . The MisnerSharp mass [34] is defined by where R denotes the areal radius. From [19], we know that in spherically symmetric spacetimes, the apparent horizon satisfies g ab ∇aR∇ b R = 0 . Therefore, on the apparent horizon, the Misner-sharp mass becomes 3 Critical behaviour of HMN scalar field with conformal flatness (c = 0) Matching at a timelike boundary To study the critical phenomenon of HMN massless scalar field, we start with the simple case c = 0, where the spacetime is conformally flat [15]. First, we need to join the HMN solution with an outgoing Vaidya solution such that the resulting spacetime is asymptotically flat. In this section, we assume that the boundary connecting the two solutions is a timelike hypersurface. We shall use "−" to label the inner HMN spacetime and "+" to label the exterior Vaidya spacetime (see Fig. 1). Figure 1: a = 0, c = 0 timelike hypersurface The interior spacetime is described by Eq. (4) for c = 0 The exterior spacetime is described by the outgoing Vaidya metric where f+ = 1 − 2m(U ) R . We choose ξ i = {λ, θ, φ} as the intrinsic coordinates on the hypersurface. Σ is determined by functions t(λ), r(λ) from the interior and T (λ), R(λ) from the exterior. The induced metric on Σ is given by Here the do"." means the derivative with respect to λ. We use the Darmois junction conditions to match the solutions across Σ where k ab is the extrinsic curvature of Σ. Denote the coordinates of the four-dimensional spacetime by {x µ }. Σ is determined by the functions {x µ (ξ i )}. Then the components of k ab can be calculated from where na is the spacelike normal to Σ. Computing kij from the interior and exterior, respectively, we obtain the nonvanishing components ar 2ṙ + 2(at + 1)ṫr Substituting Eqs. (19) and (20) into Eq. (21), we have Substituting Eqs. (26) and (27) into Eq. (22), with the help of Eq. (29), we obtain Eq. (28) yieldsṘ Therefore, one can solve Eqs. (29)-(31) and obtainṡ U = 2(at + 1) Thus, To proceed, we calculate the following derivatives: wheret = t + a −1 . Substitute the above results into Eq. (25) and according to Eq. (22),let the right-hand side of Eq. (24) be equal to the right-hand side of Eq. (25). After a lengthy calculation, we obtain the following result, which is surprisingly simple Obviously, the solution isṙ =ṫ orṙ = −ṫ. But this means that the hypersurface is null, inconsistent with our assumption. Therefore, we conclude that the two spacetimes cannot be matched through a timelike hypersurface if the continuity of the extrinsic curvature is required. Matching at a null hypersurface Matching the two solutions at a null hypersurface is more complicated than at a timelike hypersurface. We shall follow the method in [6] and [26]. First we use the coordinate transformation v = t + r to replace the coordinate r in Eq. 
(4) and obtain the metric in the interior where Let Σ be the null hypersurface v = v0. The normal to Σ is where s is a negative arbitrary function such that n a − is a future directed vector. We can introduce a transverse null vector Na by requiring Without loss of generality, we assume that N − a = Nvdva + Nrdra. Then it is easy to show that ∂ξ a as given in the [6]. Thus Similarly to Eq. (23), the transverse extrinsic curvature for the null surface is given by [26]. By straightforward calculation, we find (48) On the other hand, we need to join the HMN solution to the exterior Vaidya spacetime (18) at Σ, as shown in Fig. 2. Assume that the null surface Σ can be described by U = U0(R) from the Vaidya solution. It follows from (18) . The spacetime coordinates {x µ + } can be expressed as functions of ξ i : Define e +µ (i) ≡ The normal to Σ is given by where β is a negative function which will be determined later. Then the transverse null vector N a in Eq. (42) is The continuity condition on Σ requires [26] N + µ e +µ This also guarantees that the normal vectors n a defined on both sides are the the same. Mass of the black hole From Eq. (14), one can calculate the Misner-Sharp mass for the HMN spacetime described by metric (17) Note that the null surface is determined by Eqs. (64) and (65) immediately gives the coordinates at the intersection of Σ and the apparent horizon: Since r > 0 and a < 0, from Eq. (66), we see that the existence of the intersection requires v0 > |a| −1 Therefore, the Misner-Sharp mass at the intersection is As is known, the event horizon coincides with the apparent horizon in the outgoing Vaidya spacetime as shown in Fig. 3. It is also known that the mass function m in the Vaidya metric is constant alone the event horizon [35]. Thus, it is natural to take the mass in Eq. (69) to be the mass of the black hole. To investigate the critical behavior as a → 0, we impose the condition that ri in Eq. (67) does not change with a. This means that v0 must take the form where V is a positive constant independent of a. Thus, Eq. (69) gives the mass of black hole: Eq. (71) shows that the mass of black hole can be put in the form of Eq. (1) and the scaling exponent is γ = 0.5. Obviously, as a approaches zero, the mass of the black hole vanishes and the spacetime becomes Minkowski. Collapse of the general HMN scalar field In this section, we shall investigate the gravitational collapse associated with a general HMN scalar field (a = 0, c = 0). Matching to an outgoing Vaidya solution at a null hypersurface Under the coordinate transformation Eq. (4) can be rewritten in the form (73) Here, the function h(r) satisfies The areal radius R takes the form Similarly to section 3, we match the solution with an outgoing Vaidya solution at the null hypersurface v = v0 (see Fig. 4). Substitution of v = v0 into Eq. (75) yields the function r = r(R). By the method in section 3.2, the extrinsic curvature can be calculated as where It is easy to see that k + ab takes the same form as Eq. (57). Then k − ab = k + ab yields Thus, by our construction, the metric of the resulting spacetime is continuous and the extrinsic curvature of the null hypersurface is also continuous. Therefore, we have shown that the general HMN spacetime ( a = 0, c = 0) and Vaidya spacetime can be matched at a null hypersurface as showed in Fig. 4. 2 ) matching with an outging Vaidya solution. There are two singularities at t = | 1 a | and r = 2c, where r = 2c is a naked singularity. 
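The mass formula of section 3.3 can be checked with a few lines of arithmetic. The sketch below is an illustration, not the paper's code: it assumes the conformally flat interior metric ds² = (at+1)(−dt² + dr² + r²dΩ²) with a < 0, for which the areal radius is R = √(at+1) r and the apparent horizon lies at at+1 = |a|r/2; the null boundary v = t + r = v0 with v0 = V + |a|⁻¹ then meets the horizon at the fixed radius r_i = 2V, and the Misner-Sharp mass R/2 evaluated there scales as |a|^{1/2}. The constant V is an arbitrary illustration.

```python
import numpy as np

# Sketch of the c = 0 construction (assumed interior metric
# ds^2 = (a t + 1)(-dt^2 + dr^2 + r^2 dOmega^2), a < 0):
#   areal radius        R(t, r) = sqrt(a t + 1) * r
#   apparent horizon    a t + 1 = |a| r / 2
#   null boundary       v = t + r = v0, with v0 = V + 1/|a| so that r_i = 2V is fixed
#   Misner-Sharp mass   M = R / 2 on the apparent horizon
V = 1.5                                    # hypothetical constant fixing r_i = 2V
a_values = -np.logspace(-4, -1, 30)        # a -> 0^- is the critical limit

masses = []
for a in a_values:
    v0 = V + 1.0 / abs(a)
    r_i = 2.0 * (v0 - 1.0 / abs(a))        # intersection radius, equal to 2V
    t_i = v0 - r_i
    R_i = np.sqrt(a * t_i + 1.0) * r_i     # areal radius at the intersection
    masses.append(R_i / 2.0)               # Misner-Sharp mass on the horizon

slope, _ = np.polyfit(np.log(np.abs(a_values)), np.log(masses), 1)
print(f"M scales like |a|^{slope:.3f}")    # ~0.5, i.e. gamma = 1/2
print(f"compare with V**1.5 * sqrt(|a|): {masses[0]:.4e} vs "
      f"{V**1.5 * np.sqrt(abs(a_values[0])):.4e}")
```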
Mass of the black hole By the argument in section 3.3, one can show that m(r) in Eq. (79) is exactly the Misnersharp mass for the metric in Eq. (73). Therefore, from Eqs. (13), (16) and (75), we can obtain the mass on the apparent horizon mAH (r): The ingoing null hypersurface boundary is defined by From the Eq. (13) and the Eq. (80), we can get the coordinates (ri, ti) at the intersection of the apparent horizon and the null hypersurface v = v0, which satisfies v0 + Similarly, we choose the null hypersurface which intersects with the apparent horizon at a fixed radius ri = r0, i.e., independent of a. Again, we take the Misner-Sharp mass at the intersection as the black hole mass. Then, Eq. (79) gives the mass of the black hole where Eq. (83) shows clearly that the black hole mass satisfies the power law with γ = 0.5. However, the spacetime for a < 0 possesses a naked singularity r = 2c (see Fig. 4), in violation of the cosmic censorship conjecture. To remove the naked singularity, we join another outgoing Vaidya spacetime at v = v1(v1 < v0), as shown in Fig. 5. No naked singularity exists in this new spacetime. When we study the relation between the mass and the parameter a, we treat c as a constant. We see that M bh → 0 as a → 0. When a = 0, there is no black hole but a naked singularity as shown in section 4.3. To calculate the apparent horizon of the spacetime, we first choose two families of radial null vector fields where λ ± is the affine parameter of the null geodesic. According to Eq. (11) Therefore, the spacetime has no apparent horizon when r > 2c. According to Eq. (6), the spacetime possesses a naked singularity at r = 2c (see Fig. 6). Figure 6: Penrose diagram for the HMN spacetime(a = 0, c = 0). There is a naked singularity at r = 2c. Since the spacetime is asymptotically flat, we calculate the ADM mass and find When a = 0, the mass of the black hole is given by Eq. (83), which shows clearly that M → 0 as a → 0. However, when a = 0, as we just discussed, the spacetime is not Minkowski and its ADM mass is nonzero. Therefore, there exists a mass gap between the black hole solution (a = 0) and its limiting spacetime (a = 0). Conclusion In this paper, we have used the "cut and paste" method to construct analytical models and study the critical phenomena of the HMN scalar filed. We have shown that the HMN solution with conformal flatness (c = 0) can be matched with the Vaidya solution along a null hypersurface, but not a timelike hypersurface. We have derived the differential equation which specifies the metric function in the Vaidya solution. For c = 0, we have joined the scalar field onto two pieces of Vaidya spacetimes to avoid the naked singularity. We have studied the gravitational collapse for the HMN scalar field and shown that black hole mass satisfies the power law with γ = 0.5. This is consistent with previous results in the literature. When c = 0, the HMN spacetime has no CSS and the black hole also turns on at infinitely small mass. The result is different from the model in [7], which shows that the formation of black holes may turn on at finite mass when the gravitational collapse has no self-similarity. On the other hand, the mass gap exists between the black hole and the naked singularity during the gravitational collapse of HMN scalar field when c = 0 as discussed in section 4. Our work suggests that critical collapse can be studied from analytical models which are constructed by known solutions. 
More models should be investigated in the future in order to test the universal features of gravitational collapse. A Self-similarity of the HMN spacetime In this appendix, we prove that the HMN spacetime is CSS (continuously self-similar) only when c = 0 and a ≠ 0. A spacetime is continuously self-similar if there exists a conformal Killing vector field ξ^a satisfying ∇_(a ξ_b) = g_ab (91). As a result of the spherical symmetry, we can write ξ = x ∂_t + y ∂_r, where x and y are functions of r and t. Substituting this expression into Eq. (91), we find that y R,_r + x R,_t = R, y ν,_r + x ν,_t + y,_r = 1, y λ,_r + x λ,_t + x,_t = 1. Combining these equations, one finds that C_0 must be a constant independent of r and t, so the only consistent solution is the constant one, and consequently D(t) = D_0. Eqs. (98) and (99) then fix the remaining functions. Thus, we have proven that the HMN spacetime is continuously self-similar only for c = 0 and a ≠ 0.
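As a cross-check of the conformally flat case, one can verify symbolically that ξ = (2/3)[(t + 1/a)∂_t + r∂_r] satisfies ∇_(aξ_b) = g_ab for the metric ds² = (at+1)(−dt² + dr² + r²dΩ²), i.e. that the c = 0 solution indeed admits a homothety for any a ≠ 0. The short sympy sketch below is an illustration, not taken from the paper; the specific form of ξ and the metric used are assumptions consistent with the text.

```python
import sympy as sp

# Check that L_xi g = 2 g (equivalently nabla_(a xi_b) = g_ab) for the assumed
# conformally flat HMN metric with c = 0 and the candidate homothetic vector xi.
t, r, th, a = sp.symbols('t r theta a', real=True)
x = [t, r, th, sp.Symbol('phi')]
conf = a * t + 1
g = sp.diag(-conf, conf, conf * r**2, conf * r**2 * sp.sin(th)**2)

xi = [sp.Rational(2, 3) * (t + 1 / a), sp.Rational(2, 3) * r, 0, 0]

# (L_xi g)_ab = xi^c d_c g_ab + g_cb d_a xi^c + g_ac d_b xi^c
lie = sp.zeros(4, 4)
for A in range(4):
    for B in range(4):
        expr = sum(xi[C] * sp.diff(g[A, B], x[C]) for C in range(4))
        expr += sum(g[C, B] * sp.diff(xi[C], x[A]) for C in range(4))
        expr += sum(g[A, C] * sp.diff(xi[C], x[B]) for C in range(4))
        lie[A, B] = sp.simplify(expr)

print(sp.simplify(lie - 2 * g))   # zero matrix -> xi is a homothety for any a != 0
```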
3,763.2
2019-04-28T00:00:00.000
[ "Physics" ]
Compressed and Split Spectra in Minimal SUSY SO(10) The non-observation of supersymmetric signatures in searches at the Large Hadron Collider strongly constrains minimal supersymmetric models like the CMSSM. We explore the consequences on the SUSY particle spectrum in a minimal SO(10) with large D-terms and non-universal gaugino masses at the GUT scale. This changes the sparticle spectrum in a testable way and for example can sufficiently split the coloured and non-coloured sectors. The splitting provided by use of the SO(10) D-terms can be exploited to obtain light first generation sleptons or third generation squarks, the latter corresponding to a compressed spectrum scenario. INTRODUCTION The non-observation of new heavy states at the LHC puts strong constraints on the sparticle spectrum of supersymmetric (SUSY) theories, especially in the colored sector. Most importantly, this puts a strain on the ability of many SUSY models to solve the hierarchy problem of the Standard Model (SM) in a natural fashion. In minimal scenarios, such as the constrained minimal supersymmetric Standard Model (CMSSM), the stringent lower limits on colored states will similarly affect non-colored sparticles. The direct LHC search limits on these sparticle species as well as third generation squarks are on the other hand comparatively weak and can depend strongly on the details of the spectrum. Various solutions have been suggested to resolve the constraints and generate viable and testable scenarios. For example, phenomenological approaches like the phenomenological MSSM (pMSSM) do not contain a priori relations between different sparticle species and can be constructed to avoid the strong constraints but still provide states that can be produced at the LHC in the near future. On the other hand, such approaches often lack motivation. In this work, we focus on a minimal supersymmetric SO(10) model [1][2][3] incorporating one-step symmetry breaking from SO (10) down to the Standard Model gauge group at the usual Grand Unified Theory (GUT) scale M GUT ≈ 2 · 10 16 GeV where the SM gauge couplings unify within an MSSM spectrum. Such a framework is therefore well motivated: It not only incorporates gauge unification but the unification of matter fields in a 16-plet would also provide degenerate soft SUSY breaking scalar masses at the GUT scale. In this scenario, the soft SUSY breaking sector is given by the gravity induced mass parameters for the matter and Higgs superfields at the GUT scale. Being a subset of the MSSM at low energies, two Higgs fields are required to generate masses separately for up-and down-type fermions during electroweak symmetry breaking. In the SO(10) framework, these Higgs fields are generally produced from the superposition of doublet components in a set of Higgs fields at the GUT scale [4,5]. In the present analysis, we do not discuss the issue of Yukawa unification. Successful Yukawa unification of all fermion generations in SO(10) either requires a set of Higgs fields in large representations [4][5][6][7] or the presence of Planck-scale suppressed higher-dimensional operators [8,9]. In contrast to the CMSSM with its strictly degenerate soft scalar mass spectrum at the GUT scale, the scalar masses in the minimal SUSY SO(10) are non-universally shifted by D-terms associated with the breaking of SO (10) to the lower-rank SM group [10][11][12]. These D-terms are analogous to the electroweak D-terms in the MSSM due to the rank reducing breaking of the SM gauge group. 
As described below in section 2, the SO(10) D-terms depend on the details of the breaking of SO(10) but are generally expected to be of the order of the SUSY breaking scale. They can therefore have a sizable impact on the sparticle spectrum. The possible presence of the SO(10) D-terms represents the main deviation from the CMSSM case, and we will analyze their impact on the sparticle spectrum in light of the LHC searches. As opposed to the phenomenological models, the non-degeneracy is not ad hoc and can be described by the introduction of a single additional parameter m 2 D . Starting at the GUT scale, the non-degenerate scalar masses evolve, following the renormalization group (RG) of the MSSM [13] down to the electroweak scale. This results in a sparticle spectrum at the supersymmetry scale chosen at 1 TeV according to the SPA convention [14]. If these masses were to be observed at the LHC or at other future colliders, the reverse RG evolution upwards would allow the reconstruction of the physics scenario at the GUT scale [15][16][17][18][19][20][21]. In addition to the non-universality of scalar masses at the GUT scale due to SO(10) D-terms, we also allow for a non-degeneracy of the fermionic masses of the gauginos. While the gauge couplings unify at the GUT scale, the gauginos only do so if the messenger mediating the breaking of SUSY in a hidden sector is an SO(10) singlet [22]. This is not required though, and the messenger can be part of various SO (10) representations, provided it remains a singlet under the SM gauge groups. This paper is organized as follows: In section 2 we introduce the minimal SO (10) framework and the main consequences on the sparticle spectrum due to possible large D-terms and non-unification of the gaugino masses. Section 3 reviews the relevant direct sparticle mass limits from recent LHC searches. The results of our renormalization group analysis are presented in section 4 and we summarize our conclusions in section 5. SUSY SO(10) SUSY GUT models are largely fixed by their gauge group structure. In SO(10), a generation of the SM fermions is contained in a 16 representation with the addition of a right-handed neutrino. Variations are then induced by the choice of the breaking of the GUT group to the SM group SU(3) c × SU(2) L × U(1) Y . There are numerous ways in which this symmetry breaking can occur. A minimum of two breaking steps are required: one to break SO (10) to the SM group at a high scale M GUT ≈ 2 × 10 16 GeV (where the SM gauge couplings unify in the MSSM), and one to break the electroweak symmetry of the SM at M EW . Among all the different possible breaking paths from SO (10) Figure 1, we will adopt the minimal path labeled (a). It should be noted that for phenomenological purposes, this is equivalent to multi-step breaking scenarios close to the scale M GUT . The electroweak Higgs fields of the MSSM are contained in higher-dimensional representations of SO (10), which couple to the SM fermions via Yukawa-type interactions. The only allowed representations for this field, given the SO(10) group structure, are 10, 120, and 126. We do not consider non-renormalizable operators which broaden the range of allowed Higgs representations. The simplest choice is to use the 10 dimensional representation containing the electroweak Higgs fields. These choices motivate the superpotential where Y is a 3 × 3 matrix in generation space. 
The term W( ) collects all terms that involve the Higgs field(s) responsible for SO (10) breaking, which we can neglect in our low energy analysis. The Higgs sector described above, i.e., the SO(10) breaking Higgs and the 10 H containing the EW breaking Higgses, is not enough to predict the masses of all fermions in a Yukawa unified scenario. One would need to add larger representations and/or higher-dimensional operators, as mentioned before. However, extending this sector would not have a significant effect for the purpose of this study, for it is mostly focused on sfermion masses and any contribution coming from an extended Higgs sector can be neglected to the level of approximation at which we are working. As phenomenologically required, SUSY has to be broken and the generated soft-SUSY breaking sector will depend on the particular breaking mediation mechanism. We assume Supergravity (SUGRA) mediated SUSY breaking where SUSY is broken above the GUT scale in a hidden particle sector. Before SO(10) breaking, these terms take the form whereX represents the gaugino field,16 F and 10 H refer to the scalar components of the 16 F and 10 H superfields, respectively. The corresponding soft breaking masses are denoted as m 1/2 , m 2 16 F (in general a 3 × 3 matrix in generation space) and m 2 10 H , respectively. The term c.c. stands for complex conjugate and L collects any operators containing the field, which are irrelevant for our discussion. The SUSY breaking equivalents of the Yukawa coupling and Higgs μ-term are controlled by the common trilinear coupling A 0 and B 0 , respectively. In the following we will adopt the standard CMSSM boundary conditions for the trilinear soft-SUSY breaking parameters in the MSSM at the GUT scale: The corresponding boundary conditions for the soft scalar and gaugino masses will be discussed below. SCALAR D-TERMS The scalar potential of the SO(10) model, responsible for the symmetry breaking, is obtained from the scalar parts of the superpotential in Equation (1) plus the scalar soft breaking terms of Equation (2). In addition, there is an extra contribution that arises from the so called D-terms of the Kähler potential [11]. Such Dterms are generated during gauge symmetry breaking that reduces the rank of the original group, i.e., when one or more of the embedded U(1) subgroups is broken. The most prominent example is the electroweak D-term generated in the MSSM through the electroweak symmetry breaking (EWSB) of the SM gauge group to SU(3) × U(1) Q . For the breaking of a single U(1) subgroup, the process can be described as follows: The field acquiring a vacuum expectation value, in our case, has components with opposite charges under this U(1) subgroup, and (H u and H d for EWSB). After symmetry breaking and after integrating out the heavy and , scalar particle masses receive contributions of the form Kolda and Martin [11] where Q i and Q are the charges of the light scalar particle species i and the field under the broken U(1), respectively. The soft masses of the and fields are given by m andm, respectively, and they are related to the soft mass of the field(s) in Equation (2). The D-term m 2 D will therefore be roughly of the same order as the soft masses instead of the GUT scale where the breaking actually occurs. For more complicated breaking scenarios, the dependence of m 2 D on the soft masses will vary slightly, according to the Higgs representation(s) involved, but it will still remain of the same order. 
In the case of EWSB, a linear combination of the U(1) Y and the U(1) included in SU(2) L , generated by the I 3 generator, is broken. The electroweak D-terms has the value [23] with the third component of the weak isospin I i 3 and the charge Q i of sparticle i [tan β is the usual ratio of Higgs vacuum expectation values (VEVs)]. The contributions from the SO(10) D-term changes the boundary conditions for the scalar masses at the GUT scale. When the symmetry is spontaneously broken, the MSSM scalar masses match the SO(10) soft breaking masses in Equation (2), plus the contributions from the D-term. Assuming that all soft-SUSY masses are diagonal and universal in generation space, the boundary conditions for the MSSM soft masses The coefficients in front of m 2 D correspond to the U(1) charges of the different sparticles. This Abelian U(1) group is embedded into SO(10) via SU(5) ⊗ U(1) ⊂ SO(10) and thus all particles in the same representation of SU(5) will have the same charge. For completeness, we have also stated the boundary condition for the right-handed sneutrino soft mass m 2 ν . In the following, we will not consider the right-handed sneutrino as part of our spectrum. We implicitly assume it acquires a mass close to the GUT scale in a neutrino seesaw framework, and neglect the effect it could have on the running of the other sparticles as well as the lepton flavor violation it induces in the slepton sector. These effects depend delicately on the details of the neutrino sector. Equation (6) describes the crucial impact of the presence of an SO(10) D-term. Most importantly it will cause a splitting between the sparticle speciesQ,ũ,ẽ andL,d already at the GUT scale. This D-term induced splitting will be increased through RGE running, potentially causing a split spectrum at the low scales. The D-term will in general depend on the vacuum expectation value of the field that breaks the SO(10) gauge group, which in turn is related to the soft SUSY breaking masses as can be seen in the example Equation (4). The specific value of the term depends very strongly on the scalar potential of the SO(10) breaking sector, but because we want to keep our description as independent as possible from the GUT scale physics, we will parameterize this by allowing m D to be a free parameter in our model. Thus, provided that the Yukawa couplings are fixed by the fermion masses up to the ratio of electroweak VEVs tan β = v u /v d , and the B 0 and μ H parameters are obtained by imposing electroweak vacuum stability conditions, the only free parameters of our model relevant to low energy phenomenology are Figure 2 shows how the masses of the first generation sfermions are split due the effect of the D-term. In order to present the dependence on the D-term m 2 D in a convenient way, we define the function The rest of the model parameters are fixed by using the benchmark scenario provided in Table 1 of Buchmueller et al. [24], corresponding to a non-universal Higgs mass (NUHM1) high scale scenario. NON-UNIVERSAL GAUGINO MASSES A standard assumption of the CMSSM is the unification of the gaugino masses at the GUT scale to the common value m 1/2 in Equation (2). This is not necessarily true for more general SUSY breaking mechanisms. In particular, the SO(10) representation of the SUSY-breaking mediator field determines the matching conditions at the GUT scale. The field is required to be a singlet under the SM in order to preserve its symmetry but it does not need to be a singlet under SO(10). 
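The boundary conditions of Eq. (6) are compact once the U(1) charges are fixed. The sketch below is illustrative and assumes the common normalization in which 16 → 10(+1) ⊕ 5̄(−3) ⊕ 1(+5) and 10_H → 5(−2) ⊕ 5̄(+2) under SU(5) × U(1); the sign conventions for m²_D and the Higgs assignments vary in the literature, and the numerical inputs are placeholders rather than the benchmark values used in the text.

```python
# GUT-scale soft masses including the SO(10) D-term shift Q_i * mD_sq (sketch).
# Assumed U(1)_X charges: 16 -> 10(+1) + 5bar(-3) + 1(+5); 10_H -> 5(-2) + 5bar(+2).
def gut_scale_masses_sq(m16_sq, m10_sq, mD_sq):
    return {
        # 10 of SU(5): Q, u_R, e_R (charge +1)
        'mQ2': m16_sq + mD_sq, 'mu2': m16_sq + mD_sq, 'me2': m16_sq + mD_sq,
        # 5bar of SU(5): d_R, L (charge -3)
        'md2': m16_sq - 3 * mD_sq, 'mL2': m16_sq - 3 * mD_sq,
        # SU(5) singlet: right-handed sneutrino (charge +5)
        'mnu2': m16_sq + 5 * mD_sq,
        # MSSM Higgs doublets from the 10_H (charges -2 and +2)
        'mHu2': m10_sq - 2 * mD_sq, 'mHd2': m10_sq + 2 * mD_sq,
    }

# Placeholder inputs in TeV^2 (not the benchmark of the text):
for name, val in gut_scale_masses_sq(m16_sq=9.0, m10_sq=9.0, mD_sq=2.0).items():
    flag = '  <-- would be tachyonic at the GUT scale' if val < 0 else ''
    print(f"{name} = {val:6.2f} TeV^2{flag}")
```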
Table 1 shows different boundary conditions for a selection of possible representations of the mediating field [22]. In the simplest case, the mediator field is in the singlet representation, in which case the matching conditions at the GUT scale are: Other choices can have advantages, such as improved Yukawa unification [25]. Other examples are models with negative μ H which can be made compatible with the experimental value of the anomalous magnetic moment of the muon, a μ , by making μM 2 positive through the choice of a configuration with negative M 2 from Table 1. In models that undergo gauge mediated supersymmetry breaking, this non-universality emerges naturally at the messenger scale due to the nature of the breaking. At this messenger scale, usually around or above 10 6 GeV, the masses of gauginos are induced by one-loop corrections involving messenger fields, and are of the form Dine and Nelson [26] M a = α a 4π n a n a , where is the relative splitting of the fermionic and scalar parts of the messenger superfields (source of supersymmetry breaking) and n a is the Dynkin index of the messenger fields in the SM subgroup a. In this case there can be two sources of non-universality: first, there is a natural splitting due to the different values of the gauge couplings α a , and second, the sum of the Dynkin indices could naively be different for the three gauge groups. However, if The EW ratios take into account the approximate effect of the RGE running on the gaugino masses. these messengers come in complete representations of the unified group (in order to preserve the unification of gauge couplings), the sum of the Dynkin indices is the same for all three gauginos. In this case, the only splitting at the messenger scale comes from the different values of α a , which can be rather small, and depends mostly on the messenger scale. In this paper we will focus only on mSUGRA-inspired scenarios, where the only non-universality in the gaugino mass comes from the SO(10) representation of the mediator field. Unless otherwise stated, we will consider universal gauginos at the GUT scale, with mSUGRA induced supersymmetry breaking. The effect of having non-universal gauginos on the particle spectrum will be studied in section 4.3. RENORMALIZATION GROUP EVOLUTION Below the GUT scale, with the heavy gauge bosons and Higgs fields integrated out, the particle content of the minimal SUSY SO(10) model is the same as in the MSSM. We implicitly assume that the right-handed neutrinos and sneutrinos also decouple at or close to the GUT scale within a seesaw framework of light neutrino mass generation. Therefore the Renormalization Group Equations (RGEs) will be same as those of the MSSM but with different boundary conditions at the GUT scale. The complete RGEs for the MSSM and their approximate solutions are listed in Appendix A in Supplementary Material. In this section we will focus on the relevant consequences for the sparticle spectrum in the minimal SUSY SO(10) model using appropriate approximations. The RGEs for the scalar masses of the first two generations can be exactly solved at one loop by neglecting small Yukawa couplings. For the very same reason, there is no mixing between the left and right-handed squarks or sleptons under such an approximation. The RGEs are then given by with the gauge couplings g i and gaugino masses M i . 
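The electroweak-scale hierarchy M_1 : M_2 : M_3 ≈ 1 : 2 : 6 quoted above for universal gaugino masses follows from the one-loop statement that M_i/g_i² is RG-invariant. A minimal sketch using the standard MSSM one-loop beta coefficients and an illustrative α_GUT ≈ 1/24 at M_GUT ≈ 2 × 10^16 GeV:

```python
import numpy as np

# One-loop MSSM running from the GUT scale down to 1 TeV (sketch).  At one loop
# M_i / g_i^2 is RG-invariant, so universal gaugino masses at M_GUT give
# M_i(mu) = m_half * alpha_i(mu) / alpha_GUT.  Inputs are illustrative.
b = np.array([33.0 / 5.0, 1.0, -3.0])        # MSSM beta coefficients (GUT-normalised U(1))
alpha_gut, M_gut, mu, m_half = 1.0 / 24.0, 2.0e16, 1.0e3, 1.0   # m_half in TeV

inv_alpha = 1.0 / alpha_gut - b / (2.0 * np.pi) * np.log(mu / M_gut)
alpha = 1.0 / inv_alpha
M = m_half * alpha / alpha_gut               # gaugino masses at mu = 1 TeV

print("alpha_1, alpha_2, alpha_3 at 1 TeV:", np.round(alpha, 4))
print("M1 : M2 : M3 =", np.round(M / M[0], 2))   # roughly 1 : 2 : 6
```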
The term S is defined as Although S has a dependence on all the scalar masses, this particular combination turns out to be exactly solvable, and the solution depends only on the gauge couplings and the value of S at the GUT scale. However, in the case that all scalar masses are universal, i.e., have the same value at the GUT scale, this term vanishes. It therefore has the role of quantifying the non-universality of a model. In our particular case, the universality is violated due to the appearance of the D-term, and so the only contribution left from this S term is proportional to m 2 D . Thus the masses for all first and second generation squarks and sleptons can be expressed analytically as [20] (14) where the C (n) a are constants, defined as The electroweak D-terms D i are defined in Equation (5) and they are usually sub-dominant to the soft scalar masses. The constants C (n) a depend only on the gauge couplings. However, there is a non-trivial dependence on tan β within the electroweak D-terms. Since they are essentially negligible, we fix tan β to the value in the benchmark scenario described in Equation (9), tan β = 39. The scalar masses for the 1st and 2nd generation squarks and sleptons can then be numerically written as For illustration, Figure 3 shows the running of the scalar masses in a representative example scenario. As the usual MSSM RGE running is driven by the gaugino mass m 1/2 , the additional impact of the SO(10) D-term is roughly determined by the ratio m 2 D /m 2 1/2 . For m 2 D /m 2 1/2 1, the spectrum will be of the usual CMSSM type, whereas for m 2 D /m 2 1/2 1, the impact of the SO(10) D-term on the sparticle spectrum will be sizeable. (14, 16) depend on the model parameters m 2 16 F , m 2 D , and m 1/2 with the same or very similar coefficients. We use this to construct linear combinations of these masses that depend on a reduced number of parameters, which will become very useful when trying to find an optimal scenario in the parameter space. The first combination to consider is among the particles belonging to different multiplets in the SU(5) subgroup of SO (10). Due to the presence of the D-terms this combination will induce a large splitting between the left and right handed squarks and sleptons, given by Different sparticle masses in the Equations Secondly, the splitting between those masses with similar D-term contributions, i.e., those supersymmetric particles that belong to the same multiplet in the SU(5) subgroup of SO (10) is given by These splittings are largely driven by the gauge contributions proportional to m 1/2 also present in the CMSSM. Nevertheless, a large SO(10) D-term m 2 D can appreciably contribute to the splitting for small m 1/2 . Thirdly, a small splitting is caused by the EW D-terms in the left-handed squarks and the left-handed sleptons, which, belonging to the same SU(2) multiplet, are quasi-degenerate, with a splitting proportional to M 2 Z , www.frontiersin.org May 2014 | Volume 2 | Article 27 | 5 The above relations are obtained by using only the 1-loop solution of the RGEs which may not be accurate for large values of m 2 D . We calculate the 2-loop corrections using the approximation discussed in Appendix A in Supplementary Material and find that these contributions are, at most, for the first two and the third generations, respectively. As expected, for large values of the parameters these contributions can be significant and hence we will take them into account in our analysis. 
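That the only surviving contribution to S is proportional to m²_D can be seen directly from the GUT-scale boundary conditions: the sfermion terms cancel generation by generation, leaving only the Higgs-doublet splitting. A short check, using the same charge assignments as in the previous sketch (the overall sign is convention-dependent):

```python
# Hypercharge trace S at the GUT scale (sketch).  With universal soft masses S = 0;
# the SO(10) D-term leaves a piece proportional to mD_sq (sign convention-dependent).
def S_term(m16_sq, m10_sq, mD_sq, n_gen=3):
    mQ2 = mu2 = me2 = m16_sq + mD_sq        # D-term boundary conditions as above
    md2 = mL2 = m16_sq - 3 * mD_sq
    mHu2 = m10_sq - 2 * mD_sq
    mHd2 = m10_sq + 2 * mD_sq
    sfermions = n_gen * (mQ2 - mL2 - 2 * mu2 + md2 + me2)   # vanishes identically
    return mHu2 - mHd2 + sfermions

print(S_term(m16_sq=9.0, m10_sq=9.0, mD_sq=0.0))   # universal case: 0
print(S_term(m16_sq=9.0, m10_sq=9.0, mD_sq=2.0))   # -4 * mD_sq = -8.0
```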
REINTERPRETATION OF SQUARK AND GLUINO LIMITS The most stringent limits on superpartner masses currently come from searches for strongly interacting superpartners, namely squarks and gluinos. LHC searches based on multiple jets and missing energy currently exclude squark masses up to the order of 2 TeV and gluino masses up to the order of 1 TeV, depending on the model used for the interpretation [27,28]. In this section, we determine how these limits translate to the SUSY SO(10) parameters. The supersymmetric SO(10) model has two parameters that affect the squark masses at tree level, m^2_16F and m^2_D. In particular, a non-zero m^2_D results in a splitting between left- and right-handed squarks. Therefore, the CMSSM simplification that all squarks of the first two generations are nearly degenerate is lost. For this analysis, we have retained the universal gaugino sector, meaning the gaugino masses originate from a common parameter at the GUT scale, leading to a ratio M 1 : M 2 : M 3 = 1 : 2 : 6 at the electroweak scale. We factorize the problem of estimating the final cross section after cuts into two steps. Firstly, we analytically calculate the production cross sections and the branching fractions. Secondly, we estimate the efficiencies of the cuts in each production mode for the jets+MET search channels reported by ATLAS using Monte Carlo simulation. The efficiency of the cuts is calculated using a simplified model with two parameters, the gluino mass and a common squark mass. There are four production modes that result in jets+MET final states, namely gluino-gluino, squark-squark, squark-antisquark and squark-gluino production. We assume each squark decays into a quark and the lightest neutralino, and the gluino decays into a quark and a squark if the gluino is heavier than the squark, or through the three-body mode into a quark pair and the lightest neutralino otherwise. As a consistency check, we reproduce the ATLAS limits based on [27] for a simplified model where all squarks are degenerate and the lightest (bino-dominated) neutralino is the LSP with a mass a sixth of the gluino mass. The comparison is shown in Figure 4, where the CMSSM model with all first-generation squarks degenerate (ũ_L, d̃_L, ũ_R, d̃_R) is plotted in green and the observed ATLAS limit in dashed black. The Monte Carlo simulation was performed using Pythia 8 [29][30][31] with Gaussian smearing of the momenta of the jets and leptons as a theorist's detector simulation. Figure 4 demonstrates that we approximately reproduce the exclusion limit reported by ATLAS in our simulation. To investigate the change in the ATLAS limits given a non-zero m^2_D, we use two separate simplified models. First, corresponding to m^2_D > 0, we have the case where the right-handed, down-type squarks are much lighter than the rest. We approximate this by setting the right-handed down and strange squark masses and the lightest sbottom mass equal to a common squark mass, with all other squark masses set to 10 TeV. Second, corresponding to m^2_D < 0, we use the complementary simplified model. SUMMARY OF OTHER LHC SUSY SEARCHES After the first run of the LHC, a great amount of data has been analyzed and comprehensive searches for supersymmetric signals have been carried out. Both ATLAS and CMS have performed an extensive survey of many different scenarios and studied the collected data in the most model-independent way possible, so as to exclude as much of the SUSY parameter space as possible. We summarize here the exclusion limits for some of the supersymmetric particles: Stops and sbottoms Stops are produced at the LHC mostly through the s-channel, and the primary decay modes are a stop decaying into a top quark and a neutralino or into a bottom quark and a chargino. The final states studied have the signature 4j + l + MET, with zero to three b-tags, and the current lower limit on the stop mass is around 650 GeV.
However, if the stop is not allowed to decay to an on-shell top, m˜t < m t + mχ0 , the decay phase space is reduced and the process is suppressed which weakens the limit to m˜t 250 GeV. Searches for sbottoms are similar to those for stops, with similar production rates and complementary decays,b → bχ 0 andb → tχ ± . Consequently, the mass limits are similar, mb 650 GeV [32][33][34][35][36]. Sleptons, neutralinos, and charginos Although electroweak processes at the LHC are several orders of magnitude smaller than strong ones, the precision of the measurements done by ATLAS and CMS is good enough to provide a limit of m˜l 300 GeV. Similar to the sleptons, the limits on the neutralinos and charginos are considerably weaker than those of gluinos and squarks. Using purely electroweak processes such asχ 0 2χ ± → Zχ 0 W ±χ 0 orχ 0 2χ ± → lνll(νν), both LHC experiments have currently excluded masses up to mχ 300 GeV [37][38][39][40]. Finally, the extra Higgs states predicted by supersymmetry have also been subject to scrutiny. However, due to the strong dependence on the parameters in the MSSM (particularly tan β), the limits are not very strong. As of today, the limits seem to favor tan β 18 and Higgs masses around or above that of the found Higgs state, m H,A,H ± 100 GeV [41][42][43][44]]. ANALYSIS The SUSY SO(10) model has seven free parameters, m 2 16 F , m 2 10 H , m 1/2 , m 2 D , A 0 , tan β, sign(μ), when no constraints are imposed. We will use existing experimental limits to fix or constrain some of these model parameters using the results of section 3, focusing on the most interesting deviations from the standard CMSSM scenario. As discussed above, there is a lower limit on the mass of the lightest squark, at mq 2 TeV within the framework of the CMSSM. With the degeneracy of all scalar particles at the GUT scale, this bound also forces the sleptons to become heavy, usually well beyond the direct detection slepton mass limits. However, in the minimal SUSY SO(10) model, it is possible to evade the squark limits while keeping the slepton masses light, possibly at the level of experimental detectability. We will therefore seek to explore the model parameter space with a large splitting between the squark and slepton masses by taking advantage of the relation (18). Even in the CMSSM, one may obtain relatively light sleptons (compared to squarks) by increasing the RG running effect of the strong gauge coupling by increasing m 1/2 . A large value of m 1/2 is actually required due to the corresponding gluino mass limit mg 1 TeV. For a fixed squark mass, this approach has the disadvantage that it will also raise the lightest neutralino mass which is the preferred Lightest Supersymmetric Particle (LSP) candidate. In order to have the lightest neutralino lighter than any charged sparticle for as much of the parameter space as possible, we will fix the value of m 1/2 so as to produce a gluino with a mass roughly at the current limit, mg ≈ 1 TeV. The only other free parameter in Equation (18) is m 2 D , which has a comparatively small contribution toward the splitting. This is because the scalar species under consideration belong to the same SU(5) multiplets and the splitting is caused by a secondary effect in the RGEs. Notice also that the splitting for the5 and 10 multiplets has opposite signs in their dependence on m 2 D , cf. Equation (17), i.e., for m 2 D 0,ẽ L ,d R will be the lighter states whereas for m 2 D 0 it will beẽ R andũ L . 
We will therefore look for a region of parameter space where, by increasing m 2 D in both positive and negative directions, we achieve a large splitting between squarks and sleptons. Since m 1/2 is fixed, as stated above, and in order to keep the mass of the lightest first generation squark (mq) fixed to the lowest allowed value, we express m 2 16 F as a function of the other model parameters and the desired squark mass mq, where the constants c i are taken from Equation (16) for the corresponding squark species and δ 2 is the 2-loop correction to the mass of the lightest squark. The latter is significant for large |m 2 D | and m 2 16 F . The limit of this procedure is reached as soon as one of the particles becomes tachyonic (negative squared mass) at the electroweak scale. Due to the large third generation Yukawa couplings, especially for the top quark, the third generations of sparticles are usually lighter than the first two. We will consider this case first in the following section. In section 4.2, we will describe the possibility of having the first two generations lighter than the third by compensating the RG effect of the Yukawa couplings. To conclude, in section 4.3, we will study the additional impact of non-universal gauginos on the sparticle spectrum. LIGHT THIRD GENERATION Starting with the benchmark scenario described in Equation (9), and parameters set by the current LHC limits we will perform a scan over m 2 D to analyze how the masses of different sparticles behave. To achieve a light but viable SUSY spectrum, the value of m 1/2 is fixed such that mg = 1 TeV at the current exclusion limit. The value of m 2 16 F is then determined so as to keep the lightest squark at a mass of 2 TeV for a given m 2 D . Please note that while the limit on the squark mass is reduced for m 2 D 0, cf. section 3.1, we will use mq = 2 TeV in all cases for easy comparison. The remaining model parameters are thus fixed as unless otherwise noted. Figure 5 shows the dependence of the masses on m 2 D for both scenarios, using the 2-loop RGEs described in Appendix A in Supplementary Material. Most obviously, the splitting between the sparticles in different representations of SU (5) we have obtained, in a rather natural way, very light stops, sbottoms and staus, while the rest of the scalars are above 1 TeV. This is consistent with current experimental data [32,34] and would provide a natural solution to the hierarchy problem, with a reasonable fine tuning due to light stops and sbottoms. We have, however, chosen a mass for the gluino fixed at 1 TeV resulting in relatively light neutralinos, mχ0 1 ≈ 150 GeV. In addition to the low energy sparticle masses, Figure 5 also shows the derived value of the Higgs μ H term, and the soft mass m 16 F at the GUT scale, respectively. An example sparticle spectrum for this scenario is shown in Figure 10 (left) for m 2 D = −(1.83 TeV) 2 . The impact of different values for m 1/2 can be seen in Figure 6 (left) where the allowed (m 2 D , m 1/2 ) space is shown. Also displayed are the lightest slepton mẽ, the lightest stau mτ 1 and the lightest sbottom mb 1 mass. The outer, shaded (brown) area is excluded because there is at least one tachyonic state, usually the sbottom. The enclosing (orange) band denotes the parameter space where the neutralinoχ 0 1 is not the LSP. The bottom (blue) band is excluded by the gluino mass limit from the direct searches described in section 3, (mg 1.1 TeV). 
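The strategy of trading m^2_16F for a fixed lightest-squark mass can be sketched as follows. The RG coefficients below are generic stand-ins of the usual CMSSM magnitude rather than the actual constants of Eq. (16), the two-loop shift δ_2 is dropped, and the parameter point is hypothetical; the purpose is only to show the inversion and the subsequent check for tachyonic states.

```python
import numpy as np

# Sketch: choose m2_16F so the lightest first/second-generation squark sits at a
# target mass, then inspect the rest of the spectrum.  Coefficients C are
# illustrative stand-ins (NOT the constants of Eq. (16)).  TeV units throughout.
C = {'squark': 6.3, 'slepton_L': 0.52, 'slepton_R': 0.15}   # coefficient of m_half^2
Q = {'Q': +1, 'uR': +1, 'eR': +1, 'dR': -3, 'L': -3}        # U(1)_X charge of each species

def spectrum_sq(m16_sq, mD_sq, m_half):
    g = m_half**2
    return {'Q':  m16_sq + Q['Q']  * mD_sq + C['squark'] * g,
            'uR': m16_sq + Q['uR'] * mD_sq + C['squark'] * g,
            'dR': m16_sq + Q['dR'] * mD_sq + C['squark'] * g,
            'L':  m16_sq + Q['L']  * mD_sq + C['slepton_L'] * g,
            'eR': m16_sq + Q['eR'] * mD_sq + C['slepton_R'] * g}

def m16_sq_for_target(m_target, mD_sq, m_half):
    """Invert the 1-loop formula so the lightest squark mass equals m_target."""
    q_light = -3 if mD_sq > 0 else +1        # dR lightest for mD^2 > 0, Q/uR otherwise
    return m_target**2 - q_light * mD_sq - C['squark'] * m_half**2

mD_sq, m_half, m_target = 1.5**2, 0.45, 2.0   # hypothetical parameter point
m16_sq = m16_sq_for_target(m_target, mD_sq, m_half)
print(f"required m2_16F = {m16_sq:.2f} TeV^2")
for species, msq in spectrum_sq(m16_sq, mD_sq, m_half).items():
    label = "tachyonic" if msq < 0 else f"{np.sqrt(msq):.2f} TeV"
    print(f"  {species:3s}: {label}")
```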
We can clearly see that increasing m 1/2 has the effect of lowering the masses of all the affected sparticles, particularly the sleptons, cf. Equation (18). However, the mass of the lightest neutralino increases with m 1/2 , and for mχ0 1 ≈ 0.4 TeV, one ofτ 1 ,ẽ, orb 1 becomes lighter. For m 1/2 close to the upper limit, m 1/2 ≈ 0.9 TeV, either the lightest stau or selectron is the NLSP. In order to have a better understanding why the third generation squarks are so light compared to their first and second generation counterparts, Figure 6 (right) displays the corresponding properties in the (m 2 D , A 0 ) parameter plane. Notice that for the sbottom and the stau, the effects of large m 2 D and large A 0 are similar, i.e., they both push the masses down. As a matter of fact, we can actually see that the sbottom is only the lightest for large A 0 (as was the case in Figure 5), but is heavier than the stau for small A 0 , and can even be rather heavy (mb 1 ≈ 2.4 TeV). The effect of A 0 on the first and second generation slepton mass is negligible due to the small Yukawa couplings, and we do not show it in the plot. LIGHT FIRST GENERATION As described above, the lightest sbottom and stop generically constitute the lightest sfermion states, except for large values of m 1/2 and |m 2 D |. The well known reason for this suppression, also with respect to the first two squark generations, are the large third generation Yukawa couplings which drive the masses down through RGE running. If we look into the terms in the RGEs proportional to the Yukawa couplings (see Appendix A in Supplementary Material), we find that they have the following dependence at the one loop level, Hence, in order to minimize this contribution, we need to compensate the increasingly large values of m 2 16 F with equally large and opposite sign values of m 2 10 H + A 2 0 . If we want to keep the trilinear couplings real, the best choice for this would be A 0 = 0 and m 2 10 H = −2m 2 16 F . Including two loop corrections to the masses, one needs to increase this proportionality by about 5-10% to compensate the suppression of the stau, stop and sbottoms masses with respect to the first two generations. In the following we will use the relation m 2 10 H = −2.1m 2 16 F . This clearly defines a rather fine-tuned solution as the Yukawa couplings are a priori unrelated to the soft SUSY breaking parameter. We nevertheless study this case as an extreme departure from the generic picture described in section 4.1. In summary, the base model parameters used in this section are described by unless otherwise noted. Figure 7 shows the effect of approximately compensating the third generation Yukawa couplings on the sparticle masses. We see that indeed the third generation sparticles are heavier than their first generation counterparts. In comparison with Figure 5, the SO(10) D-term m 2 D can be larger, up to m 2 D (5 TeV) 2 , in turn producing a wider splitting between the lightest squarks and the lightest sleptons. On the other hand, the heavy squarks and sleptons would be split off considerably, with masses up to 10 TeV. This is a clear example of a Split-SUSY [45][46][47][48] scenario, exhibiting a three-fold splitting: Very light sleptons ≈ 0.1-0.2 TeV, lightest squarks around 2-4 TeV and Equation (22). The colored areas are excluded or disfavored because there is at least one tachyonic state (brown), the neutralino is not the LSP (orange), the gluino mass is below the experimental limit (blue). 
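The compensation used in this section is transparent at the GUT scale: the top-Yukawa terms in the RGEs are driven by the combination m^2_Hu + m^2_Q3 + m^2_u3 + A_t^2, which with the SO(10) boundary conditions reduces to 2 m^2_16F + m^2_10H + A_0^2 (the D-term drops out), and this vanishes for A_0 = 0 and m^2_10H = −2 m^2_16F; the −2.1 factor quoted above absorbs two-loop effects. A minimal check with illustrative numbers:

```python
# GUT-scale value of the top-Yukawa RGE driver X_t = m2_Hu + m2_Q3 + m2_u3 + A_t^2
# with the SO(10) D-term boundary conditions (illustrative TeV^2 inputs).
def X_top_at_gut(m16_sq, m10_sq, mD_sq, A0):
    m2_Q3 = m2_u3 = m16_sq + mD_sq           # third-generation squarks (charge +1)
    m2_Hu = m10_sq - 2.0 * mD_sq             # up-type Higgs doublet (charge -2)
    return m2_Hu + m2_Q3 + m2_u3 + A0**2     # = 2*m16_sq + m10_sq + A0^2: mD_sq cancels

m16_sq = 25.0
print(X_top_at_gut(m16_sq, m10_sq=-2.0 * m16_sq, mD_sq=10.0, A0=0.0))  # 0.0 -> suppression off
print(X_top_at_gut(m16_sq, m10_sq=m16_sq, mD_sq=10.0, A0=3.0))         # large -> light stops
```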
very heavy squarks and sleptons at 9-10 TeV. An example sparticle spectrum for this scenario is shown in Figure 10 (right) for m 2 D = +(4.87 TeV) 2 . The combined dependencies on m 2 D and either m 1/2 or A 0 is displayed in Figure 8. The excluded or disfavored shaded areas are defined as before in Figure 6. We do not plot the lightest sbottom mass in Figure 8 (left) as it is too heavy to be of interest here. The main difference from the light third generation case displayed in The dependence on A 0 , Figure 8 (right) in this case is also rather different from Figure 6 (right). While the stau mass exhibits a similar behavior, the sbottom mass becomes heavier with increasing |m 2 D | but lighter with increasing A 0 . This is expected as we do not compensate the effect of A 0 on the Yukawadriven RGE contributions. As a consequence, the lightest sbottom will become the lightest sfermion for large A 0 3 TeV. The scenario described here would be optimal for sleptons searches at LHC because it allows for very light first, second and also third generation sleptons. Naively, one might expect that the presence of very light (left-handed) smuons is able to enhance the predicted value of the anomalous magnetic moment of the muon closer to the experimentally favored value, a μ ≡ a exp μ − a SM μ = (26.1 ± 8.0) × 10 −10 [49]. This is because the supersymmetric contributions to a μ are driven by muon sneutrino-chargino and smuon-neutralino loops. Unfortunately, the SUSY scenarios considered here require a large Higgs μ-term μ H as shown in Figures 5, 7. For a strongly split scenario as in our case, the SUSY contribution is roughly [50,51] a SUSY μ 10 −8 × tan β 10 with the lightest gaugino mass M 1 . Consequently, a strongly split scenario with large |m 2 D | in minimal SUSY SO(10) does not enhance a SUSY μ appreciably compared to the standard CMSSM case. NON-UNIVERSAL GAUGINOS As a final step of our analysis, we will briefly comment on the impact of non-universal gauginos at the GUT scale. In Table 1 FIGURE 8 | As Figure 6, but with the remaining model parameters fixed as described in Equation (24). we see that there are three representative cases: (a) The messenger field is in the singlet representation of the SU(5) embedded in SO (10). This corresponds to the standard universal case with an approximate gaugino hierarchy of |M 1 | : |M 2 | : |M 3 | = 1/6 : 1/3 : 1 near the EW scale, which we have discussed above. (b) The messenger is in the 24-dimensional representation. Here, the bino is comparatively lighter than in the CMSSM, with an approximate gaugino hierarchy of |M 1 | : |M 2 | : |M 3 | = 1/12 : 1/2 : 1 near the EW scale. This is phenomenologically interesting as it creates a larger splitting between the lightest neutralino (essentially the bino) and the gluino. It potentially permits a very light neutralino while satisfying the direct gluino mass limits, cf. section 3. For example, for a gluino mass at the current limit, mg ≈ 1.1 TeV, the lightest neutralino could be lighter than mχ0 1 ≈ 100 GeV, subject to direct search limits, cf. section 3.2. On the other hand, the ratio between M 2 and M 3 is smaller than that of normal CMSSM, making the second neutralino and lightest chargino slightly heavier. Such a change will for instance suppress the SUSY contribution to the anomalous magnetic moment of the muon. The largest contribution comes from a sneutrinochargino loop, and the experimental situation would prefer both the SU(2) gaugino and the sleptons to be light. 
(c) The messenger is in the 200-dimensional representation, corresponding to a low energy hierarchy |M 1 | : |M 2 | : |M 3 | = 5/3 : 2/3 : 1. The spectrum is rather different here, with the bino being the heaviest gaugino, while the mass of the wino is approximately 2/3 of the gluino mass. Hence, the lightest neutralino would be mostly wino and would have a relatively large mass for a given gluino mass, compared to the previous case. Other than the direct effect on the gaugino masses, the presence of non-universal gauginos at the GUT scale will also affect the masses of the scalar SUSY particles due to the impact on the RGE running. So far we have calculated scalar particle masses assuming degenerate gauginos at the GUT scale, resulting in a term ∝ m 2 1/2 as the main RGE effect on the scalar masses, see for example Equation (18). Allowing for arbitrary individual gaugino masses M 1 , M 2 , and M 3 at the GUT scale, these equations will take the form By far the largest contribution is due to the strong gauge effect of the gluino affecting the squarks. In fixing the gluino mass as mg ≈ 1.1 TeV in tune with the experimental bound, we essentially set the scale of the absolute squark masses. The gaugino non-universality will then induce an additional splitting between the squarks and sleptons, dominantly driven by the wino mass M 2 . A comparison of the three cases is shown in Figure 9, i.e., CONCLUSIONS Supersymmetric models are feeling the pinch from the lack of new physics signals at the LHC and in low energy observables. While any phenomenological limits can be evaded by sending the SUSY particle masses to higher scales, such a solution will usually negate the ability of many SUSY models to solve the hierarchy problem of the Standard Model. Minimal scenarios, such as the CMSSM are especially difficult in this regard as the stringent lower limits from LHC direct searches on colored states will similarly affect non-colored sparticles. As a consequence, there is now much effort going into the study of less constrained models of low energy SUSY with a large variety of spectra. For example, phenomenological approaches like the phenomenological MSSM do not contain a priori relations between different sparticle species. In this work, we focused on the other hand on a minimal supersymmetric SO(10) model incorporating one-step symmetry breaking from SO(10) down to the Standard Model gauge group at the usual GUT scale. Such SUSY GUT scenarios are of course very well motivated with the possibility of unifying the gauge and Yukawa couplings at the GUT scale. With respect to the SUSY spectrum, the GUT unification also provides a motivation for the degeneracy of the soft SUSY breaking masses and couplings. In contrast to the CMSSM though, the scalar masses in an SO (10) GUT are shifted by D-terms associated with the breaking of SO (10) to the lower-rank SM group. These D-terms do depend on the details of the gauge breaking but are generally expected to be of the order of the SUSY breaking scale (for example described by SUSY breaking mass m 2 16 F of the matter SO(10) 16-plet), and can be parametrized by a single additional quantity m 2 D . This provides a controlled departure from the degeneracy of the CMSSM. In addition, we also briefly discuss the possibility of non-universal gaugino masses at the GUT. This is a general possibility in SUSY GUT models with gravity mediated breaking if the SUSY breaking messenger is not a singlet under the GUT gauge group. 
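The low-scale hierarchies quoted for the messenger cases (a)-(c) in section 4.3 follow from one-loop running of the GUT-scale ratios, since M_i scales with α_i. The sketch below assumes the commonly quoted GUT-scale gaugino mass ratios for a messenger in the 1, 24 and 200 representations (1 : 1 : 1, −1/2 : −3/2 : 1 and 10 : 2 : 1), taken here to coincide with those of Table 1:

```python
import numpy as np

# Low-scale gaugino hierarchies for the three messenger representations (sketch).
# GUT-scale ratios are the commonly quoted ones; at one loop M_i(mu) ~ alpha_i(mu).
b = np.array([33.0 / 5.0, 1.0, -3.0])                      # MSSM beta coefficients
alpha_gut, M_gut, mu = 1.0 / 24.0, 2.0e16, 1.0e3
alpha = 1.0 / (1.0 / alpha_gut - b / (2.0 * np.pi) * np.log(mu / M_gut))

gut_ratios = {'singlet (1)': [1.0, 1.0, 1.0],
              '24':          [-0.5, -1.5, 1.0],
              '200':         [10.0, 2.0, 1.0]}

for rep, ratios in gut_ratios.items():
    M = alpha / alpha_gut * np.array(ratios)               # low-scale masses (M3_GUT = 1 unit)
    hierarchy = " : ".join(f"{x:.2f}" for x in np.abs(M) / abs(M[2]))
    print(f"{rep:12s} |M1| : |M2| : |M3| = {hierarchy}")
```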
We have considered three scenarios: Firstly, starting from a non-universal Higgs mass benchmark scenario, cf. Equation (22), we studied the impact of the D-term m 2 D on the sparticle spectrum, especially on the possibility to obtain light third generation squarks and sleptons. In particular, we found that for m 2 D −m 2 16 F , both stops, the lightest sbottom and the lightest stau can be very light, while the first generation squarks and sleptons are heavy. An example spectrum is shown in Figure 10 (left) for m 2 D ≈ −(1.8 TeV) 2 ≈ −0.5 × m 2 16 F . Such a spectrum can be viable as a solution to the hierarchy problem as it keeps the fine tuning under control. It belongs to a class of Split-SUSY scenarios with a compressed spectrum [52][53][54], with the lightest stop too light to decay into a top and the lightest neutralino. The LHC limit on the stop mass for this case is much more relaxed that in other scenarios. With a light stop mass just above the LHC limit for a compressed spectrum, m˜t 1 250 GeV, a rough estimate of the fine tuning would be M 2 SUSY /m 2 t ≈ m˜t 1 m˜t 2 /m 2 t ≈ 5. Secondly, we extended the previous case to make the first generation light, by way of changing the soft Higgs mass m 2 10 H . While this presents a rather extreme scenario which is fine-tuned to cancel the Yukawa contribution of the third generation states, it demonstrates the potential to deviate from the usual light stop/sbottom/stau case (although this is usually preferred due to naturalness considerations). The direct LHC limits on first and second generation slepton masses are still comparatively weak and can accommodate light sleptons m˜l 300 GeV. An example spectrum for this case is shown in Figure 10 (right) for m 2 D ≈ +(4.9 TeV) 2 ≈ 0.3 × m 2 16 F , resulting in a severely split scenario. Consequently, it requires a considerable fine-tuning, not only by manually engineering the light selectrons, but also due to the necessary cancelations of the large contributions to the Higgs mass from the heavy stops, M 2 SUSY /m 2 t ≈ mt 1 m˜t 2 /m 2 t ≈ 3 × 10 3 . As mentioned, the main purpose of the two limiting examples provided here is to define a rough range of possible spectra in the minimal SUSY SO(10) model with large D-terms. If taken seriously, a spectrum with light first generation sleptons would naively be advantageous to explain the apparent discrepancy between the measured value of the anomalous magnetic moment of the muon a μ and its SM prediction. Unfortunately, due to the splitting between left-and right-handed smuons in combination with the large Higgs μ-term, it is not possible to appreciably raise the SUSY contribution to a μ . For m 2 D 0, only the right-handed down-type squarks will be light and, as we have demonstrated, this weakens the current direct LHC limit on the corresponding squark masses from mq 2 TeV to mq 1 TeV. Finally, we have also briefly looked at the case of non-universal gauginos at the GUT scale. In addition to the universal case, we studied two different choices for the representation of the messenger fields; one where the messenger is in the 24 representation of the SU(5) subgroup embedded in SO (10), and one where it is in the 200 representation. The former leads to a lighter, binolike lightest neutralino, but it negligibly affects the scalar particle masses. The latter case, leading to bino heavier than the gluino and a wino-like lightest neutralino, has a greater impact on the scalar SUSY particle masses. 
Both cases can of course affect the possible decay channels and therefore the visible signatures in detail. For example, raising the neutralino masses will facilitate the realization of compressed spectra and open the possibility of stop-neutralino co-annihilation, which affects the dark matter relic density of the universe.
11,180.4
2014-03-10T00:00:00.000
[ "Physics" ]
Littlest Seesaw We propose the Littlest Seesaw (LS) model consisting of just two right-handed neutrinos, where one of them, dominantly responsible for the atmospheric neutrino mass, has couplings to (νe, νμ, ντ) proportional to (0, 1, 1), while the subdominant right-handed neutrino, mainly responsible for the solar neutrino mass, has couplings to (νe, νμ, ντ) proportional to (1, n, n − 2). This constrained sequential dominance (CSD) model preserves the first column of the tri-bimaximal (TB) mixing matrix (TM1) and has a reactor angle θ13∼n−123m2m3\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\theta}_{13}\sim \left(n-1\right)\frac{\sqrt{2}}{3}\frac{m_2}{m_3} $$\end{document}. This is a generalisation of CSD (n = 1) which led to TB mixing and arises almost as easily if n ≥ 1 is a real number. We derive exact analytic formulas for the neutrino masses, lepton mixing angles and CP phases in terms of the four input parameters and discuss exact sum rules. We show how CSD (n = 3) may arise from vacuum alignment due to residual symmetries of S4. We propose a benchmark model based on S4 × Z3 × Z3′, which fixes n = 3 and the leptogenesis phase η = 2π/3, leaving only two inputs ma and mb = mee describing Δm312, Δm212 and UPMNS. The LS model predicts a normal mass hierarchy with a massless neutrino m1 = 0 and TM1 atmospheric sum rules. The benchmark LS model additionally predicts: solar angle θ12 = 34°, reactor angle θ13 = 8.7°, atmospheric angle θ23 = 46°, and Dirac phase δCP = −87°. JHEP02(2016)085 1 Introduction The discovery of neutrino oscillations, implying mass and mixing, remains one of the greatest discoveries in physics in the last two decades. Although the origin of neutrino mass is presently unknown (for reviews see e.g. [1][2][3][4]), whatever is responsible must be new physics beyond the Standard Model (BSM). For example, the leading candidate for neutrino mass and mixing is the seesaw mechanism involving additional right-handed neutrinos with heavy Majorana masses [5][6][7][8][9][10], providing an elegant explanation of the smallness of neutrino mass. 1 However, in general, the seesaw mechanism typically involves many parameters, making quantitative predictions of neutrino mass and mixing challenging. In this respect, the seesaw mechanism offers no more understanding of flavour than the Yukawa couplings of the SM. Indeed it introduces a new flavour sector associated with right-handed neutrino Majorana masses, which cannot be directly probed by high energy particle physics experiment. Clearly a different approach is required to make progress with the new (or nu) Standard Model that involves the seesaw mechanism. Here we shall make use of the theoretical touchstones of elegance and simplicity (which indeed motivate the seesaw mechanism in the first place) to try to allow some experimental guidance to inform the high energy seesaw mechanism. If the assumptions we make prove to be inconsistent with experiment then we must think again, otherwise the framework of assumptions remains viable. In this paper, then, we focus on natural implementations of the seesaw mechanism, where typically one of the right-handed neutrinos is dominantly responsible for the atmospheric neutrino mass [12,13], while a second subdominant right-handed neutrino accounts for the solar neutrino mass [14]. 
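The TM1 property quoted in the abstract has a one-line origin: both alignment vectors (0, 1, 1) and (1, n, n−2) are orthogonal to the first tri-bimaximal column (2, −1, 1)/√6, so that column is an exact eigenvector of the light neutrino mass matrix with eigenvalue zero. A quick numerical illustration (the mass scales and phase below are placeholders, not fitted values):

```python
import numpy as np

# First TB column; the alignments (0,1,1) and (1,n,n-2) are orthogonal to it for
# any real n, so it is an exact eigenvector of m_nu with m1 = 0 (TM1 mixing).
v1 = np.array([2.0, -1.0, 1.0]) / np.sqrt(6.0)

for n in [1.0, 2.5, 3.0, 4.0]:                       # n need not be an integer
    phi_atm = np.array([0.0, 1.0, 1.0])
    phi_sol = np.array([1.0, n, n - 2.0])
    print(f"n = {n}: phi_atm.v1 = {phi_atm @ v1:+.1e}, phi_sol.v1 = {phi_sol @ v1:+.1e}")

# Illustrative mass matrix (placeholder scales m_a, m_b in eV and phase eta):
m_a, m_b, eta, n = 0.027, 0.0027, 2.0 * np.pi / 3.0, 3.0
phi_atm, phi_sol = np.array([0.0, 1.0, 1.0]), np.array([1.0, n, n - 2.0])
m_nu = m_a * np.outer(phi_atm, phi_atm) + m_b * np.exp(1j * eta) * np.outer(phi_sol, phi_sol)
print("|m_nu . v1| =", np.linalg.norm(m_nu @ v1))     # ~0: v1 is the massless TM1 state
```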
This idea of sequential dominance (SD) of right-handed neutrinos is an elegant hypothesis which, when combined with the assumption of a zero coupling of the atmospheric neutrino to ν e , leads to the generic bound θ 13 < ∼ m 2 /m 3 [15,16], which appears to be approximately saturated according to current measurements of the reactor angle. This bound was derived over a decade before the experimental measurement of the reactor angle. This success supports the SD approach, and motivates efforts to understand why the reactor bound is approximately saturated. In order to do this one needs to further constrain the Yukawa couplings beyond the assumption of a single texture zero as assumed above. The idea of constrained sequential dominance (CSD) is that the "atmospheric" righthanded neutrino has couplings to (ν e , ν µ , ν τ ) proportional to (0, 1, 1), while the "solar" right-handed neutrino has couplings to (ν e , ν µ , ν τ ) proportional to (1, n, n − 2) where n is a real number. It turns out that such a structure preserves the first column of the tri-bimaximal (TB) [17] mixing matrix (TM1), leading to the approximate result θ 13 ∼ (n − 1) √ 2 3 m 2 m 3 , which we shall derive here from exact results. This scheme is therefore a generalisation of the original CSD(n = 1) [18] which led to TB mixing and the prediction θ 13 = 0, which is now excluded. It is also a generalisation of CSD(n = 2) [19] which predicted θ 13 ∼ √ 2 3 m 2 m 3 , which was subsequently proved to be too small when more precise measurements of θ 13 were made. It seems we are third time lucky since CSD(n = 3) [20] 1 For a simple introduction to the seesaw mechanism see e.g. [11]. The new approach and results in this paper are summarised below: • The approach here is more general than previously considered, since we allow n to be a real number, rather than being restricted to the field of positive integers. The motivation is that the vacuum alignment vector (1, n, n − 2) is orthogonal to the first column of the TB matrix (2, −1, 1) (which in turn is orthogonal to the second and third TB columns (1, 1, −1) and (0, 1, 1)) for any real number n, emerges very naturally as depicted in figure 1. This provides a plausible motivation for considering the vacuum alignment direction (1, n, n − 2) for any real number n. We refer to the associated minimal models as the Littlest Seesaw (LS). The LS with CSD(n) predicts a normal mass hierarchy with a massless neutrino m 1 = 0, both testable in the near future. Actually the above predictions also arise in general two right-handed neutrino models. What distinguishes the LS model from general two right-handed neutrino models are the predictions for the lepton mixing angles and CP phases as discussed below. • For the general case of any real value of n ≥ 1, for the first time we shall derive exact analytic formulas for the neutrino masses, lepton mixing angles and CP phases (both Dirac and Majorana) in terms of the four input parameters. This is progress since previously only numerical results were used. We also show that CSD(n) is subject to the TM1 mixing sum rules and no other ones. From the exact results, which are useful for many purposes but a little lengthy, we extract some simple approximations which provide some rough and ready insight into what is going on. For example, the approximate result θ 13 ∼ (n − 1) √ 2 3 m 2 m 3 provides an analytic understanding of why CSD(n ≥ 5) is excluded, which until now has only been a numerical finding. 
• We show that the successful case of CSD(3) arises more naturally from symmetry in the case of S 4 , rather than using A 4 , as was done in previous work [20][21][22][23][24][25]. The reason is that both the neutrino scalar vacuum alignments (0, 1, 1) and (1, 3, 1) preserve residual subgroups of S 4 which are not present in A 4 . This motivates models based on S 4 , extending the idea of residual symmetries from the confines of two sectors (the charged lepton and neutrino sectors) as is traditionally done in direct models, to five sectors, two associated with the neutrinos and three with the charged leptons, as summarised in the starfish shaped diagram in figure 2. • Finally we present a benchmark LS model based on S 4 ×Z 3 ×Z 3 , with supersymmetric vacuum alignment, which not only fixes n = 3 but also the leptogenesis phase η = 2π/3, leaving only two continuous input masses, yielding two neutrino mass squared splittings and the PMNS matrix. A single Z 3 factor is required to understand η = JHEP02(2016)085 2π/3 as a cube root of unity, while an additional Z 3 is necessary to understand the charged lepton mass hierarchy and also to help to control the operator structure of the model. The model provides a simple LS framework for the numerical benchmark predictions: solar angle θ 12 = 34 • , reactor angle θ 13 = 8.7 • , atmospheric angle θ 23 = 46 • , and Dirac phase δ CP = −87 • , which are readily testable in forthcoming oscillation experiments. The layout of the remainder of the paper is as follows. In section 2 we briefly introduce the two right-handed neutrino model and motivate CSD(n). In section 3 we show how CSD(n) implies TM1 mixing. In section 4 we briefly review the direct and indirect approaches to model building, based on flavour symmetry. In section 5 we pursue the indirect approach and show how vacuum alignment for CSD(n) can readily be obtained from the TB vacuum alignments using orthogonality. In section 6 we write down the Lagrangian of the LS model and derive the neutrino mass matrix from the seesaw mechanism with CSD(n). In section 7 we discuss a numerical benchmark, namely CSD(3) with leptogenesis phase η = 2π/3 and its connection with the oscillation phase. In sections 8, 9, 10 we derive exact analytic formulas for the angles, masses and CP phases, for the LS model with general CSD(n) valid for real n ≥ 1, in terms of the four input parameters of the model. In section 11 present the exact TM1 atmospheric sum rules, which we argue are the only ones satisfied by the model. In section 12 we focus on the reactor and atmospheric angles and, starting from the exact results, derive useful approximate formulae which can provide useful insight. In section 13 we show how vacuum alignment for CSD(3) can arise from the residual symmetries of S 4 , as summarised by the starfish diagram in figure 2. In sections 14 and 15 we present a benchmark LS model based on the discrete group S 4 × Z 3 × Z 3 , with supersymmetric vacuum alignment, which not only fixes n = 3 but also the leptogenesis phase η = 2π/3, reproducing the parameters of the numerical benchmark. Section 16 concludes the paper. There are two appendices, appendix A on lepton mixing conventions and appendix B on S 4 . Seesaw mechanism with two right-handed neutrinos The two right-handed neutrino seesaw model was first proposed in [14]. 
Subsequently two right-handed neutrino models with two texture zeros were discussed in [28], however such two texture zero models are now phenomenologically excluded [29] for the case of a normal neutrino mass hierarchy considered here. However the two right-handed neutrino model with one texture zero (actually also suggested in [14]), remains viable. With two right-handed neutrinos, the Dirac mass matrix m D is, in LR convention, The (diagonal) right-handed neutrino heavy Majorana mass matrix M R with rows JHEP02(2016)085 The light effective left-handed Majorana neutrino mass matrix is given by the seesaw formula Using the see-saw formula dropping the overall minus sign which is physically irrelevant, we find, by multiplying the matrices in eqs. (2.1), (2.2), Motivated by the desire to implement the seesaw mechanism in a natural way, sequential dominance (SD) [12][13][14] assumes that the two right-handed neutrinos ν sol R and ν atm By explicit calculation, using eq. (2.4), one can check that in the two right-handed neutrino limit det m ν = 0. Since the determinant of a Hermitian matrix is the product of mass eigenvalues det(m ν m ν † ) = m 2 1 m 2 2 m 2 3 , one may deduce that one of the mass eigenvalues of the complex symmetric matrix above is zero, which under the SD assumption is the lightest one m 1 = 0 with m 3 m 2 since the model approximates to a single right-handed neutrino model [12,13]. Hence we see that SD implies a normal neutrino mass hierarchy. Including the solar right-handed neutrino as a perturbation, it can be shown that, for d = 0, together with the assumption of a dominant atmospheric right-handed neutrino in eq. (2.5), leads to the approximate results for the solar and atmospheric angles [12][13][14], Under the above SD assumption, each of the right-handed neutrinos contributes uniquely to a particular physical neutrino mass. The SD framework above with d = 0 leads to the relations in eq. (2.6) together with the reactor angle bound [15,16], This result shows that SD allows for large values of the reactor angle, consistent with the measured value. Indeed the measured reactor angle, observed a decade after this theoretical bound was derived, approximately saturates the upper limit. In order to understand why this is so, we must go beyond the SD assumptions stated so far. As already mentioned, we refer to a two right-handed neutrino model in which the Dirac mass matrix in the flavour basis satisfies eq. (2.8), with n being a real number, as the "Littlest Seesaw" or LS model. The justification for this terminology is that it represents the seesaw model with the fewest number of parameters consistent with current neutrino data. To be precise, in the flavour basis, the Dirac mass matrix of the LS model involves two complex parameters e, a plus one real parameter n. This is fewer than the original two right-handed neutrino Dirac mass matrix which involves six complex parameters [14]. It is also fewer than the two right-handed neutrino model in [15,16] which involves five complex parameters due to the single texture zero. It is even fewer than the minimal right-handed neutrino model in [28] which involves four complex parameters due to the two texture zeroes. It remains to justify the Dirac structure of the LS model in eq. (2.8), and we shall address this question using symmetry and vacuum alignment in subsequent sections. 
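The det m_ν = 0 statement above, and the fact that the massless direction is the first tri-bimaximal column (a point developed in the next section), are both easy to verify explicitly. The following sympy sketch is my own, not code from the paper; the column convention for the Dirac matrix, (0, e, e) for the atmospheric and (a, na, (n − 2)a) for the solar right-handed neutrino, is an assumption consistent with the parameter names e, a and n used in the text.

```python
import sympy as sp

a, e, n = sp.symbols('a e n')
M_atm, M_sol = sp.symbols('M_atm M_sol', nonzero=True)

# Columns ordered (atm, sol) in LR convention: nu_atm couples as (0, e, e),
# nu_sol as (a, n*a, (n-2)*a) -- assignment inferred from the text.
mD = sp.Matrix([[0, a],
                [e, n * a],
                [e, (n - 2) * a]])
MR = sp.diag(M_atm, M_sol)

m_nu = mD * MR.inv() * mD.T   # seesaw formula, overall sign dropped

print(sp.simplify(m_nu.det()))                       # 0  -> one exactly massless neutrino
print(sp.simplify(m_nu * sp.Matrix([2, -1, 1])).T)   # [0, 0, 0] -> (2, -1, 1) is the massless direction
```

The second check previews the TM1 property discussed next: the massless eigenvector is the first TB column for any value of n.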
Trimaximal mixing A simple example of lepton mixing, which came to dominate the model building community until the measurement of the reactor angle, is the tribimaximal (TB) mixing matrix [17]. It predicts a zero reactor angle θ13 = 0, a maximal atmospheric angle s^2_23 = 1/2, i.e. θ23 = 45°, and a solar mixing angle given by sin^2 θ12 = 1/3. The mixing matrix is given explicitly in eq. (3.1). Unfortunately TB mixing is excluded since it predicts a zero reactor angle. However CSD in eq. (2.8) with two right-handed neutrinos allows a non-zero reactor angle for n > 1 and also predicts the lightest physical neutrino mass to be zero, m1 = 0. One can also check that the neutrino mass matrix resulting from using eq. (2.8) in the seesaw formula annihilates the column vector (2, −1, 1)^T. In other words the column vector (2, −1, 1)^T is an eigenvector of m_ν with a zero eigenvalue, i.e. it is the first column of the PMNS mixing matrix, corresponding to m1 = 0. This means so-called TM1 mixing [30][31][32], in which the first column of the TB mixing matrix in eq. (3.1) is preserved, while the other two columns are allowed to differ (in particular the reactor angle will be non-zero for n > 1). Interestingly CSD in eq. (2.8) with n = 1 [18] predicts a zero reactor angle and hence TB mixing, while for n > 1 it simply predicts the less restrictive TM1 mixing. Having seen that CSD leads to TB, or more generally TM1 mixing, we now discuss the theoretical origin of the desired Dirac mass matrix structure in eq. (2.8). Flavour symmetry: direct versus indirect models Let us expand the neutrino mass matrix in the diagonal charged lepton basis, assuming exact TB mixing, as in eq. (4.1), in terms of the respective columns Φ_i of U_TB and the physical neutrino masses m_i. In the neutrino flavour basis (i.e. the diagonal charged lepton mass basis), it has been shown that the above TB neutrino mass matrix is invariant under the S, U transformations. A very straightforward argument [33] shows that this neutrino flavour symmetry group has only four elements, corresponding to Klein's four-group Z^S_2 × Z^U_2. By contrast the diagonal charged lepton mass matrix (in this basis) satisfies a diagonal phase symmetry T. In the case of TB mixing, the matrices S, T, U form the generators of the group S4 in the triplet representation, while the A4 subgroup is generated by S, T. As discussed in [33], the flavour symmetry of the neutrino mass matrix may originate from two quite distinct classes of models. The first class of models, which we call direct models, is based on a family symmetry such as S4, for example, where the symmetry of the neutrino mass matrix is a remnant of the S4 symmetry of the Lagrangian, with the generators S, U preserved in the neutrino sector, while the diagonal generator T is preserved in the charged lepton sector. If U is broken but S is preserved, then this leads to TM2 mixing, with the second column of the TB mixing matrix being preserved. However if the combination SU is preserved then this corresponds to TM1 mixing, with the first column of the TB mixing matrix being preserved [34]. Of course, the S4 symmetry is completely broken in the full lepton Lagrangian including both neutrino and charged lepton sectors. In an alternative class of models, which we call indirect models, the family symmetry is already completely broken in the neutrino sector, where the observed neutrino flavour symmetry Z^S_2 × Z^U_2 emerges as an accidental symmetry.
However the structure of the Dirac mass matrix is controlled by vacuum alignment in the flavour symmetry breaking sector, as discussed in the next section. The indirect models are arguably more natural than the direct models, especially for m 1 = 0, since each column of the Dirac mass matrix corresponds to a different symmetry breaking VEV and each contribution to the seesaw mechanism corresponds to a different right-handed neutrino mass, enabling mass hierarchies to naturally emerge. Thus a strong mass hierarchy m 1 m 2 < m 3 would seem to favour indirect models over direct models, so we pursue this possibility in the following. Indirect approach and vacuum alignment The basic idea of the indirect approach is to effectively promote the columns of the Dirac mass matrix to fields which transform as triplets under the flavour symmetry. We assume that the Dirac mass matrix can be written as m D = (aΦ atm , bΦ sol , cΦ dec ) where the columns are proportional to triplet Higgs scalar fields with particular vacuum alignments and a, b, c are three constants of proportionality. Working in the diagonal right-handed neutrino mass basis, the seesaw formula gives, By comparing eq. (5.1) to the TB form in eq. (4.1) it is clear that TB mixing will be achieved if Φ atm ∝ Φ 3 and Φ sol ∝ Φ 2 and Φ dec ∝ Φ 1 , with each of m 3,2,1 originating from a particular right-handed neutrino. The case where the columns of the Dirac mass matrix are proportional to the columns of the PMNS matrix, the columns being therefore mutually orthogonal, is referred to as form dominance (FD) [35][36][37]. The resulting m ν is form diagonalizable. Each column of the Dirac mass matrix arises from a separate flavon VEV, so the mechanism is very natural, especially for the case of a strong mass hierarchy. Note that for m 1 m 2 < m 3 the precise form of Φ dec becomes irrelevant and for m 1 = 0 we can simply drop the last term and the model reduces to a two right-handed neutrino model. Within this framework, the general CSD Dirac mass matrix structure in eq. (2.8) corresponds to there being some Higgs triplets which can be aligned in the directions, 2) The first vacuum alignment Φ atm in eq. (5.2) is just the TB direction Φ T 3 in eq. (4.2). The second vacuum alignment Φ sol in eq. (5.2) can be easily obtained since the direction (1, n, n − 2) is orthogonal to the TB vacuum alignment Φ T 1 in eq. (4.2). Figure 1. The mutually orthogonal vacuum alignments Φ i in eq. (4.2) used for TB mixing. The alignment vector Φ sol is orthogonal to Φ 1 and hence is in the plane defined by Φ 2 and Φ 3 = Φ atm . Note that the vectors Φ sol and Φ atm in eq. (5.2) are not orthogonal for a general value of n, so any seesaw model based on these alignments will violate form dominance. JHEP02(2016)085 For example, in a supersymmetric theory, the aligning superpotential should contain the following terms, enforced by suitable discrete Z n symmetries, where the terms proportional to the singlets O ij and O sol ensure that the real S 4 triplets are aligned in mutually orthogonal directions, Φ i ⊥ Φ j and Φ sol ⊥ Φ 1 as depicted in figure 1. From eqs. (4.2), (5.2) we can write Φ sol = Φ 2 cos α + Φ 3 sin α where tan α = 2(n − 1)/3 and α is the angle between Φ sol and Φ 2 , as shown in figure 1. Φ sol is parallel to Φ 2 for n = 1, while it increasingly tends towards the Φ 3 = Φ atm alignment as n is increased. 
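The geometry of figure 1 can be made concrete with a short worked decomposition of the unnormalised alignment vectors quoted in the text (the algebra here is my own).

```latex
% Worked decomposition of the solar alignment (unnormalised vectors):
\[
  (1,\,n,\,n-2) \;=\; (1,\,1,\,-1) \;+\; (n-1)\,(0,\,1,\,1)
  \qquad\Longrightarrow\qquad
  \Phi_{\rm sol} \propto \Phi_2 + (n-1)\,\Phi_3 \,,
\]
\[
  (1,\,n,\,n-2)\cdot(2,\,-1,\,1) \;=\; 2 - n + (n-2) \;=\; 0
  \qquad \text{for any real } n \,.
\]
```

So Φ_sol is orthogonal to Φ_1 for every real n, lies in the Φ_2–Φ_3 plane, reduces to Φ_2 at n = 1, and rotates towards Φ_3 = Φ_atm as n grows; with unit-normalised Φ_2 and Φ_3 the angle in figure 1 satisfies tan α = √2 (n − 1)/√3, consistent with the behaviour just described.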
The Littlest Seesaw The littlest seesaw (LS) model consists of the three families of electroweak lepton doublets L unified into a single triplet of the flavour symmetry, while the two right-handed neutrinos ν_R^atm and ν_R^sol are singlets. The LS Lagrangian in the neutrino sector takes the form of eq. (6.1), which may be enforced by suitable discrete Z3 symmetries, as discussed in section 14. Here φ_sol and φ_atm may be interpreted as either Higgs fields, which transform as triplets of the flavour symmetry with the alignments in eq. (5.2), or as combinations of a single Higgs electroweak doublet together with triplet flavons with these vacuum alignments. Note that, in eq. (6.1), φ_sol and φ_atm represent fields, whereas in eq. (5.2) Φ_sol and Φ_atm refer to the VEVs of those fields. In the diagonal charged lepton and right-handed neutrino mass basis, when the fields φ_sol and φ_atm in eq. (6.1) are replaced by their VEVs in eq. (5.2), this reproduces the Dirac mass matrix in eq. (2.8) [20] (and its transpose), which defines the LS model, where we regard n as a real continuous parameter, later arguing that it may take simple integer values. The (diagonal) right-handed neutrino heavy Majorana mass matrix takes the same form as before, and the low energy effective Majorana neutrino mass matrix is then given by the seesaw formula as in eq. (6.5), where η is the only physically important phase, which depends on the relative phase between the first and second column of the Dirac mass matrix, arg(a/e), and where m_a = |e|^2/M_atm and m_b = |a|^2/M_sol. This can be thought of as the minimal (two right-handed neutrino) predictive seesaw model since only four real parameters m_a, m_b, n, η describe the entire neutrino sector (three neutrino masses as well as the PMNS matrix, in the diagonal charged lepton mass basis). As we shall see in the next section, η is identified with the leptogenesis phase, while m_b is identified with the neutrinoless double beta decay parameter m_ee. A numerical benchmark: CSD(3) with η = 2π/3 We now illustrate the success of the scheme by presenting numerical results for the neutrino mass matrix in eq. (6.5) for the particular choice of input parameters in eq. (7.1), namely n = 3 and η = 2π/3. This numerical benchmark was first presented in [25,26]. In section 14 we will propose a simple LS model which provides a theoretical justification for this choice of parameters. In table 1 we compare the above numerical benchmark resulting from the neutrino mass matrix in eq. (7.1) to the global best fit values from [38] (setting m1 = 0). Table 1. Predictions of CSD(3) with a fixed phase η = 2π/3 from [25]. In addition we predict β = 71.9°, which is not shown in the table since the neutrinoless double beta decay parameter is m_ee = m_b = 2.684 meV for the above parameter set, which is practically impossible to measure in the foreseeable future. These predictions may be compared to the global best fit values from [38] (for m1 = 0), given on the last line. The agreement between CSD(3) and data is within about one sigma for all the parameters, with similar agreement for the other global fits [39,40]. Using the results in table 1, the baryon asymmetry of the Universe (BAU) resulting from N1 = N_atm leptogenesis was estimated for this model [26] in eq. (7.2). Using η = 2π/3, the observed value of Y_B then fixes the lightest right-handed neutrino mass M1. The phase η determines the BAU via leptogenesis in eq. (7.2). In fact it controls the entire PMNS matrix, including all the lepton mixing angles as well as all low energy CP violation.
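As a cross-check of the numerical benchmark just described, the short numpy sketch below (my own, not code from the paper) builds the mass matrix of eq. (6.5), diagonalises it, and reads off the masses and mixing angles. The value m_b = 2.684 meV is the one quoted above; m_a ≈ 26.6 meV is an illustrative choice of mine, tuned so that m_3 comes out near 50 meV, so the printed numbers should be read only as approximately reproducing the quoted benchmark (θ12 ≈ 34°, θ13 ≈ 8.7°, θ23 ≈ 46°, δ_CP ≈ −87°).

```python
import numpy as np

def ls_mass_matrix(ma, mb, n, eta):
    """m_nu = ma*outer((0,1,1)) + mb*exp(i*eta)*outer((1,n,n-2)), the form quoted in eq. (6.5)."""
    atm = np.array([0.0, 1.0, 1.0])
    sol = np.array([1.0, n, n - 2.0])
    return ma * np.outer(atm, atm) + mb * np.exp(1j * eta) * np.outer(sol, sol)

def masses_and_angles(m_nu):
    # Eigenvalues of m m^dagger come out in ascending order, so the columns of U
    # are automatically ordered (m1, m2, m3), as appropriate for a normal hierarchy.
    vals, U = np.linalg.eigh(m_nu @ m_nu.conj().T)
    masses = np.sqrt(np.clip(vals, 0.0, None))
    s13 = abs(U[0, 2])
    th13 = np.arcsin(s13)
    th12 = np.arctan2(abs(U[0, 1]), abs(U[0, 0]))
    th23 = np.arctan2(abs(U[1, 2]), abs(U[2, 2]))
    # |sin(delta)| via the rephasing-invariant Jarlskog combination; only the magnitude
    # is quoted here because the sign depends on the mass-matrix convention (appendix A).
    J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))
    sin_delta = J / (np.sin(th12) * np.cos(th12) * np.sin(th23) * np.cos(th23)
                     * s13 * np.cos(th13) ** 2)
    return masses, np.degrees([th12, th13, th23]), np.degrees(np.arcsin(abs(sin_delta)))

m_nu = ls_mass_matrix(ma=26.6, mb=2.684, n=3, eta=2 * np.pi / 3)   # inputs in meV
masses, angles, abs_delta = masses_and_angles(m_nu)
print(np.round(masses, 2))        # roughly [ 0.    8.6  49.9] meV
print(np.round(angles, 1))        # roughly [34.3  8.7  45.8] degrees
print(round(abs_delta, 1))        # roughly 87 (the paper's convention gives delta ~ -87 deg)
print(round(abs(m_nu[0, 0]), 3))  # m_ee = m_b = 2.684 meV, as stated above
```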
The leptogenesis phase η is therefore the source of all CP violation arising from this model, including CP violation in neutrino oscillations and in leptogenesis. There is a direct link between measurable and cosmological CP violation in this model and a correlation between the sign of the BAU and the sign of low energy leptonic CP violation. The leptogenesis phase is fixed to be η = 2π/3 which leads to the observed excess of matter over antimatter for M 1 ≈ 4.10 10 GeV, yielding an observable neutrino oscillation phase δ CP ≈ −π/2. Exact analytic results for lepton mixing angles We would like to understand the numerical success of the neutrino mass matrix analytically. In the following sections we shall derive exact analytic results for neutrino masses and PMNS parameters, for real continuous n, corresponding to the physical (light effective left-handed Majorana) neutrino mass matrix, in the diagonal charged lepton mass basis in eq. (6.5), which we reproduce below, Note that the seesaw mechanism results in a light effective Majorana mass matrix given by the La- . This corresponds to the convention of appendix A. JHEP02(2016)085 Since this yields TM1 mixing as discussed above, it can be block diagonalised by the TB mixing matrix, where we find, It only remains to put m ν block into diagonal form, with real positive masses, which can be done exactly analytically of course, since this is just effectively a two by two complex symmetric matrix, where the angle θ ν 23 is given exactly by, where x, y, z were defined in terms of input parameters in eq. (8.3) and From eqs. (8.2), (8.4), (8.11) we identify, JHEP02(2016)085 Explicitly we find from eq. (8.12), using eqs. (3.1), (8.5), (8.6), (8.7), and introducing two charged lepton phases, φ µ and φ τ , from which comparison we identify the physical PMNS lepton mixing angles by the exact expressions where we have selected the negative sign for the square root in parentheses, applicable for the physical range of parameters, and defined Then, by taking the Trace (T) and Determinant (D) of eq. (9.2), using eq. (9.1), we find from which we extract the exact results for the neutrino masses, where we have selected the positive sign for the square root which is applicable for m 2 3 > m 2 2 . Furthermore, since m 2 3 m 2 2 (recall m 2 3 /m 2 2 ≈ 30 ) we may approximate, m 2 3 ≈ T = |x| 2 + 2|y| 2 + |z| 2 (9.8) The sequential dominance (SD) approximation that the atmospheric right-handed neutrino dominates over the solar right-handed neutrino contribution to the seesaw mass matrix implies that m a m b and |z| |x|, |y| leading to JHEP02(2016)085 The SD approximation in eqs. (9.10), (9.11) is both insightful and useful, since two of the three input parameters, namely m a and m b , are immediately fixed by the two physical neutrino masses m 3 and m 2 , which, for m 1 = 0, are identified as the square roots of the measured mass squared differences ∆m 2 31 and ∆m 2 21 . This leaves, in the SD approximation, the only remaining parameters to be n and the phase η, which, together, determine the entire PMNS mixing matrix (3 angles, and 2 phases). For example if n = 3 and η = 2π/3 were determined by some model, then the PMNS matrix would be determined uniquely, without any freedom, in the SD approximation. When searching for a best fit solution, the SD approximation in in eqs. 
(9.10), (9.11) is useful as a first approximation which enables the parameters m a and m b to be approximately determined by ∆m 2 31 and ∆m 2 21 since this may then be used as a starting point around which a numerical minimisation package can be run using the exact results for the neutrino masses in eqs. Extracting the value of the physical phases δ and β in terms of input parameters is rather cumbersome and it is better to use the Jarlskog and Majorana invariants in order to do this. The Jarlskog invariant The Jarlskog invariant J [41] can be derived starting from the invariant [42,43], where the Hermitian matrices are defined as In our conventions of section A, by explicit calculation one can verify the well known result that JHEP02(2016)085 The above results show that I 1 is basis invariant since it can be expressed in terms of physical masses and PMNS parameters. We are therefore free to evaluate I 1 in any basis. For example, in the diagonal charged lepton mass basis, one can shown that the quantity in eq. (10.3) becomes, where in eq. (10.9), the Hermitian matrix H ν = m ν m ν † involves the neutrino mass matrix m ν in the basis where the charged lepton mass matrix is diagonal, i.e. the basis of eq. (6.5), where we find From eqs. (10.5) and (10.9) we find, after equating these two expressions, where the minus sign in eq. (10.13) again clearly shows the anti-sign correlation of sin δ and sin η, where η is the input phase which appears in the neutrino mass matrix in eq. (6.5) and leptogenesis in eq. (7.2). In other words the BAU is proportional to − sin δ if the lightest right-handed neutrino is the one dominantly responsible for the atmospheric neutrino mass N 1 = N atm . In this case the observed matter Universe requires sin δ to be negative in order to generate a positive BAU. It is interesting to note that, up to a negative factor, the sine of the leptogenesis phase η is equal to the sine of the oscillation phase δ, so the observation the CP violation in neutrino oscillations is directly responsible for the CP violation in the early Universe, in the LS model. The Majorana invariant The Majorana invariant may be defined by [44], In our conventions of section A, this may be written, By explicit calculation we find an exact but rather long expression which is basis invariant since it involves physical masses and PMNS parameters. We do not show the result here since it is rather long and not very illuminating and also not so relevant since Majorana CP violation is not going to be measured for a very long time. However, since m 2 e m 2 µ m 2 τ , we may neglect m 2 e and m 2 µ compared to m 2 τ , and also drop s 2 13 terms, to give the compact result, We are free to evaluate I 2 in any basis. For example, in the diagonal charged lepton mass basis, the quantity in eq. (10.14) becomes, where in eq. (10.18), the neutrino mass matrix m ν is in the basis where the charged lepton mass matrix is diagonal, i.e. the basis of eq. (6.5). Evaluating eq. (10.18) we find the exact result, sin β in terms of input parameters in both the numerator and demominator. For low values of n (e.g. n = 3) the sign of sin β is the same as the sign of sin η and hence the opposite of the sign of sin δ given by eq. (10.12). It is worth recalling at this point that our Majorana phases are in the convention of eq. (A.7), namely P = diag(e i β 1 2 , e i β 2 2 , 1), where we defined β = β 2 and β 1 is unphysical since m 1 = 0. 
In another common convention the Majorana phases are by given by P = diag(1, e i α 21 2 , e i α 31 2 ), which are related to ours by α 21 = β 2 − β 1 and α 31 = −β 1 . For the case at hand, where m 1 = 0, one finds β = α 21 − α 31 to be the only Majorana phase having any physical significance (e.g. which enters the formula for neutrinoless double beta decay). This is the phase given by eq. (10.21). Eq. (10.21) is independent of s 13 since we have dropped those terms. It is only therefore expected to be accurate to about 15%, which is acceptable, given that the Majorana phase β is practically impossible to measure in the forseeable future for the case of a normal mass hierarchy with the lightest neutrino mass m 1 = 0. However, if it becomes necessary in the future to have a more accurate result, this can be obtained by equating eq. Exact sum rules The formulas in the previous section give the observable physical neutrino masses and the PMNS angles and phases in terms of fewer input parameters m a , m b , n and η. In particular, the exact results for the neutrino masses are given in eqs. (9.5), (9.6), (9.7), the exact results for the lepton mixing angles are given in eqs. (8.17), (8.18), (8.19) and the exact result for the CP violating Dirac oscillation phase is given in eq. (10.12), while the Majorana phase is given approximately by eq. (10.21). These 8 equations for the 8 observables cannot be inverted to give the 4 input parameters in terms of the 8 physical parameters since there are clearly fewer input parameters than observables. On the one hand, this is good, since it means that the littlest seesaw has 4 predictions, on the other hand it does mean that we have to deal with a 4 dimensional input parameter space. Later we shall impose additional theoretical considerations, which shall reduce this parameter space to just 2 input parameters, yielding 6 predictions, but for now we consider the 4 input paramaters. In any case it is not obvious that one can derive any sum rules where input parameters are eliminated, and only relations between physical observables remain. Nevertheless in this model there are such sum rules, i.e. relations between physical observables not involving the input parameters. An example of such a sum rule is eq. (8.18), which we give below in three equivalent exact forms, This sum rule is in fact common to all TM1 models, and is therefore also applicable to the LS models which predict TM1 mixing. Similarly TM1 predicts the so called atmospheric JHEP02(2016)085 sum rule, also applicable to the LS models. This arises from the fact that the first column of the PMNS matrix is the same as the first column of the TB matrix in eq. (3.1). Indeed, by comparing the magnitudes of the elements in the first column of eq. (8.15) to those in the first column of eq. (8.16), we obtain, Eq. (11.2) is equivalent to eq. (11.1) while eqs. (11.3) and (11.4) lead to equivalent mixing sum rules which can be expressed as an exact relation for cos δ in terms of the other lepton mixing angles [31,32], Note that, for maximal atmospheric mixing, θ 23 = π/4, we see that cot 2θ 23 = 0 and therefore this sum rule predicts cos δ = 0, corresponding to maximal CP violation δ = ±π/2. The prospects for testing the TM1 atmospheric sum rules eqs. (11.1), (11.5) in future neutrino facilities was discussed in [45,46]. The LS model also the predicts additional sum rules beyond the TM1 sum rules that arise from the structure of the Dirac mass matrix in eq. (6.2). 
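Before turning to the LS-specific relations below, it is instructive to see how the first (solar-angle) form of the TM1 sum rule, eq. (11.1), works numerically; the derivation here is my own and uses only the statement that the first PMNS column equals the first TB column, |U_e1| = √(2/3).

```latex
% TM1 fixes |U_{e1}| = c_{12} c_{13} = sqrt(2/3); eliminating c_{12} relates the
% solar angle to the reactor angle alone:
\[
  c_{12} c_{13} = \sqrt{\tfrac{2}{3}}
  \;\;\Longrightarrow\;\;
  \tan\theta_{12} = \frac{1}{\sqrt{2}}\,\sqrt{1 - 3 s_{13}^2}\,,
\]
\[
  \theta_{13} = 8.7^\circ
  \;\;\Longrightarrow\;\;
  \tan\theta_{12} \approx 0.683\,, \qquad \theta_{12} \approx 34.3^\circ\,.
\]
```

This reproduces the benchmark solar angle without reference to the input parameters, which is the sense in which it is a genuine sum rule.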
Recalling that the PMNS matrix is written in eq. (A.5) as U = V P , where V is the the CKM-like part and P contains the Majorana phase β, the LS sum rules are [20], where the sum rule in eq. (11.6) is independent of both n and β. We emphasise again that the matrix elements V αi refer to the first matrix on the right-hand side of eq. (8.16) (i.e. without the Majorana matrix). Of course similar relations apply with U replacing V everywhere and the Majorana phase β disappearing, being absorbed into the PMNS matrix U , but we prefer to exhibit the Majorana phase dependence explicitly. However the LS sum rule in eq. (11.6) is equivalent to the TM1 sum rule in eq. (11.5), as seen by explicit calculation. The other LS sum rules in eqs. (11.7) and (11.8) involve the phase β and are not so interesting. The reactor and atmospheric angles Since the solar angle is expected to be very close to its tribimaximal value, according to the TM1 sum rules in eq. (11.1), independent of the input parameters, in this section we JHEP02(2016)085 focus on the analytic predictions for the reactor and atmospheric angles, starting with the accurately measured reactor angle which is very important for pinning down the input parameters of the LS model. The reactor angle The exact expression for the reactor angle in eq. (8.17) is summarised below, The above results are exact and necessary for precise analysis of the model, especially for large n (where n is in general a real and continuous number). We now proceed to derive some approxinate formulae which can give useful insight. The SD approximations in eqs. (9.10), (9.11) show that m b /m a ≈ (2/3)m 2 /m 3 . This suggests that we can make an expansion in m b /m a , or simply drop m b compared to m a , as a leading order approximation, which implies tan B ≈ tan A and hence cos(A − B) ≈ 1. Thus eq. (12.2) becomes, where we have kept the term proportional to m b (n − 1) 2 , since the smallness of m b may be compensated by the factor (n − 1) 2 for n > 1. Eq. 12.5 shows that t 1, hence we may expand eq. (12.1) to leading order in t, Hence combining eqs. (12.5) and (12.6), we arrive at our approximate form for the sine of the reactor angle, For low values of (n − 1) such that m b (n − 1) 2 m a , eq. (12.7) simplifies to, JHEP02(2016)085 using the SD approximations in eqs. (9.10), (9.11) that m b /m a ≈ (2/3)m 2 /m 3 , valid to 10% accuracy. For example, the result shows that for the original CSD [18], where n = 1, implies sin θ 13 = 0, while for CSD (2) m 3 , leading to θ 13 ≈ 9.5 • , in rough agreement with the observed value of θ 13 ≈ 8.5 • , within the accuracy of our approximations. We conclude that these results show how sin θ 13 ∼ O(m 2 /m 3 ) can be achieved, with values increasing with n, and confirm that n ≈ 3 gives the best fit to the reactor angle. We emphasise that the approximate formula in eq. (12.8) has not been written down before, and that the exact results in eqs. (12.1), (12.2), (12.3), (12.4) are also new and in perfect agreement with the numerical results in table 1. The atmospheric angle The exact expression for the atmospheric angle in eq. (8.19) is summarised below, (12.10) and t and B were summarised in eqs. (12.2), (12.3), (12.4). The above results, which are exact, show that the atmospheric angle is maximal for B ≈ ±π/2, as noted previously. We may expand eq. (12.10) to leading order in t, Hence combining eqs. (12.9), (12.11), we arrive at an approximate form for the tangent of the atmospheric angle, where t was approximated in eq. (12.5). 
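Before continuing with the atmospheric angle, here is a rough numerical illustration (my own) of the reactor-angle approximation derived above, together with the SD seeding of m_a and m_b; the splitting values are representative inputs of mine, and the relations m_3 ≈ 2m_a, m_2 ≈ 3m_b are the SD-limit estimates consistent with the ratio m_b/m_a ≈ (2/3) m_2/m_3 quoted above.

```python
import numpy as np

# Representative mass-squared splittings (my inputs): Dm31^2 ~ 2.5e-3 eV^2, Dm21^2 ~ 7.5e-5 eV^2.
m3 = np.sqrt(2.5e-3) * 1e3          # meV, since m1 = 0 (normal hierarchy)
m2 = np.sqrt(7.5e-5) * 1e3          # meV
ma_seed, mb_seed = m3 / 2.0, m2 / 3.0                  # SD-limit seeds for the exact numerical fit
theta13_lo = (3 - 1) * (np.sqrt(2) / 3) * (m2 / m3)    # leading-order formula for n = 3, in radians

print(round(m3, 1), round(m2, 2))            # ~ 50.0 and 8.66 meV
print(round(ma_seed, 1), round(mb_seed, 2))  # ~ 25.0 and 2.89 meV, close to the exact-fit values
print(round(np.degrees(theta13_lo), 1))      # ~ 9.3 deg, vs the exact benchmark value 8.7 deg
```

The leading-order formula overshoots the exact benchmark reactor angle by roughly half a degree, in line with the stated ~10% accuracy of the SD approximations.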
We observed earlier that, for m b m a , tan B ≈ tan A and hence A ≈ B. Unfortunately it is not easy to obtain a reliable approximation for A in eq. (12.4), unless n > ∼ 1 in which case A ≈ −η. However, for n − 1 significantly larger than unity, this is not a good approximation. For example for n = 3 from eq. (12.4) we have, Taking m b /m a = 1/10, this gives which shows that A ≈ −η is not a good approximation even though m b m a . If we set, for example, η = 2π/3, as in table 1, then eq. (12.14) gives JHEP02(2016)085 which happens to be close to −π/2. Hence, since A ≈ B, this choice of parameters implies cos B ≈ 0, leading to approximately maximal atmospheric mixing from eq. (12.12), as observed in table 1. At this point it is also worth recalling that for maximal atmospheric mixing, the TM1 sum rule in eq. (11.5) predicts that the cosine of the CP phase δ to be zero, corresponding to maximal Dirac CP violation δ = ±π/2, as approximately found in table 1. CSD(3) vacuum alignments from S 4 We saw from the discussion of the reactor angle, and in table 1, that the solar alignment in eq. (5.2) for the particular choice n = 3 was favoured. In this section we show how the desired alignments for n = 3 can emerge from S 4 due to residual symmetries. Although the charged lepton alignments we discuss were also obtained previously from A 4 [20], the neutrino alignments in eq. (5.2) for n = 3 were not previously obtained from residual symmetries, and indeed we will see that they will arise from group elements which appear in S 4 but not A 4 . We first summarise the vacuum alignments that we desire: in the neutrino sector as in eq. (5.2) with n = 3, and, in the charged lepton sector. For comparison we also give the tribimaximal alignments in eq. (4.2): We first observe that the charged lepton and the tribimaximal alignments individually preserve some remnant symmetry of S 4 , whose triplet representations are displayed explicitly in appendix B. If we regard ϕ e , ϕ µ , ϕ τ as each being a triplet 3 of S 4 , then they each correspond to a different symmetry conserving direction of S 4 , with, a 2 ϕ e = ϕ e , a 3 ϕ µ = ϕ µ , a 4 ϕ τ = ϕ τ . (13.4) One may question the use of different residual symmetry generators of S 4 to enforce the different charged lepton vacuum alignments. However, this is analagous to what is usually assumed in the direct model building approach when one says that the charged lepton sector preserves one residual symmetry, while the neutrino sector preserves another residual JHEP02(2016)085 symmetry. In the direct case, it is clear that the lepton Lagrangian as a whole completely breaks the family symmetry, even though the charged lepton and neutrino sectors preserve different residual symmetries. In the indirect case here, we are taking this argument one step further, by saying that the electron, muon and tau sectors preserve different residual symmetries, while the charged lepton Lagrangian as a whole completely breaks the family symmetry. However the principle is the same as in the direct models, namely that different sectors of the Lagrangian preserve different residual subgroups of the family symmetry. The tribimaximal alignment φ 2 is enforced by a combination of d 2 and f 1 being conserved, which suggests that φ 2 should be also identified as a triplet 3 of S 4 . On the other hand, the tribimaximal alignment φ 3 (which is the same as the atmospheric alignment φ atm ) may be enforced by symmetry if φ 3 (i.e. 
φ atm ) is in the 3 representation, since then we see that, As in the case of the charged lepton sector, we see that different parts of the neutrino sector will preserve different residual subgroups of the family symmetry S 4 for the tribimaximal alignments φ 2 and φ 3 = φ atm . In order to obtain the alignments φ 1 and φ sol we must depart from the idea of residual symmetries and resort to dynamical terms in the potential that enforce orthogonality, as discussed in section 5. However, once the tribimaximal alignments φ 2 and φ 3 have been accomplished, the remaining tribimaximal alignment φ 1 is simple to obtain, see figure 1. Similarly the general solar alignment in eq. (5.2) then follows from the orthogonality to φ 1 , as is also clear from figure 1. We now observe that the particular solar alignment φ sol in eq. (13.1) can be natually enforced by a symmetry argument if φ sol is a triplet 3 of S 4 since then, which by itself constrains the alignment to be (1, m, 1), for continuous real m. However orthogonality to φ 1 further constrains the alignment to be (1, n, n − 2), for continuous real n. Taken together, the constrained forms (1, n, n − 2) and (1, m, 1), fix n = m and n − 2 = 1, and hence n = m = 3, corresponding to the alignment (1, 3, 1) as desired in eq. (13.1). To summarise we see that the desired alignments in eqs. (13.1) and 13.2 emerge naturally from the residual symmetries of S 4 , together with the simple orthogonality conditions which can be readily obtained in models as in eq. 14 A benchmark model with S 4 × Z 3 × Z 3 We now present a model based on S 4 × Z 3 × Z 3 , which can reproduce the numerical benchmark discussed in section 7. The S 4 will help produce the vacuum alignments with JHEP02(2016)085 n = 3, as discussed in the previous section, the Z 3 will help to fix η = 2π/3 while the Z 3 will be responsible for the charged lepton mass hierarchy. This will yield the most predictive and successful version of the LS model, corresponding to the numerical results in table 1, perfectly reproduced by the exact analytic results, where the two remaining free parameters m a and m b are used to fix the neutrino mass squared differences. The entire PMNS matrix then emerges as a parameter free prediction, corresponding to the CSD(3) benchmark discussed in section 7. With the alignments in eqs. (13.1) and (13.2), arising as a consequence of S 4 residual symmetry, summarised by the starfish diagram in figure 2, together with simple orthogonality conditions, as further discussed in the next section, we may write down the superpotential of the starfish lepton model, as a supersymmetric version 3 of the LS Lagrangian in eq. (6.1), where Note that we have a qualitative understanding of the charged lepton mass hierarchy as being due to successive powers of θ, but there is no predictive power (for charged lepton masses) due to the arbitrary flavon VEVs and undetermined order unity dimensionless Yukawa couplings which we suppress. When the Majorons get VEVs, the last two terms in eq. (14.1) will lead to the diagonal heavy Majorana mass matrix, where M atm ∼ ξ atm and M sol ∼ ξ sol . With the above seesaw matrices, we now have all the ingredients to reproduce the CSD(3) benchmark neutrino mass matrix in eq. (7.1), apart from the origin of the phase η = 2π/3, which arises from vacuum alignment as we discuss in the next section. We have argued in section 13 that in general the vacuum the alignments in eqs. 
(13.1) and (13.2), arise as a consequence of S 4 residual symmetry, summarised by the starfish diagram in figure 2, together with simple orthogonality conditions. It remains to show how this can be accomplished, together with the Majoron VEVs, by explicit superpotential alignment terms. The charged lepton alignments in eq. (13.2) which naturally arise as a consequence of S 4 , can be generated from the simple terms, where A l are S 4 triplet 3 driving fields with necessary Z 3 ×Z 3 charges to absorb the charges of φ l so as to allow the terms in eq. (15.1). F-flatness then leads to the desired charged lepton flavon alignments in eq. (13.2) due to,  for l = e, µ, τ . The vacuum alignment of the neutrino flavons involves the additional tribimaximal flavons φ i with the orthogonality terms in eq. (5.3), where we desire the tribimaximal alignments in eq. (13.3) and as usual we identify φ atm ≡ φ 3 . We shall assume CP conservation with all triplet flavons acquiring real CP conserving VEVs. Since there is some freedom in the choice of φ 1,2 charges under Z 3 × Z 3 , we leave them unspecified. The singlet driving fields O ij and O sol have R = 2 and Z 3 × Z 3 charges fixed by the (unspecified) φ i charges, The tribimaximal alignment for φ 2 in the 3 in eq. (13.3) naturally arises as a consequence of S 4 from the simple terms, where A 2 is an R = 2, S 4 triplet 3 driving field and ξ 2 is a singlet, with the same (unspecified) Z 3 × Z 3 charge as φ 2 . F-flatness leads to, leading to the tribimaximal alignment for φ 2 in eq. (13.3). Note that in general the alignment derived from these F -term conditions is φ 2 ∝ (±1, ±1, ±1) T . These are all equivalent. For example (1, 1, −1) is related to permutations of the minus sign by S 4 transformations. The other choices can be obtained from these by simply multiplying an overall phase which would also change the sign of the ξ 2 VEV. JHEP02(2016)085 The tribimaximal alignment for φ atm ≡ φ 3 in the 3 in eq. (13.3) naturally arises from W flav,atm , (15.6) where A 3 is an S 4 triplet 3 driving field and ξ 3 is a singlet, with suitable Z 3 × Z θ 3 charges assigned to all the fields so as to allow only these terms. F-flatness leads to, which, using the orthogonality of φ 2 and φ 3 using eq. (15.3) and the pre-aligned electron flavon in eq. (13.2), leads to the tribimaximal alignment for φ atm ≡ φ 3 in eq. (13.3). The tribimaximal alignment for φ 1 then follows directly from the orthogonality conditions resulting from eq. (15.3). The solar flavon alignment comes from the terms, which, using the pre-aligned muon flavon in eq. (13.2), leads to the form (1, m, 1) for φ sol , with m unspecified, depending on the muon flavon VEV. On the other hand the last term in eq. (15.3) gives the general CSD(n) form in eq. (5.2), (1, n, n − 2) for φ sol . The two constrained forms (1, n, n − 2) and (1, m, 1), taken together, imply the unique alignment (1, 3, 1) for φ sol in eq. (13.1). To understand the origin of the phase η = 2π/3 we shall start by imposing exact CP invariance on the high energy theory, in eqs. (14.1) and (14.2), then spontaneously break CP in a very particular way, governed by the Z 3 symmetry, so that η is restricted to be a cube root of unity. The Majoron flavon VEVs are driven by the superpotential, where P, P are two copies of "driving" superfields with R = 2 but transforming as singlets under all other symmetries, and M is real due to CP conservation. 
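The way the discrete phase choice emerges can be sketched schematically. Suppose the driving terms take the form P(ξ_atm^3/Λ − M^2) and P′(ξ_sol^3/Λ′ − M′^2); this cubic form is my assumption, inferred from the role of the Z3 factor described in the text, and is not taken verbatim from the paper. The F-term conditions then fix each VEV only up to a cube root of unity:

```latex
% Schematic F-flatness condition for a driving field P coupled to xi^3/Lambda - M^2
% (assumed form of the driving term):
\[
  F_P = \frac{\xi^3}{\Lambda} - M^2 = 0
  \;\;\Longrightarrow\;\;
  \xi = \left(\Lambda M^2\right)^{1/3} e^{2\pi i k/3}, \qquad k = 0, 1, 2\,,
\]
```

so the relative phase between ξ_atm and ξ_sol is quantised in steps of 2π/3, which is how the choice η = 2π/3 described next becomes a discrete rather than a continuous input.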
Due to F-flatness, These are satisfied by ξ atm = |(ΛM 2 ) 1/3 | and ξ sol = |(Λ M 2 ) 1/3 |e −2iπ/3 where we arbitrarily select the phases to be zero and −2π/3 from amongst a discrete set of possible choices in each case. More generally we require a phase difference of 2π/3 since the overall phase is not physically relevant, which would happen one in three times by chance. In the basis where the right-handed neutrino masses are real and positive this is equivalent to η = 2π/3 in eq. (6.5), as in the benchmark model in eq. (7.1), due to the see-saw mechanism. JHEP02(2016)085 Conclusion The seesaw mechanism provides an elegant explanation of the smallness of neutrino masses. However in general it is difficult to test the mechanism experimentally, since the righthanded Majorana masses may have very large masses out of reach of high energy colliders. The heavy Majorana sector also introduces a new flavour sector, with yet more parameters, beyond those describing low energy neutrino physics. This is of serious concern, since the seesaw mechanism may be our best bet for extending the Standard Model to include neutrino masses. Given that the seesaw mechanism is an elegant but practically untestable mechanism with a large number of parameters, in this paper we have relied on theoretical desiderata such as naturalness, minimality and predictability to guide us towards what we call the "Littlest Seesaw" model which is essentially the two right-handed neutrino model bundled together with further assumptions about the structure of the Yukawa couplings that we call CSD(n). Understandably one should be wary of such assumptions, indeed such principles of naturalness and minimality without experimental guidance could well prove to be unreliable. However we are encouraged by the fact that such principles in the guise of sequential dominance with a single texture zero, led to the bound θ 13 m 2 /m 3 , suggesting a large reactor angle a decade before it was measured. The additional CSD(n) assumptions discussed here are simply designed to explain why this bound is saturated. It is worth recapping the basic idea of sequential dominance that one of the righthanded neutrinos is dominantly responsible for the atmospheric neutrino mass, while a subdominant right-handed neutrino accounts for the solar neutrino mass, with possibly a third right-handed neutrino being approximately decoupled, leading to an effective two right-handed neutrino model. This simple idea leads to equally simple predictions which makes the scheme falsifiable. Indeed, the litmus test of such sequential dominance is Majorana neutrinos with a normal neutrino mass hiearchy and a very light (or massless) neutrino. These predictions will be tested soon. In order to understand why the reactor angle bound is approximately saturated, we need to make additional assumptions, as mentioned. Ironically, the starting point is the original idea of constrained sequential dominance (CSD) which proved to be a good explanation of the tri-bimaximal solar and atmospheric angles but predicted a zero reactor angle. However, this idea can be generalised to the "Littlest Seesaw" comprising a two right-handed neutrino model with constrained Yukawa couplings of a particular CSD(n) structure, where here n > 1 is taken to be a real parameter. 
We have shown that the reactor angle is given by θ 13 ∼ (n−1) √ 2 3 m 2 m 3 so that n = 1 coresponds to original CSD with θ 13 = 0, while n = 3 corresponds to CSD(3) with θ 13 ∼ 2 √ 2 3 m 2 m 3 , corresponding to θ 13 ∼ m 2 /m 3 , which provides an explanation for why the SD bound is saturated as observed for this case, with both the approximation and SD breaking down for large n. In general, the Littlest Seesaw is able to give a successful desciption of neutrino mass and the PMNS matrix in terms of four input parameters appearing in eq. (6.5) where the reactor angle requires n ≈ 3. It predicts a normally ordered and very hierarchical neutrino mass spectrum with the lightest neutrino mass being zero. It also predicts TM1 JHEP02(2016)085 mixing with the atmospheric sum rules providing further tests of the scheme. Interestingly the single input phase η must be responsible for CP violation in both neutrino oscillations and leptogenesis, providing the most direct link possible between these two phenomena. Indeed η is identified as the leptogenesis phase. Another input parameter is m b which is identied with the neutrinoless double beta decay observable m ee , although this is practically impossible to measure for m 1 = 0. The main conceptual achievement in this paper is to realise that making n continuous greatly simplifies the task of motivating the CSD(n) pattern of couplings, which emerge almost as simply as the TB couplings, as explained in figure 1. The main technical achievement of the paper is to provide exact analytic formulae for the lepton mixing angles, neutrino masses and CP phases in terms of the four input parameters of CSD(n) for any real n > 1. The exact analytic results should facilitate phenomenological studies of the LS model. We have checked our analytic results against the numerical bechmark and validated them within the numerical precision. We also provided new simple analytic approximations such as: θ 13 ∼ (n − 1) (3) is quite well motivated by a discrete S 4 symmetry, since the neutrino vacuum alignment directions are enforced by residual symmetries that are contained in S 4 , but not A 4 , which has hitherto been widely used in CSD(n) models. This is illustrated by the starfish diagram in figure 2. In order to also fix the input leptogenesis phase to its benchmark value η = 2π/3, we proposed a benchmark model, including supersymmetric vacuum alignment, based on S 4 × Z 3 × Z 3 , which represents the simplest predictive seesaw model in the literature. The resulting benchmark predictions are: solar angle θ 12 = 34 • , reactor angle θ 13 = 8.7 • , atmospheric angle θ 23 = 46 • , and Dirac phase δ CP = −87 • . These predictions are all within the scope of future neutrino facilities, and may provide a useful target for them to aim at. A Lepton mixing conventions In the convention where the effective Lagrangian is given by 4 Performing the transformation from the flavour basis to the real positive mass basis by, JHEP02(2016)085 the PMNS matrix is given by Since we are in the basis where the charged lepton mass matrix m E is already diagonal, then in general V E L can only be a diagonal matrix, consisting of arbitrary phases, where an identical phase rotation on the right-handed charged leptons V E R = P E leaves the diagonal charged lepton masses in m E unchanged. 
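For reference, the "standard convention" referred to just below is the usual CKM-like parametrisation; the explicit matrix given here is the standard PDG-style form and is my own addition (whether it coincides verbatim with the convention of [47] and eq. (A.6) is an assumption), with U = V P and P = diag(e^{iβ1/2}, e^{iβ2/2}, 1) as quoted earlier in the text.

```latex
% Standard CKM-like parametrisation of V (PDG form), with c_ij = cos(theta_ij),
% s_ij = sin(theta_ij) and the Dirac phase delta:
\[
V =
\begin{pmatrix}
 c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta} \\
 -s_{12} c_{23} - c_{12} s_{13} s_{23} e^{i\delta} & \;\,c_{12} c_{23} - s_{12} s_{13} s_{23} e^{i\delta} & c_{13} s_{23} \\
 \;\,s_{12} s_{23} - c_{12} s_{13} c_{23} e^{i\delta} & -c_{12} s_{23} - s_{12} s_{13} c_{23} e^{i\delta} & c_{13} c_{23}
\end{pmatrix},
\qquad U = V\,P\,.
\]
```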
In practice the phases in P_E are chosen to absorb three phases from the unitary matrix V_νL† and to put U in a standard convention [47], where, analogous to the CKM matrix, the mixing angles and the Dirac phase appear in the standard form. B S4 The irreducible representations of S4 are the two singlets 1 and 1′, one doublet 2 and two triplets 3 and 3′ [48]. The triplet 3 in the basis of [48] corresponds to the following 24 matrices,
13,993
2016-02-01T00:00:00.000
[ "Physics" ]
The Effect of Diet on the Composition and Stability of Proteins Secreted by Honey Bees in Honey Honey proteins are essential bee nutrients and antimicrobials that protect honey from microbial spoilage. The majority of the honey proteome includes bee-secreted peptides and proteins, produced in specialised glands; however, bees need to forage actively for nitrogen sources and other basic elements of protein synthesis. Nectar and pollen of different origins can vary significantly in their nutritional composition and other compounds such as plant secondary metabolites. Worker bees producing and ripening honey from nectar might therefore need to adjust protein secretions depending on the quality and specific contents of the starting material. Here, we assessed the impact of different food sources (sugar solutions with different additives) on honey proteome composition and stability, using controlled cage experiments. Honey-like products generated from sugar solution with or without additional protein, or plant secondary metabolites, differed neither in protein quality nor in protein quantity among samples. Storage for 4 weeks prevented protein degradation in most cases, without differences between food sources. The honey-like product proteome included several major royal jelly proteins, alpha-glucosidase and glucose oxidase. As none of the feeding regimes resulted in different protein profiles, we can conclude that worker bees may secrete a constant amount of each bee-specific protein into honey to preserve this highly valuable hive product. Introduction Honey, the carbohydrate source for honey bee colonies, is produced by in-hive worker bees through a process of ripening foraged nectar, honeydew or other sweet plant saps (e.g., inversion of sugar) until long-storable honey is obtained [1,2]. Carbohydrates (mainly glucose and fructose), minerals, amino acids, plant secondary metabolites and proteins can be found in variable amounts, each being characteristic of specific honey types. Proteins detectable in honey (0.58-7.86% [3]) are mainly secreted from salivary and hypopharyngeal glands of forager and in-hive bees [4] and might be of minor Bee Feeding Regime; Honey Ripening, Storage, and Protein Analysis Apis mellifera carnica worker honey bees were collected from honey frames (performing honey processing tasks) to avoid the sampling of freshly emerged, forager, wax bees and guardians. For the experiment, bees were housed in wooden cages (13 cm × 11 cm × 8 cm, in groups of approximately 40 bees) with pieces of freshly prepared empty combs (from the same hives) attached to the cage wall [21]. The bees were starved for one hour and then supplied with different nutrients ad libitum-pure multifloral honey or 50% sucrose solution (w/w)-to compare highly complex hive food with simple sugar solution. After 3 days, honey-like product samples were collected carefully from single cells of the honey comb using a pipette and stored at −20 • C until protein profile analysis. Previous experiments already showed that providing honey bee colonies with sucrose solution (>40% w/w) resulted within 3 days in a product with a total sugar concentration above 80%, which is the value of ripened honey [22,23]. To verify the honey-like character of the stored products from this study, sugar concentrations of three random samples from all feeding regimes were determined using a refractometer. 
Following initial screening (Figure 1), sugar solution with different additives was shown to be the most suitable nourishment to study the impact of different food sources on honey proteome composition and stability. Four different feeding regimes (ad libitum) were selected (each with three replicates): (1) 50% sucrose solution (w/w) only; (2) 50% sucrose solution (w/w) and polyfloral pollen in addition (origin: Romania), ad libitum; (3) 50% sucrose solution (w/w) plus quercetin (2.26%, w/v); and (4) Apiinvert ® (common artificial bee food; mix of 31% sucrose, 30% glucose and 39% fructose) only, diluted to 50% total carbohydrate concentration. The latter diet served as a control for the combination of mono-and di-saccharides. Quercetin is a major plant secondary metabolite (flavonoid) frequently found in nectar, pollen and bee products [24,25] and is a significant and attractive cue of numerous plant nectar sources for honey bee foragers [26]. We used a concentration that clearly exceeds the usual concentration found in nectar in the field to assure the perception of the flavonoid from the sugar solution and trigger a potential response of worker bees. To follow honey production and ripening on honey combs, all food solutions were stained with 0.1% Brilliant Blue FCF ( Figure S2) according to Ehrenberg et al. [27]. Staining sugar solutions were shown not to influence the honey-like product protein composition ( Figure 1). A few days after setting up cages, a minimum of two samples (which means two individual cells) of each cage (three cages per feeding regime) were analysed by gel electrophoresis. Changes in the protein stability of randomly picked, freshly ripened honey-like product samples were studied while storing them for 28 days at 35 • C. Samples of each treatment group were taken once a week (day 6, 14, 21 and 28). The honey proteome composition and stability was measured using sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS PAGE; 12% gels, Colloidal Coomassie staining-Brilliant Blue G-250), gel imaging (gels scanned with 600 dpi, auto white balance, 24 bit depth, RGB colour representation, and captured with ImageJ 1.48v) and software analysis (GelAnalyzer 2010a) as described in the work by Mures , an and colleagues [28]. Before loading and running gels, samples were diluted to 1:1 with distilled water, and 10 µL of each sample was incubated with 5 µL 5 × SDS Laemmli sample buffer [28] at 95 • C for 5 min (standard gels) or at 70°C for 10 min (protein stability gels). Electrophoresis was performed at a constant voltage of 175 V for 1 h (standard gels). Protein stability gels were run at 80 V for 1 h and afterwards 175 V for 1 h to increase resolution. GelAnalyzer was used to estimate raw volume for the five most common protein bands ( Figure 2) of each sample (5-6 replicates per treatment group) [28]. To account for variance in total protein amount, 4 of the 5 protein bands are given as relative values, normalized to the band with the highest density (always the band at 50-60 kDa, Figure 2). Protein Precipitation, Dye Removal, and Identification via Mass-Spectrometry Fifty microliters of dyed honey-like product was diluted with distilled water (up to 500 µL final volume); proteins were precipitated using 1% NaDOC (50 µL) and 50% TCA (110 µL) and incubated for 30 min on ice. After centrifugation (~18,000× g, 10 min), the supernatant was removed and the pellet washed in 200 µL ice-cold acetone. 
Following another identical washing step, the protein pellet was air-dried and washed five times in 500 µL Tris-HCl (0.1 M, pH 8.0) to remove most of the blue dye. Next, this light blue-coloured pellet was washed four times in 500 µL ammonium bicarbonate (25 mM), including incubation at 37 • C for 1 h. In the last washing step, the protein pellet was incubated at 37 • C, overnight, in 500 µL DMSO under shaking conditions (300 rpm). After the effective removal of the blue dye, samples were centrifuged for 10 min (~18,000× g), and DMSO was removed and prepared for mass-spectrometry. Mass spectrometric analyses were performed using the principles described earlier [29]. Briefly, 1 µL of tryptic peptides (~400 ng peptides) was trapped on a 20 mm × 180 µm fused silica M-Class C18 trap column (Waters, Eschborn, Germany) and washed for 5 min at 5 µL/min with a 1% solution of 0.1% formic acid (FA) in acetonitrile (ACN) in 99% trifluoroacetic acid (0.1% in water) before being separated on a 250 mm × 75 µm fused silica M-Class HSS T3 C18 column (with 1.8 µm particle size) (Waters, Eschborn, Germany) over a 35 min gradient consisting of increasing concentrations of 7-40% of 0.1% FA in ACN within 0.1% FA in water (Carl Roth, Karlsruhe, Germany). Eluting peptides were ionized at 2.1 kV from a pre-cut PicoTip Emitter (New Objective, Woburn, MA, USA) with source settings of 80 C and a nano N 2 flow of 0.4 bar. Ions passed into the Synapt G2S Mass Spectrometer (Waters, Eschborn, Germany), which was operated in both the positive ion mode and resolution mode with the following settings: ion trap cell mobility separation with a release time of 500 µs, and afterwards "cooled" for 1000 µs; the helium pressure was set to 4.7 mbar, and the IMS cell nitrogen pressure was 2.87 mbar; the wave height was 38 V, and the wave velocity was ramped from 1200-400 m/s. Glu-1-Fibrinopeptide B (250 fmol/µL, 0.3 µL/min) was used as lock mass (m/z = 785.8426, z = 2). The RAW MS mzML data files were generated with ProteinLynx Global Server (PLGS) 3.0.1 (Waters, Milford, MA, USA) with the following settings: automatic calculation of chromatographic peak width and MS TOF resolution, the lock mass for charge 2 was '785.8426 Da/e', thresholds were set to 135 counts for low energy, 80 counts for elevated energy and 750 counts for intensity, respectively. These mzML files were initially run through the sampling search engine Preview v3.3.11 (Protein Metrics, Cupertino, CA, USA) against A. mellifera (Amel_4.5, https://www.ncbi.nlm.nih.gov/assembly/GCA_000002195.1), which generated full search parameters: a precursor mass tolerance of 20 ppm and a fragment mass tolerance of 30 ppm. Digestion specificity was set to trypsin with possible N-raggedness. The lock mass Glu-1-Fibrinopeptide B was used for the recalibration of fragments and precursors. A database search was set with these modifications: the fixed modification of carbamidomethylation on Cys, variable modifications of (di) oxidation on Met, N-terminal Gln->pyro-Glu and Glu->pyro-Glu conversion, N-terminal acetylation, and deamidation of Gln. Full searches were run through Byonic v3.3.11 (Protein Metrics, Cupertino, CA, USA) with the additional following settings: the maximum number of precursors per MS2 was set to 10, as recommended by the manufacturer for MSE data. Protein FDR was set to 1%, against both A. mellifera (Amel_4.5) and an internally curated focused database generated for A. 
mellifera hypopharyngeal glands and brains, consisting of sequences emphasised for uniqueness (database curation procedure published earlier [30]). Output mzIdentML files were then loaded into Scaffold Q + S 4.8.9 (Proteome Software Inc., Portland, OR, USA). Results All honey-like products analysed in this study were shown to have increased sugar concentrations (sucrose solution only: 57-74%, sucrose solution plus pollen: 65-68%, sucrose solution plus quercetin: 57-76%, Apiinvert: 73-77%) in comparison to 50% sugar solution (starting material). On the other hand, all sugar values were lower than for ripened honey (some random samples: 80.5-82.5%). This means that all products are worker bee-processed sugar solutions, which are on their way to becoming ripened honey. Comparing the protein profiles of honey-like products produced by honey-processing worker bees fed with sucrose solution versus pure polyfloral honey indicated that sucrose solution-based honey-like products showed the typical protein bands known from many different honey types, with clearly identifiable bands between 45 to 85 kDa (Figure 1 and Figure S1; in accordance with [13]). The band with the highest density is always a product at 50-60 kDa (Figures 1 and 2). Royal jelly protein extract (see [28] for details) showed a comparable protein profile, which was the same as in pure sucrose-based honey-like product protein samples. The royal jelly extract was used as the positive control, since it is known that honey resembles royal jelly in protein composition [19]. Multifloral honey-fed worker bees produced honey-like products with a blurry, non-typical protein profile (Figure 1), caused by modified proteins, as a result of processes such as glycation (see Section 4 for details). High-resolution denaturating PA gels (12%) revealed additional protein bands between 15-20 kDa as well as around 10 kDa for all four treatment groups ( Figure 2). The statistical comparison of protein band densities for the most common bands demonstrated that the protein concentrations differ significantly from each other (Kruskal-Wallis ANOVA, H = 70.64, dF = 3, n = 84, p < 0.0001); however, the honey-like products of the four treatment groups do not (K-W ANOVA: H = 2.15, dF = 3, n = 84, p = 0.54) ( Figure S3). This means that neither the simultaneous ad libitum feeding of floral proteins (pollen) nor the addition of a plant secondary metabolite (quercetin) influence the protein composition and quantity of bee-secreted honey proteins. This was also valid for a more complex sugar food source (Apiinvert) compared to sucrose only ( Figure S3). The temporal analysis of honey-like product protein profiles showed no remarkable change ( Figure 2). Consequently, the short-term storage (up to 4 weeks) of freshly ripened honey-like products in this study did not cause a major detectable degradation of proteins added by worker bees. However, the stored product from sucrose solution plus pollen feeding, on day 28, showed that slight signs of degradation may have occurred ( Figure 2). As the major band (~55 kDa, MRJP1) remained unchanged and the lower product (~50-55 kDa) became shorter, MRJP2 is the candidate for the shortened product. For the Apiinvert protein dynamics, a single product~40 kDa seems to be unique for this honey-like product; however, comparing different samples of all four treatment groups showed that this product is also present in the other three honey-like products ( Figure S4). 
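The band-density comparisons reported above can be reproduced with standard statistical tooling. The following is a minimal sketch rather than the authors' analysis script; it assumes the GelAnalyzer band volumes have been exported to a table with hypothetical column names (treatment, band, rel_density, where rel_density is the band volume normalised to the densest 50-60 kDa band of the same lane).

```python
# Minimal sketch, not the authors' analysis script. Assumes a CSV exported from
# GelAnalyzer with hypothetical columns: 'treatment', 'band', 'rel_density'.
import pandas as pd
from scipy import stats

df = pd.read_csv("band_densities.csv")  # hypothetical file name

# Do the individual protein bands differ from each other (pooled over treatments)?
by_band = [g["rel_density"].values for _, g in df.groupby("band")]
h_band, p_band = stats.kruskal(*by_band)

# Do the four feeding regimes differ in their band-density profiles?
by_treatment = [g["rel_density"].values for _, g in df.groupby("treatment")]
h_trt, p_trt = stats.kruskal(*by_treatment)

print(f"bands:      H = {h_band:.2f}, p = {p_band:.4g}")
print(f"treatments: H = {h_trt:.2f}, p = {p_trt:.4g}")
```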
Differences seen in Figure 2 might originate from the normal variance in protein staining. Complete honey-like product proteome mass spectrometry analysis confirmed that honey bees, irrespective of their diet, add seven different proteins to 'nectar' (sugar solution) in the process of honey ripening. The proteins are produced in the hypopharyngeal glands, including MRJP1, 2, 3, 5 and 7, alpha-glucosidase (Hbg3) and glucose oxidase ( Table 1). The relative abundance of each protein per treatment group did not differ between feeding regimes (X 2 -test: X 2 = 4.99, p = 0.96). This implies that the addition of food additives does not result in differences in protein quantity and composition. The majority of all honey proteins were MRJP 1, 2 and 3, with MRJP1 being the most abundant and MRJP2 and 3 being detected in equal concentrations (Table 1). In spite of the very high similarity of MRJPs [31], a distinct identification of MRJP 5 and 7 was possible, using the modified reference honey bee protein data base [30] that excluded similar peptides of highly related proteins. The molecular weight (MW) of all identified proteins matched well with the protein bands shown in Figure 2 (MW values see Table 1). Using modified identification criteria, the protein band below 20 kDa (Figure 2) might be a protein identified with a molecular weight of~19.5 kDa; however, this is of unknown function (XP_397512.1, uncharacterized protein LOC408608-Apis mellifera). This protein has also been identified in monofloral honey samples using classical protein identification (mass spectrometry of proteins cut out from SDS gels) (Table S1). Table 1. Proteins identified for four different honey-like products (based on feeding sugar solution plus several supplements) using mass spectrometry. Shown are results after analysis with Scaffold_4.8.9 and a newly generated Apis mellifera reference protein database [30] (basic settings: minimum number of peptides: 2, protein threshold: 99%, peptide threshold: 95%). Discussion Sugar solution-based honey-like products include exclusively bee-specific proteins. This is in accordance with previous studies [8,32]. The investment of worker honey bees adding proteins based on the individuals' reserves is not without purpose: storing bee-produced proteins in honey(-like products) prevents or at least strongly decelerates protein degradation ( Figure 2) and therefore provides an alternative protein storage strategy compared to direct storage in secreting gland tissue (even by overwintering bees [33]) or haemolymph. Long-term experiments have shown that honey protein content decreases by 46.7% after 6 months, independent of the botanical origin [34]. However, under natural conditions, honey might not be stored for longer than 6 months in the hive, as honey bees produce and consume honey regularly, depending upon the brood status, number of individuals and flowering season. The general energy requirements of the colony for processes such as temperature regulation are also a critical factor influencing honey storage duration [35,36]. Furthermore, the storage of honey in wax cells and finally cell capping may significantly contribute to counteracting-and/or the retardation of-the decay of proteins and other honey compounds. Major royal jelly proteins, alpha-glucosidase and glucose oxidase dominated the spectral counts with protein profiles of the different honey-like products, independent of the food source (with or without additives) ( Figure 2, Table 1). 
Thus, as mentioned earlier, honey proteomes resemble the royal jelly proteome [19], which has, in addition, enzymes relevant for honey bees' carbohydrate metabolism. Alpha-amylase was not detected in any of the samples. This is expected, as this enzyme (~56 kDa) is secreted into honey exclusively by forager bees [37], which was precluded by the experimental design. Worker bees may need an environmental or another signal (perceived while foraging for nectar and pollen) to activate the secretion of amylase, which is required to convert starch of plant origin (mainly from pollen) into maltose. Furthermore, we were unable to detect apisimin (~8 kDa) and defensin-1 (~11 kDa). This is unsurprising, as apisimin generates only one possible tryptic peptide which may not 'fly' in mass spectrometric analyses [30], and defensin-1 is mostly present in honey with <1% of the total honey proteome [38], which might explain its absence in our samples. Nevertheless, in Figure 2, a single band (evenly present in all samples) slightly above 10 kDa can be observed, which might be defensin-1 (10.717 kDa). Specific antibodies targeting apisimin or defensin-1 [11,38] need to be used in future studies to confirm the presence of both proteins. Protein sizes, band clearness and brightness on SDS gels varied between tested hive products (royal jelly proteins isolated from royal jelly versus those isolated from honey-like products) and the different bee foods initially tested (honey versus sugar solution) ( Figure 1, Figure 2, Figure S1 and Figure S4). This observation is based on protein modifications caused by nectar polyphenols and carbohydrates [34,39], known as glycation. In contrast to the well-known glycosylation (N-or O-linked oligosaccharides as result of enzymatic reactions), glycation is based on an enzyme-independent chemical modification known as Maillard reaction. These modifications lead to slightly higher molecular weight proteins compared to unmodified proteins (an increase of 3-5% of total molecular weight [40]) and therefore demonstrate a slower migration behaviour of proteins on an SDS PAGE gel, leading to bands riding higher on the gel. The successful identification of honey proteome composition not only depends on the resolution and sensitivity of the used analytical method, but also on a careful and clean sampling process. Using a state-of-the-art technique, Erban and colleagues [18] failed to detect profilin, superoxide dismutase and apisimin, which are known as honey proteins [12]. On the other hand, they were the first to describe many thus-far undiscovered honey proteins (e.g., hymenoptaecin, venom-related and venom-like proteins, proven allergens, serine proteases, inhibitors of serine proteases, and isoforms of glucose dehydrogenase). Our study also failed to detect the rare honey proteins described in both studies [12,18]. This could be the result of methodological issues (extraction efficiency, detection sensitivity, etc.) or simply a lack of contamination. Honey bee-driven contamination from honey bee proteins not supplied to honey, by workers, is discussed as a major reason why Erban and colleagues detected so many unknown honey proteins [18]. These newly-detected proteins might belong to larvae remaining in the combs or bee venom, used by worker bees for comb disinfection [24,41], at the time of honey extraction [18]. 
As a consequence, future studies should use freshly prepared honey frames and extract honey from single cells or specific parts of the comb to avoid interference from cross-contamination. Floral proteins, stored in-hive as pollen and bee bread, are the major nitrogen source of adult honey bees. Larval honey bees mainly rely on royal jelly as a protein source. Proteins identified in honey, including MRJPs, may contribute to the health and development of larvae and adult honey bees or present an alternative nitrogen resource for the whole colony. However, the total quantities of honey-based proteins are lower in comparison to pollen, bee bread or royal jelly. Consequently, it may not only be the amount of honey proteins but also their chemical modifications (e.g., glycation and glycosylation) that make them indispensable for still-unknown biological functions. Currently, enzymatic activity (carbohydrate metabolism) and antimicrobial activity (e.g., apisimin, defensin, MRJPs) have been verified for most proteins identified from honey. These functions appear to be safeguarded, as variable food sources resulted in equal honey protein quality and quantity. Nevertheless, it is clear that several nutritive and non-nutritive functions remain undiscovered and have to be investigated in future studies.
Conclusions
In conclusion, honey-processing worker bees seem not to adjust their honey protein quality and quantity depending on nectar quality or protein (pollen, bee bread) availability. These workers add gland-produced proteins as a nutritive protein (nitrogen) source and to keep antimicrobial effects constant, thereby preserving hive products; the secreted proteins may also serve as molecules involved in social immunity (as recently shown for MRJP3 and RNA uptake [42]). Further non-nutritive functions, especially for MRJPs (many of them still with unknown function [31]), might be essential for the biological properties of honey. One of the major functions of bee-secreted peptides and proteins is the prevention of the microbial spoilage of royal jelly, honey and bee bread. These hive products are essential for feeding the brood and the queen, and as a consequence, the survival of the whole colony relies on the quality of these food sources.
4,953.2
2019-09-01T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
THE PHYSICAL ORIGIN OF FITTING LAWS FOR ROTATIONAL ENERGY TRANSFER. An historical overview of the various parameterised forms describing RET processes (fitting laws) is presented. The physical models behind these "laws" are compared. Particular attention is paid to the role of angular momentum constraints and to the energy dependence of the state-to-state cross-sections. Finally it is shown how general trends can be inferred from the topology of the intermolecular potential energy surface.
INTRODUCTION
Many of the collision systems in which rotational energy transfer (RET) has been studied are too large for accurate close-coupled quantum calculations. Even though the problem may be somewhat simplified for these heavier systems through the use of the fixed-nuclei or energy sudden (ES) factorisation, the accurate calculation of the basis cross-sections is still a formidable and often impossible task, since the intermolecular potentials, particularly of electronically excited molecules, are not known with any precision. An alternative approach over recent years has been to look for simple parameterised fitting laws to describe the cross-sections. We describe below the various forms which have been proposed, in historical order.
FITTING LAWS
The earliest of these was the exponential gap law (EGL) 2,3, which was advanced over a decade ago in order to interpret infrared chemiluminescence experiments. The EGL proposes that the efficiency of a particular rotational transfer channel decreases exponentially as the amount of energy exchanged, compared to the average kinetic energy, increases 4,5. Levine, Bernstein and coworkers later realised that the EGL could be justified from theoretical arguments based on information theory and surprisal analysis. Linear surprisal plots imply

k_{j→j'} = C(E) k(0)_{j→j'} exp(−θ |ΔE_{jj'}| / E)    (1)

where E = (1/2) μu² + B j(j+1). C(E) and θ are (possibly energy dependent) empirical parameters chosen for a best fit to the data, and k(0)_{j→j'} is the prior statistical rate constant. Heller, and later Sanctuary, were able to approximately describe the energy dependence of the EGL using very simple semi-classical mechanics. However, while the information theory and thermodynamic approach can be used to justify expressions such as the EGL, it is still by no means clear, on the microscopic dynamical level, why rotationally inelastic cross-sections should be so insensitive to the intermolecular potential as to allow such a simple two-parameter reduction of RET data sets. Furthermore, the EGL only "agrees" with experimental and numerically calculated data sets in a very broad sense; the detailed behaviour of the cross-sections, particularly at large energy gaps, is not well described.
2.2 Power laws
Some years ago Pritchard and coworkers 8 presented a slightly modified form of the EGL in which the scattering amplitude varied as the inverse power of the energy gap. This power gap law (PGL) was found to give somewhat better agreement than the EGL with experimental data for the Na2(A 1Σu+) + Xe system 9. In a later publication they extended their ideas by explicitly including the quantum mechanical factorisation resulting from the sudden approximation to create a combined fitting and scaling law known as the infinite order sudden power law (IOS-P),

k_{j→j'} = (2j' + 1) exp(−(E_{j'} − E_j)/kT) × …    (2)
( (4) where (3 is the in plane angle between the initial and final angular momentum vectors, 13 More generally Derouard finds that the multipolar rate constants may be written 2j' +1 rr(2 + 1) PK (cos (3) k0 dJ (6) The connection between the quantum mechanical expressions and these latter result is to be found in the large quantum number limits of the Racah coefficients in the ES-factorisation. For the basis rate constants Derouard chooses ko/j (' ( + 1/2)-" (7) i.e. using the fact that for J >>1, ( , -1 nels. The rotational constant of 1 2 is .029 cm and the rovibronic spectrum is consequently fairly dense. This means that at room temperature the number of energetically accessible channels is typically 200. The data however shows that quantum jumps greater than 40 are extremely unlikely FITTING LAWS FOR ROTATIONAL ENERGY TRANSFER 67 events ; the cross-section for j + 40 is three orders of magnitude smaller than that for j +Z when /.1 of the v(0-16) 1T.+ 3]+ band is exg u cited. Dexheimer and coworkers found that it was impossible to fit these data using a power Law and that it was necessary to postulate an additional ad,-hoc constraint on the angular momentum (and another freely adjustable parameter) .18 The best functional form for the basis set was found to be for collisions with the isotopic pairs 3"4He and H2, D 2 at fixed thermal energy (see Fig. I). They conclude that the parameter * is 9roportional to 1/2 which is consistent with the hypothesis of momentum constraints. Our intuition then suggests that the physical basis for the fitting laws may be found in the conservation of angular momentum rather than in the conservation of energy. Thus the initial ideas of the energy gap laws, such as the EGL, are transposed into an angular momentum language, through the relation E. Bj(j + 1) in the power laws, such as the IOS-P and ECS-P, and finally angular momentum constraints are explici-, tly included in the parameter (IOS-EP). Although it is rather abusive to call eq. (8) "rotational rainbow model" the understanding of RET has gained very much from these experimental findings. V,, (R)P,. (cos e) (9) and that the short-range repulsive part of Vz(R) is responsible for the large quantum jumps Aj which we have seen to be of particular interest, since it is in these channels that we observe the strongest deviations from statistical energy redistribution. The basis of the first condition comes from considering the few systems which cannot be con- Here the collision is adiabatic, indeed there is evidence that some trajectories lead to the formation of a long-lived collision complex, and it is interesting to note that, while the 3 adjustable parameters ECS-P law fits the RET data quite well for these systems, the AON works less well. The AON is derived from a semi-classical infi- The essential step in the derivation of the AON is to assume an exponentially repulsive potential for V, As long as the kinetic energy is limited in range (thermal cell conditions for instance) this approximation is always justified. If we denote ne range of V, by r 0 the path integral becomes x exp(-R/r 0) dR (12) in which R 0 is the distance of closest approach for a particular trajectory on the isotropic potential VO, and A is the strength of the ani sotropy. Assuming rectilinear trajectories the integral can be analytically evaluated for head-on and large impact parameter collisions. The intermediate case can be evaluated numerically and can be shown to be within a factor of two or so of the analytic result. 
It is important to have the proper asymptotic behaviour. The result is" with 3/2 a 2 (% V c ro/lu (14) where V c V(R 0) This last result is arrived at by noting that physically realistic potentials will be characterised by range parameters much smaller than R O. where c is an integration constant given by 2 2r 0 /a. In practice the two quantities a and c are treated as free parameters in a fitting procedure. This cross section is obviously a decreasing function of n, which falls to zero for n a. (,) The effective impact parameter is defined as the shortest distance between the line of the transferred linear momentum of the incoming particle and the centre of mass of the ellipse. It is important to note that n > a is physically unrealistic since it would imply a negative impact parameter. Then the quantity ha Aj remax presents the maximum transferable angular momentum. Channels for which /, >a are therefore closed by setting q/O O. Comparison of AON with power law model Providing that the channel number is not of the same order as a, the numerical behaviour of the power law and the AON is very similar. The differences between the two are only apparent in the large Grawert channels where the angular momentum constraints become important. In that case the numerical behaviour of the AON can be reproduced by the EP law (eq. 8). Note, however that the AON does not require the setting of a th.ird independent paraneter , and that he obtains a fittin law of the form (19) o'j,.+j (2j' + 1)/(2j + 1) a(Ij' j1-1 -b) (20) for the thermally averaged cross-section. In this very simple law the parameter a is related to the zero-impact parameter distance of closest approach R 0 t h rough a (SkTITrl)l/2 ITR02 (21) Plots of statistically weighted cross-sections against IAj -I , including data for the I2(3II)-+ He system are found to be reasonably linear providin,q the initial level is small. Deviations are attributed to the breakdown of the ES approximation, but it is interesting to speculate that better results might be obtained by taking into account the variation of a with the relative velocity. Energy dependence Most recent developments have concentrated on the ener,n,y dependence of the cross-sections. ; (25) The reason for this arises from the ambiguity in the ES approximation as to wether the kinetic energy in the entrance or exit channel should be used B. J. WHITAKER AND Ph. BRECHIGNAC for the calculation of a particular cross-section. The absence of an energetic threshold for the q-O crosssections make them a natural choice for the basis functions, but this choice is supported by the fact that the lowest velocity half-collision is more efficient than its high velocity counterpart for transferring angular momentum. The smaller the larger the crosssection. This reflects the increased interaction time. The energy dependence of the fitting law may be very important, particularly if the cross-sections are measured close to threshold. As an example of this we + consider RET in the CsH(A T. + H 2 collisional system, recently measured by Ferray, Visticot, and Sayer They observed laser excited fluorescence from rotational states up to 0.75 kT away from the initially populated state. In order to deconvolute the effects of multiple collisions they used both the AON and the IOS-P to solve the coupled rate equations. They find that the IOS-P works better than the AON for 6J >7 because of the very rapid fall-off of the AON in the high j channels. 
The initial rate of descent is rapid so that the parameter a is small 15 increases. This leads to a larger value of a in the high 6j channels. This variation of a with the kinetic energy is further discussed below. Indeed, coming back to the hard ellipse model the maximum transferred angular momentum has been found to be AJma x 2 (AI-B1) PO (26) where A and B are the two arms of the ellipse, and PO ' -J2' pu (27) is the linear momentum of the incident particle. As long as the kinetic energy is kept constant, which was the case in the measurements by Derouard and 19 * 3 4 Sadeghi in 12 + He, He, H2, D2, AJma x scales as 1/2 (see Fig. 1). This is in agreement with the experiment and with both the IOS-EP and IOS-AON, as shown in Ref. 20. But if the energy e is changed the effective ellipse will be obtained by var.ious "cuts" of the potential "trunk" at different heights, resulting in a variation of the important quantity (A B I) as a function of e In most usual systems the isotropic potential Vo(R) is much steeper than the first anisotropic term V2(R) (as assumed in the derivation of the AON). The consequence of this situation is that (A B 1) decreases rapidly when e increases, as it is schematically s.hown in Fig. 3. In practice the exact description of the energy dependance of Aj max requires the knowledge of the potential shape. Note that, if it is known, the rather crude approximation The Na 2 + He system is an example for which this has been done. The ab initio potential surface calculated by R. Schinke et al is such that Vo(R) and V2(R) have essentially the same R dependence, which seems to be an extreme case (very "soft" molecule). If Vo(R) and V2(R) are taken exponential with the same range r 0 the parameter (A -B I) is readily found independent of the energy (see Fig. 3 Schematic representation of a "hard" potential (left) and of a "soft" potential (right) (see text). (A1-B1) is the difference between the two arms of the effective ellipse. Then Aj scales as At the same time the paramax meter (x, like V2(RO) is found to scale as s (see Eq. 1/2 14) so that a. (6)
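To make the contrast between the gap-law forms discussed in this overview concrete, the sketch below fits generic exponential-gap and power-gap expressions to synthetic state-to-state rate constants. It is illustrative only: the symbols (C, theta, a, gamma) are generic fit parameters rather than those of the original papers, and the rate constants are synthetic, generated solely to exercise the fitting procedure.

```python
# Illustrative only: fit exponential-gap-law (EGL) and power-gap-law (PGL) forms
# to synthetic state-to-state rate constants. Parameters are generic, not those
# of the original work.
import numpy as np
from scipy.optimize import curve_fit

B = 0.029          # rotational constant in cm^-1 (I2-like, as quoted in the text)
kT = 207.0         # thermal energy in cm^-1 (~room temperature)

dj = np.arange(2, 42, 2)            # even quantum jumps out of j = 0
dE = B * dj * (dj + 1)              # energy gaps |Delta E| in cm^-1

def egl(dE, C, theta):
    """Exponential gap law: k ~ C * exp(-theta * |dE| / kT)."""
    return C * np.exp(-theta * dE / kT)

def pgl(dE, a, gamma):
    """Power gap law: k ~ a * (|dE| / B)**(-gamma)."""
    return a * (dE / B) ** (-gamma)

# Synthetic "measured" rates: EGL-like with multiplicative noise.
rng = np.random.default_rng(0)
k_obs = egl(dE, 1.0, 2.5) * rng.lognormal(sigma=0.2, size=dE.size)

(C_fit, theta_fit), _ = curve_fit(egl, dE, k_obs, p0=(1.0, 1.0))
(a_fit, gamma_fit), _ = curve_fit(pgl, dE, k_obs, p0=(1.0, 1.0))

print(f"EGL fit: C = {C_fit:.3f}, theta = {theta_fit:.3f}")
print(f"PGL fit: a = {a_fit:.3f}, gamma = {gamma_fit:.3f}")
```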
2,982
1986-01-01T00:00:00.000
[ "Physics" ]
Multi-Institutional Dosimetric Evaluation of Modern Day Stereotactic Radiosurgery (SRS) Treatment Options for Multiple Brain Metastases Purpose/Objectives: There are several popular treatment options currently available for stereotactic radiosurgery (SRS) of multiple brain metastases: 60Co sources and cone collimators around a spherical geometry (GammaKnife), multi-aperture dynamic conformal arcs on a linac (BrainLab Elements™ v1.5), and volumetric arc therapy on a linac (VMAT) calculated with either the conventional optimizer or with the Varian HyperArc™ solution. This study aimed to dosimetrically compare and evaluate the differences among these treatment options in terms of dose conformity to the tumor as well as dose sparing to the surrounding normal tissues. Methods and Materials: Sixteen patients and a total of 112 metastases were analyzed. Five plans were generated per patient: GammaKnife, Elements, HyperArc-VMAT, and two Manual-VMAT plans to evaluate different treatment planning styles. Manual-VMAT plans were generated by different institutions according to their own clinical planning standards. The following dosimetric parameters were extracted: RTOG and Paddick conformity indices, gradient index, total volume of brain receiving 12Gy, 6Gy, and 3Gy, and maximum doses to surrounding organs. The Wilcoxon signed rank test was applied to evaluate statistically significant differences (p < 0.05). Results: For targets ≤ 1 cm, GammaKnife, HyperArc-VMAT and both Manual-VMAT plans achieved comparable conformity indices, all superior to Elements. However, GammaKnife resulted in the lowest gradient indices at these target sizes. HyperArc-VMAT performed similarly to GammaKnife for V12Gy parameters. For targets ≥ 1 cm, HyperArc-VMAT and Manual-VMAT plans resulted in superior conformity vs. GammaKnife and Elements. All SRS plans achieved clinically acceptable organs-at-risk dose constraints. Beam-on times were significantly longer for GammaKnife. Manual-VMATA and Elements resulted in shorter delivery times relative to Manual-VMATB and HyperArc-VMAT. Conclusion: The study revealed that Manual-VMAT and HyperArc-VMAT are capable of achieving similar low dose brain spillage and conformity as GammaKnife, while significantly minimizing beam-on time. For targets smaller than 1 cm in diameter, GammaKnife still resulted in superior gradient indices. The quality of the two sets of Manual-VMAT plans varied greatly based on planner and optimization constraint settings, whereas HyperArc-VMAT performed dosimetrically superior to the two Manual-VMAT plans. INTRODUCTION Stereotactic radiosurgery (SRS) was first conceptually introduced by neurosurgeon, Lars Leksell, in 1951 (1, 2). The evolution of this technology alongside advances in image guidance have enabled the Gamma Knife to serve as the leading workhorse for treating cranial malignancies with hypofractionation. Although it was the first of its kind to perform SRS, the Gamma Knife has not been the only player, with other accelerator modalities adapting to offer solutions for patients requiring SRS (3,4). Advancements in hardware and software design have since propelled linacs to become a popular and more widely available technology for stereotactic treatment capability. This is particularly pertinent for the treatment of multiple brain metastases, which were traditionally treated with surgery and/or whole brain radiation therapy (WBRT). 
With more studies promoting the benefits of SRS for multiple brain metastases such as: improved local control when adding SRS to WBRT (5)(6)(7)(8), similar survival (WBRT+SRS vs. SRS only) (8)(9)(10)(11)(12)(13)(14)(15)(16)(17) and less cognitive deterioration (SRS only) (18)(19)(20)(21), the ratio of patients receiving SRS treatments annually increased 15.8 percentage points from 2004 to 2014 and the number of facilities offering SRS annually increased 19.2 percentage points (22). Supporting evidence for SRS of a large number of brain metastases has further contributed to this effect (14,20,(23)(24)(25)(26)(27)(28)(29). This growing demand for SRS, coupled with the ease of access to conventional linacs, has stimulated the development of a number of new technologies to facilitate the implementation of linac-based SRS for the treatment of multiple metastases. The common goal of all these linac SRS techniques is to use a single isocenter to treat all of the metastases simultaneously, in order to avoid prohibitively long treatments with multiple isocenters and thereby improve patient comfort and throughput. The most current single isocenter linac-based SRS options include multi-aperture dynamic conformal arcs on a linac (30-32) (BrainLab Elements TM v1.5, Munich, Germany), volumetric arc therapy (VMAT) calculated with the conventional optimizer (33-43) (Varian Medical Systems, Palo Alto, CA) or VMAT delivery calculated with the newer Varian HyperArc solution (44)(45)(46)(47). With this large variety of commercially available SRS treatment techniques, it is important to assess and be aware of the different strengths and weaknesses of the numerous options available for patients seeking treatment for multiple metastases. As the different technologies have emerged, there have been a number of studies comparing some of the techniques against each other. Thomas et al. (48), Liu et al. (49), and Potrebko et al. (50) each compared VMAT to GammaKnife for 28 patients with 2-9 targets, 6 patients with 3-4 metastases and 12 patients with at least 7 metastases, respectively. Mori et al. compared Elements to GammaKnife for two patients each with 9 metastases (32). Ohira et al. (44) compared HyperArc to conventional VMAT for 23 patients with 1-4 metastases, meanwhile Slosarek et al. (46) has most recently compared CyberKnife, VMAT and HyperArc for a set of 15 patients with 3-8 metastases each. Overall, these studies have found that VMAT is generally comparable to GammaKnife (with some minor differences such as improved conformity indices at the cost of potentially increased low dose spread), as is Elements to GammaKnife, and similarly now HyperArc is to VMAT. However, most of the published studies have only compared two technologies to each other, with the exception of Slosarek et al. (46), which added CyberKnife to the mix. This makes it difficult to assess whether one technique may truly be superior to another for a certain patient scenario because there is a lack of comparison data on the same subset of patients for the multiple SRS techniques available. It is therefore the aim of this work to provide a more rigorous and comprehensive evaluation of the dosimetric differences between the following state-ofthe-art SRS modalities: GammaKnife, Elements, Manual-VMAT, and HyperArc-VMAT. METHODS AND MATERIALS Sixteen patients with a range of 4-10 metastases each, for a total of 112 metastases, were included in this study. 
The patient's age ranged from 36 to 81 years old and consisted of the following primary cancers: renal cell carcinoma, esophageal, oropharyngeal, melanoma, breast, colon, and non-small cell lung carcinoma (adenocarcinoma and large cell). Five of the 16 patients did receive prior radiation treatment: SRS alone, WBRT alone, or both SRS and WBRT. The target volumes and prescribed doses (Gy) are detailed in Table 1 for each of the 16 patients. Details on each of the SRS modalities utilized in this comparison study are described as follows. The most up to date commercially available product is the Leksell GammaKnife Icon (Elekta, Stockholm, Sweden), containing 192 60 Co sources and 4, 8, and 16 mm cone collimator options, which is an upgrade of the Perfexion unit, in that it allows frameless treatments with the addition of on-board cone-beam computed tomography (CBCT) imaging and a real-time motion tracking device. BrainLab Elements TM v1.5 is a commercial treatment planning system that automatically optimizes a dedicated group of dynamic conformal arcs to treat each of the lesions within the brain (via a single arc or a composition of multiple arcs) with a single common isocenter. Volumetric arc therapy enables intensity-modulated dose delivery via varying MLC positions and dose rate, simultaneous to varying gantry rotation speed, thus significantly increasing the degrees of freedom for the optimization algorithm. There is no physical difference in terms of the delivery for conventional VMAT vs. HyperArc. The major difference lies on the planning side for HyperArc, where the software assists the user by automatically selecting an optimal mono-isocenter, collimator angles, and non-coplanar arc setup with the intent of delivering the most conformal plan while minimizing low dose spillage into the surrounding normal brain structures. With conventional VMAT optimization, the planner is responsible for selecting and manipulating all of these variables. For every patient, a treatment plan was generated according to each of the four SRS techniques: GammaKnife, Elements, Manual-VMAT, and HyperArc-VMAT. Note, all patients were treated clinically with Elements and all other modalities were retrospectively planned for comparison in this study. A total of three different planners were included in this study. A single planner created all of the treatment plans across all patients per specified SRS modality to remove planner variability within each SRS modality. A single SRS planner with 8-10 years of experience generated all of the Elements plans used to treat the 16 patients in this study. A second SRS planner with 1-3 years of experience generated all GammaKnife plans and one set of Manual-VMAT A plans across all patients. Finally, a third SRS planner with 3-5 years of experience generated all Manual-VMAT B plans for the 16 patients. All HyperArc-VMAT plans were generated by the same planner for Manual-VMAT B but after all manual plans were done, i.e., Manual-VMAT B plans were not influenced by HyperArc plans. An additional Manual-VMAT plan was created for every patient following another institution's planning standard, in order to also evaluate the potential differences that may arise between two different treatment planner's styles. 
The difference in planning techniques between the two VMAT plans are summarized as follows: for VMAT A an upper and lower constraint was used for all targets, and no aggressive objective on low dose spread was applied, whereas VMAT B only applied lower constraints to target volumes but with additional objectives to control low dose spread. Beam arrangements for the Elements plans were selected from a set of six predefined templates with a range of 5-6 couch angles with 28, 32, 35, All linac plans were normalized such that the 100% isodose line covered 99% of the target volume. The GammaKnife plans were normalized with the same goal of covering 99% of the target volume with the prescription dose. This resulted in a range of 49-73% prescribed isodose lines with a median of 54%. All of the plan doses were imported into the same treatment planning system platform and version of Varian Eclipse (Varian Medical Systems, Palo Alto, CA) for dosimetric evaluation at a calculation grid size of 1 mm. Note, target normalization was entirely performed in each plan's native treatment planning system and no differences in target coverage were discovered after importing into Eclipse during dosimetric evaluation. The target volume metastasis for all patients in this study was defined as the planning target volume (PTV), already incorporating setup margins. All of the extracted and calculated dosimetric parameters described below are compared equivalently across all SRS techniques in terms of PTV. Thus, there are no inherent biases in comparing conformity indices for GTV vs. PTV when comparing GammaKnife vs. linac-based SRS. The following dosimetric parameters were extracted per patient target volume across all SRS treatment plans: RTOG conformity index (CI-RTOG) defined as the ratio of the 100% isodose volume to the target volume; Paddick conformity index (CI-Paddick) defined as the ratio of the square of the volume of the target enclosed by the 100% isodose volume to the multiplication of the target volume with the 100% isodose volume; Gradient Index (GI) defined as the ratio of the 50% isodose volume to the 100% isodose volume; and the volume of 12Gy delivered to the surrounding brain tissue contributed only from that individual target (V 12Gy ) and the volume of 12Gy delivered to the surrounding brain tissue per individual target after subtracting that individual target volume (V 12Gy -TV). Additionally, the following dosimetric parameters were extracted per patient across pertinent organs-at-risk (OARs): the total volume of brain receiving 12Gy, 6Gy, and 3Gy (V 12Gy , V 6Gy , V 3Gy ) the mean dose to the brain excluding the targets (Brain mean dose), the maximum dose to the brainstem (D max Brainstem), maximum dose to the left eye and optic nerve (D max L Eye and D max L ON), maximum dose to the right eye and optic nerve (D max R Eye and D max R ON), and maximum dose to the optic chiasm (D max OC). Lastly, the total treatment time for each plan was also extracted for comparison (linac plans times were calculated assuming a dose rate of 1,400 MU/min). Statistical evaluation of the extracted parameters was performed with JMP Pro v14 (SAS, Cary, NC). The Wilcoxon signed rank test was applied in the format of matched pairs to compare each of the plans against each other per extracted dosimetric parameter. Differences were found to be statistically significant with p < 0.05. Figure 1 graphically compares both types of conformity indices across all five SRS plans grouped according to target size. 
It is evident that for very small target sizes (<1 cm), GammaKnife, HyperArc-VMAT and both Manual-VMAT plans perform similarly well across both conformity indices. All are superior to the Elements conformity results. However, for target size diameters above 1 cm, HyperArc-VMAT and both Manual-VMAT plans result in superior conformity as compared to GammaKnife and Elements. Figure 2 also graphically divides the results per target bin size for GI and both V12Gy dose metrics. The GI results show that GammaKnife is superior amongst small target diameters (<1 cm), but above that size GI is similar amongst all techniques, with the exception of VMAT A, which shows the largest range. Amongst the two V12Gy parameters, it is apparent that HyperArc-VMAT is slightly inferior to GammaKnife for the small targets (<1 cm) and even outperforms GammaKnife for large targets above 1 cm in diameter. When comparing total V12Gy per patient, i.e., combining all per-target V12Gy, HyperArc-VMAT is slightly lower than GammaKnife by a median difference of 1.3 cc, which is statistically significant but clinically equivalent. Not surprisingly, the data in Figure 2 demonstrate an increase in both V12Gy metrics as the target size increases. Also noteworthy are the widely variable results between the two Manual-VMAT planning techniques, where VMAT B consistently provides lower V12Gy and V12Gy-TV volumes of the brain than VMAT A. Yet, neither Manual-VMAT plan performed as well as HyperArc-VMAT amongst these parameters.
RESULTS
Displaying all of the data together, rather than dividing by target bin size, Figure 3 displays the trends observed amongst the remaining extracted parameters representative of low dose spread: brain mean dose, V12Gy, V6Gy, and V3Gy. Here it is again evident that HyperArc-VMAT is comparable with GammaKnife in terms of low dose spillage into the brain when looking at the entire dataset of target sizes. Elements performs similarly to the Manual-VMAT plans, but inferior to GammaKnife and HyperArc-VMAT in this aspect. The visually evident differences amongst the plans in Figures 1 and 3 are further detailed in Table 2, which lists the median difference as well as the Wilcoxon signed rank results per extracted parameter for every potential matched pair of plan comparisons amongst the five options. The median differences are displayed as the result of the row plan subtracted from the column-listed plan. Because a majority of the table displays statistically significant differences with p < 0.05, the only 6 (of a total of 70) non-significant p-values were instead bolded and underlined in the table to stand out. The purpose of this table is to serve as a more detailed reference for the magnitude of the differences when looking at two specific SRS plans per extracted dosimetric parameter.
FIGURE 2 | Gradient Index (GI), V12Gy per target (defined as the volume of 12Gy delivered to the surrounding brain tissue contributed only from that individual target), and V12Gy-TV (defined as the total volume of brain receiving 12Gy per target excluding the target volume) results displayed as box plots per SRS plan type, divided into five separate target size diameter bins.
FIGURE 3 | Box plot results per SRS plan type for the following dosimetric parameters across all patients: the total volume of brain receiving 12Gy, 6Gy, and 3Gy (V12Gy, V6Gy, V3Gy) and the mean dose to the brain excluding the targets (Brain mean dose).
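The conformity and gradient metrics reported in these figures are simple volume ratios, and the paired comparisons in Table 2 rely on the Wilcoxon signed-rank test. The following minimal sketch (not the study's code; all volumes and per-target values are hypothetical) shows how such indices and tests can be computed once the relevant volumes have been extracted from the planning system.

```python
# Minimal sketch of the volume-ratio indices and paired testing described above.
# Volumes are assumed to be pre-extracted (in cc); all values are hypothetical.
from scipy.stats import wilcoxon

def ci_rtog(piv: float, tv: float) -> float:
    """RTOG conformity index: prescription isodose volume / target volume."""
    return piv / tv

def ci_paddick(tv_piv: float, tv: float, piv: float) -> float:
    """Paddick CI: (target volume covered by prescription isodose)^2 / (TV * PIV)."""
    return tv_piv ** 2 / (tv * piv)

def gradient_index(v50: float, piv: float) -> float:
    """Gradient index: 50% isodose volume / prescription isodose volume."""
    return v50 / piv

# Example target: 1.2 cc PTV, 1.4 cc prescription isodose volume,
# 1.15 cc of the PTV covered by prescription, 5.6 cc within the 50% isodose.
print(ci_rtog(1.4, 1.2), ci_paddick(1.15, 1.2, 1.4), gradient_index(5.6, 1.4))

# Paired comparison of a metric (e.g. GI) between two plan types across targets.
gi_plan_a = [3.1, 3.4, 2.9, 3.8, 3.3]   # hypothetical per-target values
gi_plan_b = [2.8, 3.0, 2.7, 3.5, 3.1]
stat, p = wilcoxon(gi_plan_a, gi_plan_b)  # matched pairs, two-sided by default
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")
```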
Figure 4 compares the plan results for all of the studied OARs: maximum dose to the brainstem, optic chiasm left and right eyes, left and right optic nerves. It is important to note that each of the plans satisfied normal tissue constraints amongst all of the patients. Overall, not many patterns nor striking differences between the SRS techniques were observed when it came to sparing OARs and in general they all performed similarly well. The large range observed in D max Brainstem for GammaKnife planning is a result of target location coupled with source geometry and an inability to optimize the beam's trajectory as is possible with Elements and VMAT treatment planning software. As a visual comparison of the dosimetric results, Figure 5 displays axial, coronal, and sagittal views of the five different SRS plans per patient case #15 with a total of 10 metastases. This patient was selected due to the presence of multiple small metastases as well as a larger, more irregularly shaped target volume, all treated within the same plan. The slice locations were selected so as to show case as many of the treated metastases as possible. Lastly, treatment delivery times listed in Table 3 were extracted from the GammaKnife treatment plans and approximately calculated for the Elements and Manual-VMAT plans based on the total MUs required (since the dose rate and gantry rotation speed can vary), assuming a dose rate of 1,400 MU/min with 6X flattening-filter-free energy. Unsurprisingly, GammaKnife plans took hours longer to deliver than any linac-based radiosurgery plan. Elements and Manual-VMAT A had similar beam-on times, but HyperArc-VMAT and Manual-VMAT B were longer for almost every single case. The higher MU is a result of the increased modulation, which often happens when more stringent constraints are applied during the optimization process. This is consistent with the brain V 12Gy results exhibited in Figure 2 and the mean differences listed in Table 2, where HyperArc-VMAT and Manual-VMAT B result in the least low dose spillage across all target size groups. DISCUSSIONS The overall findings of this comparison study have demonstrated that as would be expected, all of the commercially available options for SRS are able to achieve acceptable conformality and OAR dose sparing limits. However, looking more closely at each dosimetric parameter has revealed interesting information. While it was not surprising to find the improved conformity results of the linac-based SRS techniques over GammaKnife for larger and more irregular volumes (due to the more advanced inverse optimization features as well as the ability of MLC shaping), it was certainly unexpected to see HyperArc-VMAT be able to compete with GammaKnife in terms of V12 Gy . Also expected was GammaKnife's outperformance amongst GI for small targets. However, for the larger target sizes, GammaKnife resulted in similar GIs to HyperArc-VMAT and Manual-VMAT B . This information coupled with the results from Table 3 of total beam-on times of minutes vs. hours, suggests that linac-VMAT radiosurgery is a valuable contender to GammaKnife for patients seeking treatment of multiple brain metastases, particularly for large and irregularly-shaped target volumes. 
Another rather interesting find was the large deviation seen in the results between the Manual-VMAT A and VMAT B plans, where the optimization objective setting was the main difference between the two techniques, with one having applied upper constraints (VMAT A ) and the other avoiding upper constraints entirely (VMAT B ) but with a more stringent control on low dose spread. VMAT B outperformed VMAT A across basically all of the studied parameters: CI, GI, V12 Gy , V6 Gy , OAR doses, etc., but all essentially at the cost of longer beam-on times. This large variation in plan quality indicated that the quality of care using VMAT for the treatment of multiple brain metastases is largely dependent on planner experience and institutional standards. Thus, in order to improve the standardization of quality of care, planning procedures and optimization objective settings need to be carefully standardized across our community even at this level of detail. Furthermore, it can be seen from the results that even though Manual-VMAT B had in general the longest beam-on time, i.e., highest modulation complexity, its plan quality was still mostly inferior compared to HyperArc-VMAT. This indicates that the objective settings used in VMAT B are suboptimal and do not provide as good of a balance (relative to HyperArc-VMAT) between modulation complexity and plan quality. To this end, HyperArc-VMAT could help improve both the optimization efficiency and plan quality standardization for SRS treatment of multiple brain metastases using a VMAT delivery technique. As a quick and straightforward summary of our findings, a spider plot was generated in Figure 6 to serve as a qualitative description of the data. The categories spanned not only dosimetric results, but also considered efficiency and skill in terms of staff and time resources required: conformity, low dose fall-off, inter-planner variability and skill, delivery efficiency, and patient-specific QA effort. Each of the SRS techniques (GammaKnife, Elements, HyperArc-VMAT, and Manual-VMAT) was ranked relative to each other according to the specific category item. Across the different target size bins, Figure 1 demonstrated that HyperArc-VMAT resulted in comparable or superior CI amongst the SRS techniques, thus earning a ranking of 1. GammaKnife had excellent conformity at the smaller target size bins, but that deteriorated with increasing size (compared to VMAT), thus earning it a ranking of 3, after VMAT with a rank of 2. Elements was consistently inferior to the other SRS modalities in terms of CI and thus was ranked last at 4. Regarding the category of dose fall-off, GammaKnife was consistently superior according to Figures 2, 3, thus it was ranked the highest (1), followed by HyperArc-VMAT (2), Elements (3) and then Manual-VMAT (4), due to the dependence on planning strategy and skill. In terms of required planning skill and inter-planner variability, Elements and HyperArc-VMAT are less dependent on this aspect, in that all of the programmed presets only require minimal planner interaction, thus earning both a ranking of 1. GammaKnife would then rank lower (at 3), given that each target is typically forwardplanned by the user. (Note however that the forward-planning of multiple metastases in GammaKnife allows the user to finetune the coverage of each target, whereas in VMAT planning the software only allows normalization to a single target at the highest dose level when prescribing different doses to different size metastases.) 
Manual-VMAT ranked the lowest at 4, due to the potential for greatest variability amongst different planners with the large degree of customizable plan settings (compared to GammaKnife), which can result in varying plan quality as seen in plans A vs. B. Table 3 displays the beam-on time and thus the delivery efficiency are straightforward in this respect: Elements had the lowest average beam-on time (rank = 1), followed by comparable beam-on times of HyperArc-VMAT and Manual-VMAT (both ranked at 2), and GammaKnife coming in last (rank 4) with the longest beam-on times. Furthermore, GammaKnife treatment requires the presence of an authorized medical physicist as well as a physician trained in emergency procedures for the entirety of the treatment, which may pose an additional burden on staff resources (as compared with linac-based radiosurgery). Lastly, when it comes to required patient-specific QA, GammaKnife does not require any and thus would be ranked the highest at 1, followed by Elements ranking at 2 (whether to perform dose verification for 3D-DCA SRS plans varies according to institutional policies) and then both VMAT techniques (all ranked at 3) which require additional resources i.e., physics staff to perform the time-consuming QA, involving plan preparation, device setup, beam delivery and plan analysis. The overall purpose of Figure 6 is to allow the reader to qualitatively evaluate the differences in focus amongst the SRS techniques per category of interest, in the context of multiple metastases treatment. Another practical aspect to consider when interpreting the differences seen in the results is the accuracy and precision of these treatment machines and how truly capable they are to deliver exactly what is displayed to the user in the treatment planning software. Inevitably, uncertainties exist throughout the entire treatment process, from simulation to on-board imaging and patient setup, all the way through to radiation delivery. Although it is beyond the scope of this paper, it is important to be aware of the potential geometric uncertainties present not only from the hardware (imaging and radiation isocenter coincidence, gantry rotation and sag, couch positional accuracy, MLC positional accuracy, etc.), but in the patient immobilization (frameless mask treatments for linac and GammaKnife) aspect as well, which can alter the expected conformity indices as calculated by the planning software. This type of data analysis will be the goal of our future studies. Upon evaluation of the dosimetric and logistical differences of these currently available SRS treatment techniques, the question arises whether any of these differences actually have a clinically tangible impact. The clinical implications of the disparities in the low dose spillage or the conformity indices, in terms of local control or quality of life, is a much more vast and complicated discussion that ultimately is very difficult to determine. It would require multi-institutional prospective clinical trials with long term follow-up, which sadly may be rather difficult to obtain, given the average length of survival of patients with multiple brain metastases. 
However, for the purposes of this comparison study, we have analyzed and presented the data in such a manner as to provide the community with a tool for selecting an SRS modality for a specific patient scenario when more than one option is available, or even for the case of selecting which type of SRS modality fits best within one's clinical needs based on their specific patient population. CONCLUSIONS HyperArc-VMAT and Manual-VMAT plans resulted in superior CI when compared with GammaKnife and Elements for target diameters > 1 cm in size, albeit at the expense of more MUs (relative to Elements). For targets < 1 cm, GammaKnife, HyperArc-VMAT and both Manual-VMAT plans achieved similar CI, but still all superior to Elements. In the smaller target size bins, GammaKnife resulted in superior GI. In terms of low dose spread into the brain, HyperArc-VMAT achieved comparable (target size < 1 cm) or slightly better V12 Gy values as GammaKnife (target size > 1 cm). All five SRS plans were able to meet the surrounding normal tissue limits, and overall resulted in similar doses to the pertinent OARs. Beam-on times were hours longer for GammaKnife vs. each of the linac-based SRS plans, with VMAT A and Elements resulting in shorter times relative to VMAT B and HyperArc-VMAT. Manual-VMAT plan quality varied greatly between the two institutional planning strategies employed. In summary, this study demonstrated that HyperArc-VMAT is capable of achieving similar or slightly better low dose spread into the brain as GammaKnife, while maintaining excellent conformity as well as minimizing inter-planner variability and beam-on time for patients seeking treatment of multiple metastases. GammaKnife remains superior in terms of gradient index and eliminates the need for patient-specific QA. Elements strengths include delivery/QA efficiency and inter-planner consistency due to automated optimization of pre-defined templates. Manual-VMAT is subject to larger inter-planner variability as compared to HyperArc-VMAT. DATA AVAILABILITY All datasets generated for this study are included in the manuscript and/or the supplementary files.
6,329.4
2019-06-07T00:00:00.000
[ "Medicine", "Engineering" ]
A druggable copper-signalling pathway that drives inflammation Inflammation is a complex physiological process triggered in response to harmful stimuli1. It involves cells of the immune system capable of clearing sources of injury and damaged tissues. Excessive inflammation can occur as a result of infection and is a hallmark of several diseases2–4. The molecular bases underlying inflammatory responses are not fully understood. Here we show that the cell surface glycoprotein CD44, which marks the acquisition of distinct cell phenotypes in the context of development, immunity and cancer progression, mediates the uptake of metals including copper. We identify a pool of chemically reactive copper(ii) in mitochondria of inflammatory macrophages that catalyses NAD(H) redox cycling by activating hydrogen peroxide. Maintenance of NAD+ enables metabolic and epigenetic programming towards the inflammatory state. Targeting mitochondrial copper(ii) with supformin (LCC-12), a rationally designed dimer of metformin, induces a reduction of the NAD(H) pool, leading to metabolic and epigenetic states that oppose macrophage activation. LCC-12 interferes with cell plasticity in other settings and reduces inflammation in mouse models of bacterial and viral infections. Our work highlights the central role of copper as a regulator of cell plasticity and unveils a therapeutic strategy based on metabolic reprogramming and the control of epigenetic cell states. Inflammation is a complex physiological process that enables clearance of pathogens and repair of damaged tissues. However, uncontrolled inflammation driven by macrophages and other immune cells can result in tissue injury and organ failure. Effective drugs against severe forms of inflammation are scarce 5,6 , and there is a need for therapeutic innovation 7 . The plasma membrane glycoprotein CD44 is the main cell surface receptor of hyaluronates [8][9][10] . It has been associated with biological programmes 11 that involve cells capable of acquiring distinct phenotypes independently of genetic alterations, which is commonly defined as cell plasticity 12,13 . 
For instance, inflammatory macrophages are marked by increased expression of CD44 and its functional implication in this context has been demonstrated 14,15 . However, the mechanisms by which CD44 and hyaluronates influence cell biology remain elusive 14,[16][17][18] . The recent discovery that CD44 mediates the endocytosis of iron-bound hyaluronates in cancer cells links membrane biology to the epigenetic regulation of cell plasticity, where increased iron uptake promotes the activity of α-ketoglutarate (αKG)-dependent demethylases involved in the regulation of gene expression 19 . Hyaluronates have been shown to induce the expression of pro-inflammatory cytokines in alveolar macrophages (AMs) 20 , and macrophage activation relies on complex regulatory mechanisms occurring at the chromatin level [21][22][23] . This body of work raises the question of whether a general mechanism involving CD44-mediated metal uptake regulates macrophage plasticity and inflammation. Here we show that macrophage activation is characterized by an increase of mitochondrial copper(ii), which occurs as a result of CD44 upregulation. Mitochondrial copper(ii) catalyses NAD(H) redox cycling, thereby promoting metabolic changes and ensuing epigenetic alterations that lead to an inflammatory state. We developed a metformin dimer that inactivates mitochondrial copper(ii). This drug induces metabolic and epigenetic shifts that oppose macrophage activation and dampen inflammation in vivo. CD44 mediates cellular uptake of copper To study the role of metals in immune cell activation, we generated inflammatory macrophages using human primary monocytes isolated from blood (Fig. 1a). Activated monocyte-derived macrophages (aMDMs) were characterized by the upregulation of CD44, CD86 and CD80, together with a distinct cell morphology ( Fig. 1b and Extended Data Fig. 1a-c). Using inductively coupled plasma mass spectrometry (ICP-MS), we detected higher levels of cellular copper, iron, manganese and calcium in aMDMs compared with non-activated MDMs (naMDMs) (Fig. 1c and Extended Data Fig. 1d). In contrast to other metal transporters, knocking down CD44 antagonized metal uptake ( Fig. 1d and Extended Data Fig. 1e,f) and, unlike CD44, levels of these other metal transporters did not increase upon macrophage activation (Fig. 1e). Of note, levels of these transporters remained unchanged under CD44-knockdown conditions (Extended Data Fig. 1g). Treating MDMs with an anti-CD44 antibody 24 antagonized metal uptake upon activation ( Fig. 1f and Extended Data Fig. 2a). Conversely, supplementing cells with hyaluronate upon activation increased metal uptake, whereas addition of a permethylated hyaluronate 25 , which is less prone to metal binding, had no effect ( Fig. 1g and Extended Data Fig. 2b). Inflammatory macrophages were also characterized by the upregulation of hyaluronate synthases (HAS) and the downregulation of the copper export proteins ATP7A and ATP7B (Extended Data Fig. 2c). Nuclear magnetic resonance revealed that hyaluronate interacts with copper(ii) and that this interaction can be reversed by lowering the pH (Fig. 1h). Fluorescence microscopy showed that labelled hyaluronate colocalized with a lysosomal copper(ii) probe 26 in aMDMs (Fig. 1i). Cotreatment with hyaluronidase-which degrades hyaluronates-or knocking down CD44 reduced lysosomal copper(ii) staining ( Fig. 1i and Extended Data Fig. 2d). 
In aMDMs, the copper transporter CTR2 colocalized with the endolysosomal marker LAMP2, and CTR2 knockdown led to increased lysosomal copper(ii) staining (Extended Data Fig. 2e,f). Collectively, these data indicate that in aMDMs, CD44 mediates the endocytosis of specific metals bound to hyaluronate, including copper. Mitochondrial Cu(ii) regulates cell plasticity We evaluated the capacity of copper(i) and copper(ii) chelators, including ammonium tetrathiomolybdate (ATTM), d-penicillamine (D-Pen), EDTA and trientine, to interfere with macrophage activation. [Fig. 1 caption fragments: h, molecular structure of a hyaluronate tetrasaccharide (top) and 1 H NMR spectra (bottom) of the copper-hyaluronate complexation experiment, recorded at 310 K in D 2 O; i, fluorescence microscopy of a lysosomal copper(ii) probe (Lys-Cu) and FITC-hyaluronate in aMDMs treated with hyaluronidase (HD), with at least 30 cells quantified per donor (n = 6 donors), scale bar 10 μm; c,e,f,i, two-sided Mann-Whitney test; d,g, Kruskal-Wallis test with Dunn's post test; in all box plots, boxes show the interquartile range, centre lines the median and whiskers the minimum and maximum; each coloured dot represents an individual donor.] We also studied metformin, a biguanide used for the treatment of type-2 diabetes, because it can form a bimolecular complex 27 with copper(ii). Metformin partially antagonized CD86 upregulation, albeit at high concentrations, in contrast to the marginal effects of other copper-targeting molecules (Fig. 2a and Extended Data Fig. 3a). To reduce the entropic cost inherent to the formation of bimolecular Cu(Met) 2 complexes 27,28 , we tethered two biguanides with methylene-containing linkers to produce the lipophilic copper clamps LCC-12 and LCC-4,4 (Fig. 2b), which contain 12 and 4 linking methylene groups, respectively. LCC-4,4 displays distal butyl substituents to exhibit a lipophilicity similar to that of LCC-12. Using molecular dynamics and density functional theory, we compared the lowest-energy simulated structures of the copper(ii) complexes with a Cu(Met) 2 complex, using the crystal structure of the latter as benchmark 28 (Extended Data Fig. 3b). Cu-LCC-12 adopted a geometry similar to that of Cu(Met) 2 , whereas Cu-LCC-4,4 lacked bonding angle symmetry and exhibited imine-copper bonds out of plane. The calculated free energy of Cu-LCC-4,4 was 16.6 kcal mol −1 higher than that of Cu-LCC-12, suggesting that Cu-LCC-4,4 is a less stable copper(ii) complex. High-resolution mass spectrometry (HRMS) confirmed the formation of monometallic copper biguanide complexes, with Cu-LCC-12 being the most stable (Fig. 2c and Extended Data Fig. 3c). LCC-12 did not form stable complexes with other divalent metal ions (Extended Data Fig. 3d). A reduction in the UV absorbance of LCC-12 upon addition of copper(ii) chloride indicated complex formation at low micromolar concentrations. This was confirmed by the appearance of coloured solutions characteristic of metal complexes (Extended Data Fig. 3e,f). Notably, even at a 1,000-fold lower dose, LCC-12 antagonized the induction of CD86 and CD80 in aMDMs more potently than metformin (Fig. 2d). The effect of LCC-4,4 used at 10 μM was moderate, consistent with the reduced capacity of this analogue to form a complex with copper(ii). 
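To give a feel for what a 16.6 kcal mol −1 gap implies, the sketch below converts it into a Boltzmann population ratio at 310 K. This is an illustrative back-of-the-envelope calculation, not part of the original analysis, and it idealizes the two complexes as competing states of one system.

```python
import math

R = 0.0019872   # gas constant, kcal mol^-1 K^-1
T = 310.0       # physiological temperature, K
ddG = 16.6      # reported free-energy difference, kcal mol^-1 (Cu-LCC-4,4 minus Cu-LCC-12)

# Boltzmann factor: relative population of the higher-energy complex
ratio = math.exp(-ddG / (R * T))
print(f"Cu-LCC-4,4 : Cu-LCC-12 ~ {ratio:.1e}")   # ~2e-12, i.e. Cu-LCC-12 dominates overwhelmingly
```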
As reported for metformin 29 , LCC-12 induced AMPK phosphorylation, albeit at a much lower concentration, suggesting that phenotypes induced by metformin are linked to copper(ii) targeting (Extended Data Fig. 3g). Next, we evaluated the effect of LCC-12 on other cell types that can upregulate CD44 upon exposure to specific biochemical stimuli. LCC-12 interfered with the activation of dendritic cells and T lymphocytes and the expression of several cell surface molecules on alternatively activated macrophages (Extended Data Fig. 4a,b). By contrast, LCC-12 did not interfere with the activation of neutrophils, a process that is not marked by CD44 upregulation. [Fig. 2 caption fragments: h,i, fluorescence microscopy of labelled LCC-12,4 in aMDMs, with in-cell labelling performed in the presence or absence of ascorbate (asc) and without added copper(ii); j, ICP-MS of mitochondrial copper in MDMs (n = 6 donors); k, ICP-MS of mitochondrial copper in aMDMs under CD44-knockdown conditions (n = 6 donors).] Copper signalling has previously been linked to cancer progression [30][31][32] . Human non-small cell lung carcinoma cells and mouse pancreatic adenocarcinoma cells undergoing epithelial-mesenchymal transition (EMT)-a cell biology programme that can promote the acquisition of the persister cancer cell state and metastasis 12,33 -were characterized by CD44 upregulation and increased cellular copper. Consistently, LCC-12 interfered with EMT, as shown by the levels of the epithelial marker E-cadherin, mesenchymal markers vimentin and fibronectin, the EMT transcription factors Slug and Twist as well as the levels of pro-metastatic protein CD109 (Extended Data Fig. 4c,d). These data support a general mechanism involving copper that regulates cell plasticity. Nanoscale secondary ion mass spectrometry (NanoSIMS) imaging of aMDMs revealed a subcellular localization of the isotopologue 15 N, 13 C-LCC-12 that overlapped with the signals of 197 Au-labelled cytochrome c, suggesting that LCC-12 targets mitochondria (Extended Data Fig. 5a,b). Fluorescent in-cell labelling of the biologically active but-1-yne-containing analogue LCC-12,4 using click chemistry 34 gave rise to a cytoplasmic staining pattern that colocalized with cytochrome c (Fig. 2e,f). The mitochondrial staining of LCC-12,4 was reduced upon cotreatment with carbonyl cyanide chlorophenylhydrazone (CCCP), a small molecule that dissipates the inner mitochondrial proton gradient, indicating that LCC-12 accumulation in mitochondria is driven by its protonation state (Fig. 2g). Labelling alkyne-containing small molecules in cells requires a copper(i) catalyst generated in situ from added copper(ii) and ascorbate [34][35][36][37] . We investigated whether the mitochondrial copper(ii) content in aMDMs would allow in-cell labelling without the need to experimentally add a copper catalyst. Fluorescent labelling of LCC-12,4 used at a concentration of 100 nM, which is lower than the biologically active dose of LCC-12, occurred in aMDMs in the absence of added copper(ii), and a strong staining was observed only in aMDMs when ascorbate was used for labelling (Fig. 2h,i). Furthermore, the fluorescence intensity of labelled LCC-12,4 was reduced when a 100-fold molar excess of LCC-12 was used as a competitor (Extended Data Fig. 5c). These data support the existence of a druggable pool of chemically reactive copper(ii) in mitochondria. 
Consistent with this, levels of copper increased in mitochondria upon macrophage activation together with those of manganese ( Fig. 2j and Extended Data Fig. 5d,e), whereas levels of copper in the endoplasmic reticulum and nucleus remained unaltered (Extended Data Fig. 5f,g). Notably, aMDMs were characterized by an increase of nuclear iron, hinting at an increased activity of αKG-dependent demethylases as previously shown in cancer cells undergoing EMT 19 (Extended Data Fig. 5f). LCC-12 treatment did not alter the total cellular and mitochondrial copper content of aMDMs, indicating that LCC-12 does not act as a cuprophore 38 (Extended Data Fig. 5h,i). By contrast, LCC-12 reduced the fluorescence of a mitochondrial copper(ii) probe 39 in aMDMs, supporting direct copper binding in mitochondria (Extended Data Fig. 5j). Notably, the mitochondrial metal transporters SLC25A3 and SLC25A37 were upregulated in aMDMs (Extended Data Fig. 5k). Knocking down the expression of these transporters or CD44 did not reduce labelled LCC-12,4 fluorescence (Extended Data Fig. 5l-n), whereas knocking down CD44 led to marked reduction of mitochondrial copper (Fig. 2k). This indicates that, unlike the proton gradient, mitochondrial copper does not drive mitochondrial accumulation of biguanides. As a control, labelling an alkyne-containing derivative of the copper(ii) chelator trientine, which did not exhibit a potent effect against macrophage activation, revealed nuclear accumulation, providing a rationale for the lack of biological activity of this and potentially other copper-targeting drugs in this context (Extended Data Fig. 5o,p). Cu(ii) regulates NAD(H) redox cycling Higher mitochondrial levels of manganese in aMDMs pointed to a functional role of the superoxide dismutase 2 (SOD2) in the context of macrophage activation. The amount of SOD2 protein increased in mitochondria upon activation, whereas the amount of catalase decreased (Fig. 3a,b). Mitochondrial hydrogen peroxide, a product of superoxide dismutase and substrate of catalase, increased accordingly ( Fig. 3c,d). In cell-free systems, copper(ii) can catalyse the reduction of hydrogen peroxide by various organic substrates 40,41 . In the presence of copper(ii), NADH reacted with hydrogen peroxide to yield NAD + , whereas the absence of copper(ii) yielded a complex mixture of oxidation products ( Fig. 3e and Extended Data Fig. 6a). Consistently, copper(ii) favoured the conversion of 1-methyl-1,4dihydronicotinamide (MDHNA), a structurally less complex surrogate of NADH, into 1-methylnicotinamide (MNA + ), whereas a product of epoxidation was formed preferentially in the absence of copper (Extended Data Fig. 6b,c). Thus, copper(ii) redirects the reactivity of hydrogen peroxide towards NADH. Under reaction conditions similar to those found in mitochondria, NADH was rapidly consumed to yield NAD + in the presence of copper(ii) (Fig. 3f and Extended Data Fig. 6d). This reaction was inhibited by LCC-12, whereas the effects of LCC-4,4 and metformin were marginal (Fig. 3f). Molecular modelling supported a reaction mechanism in which copper(ii) activates hydrogen peroxide, facilitating its reduction through the transfer of a hydride from NADH (Extended Data Fig. 6e). Copper(ii) acts as a catalyst that lowers the energy of the transition state with a geometry favouring this reaction. Molecular modelling also supported the inactivation of this reaction by biguanides through direct copper(ii) binding (Extended Data Fig. 6f). 
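The cell-free reaction referred to here is followed experimentally by the loss of NADH absorbance at 340 nm (see 'Copper-catalysed oxidation of NADH' below). As a hedged illustration only, the sketch below fits a hypothetical absorbance trace to a pseudo-first-order decay; the rate law and all numbers are assumptions, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical A340 readings: NADH absorbs at 340 nm, NAD+ does not
t = np.array([0, 60, 120, 240, 480, 960, 1800], dtype=float)    # time, s
a340 = np.array([0.62, 0.55, 0.49, 0.39, 0.26, 0.13, 0.04])     # absorbance, AU

def first_order(t, a0, k, baseline):
    """Pseudo-first-order decay of the NADH absorbance signal."""
    return baseline + (a0 - baseline) * np.exp(-k * t)

(a0, k, baseline), _ = curve_fit(first_order, t, a340, p0=(0.6, 1e-3, 0.02))
print(f"apparent k ~ {k:.2e} s^-1, half-life ~ {np.log(2) / k:.0f} s")
```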
Mitochondrial NADH levels were higher and NAD + levels were lower in aMDMs compared with naMDMs, suggesting an enhanced activity of mitochondrial enzymes reliant on NAD + (Fig. 3g and Supplementary Table 1). Treating MDMs with LCC-12 during activation led to a reduction of NADH and NAD + (Fig. 3g and Supplementary Table 1). This suggests that copper(ii) catalyses the reduction of hydrogen peroxide by NADH to produce NAD + and that biguanides can interfere with this redox cycling, leading instead to other oxidation by-products (Fig. 3e). NADH and copper were found in mitochondria of aMDMs at an estimated substrate:catalyst ratio of 2:1, which is even more favourable for this reaction to take place than the 20:1 ratio used in the cell-free system (Extended Data Fig. 6g). Macrophage activation was accompanied by altered levels of several metabolites whose production depends on NAD(H) (Fig. 3h and Supplementary Table 2). LCC-12-induced metabolic reprogramming of aMDMs was marked by a reduction of αKG and acetyl-coenzyme A (acetyl-CoA) (Fig. 3i). LCC-12 also caused a reduction of extracellular lactate and accumulation of glyceraldehyde 3-phosphate in aMDMs consistent with the reduced activity of NAD + -dependent glyceraldehyde 3-phosphate dehydrogenase (Extended Data Fig. 6h,i). Collectively, these data support the central role of mitochondrial copper(ii) in the maintenance of a pool of NAD + that regulates the metabolic state of inflammatory macrophages. Mitochondrial Cu(ii) regulates transcription Transcription is co-regulated by chromatin-modifying enzymes, whose expression levels and recruitment at specific genomic loci shape gene expression. The turnover of specific enzymes such as iron-dependent demethylases and acetyltransferases relies on αKG and acetyl-CoA 42 . The finding that LCC-12 interfered with the production of these metabolites and opposed macrophage activation pointed to epigenetic alterations that affect the expression of inflammatory genes. We analysed the transcriptomes of aMDMs versus those of naMDMs by RNA sequencing (RNA-seq) (Supplementary Table 3) and compared them to transcriptomics data obtained from bronchoalveolar macrophages of individuals infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) 43 and from human macrophages exposed in vitro to Salmonella typhimurium 44 , Leishmania major 45 or Aspergillus fumigatus 46 (Supplementary Table 4). Gene ontology (GO) analysis revealed three groups of GO terms comprising upregulated genes, belonging to inflammation, metabolism and chromatin (Fig. 4a). Notably, the GO terms of these genes included endosomal transport, cellular response to copper ion, response to hydrogen peroxide and positive regulation of mitochondrion organization. Similar signatures were obtained for macrophages exposed to distinct pathogens (Extended Data Fig. 7a,b and Supplementary Table 5), as defined by GO terms and increased RNA amounts for genes involved in inflammation (Fig. 4b and Extended Data Fig. 7c). aMDMs exhibited upregulated genes encoding CD44, sorting nexin 9 (SNX9), a regulator of CD44 endocytosis, and metallothioneins (MT2A and MT1X) involved in copper transport and storage, whereas expression levels of ATP7A and ATP7B were downregulated (Supplementary Table 3). 
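The GO analysis mentioned above asks whether particular GO terms are over-represented among the upregulated genes. The following sketch shows the generic form of such an over-representation test (a one-sided hypergeometric test with hypothetical numbers); it is illustrative only and is not the pipeline used in the paper.

```python
from scipy.stats import hypergeom

def go_overrepresentation_p(n_background, n_term, n_upregulated, n_overlap):
    """One-sided p-value that a GO term is enriched among upregulated genes.

    n_background : genes in the background set
    n_term       : background genes annotated with the GO term
    n_upregulated: genes called upregulated
    n_overlap    : upregulated genes carrying the annotation
    """
    # P(X >= n_overlap) when drawing n_upregulated genes without replacement
    return hypergeom.sf(n_overlap - 1, n_background, n_term, n_upregulated)

# Hypothetical numbers; a real analysis would also correct for multiple testing
p = go_overrepresentation_p(n_background=20000, n_term=300, n_upregulated=1500, n_overlap=60)
print(f"enrichment p ~ {p:.1e}")
```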
Genes involved in chromatin and histone modifications were upregulated in aMDMs and similar genes encoding iron-dependent demethylases and acetyltransferases were upregulated in bronchoalveolar macrophages from individuals infected with SARS-CoV-2, as well as in macrophages exposed to other pathogens (Fig. 4c and Extended Data Fig. 7d). These data indicate that distinct classes of pathogens trigger similar epigenetic alterations 47 , leading to the inflammatory cell state. In aMDMs, variations in protein levels including increases in iron-dependent demethylases and acetyltransferases were consistent with the RNA-seq data (Extended Data Fig. 8a,b and Supplementary Table 6). Changes in levels of specific demethylases and acetyltransferases were associated with alterations of their targeted marks (Extended Data Fig. 8c,d). Chromatin immunoprecipitation sequencing (ChIP-seq) revealed a global increase of the permissive acetyl marks H3K27ac, H3K14ac and H3K9ac (Extended Data Fig. 9c). LCC-12 treatment reduced H3K27ac, H3K14ac and H3K9ac and increased H3K27me3 and H3K9me2 levels (Extended Data Fig. 9d), which was associated with the downregulation of targeted inflammatory genes (Fig. 4g, Extended Data Fig. 9e,f and Supplementary Table 7). Thus, the LCC-12-induced decreases in αKG and acetyl-CoA were associated with a reduced activity of iron-dependent demethylases and acetyltransferases, respectively. Notably, knocking down expression of SOD2 or the mitochondrial copper transporter SLC25A3 reduced the inflammatory signature of macrophages (Extended Data Fig. 9g,h). Similarly, knocking out CD44 antagonized epigenetic programming of inflammation in aMDMs without adversely affecting the expression of other metal transporters (Fig. 4h, Extended Data Fig. 9i-k and Supplementary Table 9). Together, these data indicate that hydrogen peroxide is a driver of cell plasticity and that mitochondrial copper(ii) controls the availability of essential metabolic intermediates required for the activity of chromatin-modifying enzymes, which enables rapid transcriptional changes underlying the acquisition of distinct cell states. [Figure panel labels spilled into the text: GO terms 'regulation of cytokine production involved in immune response', 'regulation of cytokine-mediated signalling pathway' and 'cytokine production involved in inflammatory response', with associated inflammatory gene symbols (TLR3, TLR7, TLR8, CXCL9, CXCL10, CXCL11, IL6, IL18, STAT1, JAK2, among others).] Cu(ii) inactivation reduces inflammation We investigated the role of copper signalling in well-established mouse models of acute inflammation: (1) endotoxaemia induced by lipopolysaccharide (LPS), reflecting our mechanistic model of macrophage activation; (2) cecal ligation and puncture (CLP), which recapitulates the pathophysiology of subacute polymicrobial abdominal sepsis occurring in humans 48 ; and (3) a model of viral infection, namely SARS-CoV-2. The inflammatory states of small peritoneal macrophages (SPMs) isolated from the LPS and CLP mouse models, and AMs isolated from SARS-CoV-2-infected mice, were characterized by upregulated CD44 and increased cellular copper (Fig. 5a-c). Effectors of the copper-signalling pathway, including HAS and SOD2 as well as specific epigenetic modifiers, were also upregulated in inflammatory macrophages (Extended Data Fig. 10a-c). 
Histone mark targets of these epigenetic modifiers were altered accordingly (Fig. 5d and Extended Data Fig. 10b,c). Intraperitoneal administration of LCC-12 in LPS-treated mice caused reductions in H3K27ac, H3K14ac and H3K9ac and increases in H3K27me3 and H3K9me2 (Fig. 5d), which were associated with reduced inflammation and with maintenance of body temperature (Fig. 5h and Extended Data Fig. 10d), performing better than high-dose dexamethasone, which is used for the clinical management of acute inflammation. In CLP-induced sepsis, LCC-12 also increased the survival rate (Fig. 5i). LCC-12 administered by inhalation to SARS-CoV-2-infected K18-hACE2 mice altered the expression of genes involved in the regulation of chromatin and downregulated the expression of inflammatory genes (Extended Data Fig. 10e,f and Supplementary Tables 11 and 12). Together, these data indicate that targeting mitochondrial copper(ii) interferes with the acquisition of the inflammatory state in vivo and confers therapeutic benefits. Discussion CD44 has previously been linked to development, immune responses and cancer progression. Here we have shown that CD44 mediates the cellular uptake of specific metals, including copper, thereby regulating immune cell activation. We identified a chemically reactive pool of copper(ii) in mitochondria that characterizes the inflammatory state of macrophages. Our data support a cellular mechanism whereby the activation of hydrogen peroxide by copper(ii) enables oxidation of NADH to replenish the pool of NAD + . Maintenance of this redox cycling is required for the production of key metabolites that are essential for epigenetic programming. In this context, copper(ii) acts directly as a metal catalyst, in contrast to its dynamic metalloallosteric effect in other processes 30 . Transcriptomic shifts in macrophages exposed to distinct classes of pathogens substantiate the general nature of this mechanism. We designed a dimer of biguanides that is able to inactivate mitochondrial copper(ii), thereby triggering metabolic and epigenetic reprogramming that reduces the inflammatory cell state and increases survival in preclinical models of acute inflammation. We have thus illustrated the pathophysiological relevance of this copper(ii)-triggered molecular chain of events. Acute inflammation is therefore reminiscent of a metabolic disease that can be rebalanced by targeting mitochondrial copper(ii) to restrict the generation of key metabolites required to initiate and maintain the inflammatory state (Extended Data Fig. 10g). LCC-12 selectively targets mitochondrial copper(ii), which is more abundant in the diseased inflammatory cell state than in the basal state. This drug also interfered with the process of EMT in cancer cells, supporting a wider role for this copper-signalling pathway in the regulation of transcriptional changes beyond inflammation. Thus, CD44 may be characterized as a regulator of cell plasticity. Metformin exhibits positive effects on human health and is being studied as an anti-ageing drug 49,50 . However, investigation of its mechanism of action is hampered by its poor pharmacology resulting in low potency, which necessitates the administration of high doses. Thus LCC-12, which we rename 'supformin', exhibits improved biological and preclinical characteristics over metformin, making it a suitable drug-like small molecule for revealing novel mechanistic features of biguanides. 
Overall, our findings highlight the central role of mitochondrial copper(ii) as a regulator of cell plasticity and unveil a therapeutic strategy based on the control and fine-tuning of epigenetic cell states. Online content Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-023-06017-4. Ethics statement Peripheral blood samples were collected from 128 healthy donors at Etablissement Français du Sang (EFS). The use of EFS blood samples from anonymous donors was approved by the Institut National de la Santé et de la Recherche Médicale committee. Written consent was obtained from all the donors. Survival assessment using the LPS mouse model was conducted at Fidelta according to 2010/63/EU and national legislation regulating the use of laboratory animals in scientific research and for other purposes (Official Gazette 55/13). An institutional committee on animal research ethics (CARE-Zg) ensured that animal-related procedures did not compromise animal welfare. Flow cytometry, ICP-MS, western blotting and RNA-seq using the LPS mouse models were performed in accordance with French laws concerning animal experimentation (#2021072216346511) and approved by the Institutional Animal Care and Use Committee of Université de Saint-Quentin-en-Yvelines (C2EA-47). All animal work using the CLP model was conducted in accordance with French laws concerning animal experimentation (#2021072216346511) and approved by the Institutional Animal Care and Use Committee of Université de Saint-Quentin-en-Yvelines (C2EA-47). All animal work concerning RNA-seq on the SARS-CoV-2 model was performed within the biosafety level 3 facility of the Institut Pasteur de Lille, after validation of the protocols by the local committee for the evaluation of the biological risks, and complied with current national and institutional regulations and ethical guidelines (Institut Pasteur de Lille/B59-350009). The experimental protocols using animals were approved by the institutional ethical committee Comité d'Ethique en Experimentation Animale (CEEA) 75, Nord-Pas-de-Calais. The animal study was authorized by the Education, Research and Innovation Ministry under registration number APAFIS#25517-2020052608325772v3. 
Animal work concerning cytometry, ICP-MS and western blotting on the SARS-CoV-2 model was performed within the biosafety level 3 facility of the University of Toulouse. This work was overseen by an Institutional Committee on Animal Research Ethics (license APAFIS#27729-2020101616517580 v3, Minister of Research, France (CEEA-001)), to ensure that animal-related procedures were not compromising the animal welfare. Antibodies Antibodies are annotated below as follows. WB, western blot; FCy, flow cytometry; FM, fluorescence microscopy; NS, NanoSIMS; ChIP, ChIPseq; Hu, used for human samples; Ms, used for mouse samples. Dilutions are indicated. Any antibody validation by manufacturers is indicated and can be found on the manufacturers' websites. Our antibody validation by knockdown (KD) and/or KO strategies as described here for relevant antibodies is indicated. Primary antibodies: ALKBH1 (Abcam, ab195376, clone EPR19215, lot GR262105-2, WB 1:1,000, Hu, Primary cells Peripheral blood samples were collected from 128 healthy donors at EFS. The use of EFS blood samples from anonymous donors was approved by the committee of INSERM (Institut National de la Santé et de la Recherche Médicale). Written consent was obtained from all the donors. Pan monocytes were isolated by negative magnetic sorting using microbeads according to the manufacturer's instructions (Miltenyi Biotec, 130-096-537) and cultured immediately in the presence of cytokines to trigger in vitro differentiation as described in 'Cell culture'. Cells were used fresh without prior freezing. Typically, cells were collected by incubation with 1× PBS with 10 mM EDTA at 37 °C and then scraped, unless stated otherwise. Primary non-small cell lung circulating cancer cells were obtained from Celprogen (36107-34CTC) and cultured as described in 'Cell culture'. Primary macrophages from in vivo mouse models (LPS, CLP and SARS-CoV-2) were isolated and processed as described in 'LPS-induced sepsis model', 'CLP-induced sepsis model' and 'SARS-CoV-2-induced acute inflammation model'. ICP-MS Glass vials equipped with Teflon septa were cleaned with nitric acid 65% (VWR, Suprapur, 1.00441.0250), washed with ultrapure water (Sigma-Aldrich, 1012620500) and dried. Cells were collected and washed twice with 1× PBS. Cells were then counted using an automated cell counter (Entek) and transferred in 200 μl 1× PBS or ultrapure water to the cleaned glass vials. The same volume of 1× PBS or ultrapure water was transferred into separate vials for the background subtraction, at least in duplicate per experiment. Mitochondria, nuclei and endoplasmic reticula were extracted as described in 'Isolation of mitochondria' from a pre-counted population of cells. Samples were lyophilized using a freeze dryer (CHRIST, 2-4 LDplus). Samples were subsequently mixed with nitric acid 65% and heated at 80 °C overnight in the same glass vials closed with a lid carrying a Teflon septum. Samples were then cooled to room temperature and diluted with ultrapure water to a final concentration of 0.475 N nitric acid and transferred to metal-free centrifuge vials (VWR, 89049-172) for subsequent mass spectrometry analyses. Amounts of metals were measured using an Agilent 7900 ICP-QMS in low-resolution mode, taking natural isotope distribution into account. Sample introduction was achieved with a micro-nebulizer (MicroMist, 0.2 ml min −1 ) through a Scott spray chamber. 
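Downstream of this sample preparation, the raw counts are converted into per-cell metal amounts using the blank vials, a calibration against certified standards and normalization to the counted cell number, as described in the remainder of this subsection. A minimal sketch of that arithmetic, with hypothetical values, is shown below; it is illustrative only and not the authors' processing script.

```python
import numpy as np

# Hypothetical calibration: certified Cu standards (ng ml^-1) versus instrument counts
std_conc = np.array([0.0, 0.5, 1.0, 5.0, 10.0])
std_counts = np.array([120, 5300, 10250, 51000, 101500])
slope, intercept = np.polyfit(std_conc, std_counts, 1)   # linear calibration curve

def counts_to_conc(counts):
    """Counts -> concentration in solution (ng ml^-1) via the calibration line."""
    return (counts - intercept) / slope

def metal_per_cell(sample_counts, blank_counts, volume_ml, n_cells):
    """Blank-subtracted metal amount per cell, in ng."""
    net_conc = counts_to_conc(sample_counts) - counts_to_conc(blank_counts)
    return net_conc * volume_ml / n_cells

amount_ng = metal_per_cell(sample_counts=48000, blank_counts=150,
                           volume_ml=5.0, n_cells=1.0e6)
print(f"~{amount_ng * 1e6:.1f} fg copper per cell")   # 1 ng = 1e6 fg
```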
Isotopes were measured using a collision-reaction interface with helium gas (5 ml min −1 ) to remove polyatomic interferences. Scandium and indium internal standards were injected after inline mixing with the samples to control the absence of signal drift and matrix effects. A mix of certified standards was measured at concentrations spanning those of the samples to convert count measurements to concentrations in the solution. Values were normalized against cell number. Western blotting MDMs were treated as indicated and then washed with 1× PBS. For MDMs, proteins were solubilized in 2× Laemmli buffer containing benzonase (VWR, 70664-3, 1:100). Extracts were incubated at 37 °C for 1 h and heated at 94 °C for 10 min, and quantified using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific). SPMs and AMs from the LPS, CLP and SARS-CoV-2 mouse models were isolated by flow cytometry as described in the relevant flow cytometry section. Due to low cell number count, SPMs were collected in 1× PBS, which was subsequently freeze dried in Eppendorf tubes. Dried material was solubilized in 2× Laemmli buffer containing benzonase. AMs were pelleted and the cell pellets were solubilized in 2× Laemmli buffer containing benzonase. Extracts from SPMs and AMs were then incubated at 37 °C for 1 h and heated at 94 °C for 10 min. Proteins extracts from SPMs and AMs were quantified using a Qubit (Invitrogen) and a Qubit protein quantification assay (Invitrogen, Q33212). For the LPS model, SPMs were pooled for 8 sham mice, for 4 LPS-treated mice and for 6 LPS and LCC-12-treated mice. For the CLP model, SPMs were pooled from 8 sham mice and 7 mice subjected to CLP. For the SARS-CoV-2 model, AMs were pooled for 10 sham mice and for 10 SARS-CoV-2-infected mice. Protein lysates were resolved by SDS-PAGE (Invitrogen sure-lock system and NuPAGE 4-12% Bis-Tris precast gels). In a typical experiment from MDMs or cancer cells 10-20 μg of total protein extract was loaded per lane in 2× Laemmli buffer containing bromophenol blue. For in vivo isolated SPM and AMs where protein amounts were limited, typically 1 μg of total protein extract was loaded (more protein lysate was loaded in a new experiment if the antibody could not recognize a specific band at these protein amounts). On each gel a size marker was run (3 μl PageRuler or PageRuler plus, Thermo Scientific, 26616 or 26620 and 17 μl 2× Laemmli buffer) in parallel. Proteins were then transferred onto nitrocellulose membranes (Amersham Protran 0.45 μm) using a Trans-Blot SD semi-dry electrophoretic transfer cell (Bio-rad) using 1× NuPage transfer buffer (Invitrogen, NP00061) with 10% methanol. Membranes were blocked with 5% non-fat skimmed milk powder (Régilait) in 0.1% Tween-20/1× PBS for 20 min. Membranes were cut at the appropriate marker size to allow for the probing of several antibodies on the same membrane. Blots were then probed with the relevant primary antibodies in 5% BSA, 0.1% Tween-20/1× PBS or in 5% non-fat skimmed milk powder in 0.1% Tween-20/1× PBS at 4 °C overnight with gentle motion in a hand-sealed transparent plastic bag. Membranes were washed with 0.1% Tween-20/1× PBS three times and incubated with horseradish peroxidase conjugated secondary antibodies ( Jackson Laboratories) in 5% non-fat skimmed milk powder, 0.1% Tween-20/1× PBS for 1 h at room temperature and washed three times with 0.1% Tween-20/1× PBS. 
Antigens were detected using the SuperSignal West Pico PLUS (Thermo Scientific, 34580) and SuperSignal West Femto (Thermo Scientific, 34096) chemiluminescent detection kits. For blotting proteins on the same membranes, stripping buffer (0.1 M TRIS pH 6.8, 2% SDS w/v, 0.1 M β-mercaptoethanol) was used for 30 min and membranes were washed with 0.1% Tween-20/1× PBS and reblotted subsequently as described above. Signals were recorded using a Fusion Solo S Imaging System (Vilber). For histone marks, H3 was run as a sample processing control on a separate gel in parallel and is displayed in the respective panels. γ-tubulin served as loading control on the same gels and is not displayed in the respective panels. Band quantifications were performed with FIJI 2.0.0-rc-69/1.52n using pixel intensity normalized against the signal of γ-tubulin. All full scans of blots are displayed in the Supplementary Information. Fluorescence microscopy Isolated monocytes were plated on coverslips, differentiated and activated as described in 'Cell culture'. For fluorescent detection of hyaluronate and Lys-Cu, live cells were treated with hyaluronate-FITC (800 kDa, Carbosynth, YH45321, 0.1 mg ml −1 ) and Lys-Cu (in-house, 20 μM, 1 h) 26 for 1 h in the presence or absence of hyaluronidase (Sigma-Aldrich, H3884, 0.1 mg ml −1 ). Hyaluronate-FITC and hyaluronidase were solubilized together in medium for 2 h at 37 °C before adding to the cells. Cells were then washed three times with 1× PBS, fixed with 2% paraformaldehyde in 1× PBS for 12 min and then washed 3 times with 1× PBS. For antibody staining, cells were then permeabilized with 0.1% Triton X-100 in 1× PBS for 5 min and washed 3 times with 1× PBS. Subsequently, cells were blocked in 2% BSA, 0.2% Tween-20/1× PBS (blocking buffer) for 20 min at room temperature. Cells were incubated with the relevant antibody in blocking buffer for 1 h at room temperature, washed 3 times with 1× PBS and were incubated with secondary antibodies for 1 h. Finally, coverslips were washed 3 times with 1× PBS and mounted using VECTASHIELD containing DAPI (Vector Laboratories, H-1200-10). Fluorescence images were acquired using a Deltavision real-time microscope (Applied Precision). 40×/1.4NA, 60×/1.4NA and 100×/1.4NA objectives were used for acquisitions and all images were acquired as z-stacks. Images were deconvoluted with SoftWorx (Ratio conservative-15 iterations, Applied Precision) and processed with FIJI 2.0.0-rc-69/1.52n. Fluorescence intensity is displayed as arbitrary units (AU) and is not comparable between different panels. Colocalization quantification was calculated using FIJI 2.0.0-rc-69/1.52n. Histone quantification was performed using FIJI 2.0.0-rc-69/1.52n by delineating the nuclei using DAPI fluorescence, and calculating the mean fluorescence intensity normalized by area. Small molecule labelling using click chemistry aMDMs on coverslips were treated with LCC-12,4 (in-house, 100 nM, 3 h) in absence or presence of CCCP or LCC-12 competitor (10 μM, 3 h), fixed and permeabilized as indicated in the fluorescence microscopy paragraph. Mitotracker (Invitrogen, M22426) was added to live cells for 45 min before fixation. For in-cell-labelling of trientine, live cells were incubated with trientine alkyne (in-house, 10 μM, 3 h). The click reaction cocktail was prepared using the Click-iT EdU Imaging kit (Invitrogen, C10337) according to the manufacturer's protocol. 
In a typical experiment we mixed 50 μl of 10× Click-iT reaction buffer with 20 μl of CuSO 4 solution, 1 μl Alexa Fluor-azide, 50 μl reaction buffer additive (sodium ascorbate) and 379 μl ultrapure water to reach a final volume of 500 μl. For variations as indicated in the figures, reactions were performed with or without CuSO 4 and ascorbate. Coverslips were incubated with the click reaction cocktail in the dark at room temperature for 30 min, then washed three times with 1× PBS. Immunofluorescence was then performed as described in 'Fluorescence microscopy'. NanoSIMS imaging MDMs were grown on coverslips and activated to obtain aMDMs as described in 'Cell culture'. Cells were treated with 10 μM 15 N, 13 C-LCC-12. Coverslips with samples were washed three times for 10 min with Milli-Q water. Subsequently, cells were dehydrated sequentially with ethanol solutions for 10 min each: 50%, 70%, 2× 90%, 3× 100% (dried over molecular sieves, Sigma-Aldrich, 69833). Samples were then coated with a 1:1 mixture of resin (Electron Microscopy Sciences, dodecenylsuccinic anhydride, 13710, methyl-5-norbornene-2,3-dicarboxylic anhydride, 19000, DMP-30, 13600 and LADD Research Industries: LX112 resin, 21310) and dry ethanol for 1 h. Then, samples were embedded in pure resin for 1 h. Embedding capsules (Electron Microscopy Sciences, 69910-10) were filled with resin, inverted onto the cover slides and placed in an oven at 56 °C for 24 h. Sections 0.2 μm in thickness were prepared using a Leica Ultracut UCT microtome. Sample sections were deposited onto a clean silicon chip (Institute for Electronic Fundamentals/CNRS and University Paris Sud) and dried upon exposure to air before being introduced into the NanoSIMS-50 ion microprobe (Cameca). A Cs + primary ion beam was employed to generate negative secondary ions from the sample surface. The probe was stepped over the image field and the signals of selected secondary ion species were recorded pixel by pixel to create 2D images. The 12 C 14 N − image was recorded to provide the anatomical structure of the cells, while the 31 P − image highlights the location of the cell nucleus. The cellular distribution of the 15 N label was imaged by measuring the excess of the 12 C 15 N − to 12 C 14 N − ratio with respect to the natural abundance level (0.0037), and imaging of the gold-labelled antibody targeting mitochondria was performed by directly detecting 197 Au − ions. When detecting the 12 C 15 N − ion, appropriate mass resolving power was required to discriminate the abundant isobaric 13 C 14 N − ions (M/∆M of 4,272). For each image recording process, multiframe acquisition mode was applied and hundreds of image planes were recorded. The overall acquisition time was 12 h for the 15 N image and 6 h 30 min for the 197 Au image. During image processing with FIJI (2.0.0-rc-69/1.52n), the successive image planes were properly aligned using the TomoJ plugin 51 , so as to correct the slight primary beam shift during long hours of acquisition. A summed image was then obtained with improved statistics. Further, for the 12 C 15 N − to 12 C 14 N − ratio map, an HSI (Hue-Saturation-Intensity) colour image was generated using OpenMIMS for display with increased significance 52 . The hue corresponds to the absolute 15 N/ 14 N ratio value, and the intensity at a given hue is an index of the statistical reliability. RNA interference Human primary monocytes were transfected with the Human Monocyte Nucleofector kit (Lonza, VPA-1007) according to the manufacturer's instructions. 
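For the NanoSIMS ratio imaging described above, the enrichment map is essentially the per-pixel 12 C 15 N − / 12 C 14 N − ratio compared against the natural-abundance value of 0.0037. The sketch below shows that arithmetic on synthetic count images; the HSI rendering and statistical weighting performed in OpenMIMS are omitted, and all numbers are hypothetical.

```python
import numpy as np

NATURAL_15N_14N = 0.0037   # natural isotopic abundance ratio used as the baseline

def enrichment_map(cn15_counts, cn14_counts, min_counts=10):
    """Per-pixel 15N/14N ratio and its excess over natural abundance.

    Pixels with too few 12C14N- counts are masked as statistically unreliable.
    """
    cn14 = cn14_counts.astype(float)
    ratio = np.divide(cn15_counts, cn14, out=np.zeros_like(cn14), where=cn14 > 0)
    excess = ratio / NATURAL_15N_14N          # 1.0 corresponds to natural abundance
    excess[cn14 < min_counts] = np.nan
    return ratio, excess

# Synthetic 4x4 count images, roughly 3x above natural abundance
rng = np.random.default_rng(0)
cn14 = rng.poisson(2000, size=(4, 4))
cn15 = rng.poisson(2000 * NATURAL_15N_14N * 3, size=(4, 4))
ratio, excess = enrichment_map(cn15, cn14)
print(np.round(excess, 2))
```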
5× 10 6 monocytes were resuspended in 100 μl nucleofector solution with 200 pmol of ON-TARGETplus SMARTpool siRNA or negative control siRNA (Qiagen, 1027310) before nucleofection with Nucleofector II (Lonza). Cells were then immediately removed and incubated overnight with 5 ml of prewarmed complete RPMI medium (Gibco). The following day, GM-CSF was added to the medium. The sequences of the SMARTpools used are detailed in the Supplementary Information. Genome editing CRISPR knockout was performed using the following strategy. Human primary monocytes were transfected with Human Monocyte Nucleofector kit (Lonza, VPA-1007). Five million monocytes were resuspended in 100 μl nucleofector solution with a 100 pmol CAS9 (Dharmacon, CAS12206)/200 pmol CD44 (Dharmacon, SQ-009999-01-0010) single guide RNA (sgRNA) mix. The Cas9-CD44 mix was incubated for 10 min at 37 °C before nucleofection with Nucleofector II (Lonza). Cells were then immediately removed and incubated overnight with 5 ml of prewarmed complete RPMI medium (Gibco) and the following day, GM-CSF was added to the medium. At day 5, the cells were activated with LPS (100 ng ml −1 , 24 h) and IFNγ (20 ng ml −1 , 24 h). At day 6, cells were sorted for CD44 − versus CD44 + populations with BD FACSAria. The sorting strategy and the sequences used for CD44 sgRNA (Edit-R Human Synthetic CD44, set of 3, target sequences) are detailed in the Supplementary Information. Bright-field microscopy and digital photographs Bright-field images were acquired using a CKX41 microscope (Olympus) and cellSens Entry imaging software (Olympus). Digital images were taken with an iPhone 11 Pro (Apple). Isolation of mitochondria Mitochondria were isolated using the Qproteome Mitochondria Isolation Kit (Qiagen, 37612) according to the manufacturer's protocol. Cells were washed and centrifuged at 500g for 10 min and the supernatant was removed. Cells were then washed with a solution of 0.9% NaCl (Sigma-Aldrich, S7653-250G) and resuspended in ice-cold lysis buffer and incubated at 4 °C for 10 min. The lysate was then centrifuged at 1,000g for 10 min at 4 °C and the supernatant carefully removed. Subsequently, the cell pellet was resuspended in disruption buffer. Complete cell disruption was obtained by using a dounce homogenizer (mitochondria for ICP-MS) or a blunt-ended needle and a syringe (mitochondria for metabolomics). The lysate was then centrifuged at 1,000g for 10 min at 4 °C and the supernatant transferred to a clean tube. The supernatant was then centrifuged at 6,000g for 10 min at 4 °C to obtain mitochondrial pellets. Isolation of nuclei Nuclei were isolated using the Nuclei EZ Prep (Sigma-Aldrich, NUC101-1KT) according to the manufacturer's instructions. In brief, cells were treated as indicated and collected upon scraping and counted. Subsequently, cells were washed twice with 1× PBS and lysed with 1 ml ice-cold Nuclei EZ lysis buffer for 5 min on ice. The suspension was centrifuged at 500g for 5 min at 4 °C. Resulting nuclei were washed with Nuclei EZ lysis buffer and centrifuged to generate a pellet of isolated cell nuclei. Isolation of endoplasmic reticula Endoplasmic reticula were isolated using the Endoplasmic Reticulum Enrichment Extraction Kit (Novus Biologicals, NBP2-29482) according to the manufacturer's instructions. In brief, 500 μl of 1× isosmotic homogenization buffer followed by 5 μl of 100× PIC were added to a pellet of 10 6 cells. The resulting suspension was centrifuged at 1,000g for 10 min at 4 °C. 
The supernatant was transferred to a clean centrifuge tube and centrifuged at 12,000g for 15 min at 4 °C. The floating lipid layer was discarded. The supernatant was centrifuged in a clean centrifuge tube using an ultracentrifuge at 90,000g for 1 h. The resulting pellet contained the total endoplasmic reticulum fraction (rough and smooth). Chemical synthesis Products were purified on a preparative HPLC Quaternary Gradient 2545 equipped with a photodiode array detector (Waters) fitted with a reverse phase column (XBridge BEH C18 OBD Prep column, 5 μm, 30 × 150 mm). NMR spectra were run in DMSO-d 6 , methylene chloride-d 2 or methanol-d 4 at 298 K unless stated otherwise. 1 H NMR spectra were recorded on Bruker spectrometers at 400 or 500 MHz. Chemical shifts δ are expressed in ppm using residual non-deuterated solvent signals as internal standard. The following abbreviations are used: ex, exchangeable; s, singlet; d, doublet; t, triplet; td, triplet of doublets; m, multiplet. The 13 C NMR spectra were recorded at 100.6 or 125.8 MHz, and chemical shifts δ are expressed in ppm using the deuterated solvent signal as internal standard. The purity of final compounds, determined to be >98% by UPLC-MS, and low-resolution mass spectra (LRMS) were recorded on a Waters Acquity H-class equipped with a photodiode array detector and SQ Detector 2 (UPLC-MS) fitted with a reverse phase column (Acquity UPLC BEH C18, 1.7 μm, 2.1 × 50 mm). HRMS were recorded on a Thermo Scientific Q-Exactive Plus equipped with a Robotic TriVersa NanoMate Advion. LCC-12,4. Bis-(cyanoguanidino)dodecane (227 mg, 0.60 mmol) and but-3-yne-1-amine hydrochloride (Enamine, EN300-76524, 126 mg, 1.20 mmol) were mixed together in a sealed tube and heated at 150 °C without solvent for 4 h. After cooling to room temperature, the mixture was taken up in ethanol and a large excess of ethyl acetate was added slowly. The white precipitate was filtered and purified by preparative HPLC (H 2 O:acetonitrile:formic acid, 95:5:0.1 to 40:60:0.1) to give LCC-12,4 di-formic acid salt as a white powder (102 mg, 30%). The synthesis of bis-(cyanoguanidino)dodecane was adapted from a previously published procedure 53 . 1,12-diaminododecane (500 mg, 2.5 mmol) was dissolved in a mixture of water and methanol and stirred with 37% aq. HCl for 10 min at room temperature. The solvent was evaporated under reduced pressure and the resulting salt was suspended in butanol (2.5 ml) with sodium dicyanamide (444 mg, 5.0 mmol) and stirred at 140 °C overnight. After filtration the solid was washed with butanol and cold water and recrystallized from a mixture of water:ethoxyethanol (2:1) to give the bis-(cyanoguanidino)dodecane as a white powder (365 mg, 44%). Trientine alkyne. Under an argon atmosphere, trientine dihydrochloride (Santa Cruz Biotechnology, sc-216009, 0.050 g, 0.228 mmol) was dissolved in anhydrous methanol at 0 °C followed by the addition of 4-(prop-2-ynyloxy)-benzaldehyde 54 (0.073 g, 0.456 mmol) and molecular sieves 4 Å. The mixture was stirred for 3 h at room temperature, prior to the addition of NaBH 3 CN (0.026 g, 0.684 mmol), and stirred overnight at room temperature. Next, the reaction mixture was filtered and the filtrate was evaporated under reduced pressure. Methylated hyaluronate. Methylated hyaluronate (meth-HA) was synthesized using modified published methods 25,55 . In brief, 1 g of sodium hyaluronate (600-1,000 kDa) was dissolved overnight in 200 ml of distilled water at room temperature. 
Then amberlite cation exchange resin (H + ) (10 g) was added to the solution and stirred for one day at room temperature. The resin was subsequently filtered off the solution. The resulting solution was then neutralized using tetrabutylammonium hydroxide (TBAOH) to obtain (tetrabutylammonium) TBA-hyaluronate as follows: TBAOH, diluted fivefold with water was added dropwise to the previously prepared hyaluronic acid solution until the pH reached 8. The solution was then freeze dried and the resulting TBA-hyaluronate was kept in a freezer until further use. 0.530 g of TBA-hyaluronate were dissolved in 10 ml of DMSO at 30 °C, then 1.5 ml of methyl iodide were added and the solution was kept at 30 °C overnight. The resulting mixture was slowly poured into 200 ml of ethyl acetate under constant agitation. The white precipitate obtained was filtered and washed four times with 100 ml of ethyl acetate and finally vacuum dried for 24 h at room temperature. Methylation of hyaluronate was confirmed by 1 H NMR spectroscopy and size-exclusion chromatography. 1 H NMR spectra were acquired on a Bruker 400 MHz spectrometer in D 2 O containing 0.125 M sodium deuteroxide to increase hyaluronate proton mobility, thus improving spectra resolution. 1 H NMR spectrum displays peaks of methyl NHCOMe at 1.79-1.86 ppm, as well as peaks of methyl groups OMe at 2.61, 2.78 ppm and COOMe at 3.22 ppm. Computational structural characterization of metformin-based Cu complexes The starting structure for Cu(Met) 2 was based on the published X-ray structure 56 . The starting geometries for the other copper(ii) complexes were obtained via molecular dynamics conformation search (Gabedit 57 , amber99 (ref. 58) potential). For each complex, the ten geometries with the lowest energies resulting from the molecular dynamics search were reoptimized using MOPAC2016 (PM7 (ref. 59), COSMO 60 water model). The geometries with the lowest energy for each complex were optimized at TPSSh/D3BJ/Def2-TZVP level using Orca 4. 2.1 (ref. 61). We performed a benchmark study based on the structure of Cu(Met) 2 using B3LYP 62 , M062X 63 , TPSSh, BHLYP 64,65 functionals with the Def2-TZVP 66 basis set using D3BJ 67 dispersion correction and the CPCM water solvation model. We have also used the BHLYP functional with the SVP basis set and the SMD water solvation model, which was recommended in the literature 68 for copper(ii) complexes. Energy calculation of copper(ii)-catalysed hydride transfer to H 2 O 2 from NADH The UBHLYP functional 64,65 was used associated with the SVP basis set 69,70 and the SMD solvation model 71,72 to represent an adequate method to describe the [Cu(H 2 O) 6 ] 2+ species 68 . Thus, all structures (minima and transition states) were optimized using the Gaussian 16 set of programs at the UBHLYP/SVP level for all atoms (doublet spin state). The SMD solvation model (water) was applied during the optimization process. Thermal correction to the Gibbs free energy was computed at 310.15 K. Single points at the UMP2/SVP level were performed. The results presented are ΔG 298 in kcal mol −1 . MDHNA was chosen as a NADH model to study the copper(ii)-catalysed hydride transfer to H 2 O 2 . UV titration experiments To a solution of LCC-12 (5 μM) in HEPES (10 mM), portions of 0.1 mol equivalent of a solution of CuCl 2 in HEPES (10 mM) were added up to 3 mol equivalent. 
UV spectra were recorded on an Analytik Jena UV/ VIS spectrophotometer specord 205 system at room temperature in the 200-1000 nm range using a micro cuvette (quartz Excellence Q 10 mm). All spectra were blanked against HEPES buffer. Copper-catalysed oxidation of NADH The oxidation kinetics of NADH (Sigma-Aldrich, N4505) were followed by measuring the absorbance at 340 nm using a Cary 300 UV-Vis spectrometer. The measurements were recorded at 37 °C controlled with a Pelletier Cary temperature controller (Agilent Technologies). Stock solutions of NADH (1 mM), imidazole (Sigma-Aldrich, 56750, 100 mM), CuSO 4 (Sigma-Aldrich, 451657, 500 μM), LCC-12 (10 mM or 1 mM), metformin·HCl (Alfa Aesar, J63361, 100 mM or 10 mM) and LCC-4,4 (10 mM) were prepared in a 10 mM sodium phosphate buffer adjusted to pH 8.0. The concentration of H 2 O 2 (Sigma-Aldrich, 16911, 32.3% wt in H 2 O) was determined by titration with KMnO 4 and diluted 100 times in the phosphate buffer. In disposable cuvettes, 1 ml of experimental solutions were prepared using sodium phosphate buffer and respective stock solutions to attain the final concentrations described Measurement of NADH concentrations NADH absolute concentrations were measured using a fluorometric assay (Abcam, ab176723) according to the manufacturer's protocol. At least 500,000 cells were collected per condition. Floating cells were collected and adherent cells were washed with 1× PBS. Adherent cells were incubated with 1× PBS with 10 mM EDTA and then scraped and pooled together with the collected floating cells. Cells were subsequently washed with ice-cold 1× PBS and counted, then centrifuged at 1,500 rpm for 5 min and the supernatant discarded. The pellet was then resuspended in 100 μl lysis buffer (kit component) and incubated at 37 °C for 15 min. NAD + and NADH extraction solutions as well as NAD + / NADH control solutions (kit components) were added and incubated at 37 °C for 15 min at a volume of 15 μl sample to 15 μl of the respective buffers (kit components). The reactions were stopped using 15 μl of respective buffers (kit components). Finally, 75 μl of NAD + /NADH reaction mixture (NAD + /NADH recycling enzyme mixture and sensor buffer, kit components) were added and the resulting mixtures incubated for 1 h at room temperature. Fluorescence intensities (excitation 540 nm; emission 590 nm) were recorded using a Perkin Elmer Wallac 1420 Victor2 Microplate Reader. Values were derived from the standard curve of each experiment and compared to the data obtained by mass spectrometry-based metabolomics, to calculate total and mitochondrial NADH concentrations. Quantitative metabolomics In a typical experiment, 1.5 million cells were used for total extracts and 15 million cells for mitochondrial extracts. Cells were collected and the supernatant removed to generate the corresponding cell pellets. Subsequently, pellets were dried and supplemented with 300 μl methanol, vortexed 5 min and centrifuged (10 min at 15,000g, 4 °C). Then, the upper phase of the supernatant was split into two parts: 150 μl were used for gas chromatography-mass spectrometry (GC-MS) experiment in microtubes and the remaining 150 μl were used for ultra high pressure liquid chromatography-mass spectrometry (UHPLC-MS). For the GC-MS aliquots, supernatants were completely evaporated from the sample. 50 μl of methoxyamine (20 mg ml −1 in pyridine) were added to the dried extracts, then stored at room temperature in the dark for 16 h. 
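In the fluorometric NADH assay described above, absolute amounts are read off a standard curve generated in each experiment. A minimal sketch of that interpolation step, assuming a linear standard curve and hypothetical readings, is given below; it is illustrative only.

```python
import numpy as np

# Hypothetical NADH standards (pmol per well) and fluorescence readings (AU)
std_pmol = np.array([0, 20, 40, 60, 80, 100], dtype=float)
std_fluo = np.array([150, 980, 1830, 2650, 3510, 4380], dtype=float)

slope, intercept = np.polyfit(std_pmol, std_fluo, 1)   # linear standard curve

def fluorescence_to_pmol(fluo):
    """Interpolate an unknown sample on the standard curve."""
    return (fluo - intercept) / slope

print(f"~{fluorescence_to_pmol(2200.0):.1f} pmol NADH in the well")
```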
The following day, 80 μl of N-methyl-N-(trimethylsilyl) trifluoroacetamide was added and final derivatization occurred at 40 °C for 30 min. Samples were then transferred into vials and directly injected for GC-MS analysis. For the UHPLC-MS aliquots, 150 μl were dried in microtubes at 40 °C in a pneumatically-assisted concentrator (Techne DB3). The dried UHPLC-MS extracts were solubilized with 200 μl of MilliQ water. Aliquots for analysis were transferred into liquid chromatography vials and injected into UHPLC-MS or kept at -80 °C until injection. Widely-targeted analysis of intracellular metabolites gas chromatography coupled to a triple-quadrupole mass spectrometer (QQQGC-MS): the GC-MS/MS method was performed on a 7890A gas chromatography (Agilent Technologies) coupled to a triple-quadrupole 7000C (Agilent Technologies) equipped with a High sensitivity electronic impact source operating in positive mode 73 . Peak detection and integration of the analytes were performed using the Agilent Mass Hunter quantitative software (B.07.01). Targeted analysis of nucleotides and cofactors by ion pairing ultra high performance liquid chromatography (UHPLC) coupled to a Triple Quadrupole (QQQ) mass spectrometer: targeted analysis was performed on a RRLC 1290 system (Agilent Technologies) coupled to a Triple Quadrupole 6470 (Agilent Technologies) equipped with an electrospray source operating in both negative and positive modes. Gas temperature was set to 350 °C with a gas flow of 12 l min −1 . Capillary voltage was set to 5 kV in positive mode and 4.5 kV in negative mode. Ten microlitres of sample were injected on a Column Zorbax Eclipse XDB-C18 (100 mm × 2.1 mm particle size 1.8 μm) from Agilent technologies, protected by a guard column XDB-C18 (5 mm × 2.1 mm particle size 1.8 μm) and heated at 40 °C by a pelletier oven. The gradient mobile phase consisted of water with 2 mM of dibutylamine acetate concentrate (DBAA) (A) and acetonitrile (B). Flow rate was set to 0.4 ml min −1 and an initial gradient of 90% phase A and 10% phase B, which was maintained for 3 min. Molecules were then eluted using a gradient from 10% to 95% phase B over 1 min. The column was washed using 95% mobile phase B for 2 min and equilibrated using 10% mobile phase B for 1 min and the autosampler was kept at 4 °C. Scan mode used was the MRM for biological samples. Peak detection and integration of the analytes were performed using Agilent Mass Hunter quantitative software (B.10.1). Pseudo-targeted analysis of intracellular metabolites by UHPLC coupled to a Q-Exactive mass spectrometer. Reversed phase acetonitrile method: the profiling experiment was performed with a Dionex Ultimate 3000 UHPLC system (Thermo Fisher Scientific) coupled to a Q-Exactive (Thermo Fisher Scientific) equipped with an electrospray source operating in both positive and negative modes and full scan mode from 100 to 1,200 m/z. The Q-Exactive parameters were: sheath gas flow rate 55 au, auxiliary gas flow rate 15 au, spray voltage 3.3 kV, capillary temperature 300 °C, S-Lens RF level 55 V. The mass spectrometer was calibrated with sodium acetate solution dedicated to low mass calibration. 10 μl of sample were injected on a SB-Aq column (100 mm × 2.1 mm particle size 1.8 μm) from Agilent Technologies, protected by a guard column XDB-C18 (5 mm × 2.1 mm particle size 1.8 μm) and heated at 40 °C by a pelletier oven. The gradient mobile phase consisted of water with 0.2% acetic acid (A) and acetonitrile (B). The flow rate was set to 0.3 ml min −1 . 
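The ion-pairing gradient can be summarized as a piecewise program of %B (acetonitrile) versus time. The sketch below encodes the segments as read from the text (3 min hold at 10% B, a 1 min ramp to 95% B, a 2 min wash, then re-equilibration at 10% B); the exact shape of the return to initial conditions is an assumption.

def percent_b(t_min):
    """Approximate %B (acetonitrile) versus time for the DBAA ion-pairing gradient,
    as read from the text; the return to 10% B is assumed to be a step."""
    if t_min <= 3.0:
        return 10.0                                           # initial hold
    if t_min <= 4.0:
        return 10.0 + (95.0 - 10.0) * (t_min - 3.0) / 1.0     # 1 min linear ramp
    if t_min <= 6.0:
        return 95.0                                           # wash
    return 10.0                                               # re-equilibration

for t in (0, 3, 3.5, 4, 5, 6.5):
    print(t, percent_b(t))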
The initial condition was 98% phase A and 2% phase B. Molecules were then eluted using a gradient from 2% to 95% phase B for 22 min. The column was washed using 95% mobile phase B for 2 min and equilibrated using 2% mobile phase B for 4 min. The autosampler was kept at 4 °C. Peak detection and integration were performed using the Thermo Xcalibur quantitative software (2.1.) 73 . Quantitative proteomics Cells were grown and treated as indicated. Whole-cell extracts were collected by scraping after incubation with 1× PBS with 10 mM EDTA at 37 °C. After centrifugation at 1,500g for 5 min at 4 °C, cells were washed twice with ice-cold 1× PBS and lysed using lysis buffer (8 M urea, 200 mM NH 4 HCO 3 , cOmplete) for 1 h at 4 °C on a rotary wheel. After centrifugation at 20,000g, 4 °C for 20 min, supernatants that contain proteins were used for the global proteome analysis. In brief, the global proteome was quantitatively analysed with a Orbitrap Eclipse mass spectrometer using a label-free approach. About 10 μg of total protein cell lysate were reduced by incubation with 5 mM dithiothreitol (DTT) at 57 °C for 30 min and then alkylated with 10 mM iodoacetamide for 30 min at room temperature in the dark. The samples were then diluted with 100 mM ammonium bicarbonate to reach a final concentration of 1 M urea and digested overnight at 37 °C with Trypsin:Lys-C (Promega, V5071) at a ratio of 1:50. Samples were then loaded onto a homemade C18 StageTips for desalting. Peptides were eluted from beads by incubation with 40:60 acetonitrile:water with 0.1% formic acid. Peptides were dried in a Speedvac and reconstituted in 10 μl 0.3% TFA prior to liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. Samples of 4 μl were chromatographically separated using an RSLCnano system (Ultimate 3000, Thermo Fisher Scientific) coupled online to an Orbitrap Eclipse mass spectrometer (Thermo Fisher Scientific). Peptides were first loaded onto a C18-trapped column (75 μm inner diameter × 2 cm; nanoViper Acclaim PepMap 100, Thermo Fisher Scientific), with buffer A (2:98 MeCN:H 2 O with 0.1% formic acid) at a flow rate of 3 μl min −1 over 4 min and then switched for separation to a C18 column (75 μm inner diameter × 50 cm; nanoViper C18, 2 μm, 100 Å, Acclaim PepMap RSLC, Thermo Fisher Scientific) regulated to a temperature of 50 °C with a linear gradient of 2 to 30% buffer B (100% MeCN and 0.1% formic acid) at a flow rate of 300 nl min −1 over 211 min. MS1 data were collected in the Orbitrap (120,000 resolution; maximum injection time 60 ms; AGC 4 × 10 5 ). Charges states between 2 and 5 were required for MS2 analysis, and a 45 s dynamic exclusion window was used. MS2 scans were performed in the ion trap in rapid mode with HCD fragmentation (isolation window 1.2 Da; NCE 30%; maximum injection time 60 ms; AGC 10 4 ). The identity of proteins was established from the UniProt human canonical database (UP000005640_9606) using Sequest HT through proteome discoverer (version 2.4) (Thermo Scientific). Enzyme specificity was set to trypsin and a maximum of two missed cleavage sites were allowed. Oxidized methionine, Methionine-loss, Methionine-loss-acetyl and N-terminal acetylation were set as variable modifications. Carbamidomethylation of cysteins were set as fixed modification. Maximum allowed mass deviation was set to 10 ppm for monoisotopic precursor ions and 0.6 Da for MS/MS peaks. The resulting files were further processed using myProMS v3.9.3 (https:// github.com/bioinfo-pf-curie/myproms) 74 . 
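Two small calculations underlie the digestion step: diluting the 8 M urea lysate to 1 M with 100 mM ammonium bicarbonate, and dosing Trypsin:Lys-C at a 1:50 enzyme-to-substrate ratio. The sketch below works through the arithmetic; the lysate volume carried into digestion is an assumed value, since only the protein amount (about 10 μg) is stated.

protein_ug = 10.0          # total protein digested (stated)
lysate_vol_uL = 20.0       # assumed volume of lysate carried into digestion
urea_start_M = 8.0         # urea in the lysis buffer (stated)
urea_target_M = 1.0        # final urea concentration required for digestion (stated)

# C1*V1 = C2*V2 -> required final volume, then volume of diluent to add
final_vol_uL = lysate_vol_uL * urea_start_M / urea_target_M
ambic_to_add_uL = final_vol_uL - lysate_vol_uL
trypsin_ug = protein_ug / 50.0            # 1:50 enzyme:substrate ratio (stated)
print(f"add {ambic_to_add_uL:.0f} uL of 100 mM ammonium bicarbonate; use {trypsin_ug:.2f} ug Trypsin:Lys-C")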
For the false discovery rate (FDR) calculation we used Percolator 75 and, this was set to 1% at the peptide level for the whole study. The label-free quantification was performed by peptide Extracted Ion Chromatograms (XICs) computed with MassChroQ version 2.2.21 (ref. 76). For protein quantification, XICs from proteotypic peptides shared between compared conditions (TopN matching) with up to two missed cleavages and carbamidomethyl modifications were used. Median and scale normalization was applied on the total signal to correct the XICs for each biological replicate. To estimate the significance of the change in protein abundance, a linear model (adjusted on peptides and biological replicates) was performed and P values were adjusted with a Benjamini-Hochberg FDR procedure with a control threshold set to 0.05 and the proteins should have at least 3 peptides 75 . Lactate quantification Extracellular lactate was quantified using a fluorometric lactate assay (Abcam, ab65330) according to the manufacturer's instructions. The culture media of cells was collected and centrifuged at 500g for 5 min. The supernatant was subsequently centrifuged at 20,000g for 10 min. The supernatant was then deproteinized using 10 kD spin columns (Abcam, ab93349). A lactate standard was prepared by adding 5 μl of the 100 nmol μl −1 lactate standard to 495 μl of lactate assay buffer. Subsequently, 1 ml of 0.01 nmol μl −1 lactate standard was produced by diluting 10 μl of 1 nmol μl −1 standard to 990 μl of lactate assay buffer. In a 96-well plate, standard samples of 0-0.1 nmol per well were added. A reaction mix of 46 μl lactate assay buffer, 2 μl probe and 2 μl enzyme mix was prepared for each well. For background measurements, 48 μl lactate assay buffer were mixed with 2 μl probe to obtain a background reaction mix. Fifty microlitres of each sample was added to a 96-well plate and either the reaction mix or the background reaction mix was added. Samples were incubated for 30 min at room temperature. Fluorescence intensities (excitation 540 nm; emission 590 nm) were recorded using a Perkin Elmer Wallac 1420 Victor2 Microplate Reader. Values were derived from the standard curve. Glyceraldehyde 3-phosphate quantification Glyceraldehyde 3-phosphate (GA3P) was quantified using a fluorometric glyceraldehyde 3-phosphate assay kit (Abcam, ab273344) adapting the manufacturer's instructions. Cells were washed twice with 1× PBS and then collected into a centrifugation tube in 100 μl GA3P Assay Buffer. Samples were kept on ice for 10 min and then centrifuged at 10,000× g for 10 min. The supernatant was collected and then deproteinized using 10 kD spin columns (Abcam, ab93349). For each test sample, 10 μl of sample were added into three parallel wells in a white, flat bottom 96-well plate. A sample background control, an un-spiked and spiked sample were added to these three wells. The spiked sample contained 200 pmol of GA3P standard. 50 μl of GA3P assay buffer was added per well. For the assay blank, 50 μl GA3P assay buffer was added per well. To each background well, 50 μl of a reaction mix consisting of 46 μl GA3P assay buffer, 2 μl GA3P enzyme mix and 2 μl GA3P probe was added. To each remaining well 50 μl of a reaction mix was added, which consisted of 44 μl GA3P assay buffer, 2 μl GA3P developer, 2 μl GA3P enzyme mix and 2 μl GA3P probe. Fluorescence intensities (excitation 540 nm; emission 590 nm) were recorded using a Perkin Elmer Wallac 1420 Victor2 Microplate Reader. 
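Protein-level significance above relies on a linear model followed by Benjamini-Hochberg adjustment at a 0.05 threshold. The sketch below implements only the Benjamini-Hochberg step on hypothetical p-values; the linear modelling itself (performed in myProMS) is not reproduced.

import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty(n)
    adj[order] = np.clip(scaled, 0, 1)
    return adj

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]   # hypothetical
adj = benjamini_hochberg(pvals)
significant = adj < 0.05          # control threshold used in the study
print(np.round(adj, 3), significant)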
Several readings were performed at 1 min intervals. The final GA3P concentrations were calculated by subtracting background sample values. Luminex immunoassay Cytokine levels were measured in cell culture supernatants using the V-Plex pro-inflammatory panel (MSD, K15049D-1). The kit was run according to the manufacturer's protocol and the chemiluminescence signal was measured on a Sector Imager 2400 (MSD). RNA-seq RNAs were extracted from MDMs using the RNeasy mini kit (Qiagen, 74104). RNA sequencing libraries were prepared from 1 μg total RNA using the Illumina TruSeq Stranded mRNA library preparation kit (Illumina, 20020594), which allows strand-specific sequencing. A first step of polyA selection using magnetic beads was performed to allow sequencing of polyadenylated transcripts. After fragmentation, cDNA synthesis was performed and resulting fragments were used for dA-tailing followed by ligation of TruSeq indexed adapters (Illumina, 20020492). Subsequently, polymerase chain reaction amplification was performed to generate the final barcoded cDNA libraries. Sequencing was carried out on a NovaSeq 6000 instrument from Illumina based on a 2× 100 cycles mode (paired-end reads, 100 bases). For RNA-seq on cells from the in vivo murine models, see the respective paragraphs. Raw sequencing reads were first checked for quality with Fastqc (0.11.8) and trimmed for adapter sequences with the trimGalore (0.6.2) software. Trimmed reads were then aligned on the human hg38 reference genome using the STAR mapper (2.6.1b), up to the generation of a raw count table per gene (GENCODE annotation v29). The bioinformatics pipelines used for these tasks are available online (rawqc v2.1.0: https:// github.com/bioinfo-pf-curie/raw-qc, RNA-seq v3.1.4: https://github. com/bioinfo-pf-curie/RNA-seq). The downstream analysis was then restricted to protein-coding genes. Data from the literature 77 were converted into bulk by keeping cells annotated as macrophages and then summing the counts for each sample. Counts data from the literature 44 were downloaded from GEO under accession number GSE73502. Raw data from the literature 45,46 were downloaded from the NCBI Short Read Archive under records PRJNA528433 and PRJNA290995 and processed as described above. For L. major, we used data at 4 h post-infection; for A. fumigatus, we used data at 2 h post-infection. Counts were normalized using TMM normalization from edgeR (v 3.30.3) 78 . Differential expression was assessed with the limma/voom framework (v 3.44.3) 79 . The intra-donor correlation was controlled by using the duplicateCorrelation from limma. Genes with an adjusted P value < 0.05 were labelled significant. Enrichment analysis from differentially expressed genes has been performed using the enrichGO function from clusterProfiler package v3.16.1. ChIP-seq Cells were grown and treated as described. Cells were centrifuged at 1,500g for 5 min at room temperature. The pelleted cells were resuspended in medium, counted and crosslinked with 1% formaldehyde for 10 min at room temperature. Then, 2.5 M glycine was added to a final concentration of 0.125 M and incubated for 5 min at room temperature followed by centrifugation at 1,500g for 5 min at 4 °C. The pelleted cells were washed twice with ice-cold 1× PBS and collected by centrifugation at 1,500g for 5 min at 4 °C. Pellets were resuspended in lysis buffer A (50 mM Tris-HCl pH 8, 10 mM EDTA, 1% SDS, cOmplete) and incubated for 30 min on a rotating wheel at 4 °C. 
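The RNA-seq counts above are normalized with edgeR's TMM method in R. As a rough illustration of the idea, the Python sketch below derives a scaling factor from a trimmed mean of gene-wise log ratios between a sample and a reference library; it is deliberately simplified (edgeR additionally trims on absolute expression and weights genes by their estimated variances), and the count data are simulated.

import numpy as np

def tmm_factor(counts_sample, counts_ref, trim=0.3):
    """Very simplified TMM-style scaling factor between two libraries.
    Only extreme M-values (log ratios) are trimmed; no A-trimming or weighting."""
    keep = (counts_sample > 0) & (counts_ref > 0)
    s = counts_sample[keep] / counts_sample.sum()
    r = counts_ref[keep] / counts_ref.sum()
    m = np.log2(s / r)                           # gene-wise log fold-changes of proportions
    lo, hi = np.quantile(m, [trim, 1 - trim])
    trimmed = m[(m >= lo) & (m <= hi)]
    return 2 ** trimmed.mean()

rng = np.random.default_rng(0)
ref = rng.poisson(50, size=5000).astype(float) + 1
samp = ref * 1.4 + rng.poisson(5, size=5000)     # a library with larger depth
print(round(tmm_factor(samp, ref), 3))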
Next, lysates were centrifuged at 1,500g for 15 min at 8 °C to prevent SDS precipitation, and the supernatants were discarded. Pellets were then sheared in buffer B (25 mM Tris-HCl pH 8, 3 mM EDTA, 0.1% SDS, 1% Triton X-100, 150 mM NaCl, cOmplete) to approximately 200-600 bp average size using a Bioruptor Pico (Diagenode). After centrifugation at 20,000g at 4 °C for 15 min, supernatants containing sheared chromatin were used for immunoprecipitation. Twenty-five microlitres (10%) of sheared chromatin was used as input DNA to normalize sequencing data. As an additional control for normalization, spike-in chromatin from Drosophila (Active Motif, 53083) and a spike-in antibody were used. Chromatin immunoprecipitation (1 million cells per ChIP condition) was carried out using sheared chromatin and antibodies against specific histone marks, which were subsequently complexed to either Dynabeads Protein G-coated magnetic beads (Invitrogen, 10003D) for H3K27ac, H3K14ac, H3K9ac, H3K27me3 and H3K9me2. In brief, each antibody was mixed with 1 μg of spike-in antibody. Then, 22 μl magnetic beads were washed three times in ice-cold buffer C (20 mM Tris-HCl pH 8, 2 mM EDTA, 0.1% SDS, 1% Triton X-100, 150 mM NaCl, cOmplete) and incubated with the mixture of antibodies for 4 h, at room temperature on a rotating wheel in buffer C (494 μl). After spinning and removal of supernatants, beads were resuspended in 50 μl buffer C. This suspension was subsequently incubated with 250 μl sheared chromatin previously mixed with 50 ng of spike-in chromatin from Drosophila (250 μl chromatin of interest: 2.5 μl spike-in chromatin) at 4 °C on a rotating wheel overnight (16 h). After a spinning, supernatants were discarded and beads were successively washed in buffer C (twice), buffer D (20 mM Tris-HCl pH 8, 2 mM EDTA, 0.1% SDS, 1% Triton X-100, 500 mM NaCl), buffer E (10 mM Tris-HCl pH 8, 0.25 M LiCl, 0.5% NP-40, 0.5% sodium deoxycholate, 1 mM EDTA) and once in buffer F (10 mM Tris-HCl pH 8, 1 mM EDTA, 50 mM NaCl). Finally, input and immunoprecipitated chromatin samples were resuspended in a solution containing TE buffer/1% SDS, de-crosslinked by heating at 65 °C overnight and subjected to both RNase A (Invitrogen, 12091-039, 1 mg ml −1 ) and Proteinase K (Thermo Scientific, EO0491, 20 mg ml −1 ) treatments. Input and immunoprecipitated DNA extraction: after reverse crosslinking, input and immunoprecipitated chromatin samples were treated with RNase A and proteinase K, and glycogen (Thermo Scientific, R0561, 20 mg ml −1 ) was added. Samples were incubated at 37 °C for 2 h. DNA precipitation was carried out using 8 M LiCl (final concentration 0.44 M) and phenol:chloroform:isoamyl alcohol. Samples were vortexed and centrifuged at 20,000g for 15 min at 4 °C. The upper phase was mixed with chloroform by vortexing. After centrifugation at 20,000g for 15 min at 4 °C, the upper phase was mixed with -20 °C absolute ethanol by vortexing and stored at -80 °C for 2 h. Next, samples were pelleted at 20,000g for 20 min at 4 °C. The pellets were washed with ice-cold 70% ethanol and centrifuged at 20,000g for 15 min at 4 °C. The supernatants were discarded and pellets were dried at room temperature, dissolved in nuclease-free water and quantified using a Qubit fluorometric assay (Invitrogen) according to the manufacturer's protocol. 
Library preparation and sequencing: Illumina compatible libraries were prepared from input and immunoprecipitated DNAs using the Illumina TruSeq ChIP library preparation kit according to the manufacturer's protocol (IP-202-1012). In brief, 4 to 10 nanograms of DNA were subjected to end-repair, dA-tailing and ligation of TruSeq indexed Illumina adapters. After a final PCR amplification step (with 15 cycles), the resulting barcoded libraries were equimolarly pooled and quantified by quantitative PCR using the KAPA library quantification kit (Roche, 07960336001). Sequencing was performed on the NovaSeq 6000 (Illumina), targeting 75 million clusters per sample and using paired-end 2× 100 bp. ChIP-seq data processing and quality controls have been performed with the Institut Curie ChIP-seq Nextflow pipeline (1.0.6) available at https:// github.com/bioinfo-pf-curie/ChIP-seq. In brief, reads were trimmed for adapter content, and aligned on the Human reference genome hg38 with BWA-mem. Low-quality mapped reads, reads aligned on ENCODE blacklist regions, reads aligned on the spike-in genome and reads marked as duplicates were discarded from the analysis. Bigwig tracks were then generated with deeptools and normalized to 1 million reads to account for differences in sequencing depth. In order to integrate the histone mark enrichments with the gene expression data (RNA-seq), the ChIP-seq signal has been counted either at the transcription start site level (±2 kb) for permissive histone marks or at the gene body for repressive histone marks. Coding genes from Gencode v34 have been used for the annotations. ChIP-seq counts data have then been filtered to remove low counts, and normalized using the TMM methods (edgeR R package). Fold-changes have been then calculated for all genes and donors. In vivo animal studies Survival assessment using the LPS mouse model was conducted at Fidelta (now Selvita) according to 2010/63/EU and National legislation regulating the use of laboratory animals in scientific research and for other purposes (Official Gazette 55/13). An institutional committee on animal research ethics (CARE-Zg) monitored animal-related procedures to ensure they were not compromising animal welfare. Experiments were performed on eight-week-old male BALB/c mice. Other experiments involving LPS and CLP mouse models were performed in accordance with French laws concerning animal experimentation (#2021072216346511) and approved by the Institutional Animal Care and Use Committee of Université de Saint-Quentin-en-Yvelines (C2EA-47). These LPS experiments were performed on eight-week-old male BALB/c mice and five-week-old male SWISS mice and experiments involving the CLP model were performed on nine-week-old male BALB/c mice. Mice were housed in a state-of-the-art animal care facility (2CARE, prefectural number agreement: A78-322-3, France). For experiments involving SARS-CoV-2, 8-week-old male K18-human ACE2-expressing C57BL/6 mice were used. A SARS-CoV-2 mouse model was used within the biosafety level 3 facility of the Institut Pasteur de Lille, after validation of the protocols by the local committee for the evaluation of the biological risks and complied with current national and institutional regulations and ethical guidelines (Institut Pasteur de Lille/ B59-350009). The experimental protocols using animals were approved by the institutional ethical committee 'Comité d'Ethique en Experimentation Animale (CEEA) 75, Nord-Pas-de-Calais'. 
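Integrating the ChIP-seq signal with expression data relies on counting reads around transcription start sites and putting libraries on a comparable scale. The sketch below illustrates the two operations with NumPy on simulated read positions for a single chromosome; the actual pipeline uses deeptools-generated bigwig tracks, counting over TSS ±2 kb or gene bodies, and edgeR TMM normalization, none of which are reproduced here.

import numpy as np

def window_counts(read_starts, tss, half_window=2000):
    """Count reads whose start falls within TSS +/- half_window for each gene.
    read_starts: 1D array of read start coordinates on one chromosome."""
    read_starts = np.sort(np.asarray(read_starts))
    lo = np.searchsorted(read_starts, np.asarray(tss) - half_window, side="left")
    hi = np.searchsorted(read_starts, np.asarray(tss) + half_window, side="right")
    return hi - lo

def per_million(counts, total_mapped_reads):
    """Scale raw window counts to a per-million-reads basis (same idea as the
    1 million-read bigwig scaling described above)."""
    return counts * 1e6 / total_mapped_reads

# hypothetical example on one chromosome
reads_ip = np.random.default_rng(1).integers(0, 1_000_000, 200_000)
tss = np.array([10_000, 250_000, 600_000])
ip_cpm = per_million(window_counts(reads_ip, tss), reads_ip.size)
print(ip_cpm)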
The animal study was authorized by the 'Education, Research and Innovation Ministry' under registration number APAFIS#25517-2020052608325772v3. A SARS-CoV-2 mouse model was used within the biosafety level 3 facility of the University of Toulouse. This work was overseen by an Institutional Committee on Animal Research Ethics (License APAFIS#27729-2020101616517580 v3, Minister of Research, France (CEEA-001)), to ensure that animal-related procedures were not compromising the animal welfare. Four mice per cage were housed in in a ventilated rack with a media enrichment element. Mice in all animal facilities were housed in ventilated cages (temperature 22 °C ± 2 °C, humidity 55% ± 10%) with free access to water and food on a 12 h light/ dark cycle. Male littermates were randomly assigned to experimental groups throughout. Quantification, statistical analysis and reproducibility Results are presented as mean ± s.e.m or mean ± s.d. as indicated. In box plots, boxes represent interquartile range and median, and whiskers indicate the minimum and maximum values. A specific colour of dots on a box plot represents a distinct donor only within a given figure panel. Each donor or mouse represents an independent biological sample. Prism 8.2.0 software was used to calculate P values using a two-sided Mann-Whitney test, two-sided unpaired t-test, Kruskal-Wallis test with Dunn's post test, two-way ANOVA or Mantel-Cox log-rank test as indicated. Prism 8.2.0 software or the R programming language was used to generate graphical representations of quantitative data unless stated otherwise. Exact P values are indicated in the figures. Sample sizes (n) are indicated in the figure legends. All immunofluorescence experiments were repeated with at least n = 3 donors with similar results. Western blotting on macrophages isolated from mice was performed on pooled samples and performed once per pool. Morphological changes observed between naMDMs and aMDMs were observed in n = 128 donors and representative images of n = 1 donor are displayed in Extended Data Fig. 1c. NanoSIMS imaging was performed on n = 1 donor and a representative image is displayed in Extended Data Fig. 5b. Materials availability In-house reagents can be made available under a material transfer agreement with Institut Curie. Inquiries should be addressed to R.R. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability RNA-seq and ChIP-seq data are available at the Gene Expression Omnibus with accession reference GSE160864. The mass spectrometry proteomics raw data have been deposited to the ProteomeXchange Consortium via the PRIDE 80 partner repository with the dataset identifier PXD038612. The donor number corresponds to the order of blood collection. Source data are provided with this paper. Code availability Analysis scripts for RNA-seq and ChIP-seq data are available at https:// github.com/bioinfo-pf-curie/MDMmetals. Fig. 5 | Detection of a druggable pool of copper(II) in mitochondria. a, Molecular structure of isotopologue 15 N, 13 C-LCC-12. b, NanoSIMS image of 15 N and 197 Au in aMDM of n = 1 donor. c, Fluorescence microscopy of labelled LCC-12,4 (100 nM) in aMDM. In-cell-labelling performed without added copper(II) and using LCC-12 as a competitor. Representative of n = 3 donors. d, ICP-MS of metals in mitochondria of MDM (n = 6 donors). e, Comparison of the total metal contents in cells and mitochondria of MDM determined by ICP-MS. 
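As an illustration of two of the nonparametric tests named above, the following sketch uses SciPy as a stand-in for Prism; the per-donor values are hypothetical, and a Dunn's post test (available, for example, in the scikit-posthocs package) is mentioned but not shown.

from scipy import stats

# hypothetical per-donor measurements for two conditions
control = [1.2, 0.9, 1.4, 1.1, 1.3, 1.0]
treated = [2.1, 1.8, 2.4, 1.6, 2.2, 1.9]

u_stat, p_mw = stats.mannwhitneyu(control, treated, alternative="two-sided")
print("Mann-Whitney P =", p_mw)

# Kruskal-Wallis across more than two groups (Dunn's post test would follow, not shown)
group3 = [1.5, 1.7, 1.4, 1.6, 1.8, 1.5]
h_stat, p_kw = stats.kruskal(control, treated, group3)
print("Kruskal-Wallis P =", p_kw)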
f, ICP-MS of metals in nuclei isolated from MDM (n = 6 donors). g, ICP-MS of metals in endoplasmic reticula (ER) isolated from MDM (n = 6 donors). h, ICP-MS of the total cellular copper content in MDM treated with LCC-12 (n = 6 donors). i, ICP-MS of mitochondrial copper in MDM treated with LCC-12 (n = 6 donors). j, Flow cytometry of a mitochondrial copper(II) probe (M Cu -2) in MDM treated with LCC-12 (n = 10 donors). k, Western blots of mitochondrial metal transporters in MDM (n = 8 donors). l, m, n, Fluorescence microscopy of labelled LCC-12,4 in aMDM under gene knockdown conditions as indicated (n = 4 donors). o, Top: Structure of trientine alkyne. Bottom: Picture of aq. solutions of trientine alkyne, CuSO 4 and corresponding mixtures. p, Fluorescence microscopy of labelled trientine alkyne in aMDM. For b, c, l -n, p scale bar, 10 μm. For c two-sided unpaired t-test, representative of n = 3 donors. Mean ± s.d. For d, f, g, k, l -n, two-sided Mann-Whitney test. For h -j Kruskal-Wallis test with Dunn's post-test. Box plots: boxes represent interquartile range and median, and whiskers indicate the minimum and maximum values. Each colored dot represents a distinct donor for a given panel.
17,981.6
2023-04-26T00:00:00.000
[ "Medicine", "Chemistry" ]
Isolation of Monomeric Human VHS by a Phage Selection* Human VH domains are promising molecules in applications involving antibodies, in particular, immunotherapy because of their human origin. However, they are, in general, prone to aggregation. Therefore, various strategies have been employed to acquire monomeric human VHs. We had previously discovered that filamentous phages displaying engineered monomeric VH domains gave rise to significantly larger plaques on bacterial lawns than phages displaying wild type VHs with aggregation tendencies. Using plaque size as the selection criterion and a phage-displayed naïve human VH library we identified 15 VHs that were monomeric. Additionally, the VHs demonstrated good expression yields, good refolding properties following thermal denaturation, resistance to aggregation during long incubation at 37 °C, and to trypsin at 37 °C. These 15 VHs should serve as good scaffolds for developing immunotherapeutics, and the selection method employed here should have general utility for isolating proteins with desirable biophysical properties. gation. Synthetic libraries built on these V H s as library scaffolds should serve as a promising source of therapeutic proteins. Camelization (7,8) as well as llamination, 3 which involves incorporating key solubilizing residues from camelid sdAbs into human V H s, have been employed to generate monomeric human V H s. Synthetic sdAb libraries constructed based on these V H s and generated by complementarity determining region (CDR) randomization were shown to yield binders to various antigens (8,10). In another approach, fully monomeric human V H s were isolated from human synthetic V H libraries without resorting to engineering of the sort mentioned above. In one experiment a monomeric human V H was discovered when a human V H library was panned against hen egg white lysozyme (11). More recently, a selection method based on reversible unfolding and affinity criteria yielded many monomeric V H s from synthetic human V H phage display libraries (12). This finding underlined the fact that an appropriate selection method is key to efficient capturing of rare human V H s with desirable biophysical properties. Here, we provide yet another approach for obtaining monomeric human V H s. We report the isolation of 15 different V H s originating from germlines DP-38, DP-47, V3-49, V3-53, YAC-5, and 8-1B from a phagedisplayed naïve human V H repertoire by a selection method that is based on phage plaque size. The V H s, by and large, are also refoldable, retain their native fold following exposure to trypsin at 37°C or long incubation at 37°C, and are expressed in good yields in the Escherichia coli. When used as scaffolds, the diversity of the selected V H s should allow for construction of more comprehensive libraries and provide flexibility in terms of choosing an optimal V H scaffold for humanizing therapeutic camelid V H H binders. The current selection method permits high throughput identification of proteins with good biophysical properties by the naked eye, is very simple, eliminates affinity or stability selection steps, and is of general utility. MATERIALS AND METHODS Phage Display Library Construction and Panning-cDNA was synthesized from human spleen mRNA (Ambion Inc., Austin, TX) using random hexanucleotide primers and First Strand cDNA kit (GE Healthcare, Baie d'Urfé, QC, Canada). 
Using the cDNA as template, V H genes with flanking C H sequences were amplified by polymerase chain reaction in nine separate reactions using V H framework region 1 (FR1)-specific primers and an immunoglobulin M-specific primer (13). The products were gel-purified and used as the template in the second round of PCR to construct V H genes using the FR1- and FR4-specific primers (13) that also introduced flanking ApaLI and NotI restriction sites for cloning purposes. The resultant V H repertoire DNA was cloned into the fd-tetGIIID phage vector and a V H phage display library was constructed (8). Panning against protein A (GE Healthcare) was performed as described (8). Germline sequence assignment of the selected V H s was performed using DNAPLOT software version 2.0.1 and V BASE version 1.0. Llama V H Hs H11C7, H11F9, and H11B2 were isolated from a llama V H H phage display library by panning against H11 scFv as described (5). Protein Expression and Purification-Single-domain antibodies were cloned into the pSJF2 expression vector by standard cloning techniques (14). Periplasmic expression of sdAbs and subsequent purification by immobilized metal affinity chromatography were performed as described (15). Protein concentrations were determined by A 280 measurements using molar absorption coefficients calculated for each protein (16). Gel filtration chromatography of the purified sdAbs was performed on a Superdex 75 column (GE Healthcare) as described (17). Binding and Refolding Efficiency Experiments-Equilibrium dissociation constants (K D s) and refolding efficiencies (REs) of V H s/V H Hs were derived from surface plasmon resonance (SPR) data collected with the BIACORE 3000 biosensor system (Biacore Inc., Piscataway, NJ). To measure the binding of V H s to protein A, 2000 resonance units of protein A or a reference antigen-binding fragment (Fab) were immobilized on research grade CM5 sensor chips (Biacore Inc.). Immobilizations were carried out at concentrations of 25 μg/ml (protein A) or 50 μg/ml (Fab) in 10 mM sodium acetate buffer, pH 4.5, using the amine coupling kit provided by the manufacturer. To measure the binding of the anti-idiotypic llama V H Hs to H11 scFv (18), 4100 resonance units of 50 μg/ml H11 scFv or 3000 resonance units of 10 μg/ml Se155-4 immunoglobulin G (19) were immobilized as described above. In all instances, analyses were carried out at 25 °C in 10 mM HEPES, pH 7.4, containing 150 mM NaCl, 3 mM EDTA, and 0.005% P20 surfactant at a flow rate of 40 μl/min, and surfaces were regenerated by washing with the running buffer. To determine the binding activities of the refolded proteins, V H s or V H Hs were denatured by incubation at 85 °C for 20 min at 10 μg/ml concentrations. The protein samples were then cooled to room temperature for 30 min to refold and were subsequently centrifuged in a microcentrifuge at 14,000 × g for 5 min at room temperature to remove any protein precipitates. The supernatants were recovered and analyzed for binding activity by SPR as described above. For both folded and refolded protein, data were fit simultaneously to a 1:1 interaction model using BIAevaluation 4.1 software (Biacore Inc.) and K D values were subsequently determined. REs were determined from RE = (K D n/K D ref) × 100, where K D n is the K D of the native protein and K D ref is the K D of the refolded protein.
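The refolding efficiency defined at the end of this section is a simple ratio of equilibrium dissociation constants, each of which follows from the fitted 1:1 kinetic rate constants. A short sketch, using hypothetical rate constants rather than values reported for any particular V H:

def kd_from_rates(ka_per_M_s, kd_per_s):
    """K_D (M) for a 1:1 interaction model, K_D = k_off / k_on."""
    return kd_per_s / ka_per_M_s

def refolding_efficiency(kd_native_M, kd_refolded_M):
    """RE (%) as defined in the text: RE = (K_D(native) / K_D(refolded)) x 100."""
    return 100.0 * kd_native_M / kd_refolded_M

# hypothetical rate constants for a native and a heat-treated, refolded VH
kd_nat = kd_from_rates(ka_per_M_s=1.0e4, kd_per_s=5.0e-3)    # 0.5 uM
kd_ref = kd_from_rates(ka_per_M_s=1.0e4, kd_per_s=5.6e-3)    # apparent K_D is larger when part of the sample is inactive
print(f"RE = {refolding_efficiency(kd_nat, kd_ref):.0f}%")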
Tryptic Digest Experiments-Three μl of a freshly prepared 0.1 μg/μl sequencing grade trypsin (Hoffmann-La Roche Ltd., Mississauga, ON, Canada) in 1 mM HCl was added to 60 μg of V H in 100 mM Tris-HCl buffer, pH 7.8. Digestion reactions were carried out in a total volume of 60 μl for 1 h at 37 °C and stopped by adding 5 μl of 0.1 μg/μl trypsin inhibitor (Sigma). Following completion of digestion, 5 μl was removed and analyzed by SDS-PAGE; the remainder was desalted using ZipTip C4 (Millipore, Ontario, Canada), eluted with 1% acetic acid in 50:50 methanol:water and subjected to V H mass determination by matrix-assisted laser desorption ionization mass spectrometry. Protein Stability Experiments at 37 °C-Single-domain antibodies at 0.32-3.2 mg/ml concentrations were incubated at 37 °C in phosphate-buffered saline for 17 days. Following incubation, the protein samples were spun down in a microcentrifuge at maximum speed for 5 min even in the absence of any visible aggregate formation. The samples were then applied onto a Superdex 75 size exclusion column and the monomeric peaks were collected for SPR analysis of binding to protein A. SPR analyses were performed as described above except that 500 resonance units of protein A or reference Fab was immobilized and immobilizations were carried out at a concentration of 50 μg/ml. NMR Experiments-V H samples for NMR analysis were dissolved in 10 mM sodium phosphate, 150 mM NaCl, 0.5 mM EDTA, and 0.02% NaN 3 at pH 7.0. The protein concentrations were 40 μM to 1.0 mM. All NMR experiments were carried out at 298 K on a Bruker Avance-800 or a Bruker Avance-500 NMR spectrometer. One-dimensional 1 H NMR spectra were recorded with 16,384 data points and the spectral widths were 8,992.81 Hz at 500 MHz and 17,605.63 Hz at 800 MHz, respectively. Two-dimensional 1 H-1 H NOESY spectra of 2,048 × 400 data points were acquired on a Bruker Avance-800 NMR spectrometer with a spectral width of 11,990.04 Hz and a mixing time of 120 ms. In all NMR experiments, water suppression was achieved using the WATERGATE method implemented through the 3-9-19 pulse train (20,21). NMR data were processed and analyzed using the Bruker XWINNMR software package. All PFG-NMR diffusion measurements were carried out with the water-suppressed LED sequence (22) on a Bruker Avance-500 NMR spectrometer equipped with a triple-resonance probe with three-axis gradients. One-dimensional proton spectra were processed and analyzed using the Bruker XWINNMR software package. NMR signal intensities were obtained by integrating NMR spectra in the methyl and methylene proton region (2.3 to −0.3 ppm) where all NMR signals were attenuated uniformly at all given PFG strengths. RESULTS During the course of the construction of fully human and llaminated human V H libraries, we learned that the phages displaying monomeric llaminated V H s formed larger plaques on bacterial lawns than phages displaying fully human V H s with aggregation tendencies. We thus decided to use plaque size as a means of identifying rare, naturally occurring monomeric V H s from the human V H repertoire (Fig. 1). To this end, a phage library displaying human V H s with a size of 6 × 10^8 was constructed and propagated as plaques on agar plates. On the titer plates, the library consisted essentially of small plaques interspersed with some large ones. PCR on 20 clones revealed that the small plaques corresponded to the V H -displaying phages, whereas the large ones represented the wild type phages, i.e. phages lacking V H sequence inserts.
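The PFG-NMR diffusion measurements described above rest on fitting the signal attenuation to the Stejskal-Tanner relation, I = I0·exp(-D·γ²·g²·δ²·(Δ − δ/3)). The sketch below fits simulated attenuation data with SciPy; the gradient pulse length δ, diffusion delay Δ, and gradient strengths are assumed values, not the acquisition parameters actually used.

import numpy as np
from scipy.optimize import curve_fit

GAMMA_H = 2.675e8          # 1H gyromagnetic ratio, rad s^-1 T^-1
delta = 4e-3               # gradient pulse length (s), assumed
Delta = 100e-3             # diffusion delay (s), assumed

def stejskal_tanner(g, i0, d10):
    """Signal attenuation for a PFG/LED diffusion experiment.
    d10 is the diffusion coefficient in units of 1e-10 m^2/s (keeps the fit well scaled)."""
    b = (GAMMA_H * g * delta) ** 2 * (Delta - delta / 3.0)
    return i0 * np.exp(-d10 * 1e-10 * b)

# hypothetical gradient strengths (T/m) and integrated methyl/methylene intensities
g = np.linspace(0.02, 0.45, 10)
d_true = 1.1                                          # ~1.1e-10 m^2/s, plausible for a ~14 kDa monomer
intens = stejskal_tanner(g, 1.0, d_true) * (1 + 0.01 * np.random.default_rng(2).normal(size=g.size))
(i0_fit, d10_fit), _ = curve_fit(stejskal_tanner, g, intens, p0=(1.0, 1.0))
print(f"fitted D = {d10_fit * 1e-10:.2e} m^2/s")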
None of the V H -displaying phages were found with large plaque morphology. This was not unexpected because of the paucity of the monomeric V H s in the human repertoire and the large size of the library. To facilitate the identification of monomeric V H s, it was decided to reduce the library size to a manageable one and remove interfering wild type phage with large plaque-size morphology by panning the library against protein A, which binds to a subset of human V H s from the V H 3 family. Following a few rounds of panning, the library became enriched for phage producing large plaques, and PCR and sequencing of more than 110 such plaques showed that all had complete V H open reading frames. The size of the large plaques that were picked for analysis is represented in Fig. 1. Sequencing revealed 15 different V H s that belonged to the V H 3 family and utilized DP-38, DP-47, V3-49, V3-53, YAC-5, or 8-1B germline V segments (TABLE ONE; Fig. 2). The DP-38 and DP-47 germline sequences have been previously implicated in protein A binding (12,23). In addition, all V H s had a Thr residue at position 57 ( Fig. 2), consistent with their protein A binding activity (24,25). The most frequently utilized germline V segment was DP-47, which occurred in over 50% of the V H s, but the most frequent clone (i.e. HVHP428, relative frequency 46%) utilized the V3-49 germline V segment. HVHP429 with a DP-47 germline sequence was the second most abundant V H with a relative frequency of 21% (Fig. 2). The V H CDR3 lengths ranged from 4 amino acids for HVHB82 to 16 amino acids for HVHP430, with HVHP430 having a pair of Cys residues in CDR3. Amino acid mutations with respect to the parental germline V segment (residues 1-94) and FR4 (residues 103-113) sequences were observed in all V H s and ranged from two mutations for HVHP44 (L5V and Q105R) and HVHB82 (E1Q and L5Q) to 16 mutations for HVHP426 (TABLE ONE). Mutations were concentrated in the V segments; only two mutations were detected in all the 15 FR4s, at positions 105 and 108. HVHP44 and HVHB82 differed from other V H s in that they both had a positively charged amino acid at position 105 instead of a Gln (TABLE ONE, Fig. 2). However, whereas the positively charged amino acid in HVHP44 was acquired by mutation, the one in HVHB82 was germline-encoded. Except for HVHP423 and HVHP44B, the remaining V H s had the germline residues at the key solubility positions (4): 37V/44G/45L/47W or 37F/44G/45L/47W (HVHP428), HVHP423 and HVHP44B had a V37F mutation. Mutations at other positions, which have been shown or hypothesized to be important in V H solubility, included seven E6Q, three S35T/H, one R83G, one K83R, one A84P, one T84A, and one M108L mutation (5,11,26). Frequent mutations were also observed at positions 1 and 5 that included 11 E1Q, eight L5V/Q, and one V5Q mutations. All V H s except HVHP44B, which was essentially the same as HVHP423, were expressed in 1-liter culture volumes in E. coli strain TG1 in fusion with a c-Myc-His 5 tag and purified to homogeneity from periplasmic extracts by immobilized metal affinity chromatography. The expression yields ranged from 1.8 to 62.1 mg of purified protein per liter of bacterial culture in shake flasks with the majority having yields of several milligrams (TABLE TWO). In the instances of HVHP423 and HVHP430, another trial under "apparently" the same expression conditions gave yields of 2.4 and 6.4 mg as opposed to 62.1 and 23.7 mg, respectively. 
This implies that for many of the V H s described here, optimal expression conditions should be achieved, without much effort, resulting in expression yields significantly higher than the values reported in TABLE TWO. As expected, all the V H s bound to protein A in SPR analyses, with K D values of 0.2-3 M, a range and magnitude comparable with affinities reported previously for llama V H H variants with protein A binding activity (24). None of the V H s bound to the Fab reference surface. The aggregation tendency of the human V H s was assessed in terms of their oligomerization state by gel filtration chromatography and NMR (TABLE TWO). All V H s were subjected to Superdex 75 gel filtration chromatography. Similar to a llama V H H, H11C7, all V H s gave a symmetric single peak at the elution volume expected for a monomer, and were essentially free of any aggregates (see the example for HVHP428 in Fig. 3A). In contrast, a typical human V H (i.e. BT32/A6) formed a considerable amount of aggregates. For three of the V H s, a minor peak with a mobility expected for a V H dimer was also observed. SPR analyses of the minor peaks gave off-rate values that were significantly slower than those for the monomer V H s, consistent with them being V H dimers. The dimer peak was also observed in the case of the llama V H H, H11C7. The folding and oligomerization states of the V H s at high concentrations were further studied by NMR spectroscopy. As shown in TABLE TWO, all the V H proteins studied appeared to be relatively soluble and assumed a well folded three-dimensional structure. One-dimensional NMR spectra of the V H fragments (Fig. 3B) showed structure folds characteristic of V H domains. The state of protein aggregation was also assessed by use of an PFG-NMR diffusion experiment for the HVHP414 fragment and two isoforms, VH14 and VH14-cMyc, with and without the c-Myc sequence, of the HVHP414. VH14 is a modified version of HVHP414 with a c-Myc N132E mutation and with an additional methionine residue at the N terminus. In brief, the PFG-NMR data (not shown) indicated that all the protein samples had expected monomeric molecular weights even at the relatively high protein concentrations used for NMR experiments. We further investigated the stability of the V H s in terms of their resistance to trypsin at 37°C and integrity following long incubations at 37°C. Trypsin cleaves polypeptide amide backbones at the C terminus of an Arg or a Lys residue. There are 9 -13 Arg and Lys residues in the human V H s (Fig. 2). There is also an additional Lys residue in the C-terminal c-Myc tag (27), which is susceptible to digestion by trypsin. Fig. 4A is an SDS-PAGE analysis of HVHP414 during trypsin digestion. Within 1 h the original band was completely converted to a single product that had a mobility expected for the V H without the c-Myc-His 5 tag. The same result was obtained for 12 other V H s following a 1-h incubation with trypsin. Mass spectrometry on a randomly selected sample of the trypsin-treated V H s (HVHP414, HVHP419, HVHP420, HVHP423, HVHP429, HVHP430, and HVHM81) confirmed that in every case the molecular mass of the digested product corresponded to a V H with the c-Myc Lys as the C-terminal residue (Fig. 4B). HVHM41 gave a significantly shorter fragment than the rest upon digestion, and in this case mass spectrometry experiments mapped the cleavage site to Arg 99 in CDR3 (data not shown). 
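Because trypsin cleaves C-terminal to Lys and Arg, the expected cleavage pattern of a tagged V H can be read directly from its sequence. A minimal sketch, using a hypothetical C-terminal stretch that approximates a framework 4 region followed by a c-Myc-His5 tag (the real HVHP414 sequence is not reproduced here), and applying the common heuristic that cleavage is suppressed before proline:

def tryptic_sites(seq):
    """1-based positions after which trypsin is expected to cleave:
    C-terminal to K or R, but generally not when the next residue is P."""
    return [i + 1 for i, aa in enumerate(seq[:-1])
            if aa in "KR" and seq[i + 1] != "P"]

# hypothetical C-terminal stretch: end of a VH framework 4 followed by a c-Myc-His5 tag
tag_region = "WGQGTQVTVSSEQKLISEEDLNHHHHH"
print(tryptic_sites(tag_region))      # the Lys within the c-Myc sequence is the only predicted site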
Eleven V H s ranging in concentration from 0.32 (HVHP428) to 3.2 (HVHP420) mg/ml were incubated at 37 °C for 17 days. Their stability was subsequently determined in terms of oligomerization state and protein A binding. As shown by gel filtration chromatography, treatment of V H s at 37 °C did not induce any aggregate formation; all V H s gave chromatographic profiles that were virtually identical to those of untreated V H s and remained essentially as monomers (see the example for HVHP420; Fig. 4C). To ensure that the V H s maintained their native fold following 37 °C treatment, two V H s, namely HVHP414 (1.2 mg/ml) and HVHP420 (3.2 mg/ml), were selected at random and the K D values for binding to protein A were determined by SPR (data shown for HVHP420, Fig. 4C, inset) and compared with the K D values obtained for untreated V H s (TABLE TWO). The calculated K D values for the treated V H s were 1.4 and 1.0 μM for HVHP414 and HVHP420, respectively. These values are essentially identical to the values for the corresponding untreated V H s (TABLE TWO), demonstrating that 37 °C treatment of V H s did not affect their native fold. The possibility that the V H s may have been in a less compact, non-native fold during the 37 °C incubation periods and resumed their native fold upon return to room temperature during gel filtration and SPR experiments is unlikely in light of the fact that the V H s were resistant to trypsin at 37 °C (see above), a property typically associated with well folded native proteins. We also investigated the RE of the human V H s by comparing the K D values for the binding of the native (K D n) and heat-treated, refolded (K D ref) V H s to protein A (5). If a fraction of the V H is inactivated by heat treatment, the measured K D would increase, because this parameter is based on the concentration of folded, i.e. active, antibody fragment. Thus, the ratio of K D n to K D ref gives a measure of V H RE. Fig. 5 compares sensorgrams for HVHP423 binding to immobilized protein A in native (thick lines) and refolded (thin lines) states at several selected V H concentrations. As can be seen, binding of the refolded V H to protein A is less in all instances, indicating that the unfolding is not fully reversible. For each of the 14 V H s, protein A binding in both native and refolded states was measured at several concentrations and the K D values, and subsequently the REs, were determined (TABLE TWO; K D ref values are not shown). The K D values and REs of two anti-idiotypic llama V H Hs, H11F9 and H11B2, which were used as references, were also determined. Four V H s had REs in the range of 92-95% and were similar to the REs for H11F9 and H11B2, which were 95 and 100%, respectively. Another five had REs in the range of 84-88% and three were over 70%. Only two had significantly lower REs, HVHP413 (52%) and HVHP421 (14%). Several llama V H Hs examined previously had REs of approximately 50% (2). FIGURE 1 legend: The photo depicts a part of the bacterial lawn agar plate that was magnified to enhance plaque visualization. Although the plate contained an equal number of each of the two plaque types, the photo essentially shows only the large, HVHP428 plaques. The majority of the BT32/A6 plaques were too small to produce clear, well defined images in the photo. The plaques marked by arrows thus represent a minor proportion of BT32/A6 phages that were large enough to be visible in this image. Asterisks mark representative plaque sizes for HVHP428 phages. The identities of plaques were determined by DNA sequencing.
(28). Previously reported fully human V H s with favorable biophysical properties were based on one V germline sequence, DP-47 (11,12). The observation that the monomeric human V H s in this study stem from six different germline sequences, including DP-47, demonstrates that stable V H s are not restricted in terms of germline gene usage. In fact, it is very likely that we would have isolated monomeric V H s with family and germline origins different from the ones we describe here had we not restricted our selection to a subset of V H 3 family V H s with protein A binding activity. The appearance of DP-47 germline in over 50% of the V H s may be because of its over-representation in the expressed V H repertoire (29). It is not possible to pinpoint amino acid mutations (TABLE ONE) responsible for the observed biophysical behavior of the present V H s because of the occurrence of multiple mutations in the V H s and the fact that CDR3 is also known to be involved in shaping the biophysical profiles of sdAbs (30 -32). It may be, however, that mutations at positions known to be important for sdAb stability and solubility, e.g. V37F in HVHP423 and HVHP44B, or mutations occurring multiple times at the same position, e.g. L5V/Q and V5Q in nine V H s, have a role in determining V H s biophysical properties. In terms of library construction, it would be desirable that the monomericity of the present V H s not be dependent on CDRs, in particular CDR3, so that CDR randomization be performed without the worry of jeopardizing library stability. In this regard, the V H s with smaller CDR3s, e.g. HVHB82, may be preferred scaffolds because there would be less dependence on CDR3 for stability (30,31). DISCUSSION The diversity of the present V H s in terms of overall sequence and CDR3 length should allow the construction of better performing libraries. Synthetic V H libraries have been constructed on single scaffolds (8,10,12,33). Such an approach to repertoire generation is in sharp contrast to the natural, in vivo "approach" that utilizes a multiplicity of scaffolds. Based on the sequences reported here one can take advantage of the availability of the diverse set of V H s to create libraries that are based on multiple V H scaffolds. Such libraries would be a better emulation of in vivo repertoires and, therefore, would have a more optimal complexity. Of the three CDRs in sdAbs, CDR3 generally contributes most significantly to repertoire diversity and for this reason CDR3 randomization has always been included in library construction strategies. CDR3 randomizations on sdAb scaffolds are typically accompanied by concomitant varying CDR3 lengths. Whereas this significantly improves library complexity, it may also compromise library stability by disrupting the length of the parental scaffold CDR3 (5). The heterogeneity of our V H s in terms of CDR3 length will permit us to create librar- ies with both good complexity and good stability. Such a library would consist of sublibraries, where each sublibrary is created by CDR3 randomization (and CDR1 and/or CDR2 randomization, if desired) on a single V H scaffold without disrupting the parental CDR3 length. The versatility of the present V H s is also beneficial in terms of choosing an optimal V H framework for humanizing well characterized camelid V H H binders against therapeutic targets (34 -36). 
High affinity camelid V H Hs against therapeutic targets can be obtained from immune V H H libraries with relative ease (4,37) and can subsequently be subjected to humanization to remove possible V H H immunogenicity, hence providing an alternative to the human V H library approach for the production of therapeutic V H s. Generating high affinity therapeutic V H s by the latter approach may often require additional tedious and time-consuming in vitro affinity maturation of lead binder(s) selected from the primary synthetic human V H libraries. A number of evolutionary approaches for the selection of proteins with improved biophysical properties have been described (12, 38-41). Typically, stability pressure is required to ensure preferential selection of stable variants over unstable or less stable ones from a library population. For example, in a related study, heat treatment of V H phage display libraries was required to select for aggregation resistant V H s (12). Examples of evolutionary selection approaches involving phage display include conventional phage display, selectively infective phage, and the proteolysis approaches. In the first two approaches, affinity selection is used to select stable species from a library, based on the assumption that stable proteins possess better binding properties for their ligand than unstable ones. However, even with the additional inclusion of a stability selection step, these approaches may primarily enrich for higher affinity rather than for higher stability (39). A binding step requirement also limits the applicability of these approaches to proteins with known ligands. FIGURE 3. Aggregation tendencies of the human V H s. A, gel filtration chromatograms comparing the oligomerization state of a human V H isolated in this study (HVHP428) to that of a llama V H H (H11C7) and a typical human V H (BT32/A6). The peak eluting last in each chromatogram corresponds to monomeric V H. The dimeric H11C7 peak is marked by an arrow. B, one-dimensional 1 H NMR spectra of HVHP414 at 800 MHz (i), HVHP423 at 500 MHz (ii), and HVHP428 at 800 MHz (iii). The spectra in the left panel are scaled up by a factor of two to enable better viewing of low-intensity signals. The mass spectrometry profile of the treated V H is superimposed onto that for the untreated one to provide a better visual comparison. The experimental molecular mass of the untreated V H is 14,967.6 Da, which is essentially identical to the expected molecular mass, 14,967.7 Da. The observed molecular mass of the trypsin-treated V H (13,368.5 Da) indicates loss of 13 amino acids at the C terminus by cleavage at Lys in the c-Myc tag to give an expected molecular mass of 13,368.0 Da. The trypsin cleavage site is shown by a vertical arrow above the amino acid sequence of HVHP414. C, gel filtration chromatograms comparing the oligomerization state of the 37 °C-treated HVHP420 V H (upper profile) to that of untreated V H (lower profile). The chromatograms were shifted vertically because they were indistinguishable when superimposed. The major and minor peaks in each chromatogram correspond to monomeric and dimeric V H s, respectively. The dimeric V H constitutes 3% of the total protein. The inset shows the sensorgram overlays for the binding of 37 °C-treated HVHP420 to protein A at various concentrations. The V H s used for temperature stability studies were from stocks that had already been at 4 °C for several months.
The third, proteolysis selection, approach is based on the fact that stable proteins are generally compact and therefore are resistant to proteases, whereas unstable ones are not. The phage display format is engineered in such a way that the protease stability of the displayed protein translates to phage infectivity. Thus, when a variant phage display library is treated with a protease, only the phages displaying stable proteins retain their infectivity and can subsequently be selected by infecting an E. coli host. Because this approach is independent of ligand binding, it has general utility. However, even stable and well folded proteins have protease-sensitive sites, e.g. loops and linkers, and this could sometimes hinder the selection of stable species in a proteolysis approach (42). In the present evolutionary approach, proteins with superior biophysical properties are simply identified by the naked eye. The approach does not require ligand binding, proteolysis, or destabilization steps and, thus, avoids complications that may be encountered in previous reported selection approaches. No requirement for a binding step also means that our approach has general utility. As an option, a binding step may be included to ensure that the selected proteins are functional. However, the dependence of the present approach on plating (for plaque visualization) introduces a possible logistical limitation in terms of the number of plates that can be handled and thus limits its application to smaller libraries. Nonetheless, the utility of the current approach can be extended to large libraries, if the library size is first reduced by, for example, incorporating a step that removes a large population of unstable species, e.g. library adsorption on a protein A surface as described here, or on a hydrophobic interaction column to remove poorly folded proteins (41). Here, our approach was used to select V H s of good biophysical properties from a background of very unstable V H s. However, it may be more difficult to select the "best" species from a mutant library that is populated with proteins with reasonably good stabilities. In this case, the best variants may be identified based on the rate of plaque formation by using shorter incubation times, or plaque size and frequency criteria. The present selection approach can be extended to the identification of stable and well folded antibody fragments such as human V L s, scFvs, Fabs, with the optional inclusion in the selection system of a binding step involving protein L, protein A, or other ligands, and non-antibody scaffolds. Moreover, the observed correlation between phage plaque size and V H expression yield means that one can utilize the present approach for acquiring high-expressing versions of proteins with poor or unsatisfactory expression from mutant phage display libraries. This application would be particularly appealing where boosting expression of therapeutic proteins or expensive poor-expressing protein reagents would significantly reduce protein production cost.
7,156.8
2005-12-16T00:00:00.000
[ "Medicine", "Biology" ]
A Comprehensive RNA Expression Signature for Cervical Squamous Cell Carcinoma Prognosis Clinicopathological characteristics alone are not enough to predict the survival of patients with cervical squamous cell carcinoma (CESC) due to clinical heterogeneity. In recent years, many genes and non-coding RNAs have been shown to be oncogenes or tumor-suppressors in CESC cells. This study aimed to develop a comprehensive transcriptomic signature for CESC patient prognosis. Univariate, multivariate, and Least Absolute Shrinkage and Selection Operator penalized Cox regression were used to identify prognostic signatures for CESC patients from transcriptomic data of The Cancer Genome Atlas. A normalized prognostic index (NPI) was formulated as a synthetical index for CESC prognosis. Time-dependent receiver operating characteristic curve analysis was used to compare prognostic signatures. A prognostic transcriptomic signature was identified, including 1 microRNA, 1 long non-coding RNA, and 6 messenger RNAs. Decreased survival was associated with CESC patients being in the high-risk group stratified by NPI. The NPI was an independent predictor for CESC patient prognosis and it outperformed the known clinicopathological characteristics, microRNA-only signature, gene-only signature, and previously identified microRNA and gene signatures. Function and pathway enrichment analysis revealed that the identified prognostic RNAs were mainly involved in angiogenesis. In conclusion, we proposed a transcriptomic signature for CESC prognosis and it may be useful for effective clinical risk management of CESC patients. Moreover, RNAs in the transcriptomic signature provided clues for downstream experimental validation and mechanism exploration. INTRODUCTION Cervical cancer (CC) is still the fourth most common cancer in women (Ferlay et al., 2015). Despite developed countries are low epidemic areas of CC by virtue of easier accessibility of routine screening test and human papillomavirus (HPV) vaccination, CC is still the second leading cause of cancer death among women aged 20-39 years in the United States in 2015 (Siegel et al., 2018). At present, clinical stage is the leading predictive characteristic for CC prognosis, although useful, significant variability is observed and the 5 years survival rate is still poor for women with advanced CC (30-40% for stage III and 15% for stage IV). Theoretically, clinicopathological characteristics are macroscopic emergence of molecules (e.g., genes, proteins) and CC patients with homogeneous clinical status may have completely diverse molecular patterns. Therefore, identification of robust and accurate molecular biomarkers for CC patient prognosis is valuable and in urgent need. By comprehensively characterizing various molecules (DNAlevel, RNA-level, protein-level) in 100s of CC samples, The Cancer Genome Atlas (TCGA) has provided a comprehensive way to understand CC (The Cancer Genome Atlas Research Network et al., 2017). Enormous multiple omics data make the discovery of potential biomarkers for CC diagnosis, treatment and prognosis possible. Several studies have investigated the molecular signatures for CC prognosis based on the expression of CC genome. Hu et al. (2010) profiled 96 cancer-related microRNAs (miRNAs) in 102 CC samples and firstly proposed a two-miRNA expression signature for predicting the overall survival (OS) of CC patients. How et al. 
(2015) measured the miRNA omics of CC samples by miRNA arrays and proposed a prognostic nine-miRNA expression signature in their training set. However, the prognostic value of the nine-miRNA expression signature could not be validated in an independent cohort (How et al., 2015). Liu et al. (2016), Liang et al. (2017), Ma et al. (2018), and Ying et al. (2018) proposed a seven-miRNA expression signature, a three-miRNA expression signature, a three-miRNA expression signature, and a two-miRNA expression signature for CC prognosis based on TCGA miRNA sequencing data, respectively. Huang et al. (2012) profiled 1440 human tumor-related gene transcripts using custom oligonucleotide microarrays in 100 CC samples and identified a prognostic seven-gene expression signature. Based on TCGA gene sequencing data, Li et al. (2017b) proposed a two-histone family gene signature and further proposed another independent gene signature to predict the OS of CC patients. However, some limitations should be noted: (1) Previous studies focused on single omics layers independently, and a whole-transcriptome analysis, which may provide more comprehensive and robust discoveries, is lacking (Hu et al., 2010; Huang et al., 2012; How et al., 2015; Liu et al., 2016; Li et al., 2017b; Liang et al., 2017; Ma et al., 2018; Ying et al., 2018). (2) Prognostic miRNA signatures identified from the same data source, without cross-referencing each other, are very different (Liu et al., 2016; Liang et al., 2017; Ma et al., 2018; Ying et al., 2018). (3) For prognostic miRNAs, previous studies did not distinguish miRNA isoforms (3p-arm or 5p-arm). Thus, it is unclear which isoform should be further investigated by experiment (Hu et al., 2010; How et al., 2015; Liu et al., 2016; Liang et al., 2017; Ma et al., 2018; Ying et al., 2018). (4) Pathologically, CC includes cervical squamous cell carcinoma (CESC) and cervical adenocarcinoma (CADC). Because there are significant differences in prognosis between CESC and CADC (Jung et al., 2017), it is not appropriate to mix them for identification of prognostic biomarkers. (5) Semi-parametric survival analysis methods such as Cox regression are loosely used without checking the proportional hazards (PH) assumption (Hu et al., 2010; Huang et al., 2012; How et al., 2015; Liu et al., 2016; Li et al., 2017b; Liang et al., 2017; Ma et al., 2018; Ying et al., 2018). (6) To identify prognostic signatures from high-dimensional omics data, classical multivariate Cox regression analysis (MCA) is usually impeded by the "curse of dimensionality" (i.e., low sample size and large number of variables), which leads to over-fitting and unstable estimation of regression coefficients. To address these limitations, we submitted the transcriptomic data of CESC patients to a Least Absolute Shrinkage and Selection Operator (LASSO) penalized MCA (Simon et al., 2011) to identify a transcriptomic signature for CESC prognosis. Data Acquisition Level 1 clinical data, level 3 transcriptomic sequencing data, and the corresponding metadata of CCs were retrieved and downloaded from the TCGA repository in January 2018. Search strategies can be obtained in Section I of the Supplementary Material. 
CC patients were included in this study according to the following criteria: (1) CC patients diagnosed with CESC; (2) CESC patients with at least 60 days of follow-up; (3) CESC patients with clinical data, gene sequencing data, and miRNA isoform sequencing data; (4) CESC patients without prior other malignancies; and (5) CESC patients who did not receive any preoperative neoadjuvant therapy. Data Preprocessing Clinical eXtensible Markup Language (XML) files were parsed with the R "XML" package, and the R code is available in Section II of the Supplementary Material. Details on sequencing data preprocessing are also available in Section II of the Supplementary Material. Hierarchical clustering was used to cluster samples to detect sample outliers, and guided principal component analysis (gPCA) (Reese et al., 2013) was adopted to evaluate batch effects of the sequencing data. Identification of Prognostic Demographic and Clinicopathological Characteristics Kaplan-Meier (KM) survival analysis with the log-rank test was applied to evaluate the prognostic effects of age at diagnosis, clinical stage, menopause status, ethnicity, birth control pill usage, tobacco usage, and lymphovascular invasion for CESC. Furthermore, MCA with demographic and clinicopathological characteristics as covariates was adopted to evaluate their independence for CESC prognosis. Univariate Survival Analysis of RNAs The miRNA isoform sequencing data only include miRNAs, while the gene sequencing data include both messenger RNAs (mRNAs) and long non-coding RNAs (lncRNAs). For convenience of description, we refer to mRNAs and lncRNAs collectively as genes and to miRNAs, mRNAs, and lncRNAs collectively as RNAs. Associations between OS and RNA expression profiles were preliminarily evaluated by univariate Cox regression analysis (UCA). The proportional hazards (PH) assumption was tested using Schoenfeld residuals (Grambsch and Therneau, 1994), and a unified multiple-testing approach (Strimmer, 2008) was applied for tail area-based false discovery rate (FDR) estimation. RNAs with FDRs < 0.1 and PH assumption test P-values > 0.1 (Kleinbaum and Klein, 2012) were considered to be preliminarily associated with OS of CESC patients. Furthermore, RNAs with hazard ratios (HRs) > 1 were defined as risky for CESC prognosis, and those with HRs < 1 were considered protective. Multivariate Analysis of Preliminarily Survival-Associated RNAs For miRNAs, stepwise MCA was applied to the preliminarily survival-associated miRNAs to construct an independent miRNA-only expression signature for CESC prognosis. The Bayesian information criterion (BIC) (Schwarz, 1978) was adopted for model selection. Because the number of preliminarily survival-associated genes was comparable to the number of samples, a LASSO-penalized MCA with 10-fold cross-validation and 1000 iterations was adopted to select genes by shrinking small regression coefficients exactly to zero. To alleviate the local minimum problem, we repeated the LASSO-penalized MCA 10 times with different initializations, and the model that achieved the minimal partial likelihood was adopted. Furthermore, stepwise MCA was applied to the genes selected by the LASSO-penalized MCA to construct an independent gene-only expression signature for CESC prognosis. Finally, a transcriptomic signature for CESC prognosis was constructed by stepwise MCA with the gene-only signature and miRNA-only signature as covariates. 
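The LASSO-penalized multivariate Cox step described above can be sketched in a few lines. The paper uses the R glmnet implementation (Simon et al., 2011); the fragment below is a minimal, hypothetical Python stand-in using the lifelines package and synthetic data, intended only to illustrate how an L1 penalty zeroes out most regression coefficients so that the surviving features form the candidate signature. The cohort size, gene count, and penalizer value are placeholders; in the actual analysis the penalty strength would be chosen by 10-fold cross-validation and the retained genes passed on to the stepwise MCA.

```python
# Illustrative sketch only: not the paper's R/glmnet pipeline.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 150          # hypothetical cohort and feature sizes

# Synthetic, standardized expression matrix (zero mean, unit variance per gene)
X = rng.standard_normal((n_samples, n_genes))
time = rng.exponential(scale=1000.0, size=n_samples)   # follow-up time (days)
event = rng.integers(0, 2, size=n_samples)             # 1 = death observed

df = pd.DataFrame(X, columns=[f"gene_{i}" for i in range(n_genes)])
df["time"], df["event"] = time, event

# An L1 (LASSO) penalty shrinks most coefficients exactly to zero; the genes
# with surviving coefficients form the candidate prognostic signature.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

selected = cph.params_[cph.params_.abs() > 1e-8]
print(f"{len(selected)} genes retained by the L1 penalty")
```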
Risk Score A normalized prognostic index (NPI), defined as the standardized form of a linear combination of the observed values weighted by the regression coefficients, was adopted as a synthetic index for CESC prognosis. Specifically, PI j = Σ i β i G ij , and the NPI is obtained by standardizing PI to zero mean and unit variance, where PI is the prognostic index vector whose jth element is the prognostic index of the jth patient, β i is the regression coefficient of the ith variable (in this context, the ith gene/miRNA), and G ij is the observed value of the ith variable in the jth sample (in this context, the expression of the ith gene/miRNA in the jth sample). For the miRNA-only signature, the gene-only signature, the integrated RNA signature (i.e., the transcriptomic signature), and the previously identified prognostic signatures, we termed the corresponding NPIs miRNA-NPI, gene-NPI, RNA-NPI, and pre-NPI, respectively. Model Evaluation and Comparison CESC patients were stratified into a high-risk group (NPI > 0) or a low-risk group (NPI < 0) based on NPI. OS between the high-risk group and the low-risk group was compared by KM survival analysis. MCA was used to evaluate the independence of the various NPIs and clinical factors. The abilities of the various NPIs to predict CESC patient survival outcome were assessed and compared by calculating the area under the curve (AUC) of the time-dependent receiver operating characteristic (ROC) curves at 3, 5, and 10 years, respectively. Gene Ontology and Pathway Enrichment Unlike other mRNA target prediction tools based solely on sequence alignment, miRTarBase provides experimentally validated mRNA targets of miRNAs. Both strongly and weakly validated mRNA targets of the prognostic miRNA were obtained from miRTarBase (version 7.0) (Chou et al., 2018). Metascape (Tripathi et al., 2015) was adopted for gene ontology and pathway enrichment of the prognostic mRNAs and the targets of the prognostic miRNA. Statistical Analysis Tools A P-value less than 0.05 or an adjusted P-value less than 0.1 was considered significant. All analyses were performed with R software. Non-parametric survival analysis, semi-parametric survival analysis, and PH assumption tests were performed with the survival and survminer packages. LASSO-penalized MCA was conducted with the glmnet package. Multiple-test correction was conducted with the fdrtool package. Time-dependent ROC analysis was conducted with the timeROC package. Available Data The TCGA CC dataset included 307 CC patients, who contributed 312 samples for miRNA sequencing (including 307 primary CC samples, 2 metastatic CC samples, and 3 normal samples) and 309 samples for gene sequencing (including 304 primary CC samples, 2 metastatic CC samples, and 3 normal samples). Due to the small number of metastatic and normal CC samples, we only analyzed the primary CC samples. Based on the inclusion criteria and low-expressed RNA filtering (Supplementary Material Section III), 214 CESC samples covering 401 miRNAs and 13631 genes (mRNAs and lncRNAs) were retained. Hierarchical clustering showed one miRNA sample outlier and seven gene sample outliers. After removing sample outliers and scaling the expressions of miRNAs and genes to zero sample mean and unit standard deviation, 206 primary CESC samples were included for identification of prognostic signatures. Batch effect analysis showed that there was no obvious separation on the first two guided principal components for both the miRNA isoform sequencing data and the gene sequencing data (Figures 1A,C), with permutation test P-values of 0.598 and 0.947 (Figures 1B,D), respectively. 
These results indicated that there was no significant batch effect in the sequencing data. Prognostic Demographic and Clinicopathological Characteristics Of the 206 patients in the TCGA-CESC cohort, 55 patients were deceased and 151 were alive at the time of last follow-up. The median OS time was 3097 days (95% CI: 2859-NA days). Demographic and clinicopathological characteristics for the TCGA-CESC cohort are summarized in Table 1. Age at initial diagnosis (HR = 1.81, 95% CI: 0.92-3.58), clinical stage (HR = 2.14, 95% CI: 1.12-4.09), tobacco usage (HR = 2.36, 95% CI: 1.24-4.48), and lymphovascular invasion (HR = 13.70, 95% CI: 5.64-33.31) were negatively associated with OS of CESC patients, as revealed by KM survival analysis (Table 1 and Supplementary Figure S1). MCA revealed that only lymphovascular invasion was an independent predictor for CESC prognosis (Table 1). However, information on lymphovascular invasion was not available for more than half of the CESC patients (n = 106). Considering age at initial diagnosis, clinical stage, and tobacco usage as covariates (i.e., without lymphovascular invasion), MCA revealed that only clinical stage was an independent prognostic clinicopathological characteristic (Table 1). To exclude potential effects of mutations on CESC prognosis, we also investigated the mutational patterns of the genes in the identified transcriptomic signature using OncoPrinter. Mutations of ZIC2, MTMR11, EGLN1, and TPST1 were found in two, five, one, and two of the 206 CESC samples, respectively, and none of these mutations were found in the remaining 197 CESC samples. MCA further showed that the identified RNAs were independent predictors for CESC prognosis in the non-mutated samples (Supplementary Table S3). Thus, the prognostic roles of the identified RNAs were attributable to their expression levels rather than to mutations. Moreover, MCA revealed that clinical stage, RNA-NPI, and pre-NPI were independent predictors for CESC prognosis (Table 3). However, when considering lymphovascular invasion as a covariate, RNA-NPI was the only independent prognostic index (Table 3). These results demonstrated that the transcriptomic signature was a better predictor for CESC prognosis than clinicopathological characteristics and previously proposed signatures. Angiogenesis-Related Functions and Pathways Sixty-four genes were validated as targets of hsa-miR-532-5p. Gene ontology and pathway enrichment analyses indicated that targets of hsa-miR-532-5p and the prognostic mRNAs in the transcriptomic signature were mainly associated with angiogenesis (Figure 5). Angiogenesis is a main hallmark of tumor progression and may be an independent prognostic factor in CC (Bremer et al., 1996). In this study, EGLN1 (Egl-9 family hypoxia inducible factor 1; also known as PHD2) is a risky gene (HR = 1.79, 95% CI: 1.45-2.21), and it is closely related to angiogenesis through regulating the stability of HIF1 in non-CESC cancers (Chan and Giaccia, 2010; Lu et al., 2013). Targets of hsa-miR-532-5p such as runt-related transcription factor-3 (Peng et al., 2006; Kim et al., 2016), insulin-like growth factor-binding protein-5 (Rho et al., 2008; Lee et al., 2016), and von Hippel-Lindau (Kong et al., 2011, 2014; Lu et al., 2013) have been shown to be associated with angiogenesis in many types of cancer. These results suggest that further experiments aimed at exploring the functional mechanisms of the transcriptomic signature could focus on angiogenesis-related pathways. 
DISCUSSION In this study, a novel transcriptomic signature for CESC patient prognosis was identified. Our proposed transcriptomic signature includes two non-coding RNAs (hsa-miR-532-5p and lncRNA DLEU1) and six mRNAs (RBM38, CXCL2, ZIC2, MTMR11, EGLN1, and TPST1). It is natural to wonder if there are any mRNA targets of the non-coding RNAs in the transcriptomic signature. Interestingly, CXCL2 was reported to be a direct target of hsa-miR-532-5p in hepatocellular carcinoma, and this miRNA-gene interaction inhibited hepatocellular carcinoma cell proliferation and metastasis (Song et al., 2015). However, no significant correlation between hsa-miR-532-5p and CXCL2 was observed in our correlation analysis. Forty-six genes were predicted to be targets of lncRNA DLEU1 by starBase v2.0 (Li et al., 2014), but none of them were in the transcriptomic signature. For identification of prognostic biomarkers, signatures are expected to include as many independent biomarkers as possible. Due to the independence among the biomarkers in the transcriptomic signature, it is hard to find possible biological interactions among them. To find possible mechanisms for the independent RNAs in the transcriptomic signature, it is wise to explore possible correlations between the independent RNAs and the remaining RNAs of the CESC transcriptome. Thus, the transcriptomic signature provides initial molecules, rather than complete biological mechanisms, for further experimental exploration. Among the protective RNAs in the transcriptomic signature, hsa-miR-532-5p was shown to be involved in many cancers either as a tumor suppressor or as an oncogenic miRNA (Song et al., 2015; Zhang J. et al., 2018). However, the role of hsa-miR-532-5p in CESC remains unknown, and our results suggest that it may play a tumor-suppressor role in CESC, given its positive correlation with OS of CESC patients. RNA-binding protein 38 (RBM38) was originally recognized as an oncogene, and it was frequently found to be amplified in prostate, ovarian and colorectal cancer, chronic lymphocytic leukemia, colon carcinoma, esophageal cancer, dog lymphomas, and breast cancer (Ding et al., 2015). Recently, however, more and more studies have suggested that RBM38 might act as a tumor suppressor (Feldstein et al., 2012; Xue et al., 2014). Ding et al. found that the association between the expression of RBM38 and cancer prognosis varied across cancers and databases (Ding et al., 2015). These studies suggested that the function of RBM38 might be multidimensional in cancers. However, no study has investigated the possible roles of RBM38 in CESC, and our analysis suggests that RBM38 may be a tumor suppressor in CESC. Zic family member 2 (ZIC2) was shown to be oncogenic in many cancers such as ovarian cancer (Marchini et al., 2012) and hepatocellular carcinoma (Lu et al., 2017). In cervical cancer, ZIC2 has rarely been investigated. Chan et al. (2011) demonstrated that ZIC2 was up-regulated in CC cell lines and that the up-regulation of ZIC2 may enhance the activity of the Hedgehog signaling pathway through nuclear retention of Gli1. Although ZIC2 might be deduced to be a risk factor from the results of Chan et al., its prognostic role in CESC patients has not been investigated. Our analysis showed that ZIC2 may be a protective factor, and further studies are needed to elaborate on the disputed roles of ZIC2 in CESC. 
Among the risky RNAs in the transcriptomic signature, Tyrosylprotein sulfotransferase 1 (TPST1) is an enzyme responsible for catalyzing tyrosine sulfation. Previous studies revealed that TPST1 could sulfate the tyrosines of C-X-C motif chemokine receptor 4 (CXCR4) (Seibert et al., 2008; Xu et al., 2013), and this tyrosine sulfation might contribute to nasopharyngeal carcinoma metastasis (Xu et al., 2013). However, the expression, functions, and mechanisms of TPST1 in CESC are not clear. Consistent with these previous studies, our analysis revealed that TPST1 was detrimental to CESC prognosis. C-X-C motif chemokine ligand 2 (CXCL2) was demonstrated to be up-regulated in many types of cancer such as chronic lymphocytic leukemia and bladder cancer. The up-regulation of CXCL2 could enhance cell survival in chronic lymphocytic leukemia (Burgess et al., 2012), and it was correlated with poor prognosis in bladder cancer (Zhang et al., 2016). Recently, Zhang et al. revealed that AKP1 could promote angiogenesis and tumor growth by up-regulating CXCL1, CXCL2, and CXCL8 in CC cells. Our results revealed that CXCL2 was a risk factor for CESC prognosis, which is in line with these previous experimental studies in non-CESC cancers. Egl-9 family hypoxia inducible factor 1 (EGLN1) is a key cellular oxygen sensor that plays important roles in tumor angiogenesis (Chan and Giaccia, 2010) and tumor metastasis (Kuchnio et al., 2015). Both tumor-promoting and tumor-suppressive roles of EGLN1 have been reported in different types of cancer (Chan and Giaccia, 2010). Although EGLN1 was shown to be low-expressed in advanced CC (Roszak et al., 2011; Kuchnio et al., 2015), in our analysis the expression of EGLN1 was a risk factor for CESC patient survival. Thus, further studies of the functional mechanisms of EGLN1 in CESC cells should be carried out to explain its adverse prognostic effect. Myotubularin related protein 11 (MTMR11) has rarely been reported in cancer. The lncRNA DLEU1 plays multiple roles in different cancers. Previous studies revealed that DLEU1 could promote progression of ovarian carcinoma and gastric cancer (Li et al., 2017a), while Balas and Johnson (2018) showed that DLEU1 may be a tumor suppressor. lncRNAs interact in versatile ways; DLEU1 can exert its functions by binding to proteins (Li et al., 2017a; Liu et al., 2018) or miRNAs. Moreover, up-regulated DLEU1 was shown to be associated with the survival of gastric cancer by promoting proliferation of gastric cancer cells (Li et al., 2017a). However, the potential roles of DLEU1 in CESC remain unclear, and our analysis revealed that DLEU1 may also act as a tumor suppressor in CESC. Some limitations of the current study should be noted. (1) For the PH assumption test, it is difficult to estimate the type II error (i.e., the false-negative rate); thus, it is hard to choose a threshold for the PH assumption test P-value in multiple testing. (2) The transcriptomic signature was identified based on TCGA data mining, and further independent validation and mechanistic exploration are needed. (3) Complete mechanisms cannot be revealed by the transcriptomic signature itself. However, the transcriptomic signature may have potential applications for the clinical management of CESC patients. Specifically, we can measure the expression of the transcriptomic signature in CESC patients and calculate the NPIs for the patients based on the measured expression values. Furthermore, CESC patients can be stratified as high-risk or low-risk based on their NPIs. 
Finally, for high-risk patients, more aggressive therapies such as high-dose chemoradiotherapy may be given. Moreover, hsa-miR-532-5p, RBM38, EGLN1, and DLEU1 have been reported as both oncogenes and tumor suppressors in non-CESC cancers, and their roles in CESC are unclear. This study provides experimental directions for these novel genes and this miRNA. CONCLUSION We have identified a novel transcriptomic signature for CESC prognosis, including 1 miRNA, 1 lncRNA, and 6 mRNAs. The transcriptomic signature was more comprehensive and predictive than the miRNA-only, the gene-only, and the previously identified signatures for CESC prognosis. AUTHOR CONTRIBUTIONS JX conceived and designed the analysis. JX, SG, YS, LG, and ZB performed the analysis. JX wrote the paper. FUNDING This work was supported by the Postdoctoral Science Foundation of Central South University. ACKNOWLEDGMENTS We thank The Cancer Genome Atlas project for making the genomic data of CESC available. All data obtained from TCGA comply with TCGA's rules for data usage and publication. We wish to thank the editors and reviewers for helpful comments. In addition, we also thank Zheng Yu from the University of Washington for English advice. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fgene.2018.00696/full#supplementary-material FIGURE S1 | Kaplan-Meier plots for age at diagnosis (A), tobacco usage (B), clinical stage (C), and lymphovascular invasion (D). FIGURE S2 | The first row shows the densities, the second the distribution functions, and the last row the local and tail area-based false discovery rates of miRNAs. FIGURE S4 | The first row shows the densities, the second the distribution functions, and the last row the local and tail area-based false discovery rates of genes. TABLE S1 | MCA of RNA-NPI and miRNA-NPI. METHODS | Details on data retrieval and preprocessing.
5,392.8
2019-01-04T00:00:00.000
[ "Medicine", "Biology" ]
Quantum anomalous Hall effects and various topological mechanisms in functionalized Sn monolayers The topological behaviors of Sn monolayers partly passivated with H atoms are explored based on first-principles calculations. Obvious magnetism can be induced in the Sn monolayer due to the passivation, and the magnetism strength is found to be determined by the number difference of the H atoms bonding to the two sublattices of stanene. Quantum anomalous Hall (QAH) effects are found to appear easily in systems with one of the sublattices fully passivated and the other not. The origin of the topological states can be ascribed to the coupling of the magnetism, the lattice symmetry (C3v), and the H-atom concentrations, which forms various mechanisms of the topological states. In particular, band inversions are found to play completely different roles in forming the QAH effects for the different functionalized Sn monolayers. Introduction Quantum anomalous Hall (QAH) effects [1-3], which constitute one type of novel quantum topological state, are signaled by a quantized charge Hall conductance. Despite the insulating bulk states, the chiral edge states can conduct the charge current without dissipation, since the back scattering of electrons is completely suppressed in the system. The transport behaviors are also very robust against disorder and perturbations. Owing to these intriguing properties, the QAH effect has attracted considerable research interest recently, and the concept of the topological states has deepened our understanding of condensed matter physics and materials science. The QAH effect is generally identified and characterized by Chern numbers [4,5], as shown below. The materials possessing the QAH effect can thus be called Chern insulators. Unlike quantum Hall states, QAH states can appear without a strong magnetic field applied to the system. Internal magnetization is, however, essential to produce the QAH effect in a two-dimensional (2D) material. Thus, to achieve the effect, how to introduce long-range ferromagnetism in 2D material systems needs to be considered first, although special band structures and strong spin-orbit coupling (SOC) interactions are also of significance to the effect. Up to now, a series of theoretical predictions of QAH effects have been proposed in 2D material systems, including Hg 1−y Mn y Te quantum wells [6,7], magnetized Bi 2 Te 3 films [8], graphene [9-11], silicene [12-14], and other 2D thin films [15,16]. Among them, only the magnetized Bi 2 Te 3 family, concretely Cr-doped [17-20] or V-doped [21-23] (Bi, Sb) 2 Te 3 , has been reported to host the QAH effect in experiments. The experimental observation conditions of the QAH effect are also very harsh, requiring very high-quality crystals and extremely low temperatures (milli-Kelvin) [17-23], which seriously hinders the development of the field. Recently, a theoretical study has shown that halogen-half-passivated stanene/germanene, with one sublattice of Sn/Ge fully saturated by halogen atoms and the other not, can exhibit a QAH effect with a fairly large energy band gap of 0.34/0.06 eV and a high Curie temperature of 243/509 K [24]. The origin of the topological state was ascribed to band inversion of the Sn/Ge spin-down s-p z bands [24]. 
In experiments, Sn ultrathin films with a buckled honeycomb lattice have been fabricated with molecular beam epitaxy techniques [25-27], bringing the experimental observation of the QAH effect in 2D Sn films a step closer. Thus, these systems, especially stanene, should be excellent candidates for observing QAH effects experimentally at relatively high temperatures. Some important problems, however, exist in the system and need to be solved in advance. For stanene, if the passivating halogen atoms are replaced with hydrogen atoms, our initial calculations indicate that the hydrogen-half-passivated Sn monolayer also presents the QAH effect. In contrast, the hydrogen-fully-passivated stanene was reported to be a trivial insulator due to the absence of band inversion [28]. How the topological phase transition happens as the H concentration decreases is therefore worth studying. Additionally, the perfect half- or fully-passivated Sn monolayers mentioned above are generally hard to realize in experiments, since cases with more or fewer Sn atoms saturated may appear more easily. Under the same saturation concentration, different passivation patterns may also occur, which can lead to different topological electronic states. Therefore, it is of great theoretical and practical significance to explore the electronic states and topological behaviors of imperfectly passivated 2D systems. In this work, based on density functional theory (DFT), we investigate the electronic states and topological properties of Sn monolayers partly passivated with H atoms. We find that the H concentration, the induced magnetism, and the lattice symmetry together determine the electronic structures and topological properties of the functionalized Sn monolayer systems. The induced magnetic strength of the system is associated with the number difference of passivated Sn atoms in the two sublattices. QAH effects are found to appear easily in the systems with one of the sublattices fully passivated and the other not. Various forming mechanisms are proposed to understand the QAH effects achieved. For the systems with a C 3v lattice symmetry and strong magnetism, the nontrivial topology originates from the lifting of energy degeneracy due to the SOC. Band inversions appear in systems both with and without the C 3v symmetry; their roles in forming the topological states are, however, completely different. Models and methods A 2×2 stanene supercell, as shown in figure 1, is built to investigate the electronic and topological behaviors of the Sn monolayer partly saturated with H atoms. The two triangular sublattices in stanene are labeled as t (top) and b (bottom), respectively (see figures 1(b) and (d)). The numbers of saturated Sn atoms in the two sublattices are used to label the different saturated stanene systems. For example, t3b1 means that three Sn atoms in the top sublattice and one Sn atom in the bottom sublattice are passivated by H atoms (see figures 1(a) and (b)). In this work, we primarily focus on the cases with half or more than half of the Sn atoms in the system saturated with H atoms. The systems investigated in this work are given in table 1. Note that t3b1 is equivalent to t1b3 because of the symmetry of the top and bottom sublattices. The same holds for the other passivation schemes. 
The geometry optimization and electronic structures are calculated by using a first-principles method based on the framework of DFT, as implemented in the Vienna ab initio simulation package [29] with the projector augmented-wave method [30]. The exchange-correlation potential is described by the Perdew-Burke-Ernzerhof functional [31] within the generalized gradient approximation [32]. Plane waves with a kinetic energy cutoff of 500 eV are adopted as the basis set, and all atoms are allowed to relax along all directions until the force on each atom is less than 0.01 eV Å −1 . The SOC interaction is calculated self-consistently by solving the generalized Kohn-Sham equations in relativistic DFT [33]. To prevent interlayer interactions between adjacent slabs, a 20 Å vacuum buffer space is applied between the neighboring layers. A Γ-centered 9×9×1 Monkhorst-Pack grid is used for integration over the first Brillouin zone. Results and discussion For half-saturated stanene systems, i.e. with four H atoms in the 2×2 supercell, if the saturated Sn atoms all belong to the same sublattice (namely t4b0 or t0b4), the case is referred to as an ideal half-saturated system. The ideal halogen-half-passivated Sn monolayers were explored previously [24], and a QAH effect was predicted in those systems. For an ideal half-passivated stanene with H atoms, our calculations predict a QAH effect, similar to that in the ideal iodine-half-passivated Sn monolayers. Thus, for the half-passivated Sn sheets, we mainly consider the cases with both Sn sublattices passivated with H atoms, such as t3b1. The t2b2 and t3b3 systems are not included in table 1 due to the absence of magnetism in both cases. Thus, there are in total five schemes for the Sn monolayers with half or more than half of the Sn atoms passivated, which are explored in depth in this work (table 1). Due to the relative locations of the passivated Sn atoms and the translational symmetry of the lattice, three types of nonequivalent passivation patterns are found for both t3b1 and t3b2, as indicated by stars in table 1. For the remaining three schemes listed in table 1, there is only one nonequivalent passivation pattern. The lattice constant of the pristine stanene (without the H atoms) is first optimized. The obtained value of 4.68 Å is consistent with the previous result of 4.65 Å [28]. The pristine stanene is a slightly buckled honeycomb lattice with in-plane sp 2 hybridization and out-of-plane p z dangling bonds. The latter form the Dirac cone around the Fermi level (E F ) [34]. When the dangling bonds of one of the sublattices are all passivated by H or I atoms, very strong ferromagnetism is introduced in the system [24], one key condition to produce the QAH effect. The equilibrium lattice constants and the atomic positions for the systems listed in table 1 are all optimized. As an example, the lattice constant, buckling height between the two sublattices, and average Sn-Sn bond length for t3b1 shown in figures 1(a) and (b) are 9.45 Å, 0.77 Å, and 2.86 Å, respectively. In table 1, only t4b1 and t4b3 own the C 3v lattice symmetry, which can result in very exotic topological mechanisms, as discussed below. Since long-range magnetism is essential to produce the QAH effect, we first explore the magnetism in the stanene partly passivated with the H atoms. The magnetization strength of each system studied is given in the third row of table 1. 
Very interestingly, we find that the magnetic moment per supercell, induced by the H atoms, is exactly determined by the number difference of saturated Sn atoms in the top and bottom sublattices. For t3b1 and t3b2, the different passivation patterns all give the same magnetization strength. As shown in table 1, among the five studied systems, t4b1 has the strongest magnetism while t3b2 and t4b3 have the weakest magnetism. This trend can be ascribed to the counteracting effect of the dangling bonds from the top and bottom sublattices, which can also explain why the ideal half-passivated stanene (t4b0) has the largest magnetic moment per supercell (4 μ B ). As mentioned above, the last three systems (t4b1, t4b2, and t4b3) in table 1 all have only one nonequivalent passivation pattern, while the t3b1 and t3b2 systems have three types of nonequivalent passivation patterns, due to the complex relative locations of the passivated Sn atoms in the top and bottom sublattices in t3b1 and t3b2. The electronic structures of these systems with consideration of the possible nonequivalent passivation patterns are all calculated. Table 1. Several properties of the investigated systems: δ indicates the concentration of H atoms added to the stanene; 'Y' and 'N' indicate lattices with and without the C 3v symmetry, respectively; M and C denote the magnetic moment per supercell and the Chern number, respectively; Δ gives the band gaps obtained without/with the SOC; stars mark the passivation schemes with more than one passivation pattern. Scheme: t3b1*, t3b2*, t4b1, t4b2, t4b3; δ: 4/8, 5/8, 5/8, 6/8, 7/8; C 3v: N, N, Y, N, Y; M (μ B): 2, 1, 3, 2, 1; C: −1, −1, −1, −1, −1; Δ (meV, without/with SOC): 148/20, 278/120, 0/180, 64/121, 187/8. The results show that although magnetism is induced in all the t3b1/t3b2 systems with the different passivation patterns, only 33%/67% of them own the QAH effect with Chern numbers of −1. Among the three types of t3b1 passivation patterns, only one has the topologically nontrivial band gap. Its total energy is, however, the highest compared to those of the other two patterns. For t3b2, two passivation patterns own topologically nontrivial band gaps, with the lowest and highest total energies, respectively. Therefore, the QAH effect is relatively easier to observe experimentally in t3b2 than in t3b1. Note that the Chern numbers (C) and the band gaps (Δ) shown in table 1 for t3b1/t3b2 are only the results for the systems with the passivation patterns illustrated in figure 1. The last three systems (t4b1, t4b2, and t4b3) all possess the QAH effect, which will be discussed in detail. Thus, QAH effects appear easily in the systems with one of the sublattices fully passivated and the other not, which provides important guidance for the experimental observation of the QAH effect in the Sn monolayers passivated with H atoms. We now focus on the electronic structures and topological behaviors of the following four typical systems: t3b1, t4b1, t4b2, and t4b3. For t3b1, the passivation pattern displayed in figures 1(a) and (b) is chosen. The band structures of these systems without and with the SOC are displayed in figure 2. Obvious spin polarization is observed for each case without consideration of the SOC. Except for t4b1, all other systems (t3b1, t4b2, and t4b3) possess sizable band gaps at the E F before the SOC is taken into account. In t4b1, the lowest conduction band and the highest valence band touch at the Γ point exactly at the E F . 
When the SOC is considered, a very large band gap of 180 meV is opened (figure 2(f)), while for the other three systems (t3b1, t4b2, and t4b3), the band gap becomes small if the original band gap was large, and vice versa. For example, without the SOC, the band gap for t4b3 is the largest (187 meV); it becomes the smallest (8 meV) when the SOC is considered. Therefore, we can tentatively expect that the topological mechanisms for t4b1 and the other three systems in figure 2 must be different. Figure 3 illustrates the orbital components for the four systems. Explicitly, the bands near the E F are primarily contributed by Sn s and p xy orbitals. For the Sn atoms passivated with H atoms, their p z orbitals interact with the H s orbitals. Thus, bonding and antibonding states are formed by the two orbitals far away from the E F , at about −1.4 and 2.5 eV, respectively. These behaviors lead to the breakdown of the linear Dirac bands around the E F at the K point inherited from the pristine stanene. Since in each case some Sn atoms are not passivated, nearly flat bands contributed chiefly by the Sn p z orbitals are observed in figure 3 (the orange curves). The numbers of such flat bands for the spin-up and spin-down components are precisely equal to the number difference of the passivated Sn atoms in the two sublattices, namely the value of the magnetic moment per supercell in the system. Besides, the spin-up and spin-down flat bands are located below and above the E F , respectively, indicating that the magnetism of these Sn monolayers originates mainly from the dangling Sn p z orbitals, consistent with the above analysis. For t4b1 and t4b3 without the SOC, degenerate bands composed of Sn p x and p y orbitals (green and blue hollow curves) appear near the E F due to the C 3v lattice symmetry owned by the lattices. Concretely, in t4b1 (figure 3(b)), the spin-down unoccupied p x band (green hollow curve) touches the spin-down occupied p y band (blue hollow curve) exactly at E F . And in t4b3 (figure 3(d)), the spin-down p x and p y orbitals degenerate at the top of the valence bands. For t4b1, another exotic aspect is the spin-down Sn s band (red hollow curve) located below the E F , while for the other three systems in the upper panel of figure 3 it is located above the E F . The schematic diagrams of the band characteristics of these systems around the E F without the SOC are illustrated in figure 4 (the left panel for each case). The inverted band order of the spin-down s orbital in t4b1 can be ascribed to the very strong exchange interactions induced in the Sn s orbitals in this system, leading to the spin-down p x orbitals (green dotted curve) becoming unoccupied (figures 3(b) and 4(c)) due to the conservation of Sn electron numbers. When the SOC is considered, the degeneracy around the E F in t4b1 is lifted and one large band gap is opened. The trends for the other three systems are, however, different. For them, band inversions between the Sn spin-down s and p x /p y orbitals occur around the E F (figures 3(e), (g), and (h)), which are illustrated schematically in figures 4(b), (f), and (h). Since band inversions generally can induce topological phase transitions [6], nontrivial topological states may appear in these three systems. (The Sn spin-down s, p x , and p y bands are marked by the red, green, and blue hollow curves, respectively, in figure 3.) To identify the topological properties of the band gaps opened by SOC in the systems shown in figure 3, the Berry curvatures and the chiral edge states are calculated and displayed in figure 5. 
For QAH systems, conducting edge states are characterized by a quantized charge Hall conductance, σ xy = Ce 2 /h, where C is a topological invariant called the Chern number. It equals the number of chiral edge states and can be calculated by the formula C = (1/2π) ∫ BZ Ω(k) d 2 k, where Ω(k) is the momentum-space Berry curvature summed over all occupied valence bands [4,35,36], Ω(k) = −Σ n f n Σ n′≠n 2 Im[⟨ψ nk |v x |ψ n′k ⟩⟨ψ n′k |v y |ψ nk ⟩]/(ω n′ − ω n ) 2 . The Berry curvature distributions in figure 5 give a nonzero Chern number C=−1, manifesting that all these systems have QAH effects and each of them owns only one nontrivial edge state. By constructing maximally localized Wannier functions and surface Green's functions of the corresponding semi-infinite systems, we calculate the edge states of the four systems shown in figure 3. Topologically nontrivial edge states, which connect the conduction and valence bands, can be easily recognized in figures 5(e)-(h). As expected from the Chern number (C=−1) obtained, there is only one nontrivial edge state per edge for each of the systems considered. Some topologically trivial edge states, connecting two valence bands or two conduction bands, can also be seen in figures 5(e)-(h), owing to the existence of the Sn p z dangling bonds in the systems, as indicated by the flat bands in figure 3. As discussed above, the band structures of t3b1, t4b2, and t4b3 possess SOC-induced band inversions, as illustrated in figures 4(b), (f), and (h), while t4b1 just opens an energy gap when SOC is included (figure 4(d)). From the orbital component analyses shown in figure 3, we find that the band inversions all occur between the Sn spin-down s and p x /p y orbitals for the former three systems (figures 4(b), (f), and (h)). Such a band-inversion origin of the nontrivial topology is expected to be similar to the mechanism proposed by Bernevig, Hughes, and Zhang [37], in which the pseudospin of two relevant orbitals forms a skyrmion with unit topological charge in momentum space. For t4b1, the band inversion between the Sn spin-down s and p x orbitals has already occurred before the SOC is considered, similar to the trend in the iodine-half-passivated stanene in [24]. The effect of SOC is merely to lift the degeneracy of the p orbitals at the E F and open a topologically nontrivial band gap. Thus, the topological mechanism in t4b1 is completely different from those in the other systems shown in figure 4 and also in magnetized Bi 2 Te 3 films, where the band inversion plays an important role [8]. Due to the quadratic band dispersion near the Γ point, the QAH effect in t4b1 can be ascribed to the exotic two-meron structure of the pseudospin texture, as discussed in magnetized MoS 2 and Cu 2 S systems [38,39]. Thus, the topological origin is also not similar to that of Dirac-type bands, as proposed in the seminal works by Haldane [40] and Kane and Mele [41], where a pair of merons, each contributing 1/2 topological charge, is found at the K and K′ points in momentum space. Therefore, the QAH effect obtained in t4b1 is unique, caused by the strong magnetism as well as the C 3v symmetry owned by the lattice. Note that there is a band inversion of the Sn spin-down s orbital in t4b1, compared to the other systems in figures 4(a), (e), and (g); this inversion, however, does not directly lead to the nontrivial topological states in the system, but instead pushes the Sn p x orbital from the occupied to the unoccupied side. 
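In the paper, the Chern number is evaluated from first-principles wave functions via maximally localized Wannier functions. As a self-contained illustration of how the Berry-curvature formula above translates into a numerical Chern number, the sketch below applies the standard lattice (Fukui-Hatsugai-Suzuki) discretization to a generic two-band Chern-insulator toy model; the model Hamiltonian and the parameter m are hypothetical stand-ins, not the Sn monolayer Hamiltonian.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_h(kx, ky, m):
    # Generic two-band Chern-insulator toy model (Qi-Wu-Zhang type)
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def occupied_state(kx, ky, m):
    _, vecs = np.linalg.eigh(bloch_h(kx, ky, m))
    return vecs[:, 0]                      # eigenvector of the lower (occupied) band

def chern_number(m, n=40):
    ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    u = np.array([[occupied_state(kx, ky, m) for ky in ks] for kx in ks])
    total = 0.0
    for i in range(n):
        for j in range(n):
            u1, u2 = u[i, j], u[(i + 1) % n, j]
            u3, u4 = u[(i + 1) % n, (j + 1) % n], u[i, (j + 1) % n]
            # Gauge-invariant Berry flux through one plaquette of the k-mesh
            total += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3) *
                              np.vdot(u3, u4) * np.vdot(u4, u1))
    return total / (2.0 * np.pi)

# For 0 < |m| < 2 the occupied band of this toy model carries |C| = 1
print(round(chern_number(m=1.0)))
```

The sum of plaquette fluxes is quantized to an integer once the k-mesh is fine enough, which is the same statement as the integral formula quoted above.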
Besides the t4b1 system, the t4b3 case also possesses the C 3v rotation symmetry (table 1), guaranteeing the existence of the degenerate point of the Sn p x and p y orbitals at the Γ point without consideration of the SOC (figure 4(g)). The band degeneracy in t4b3 occurs at the top of the valence bands (figure 4(g)), different from the case in t4b1 (figure 4(c)), due to the absence of the band inversion of the Sn spin-down s orbital in t4b3. The SOC in t4b3 not only lifts the double degeneracy at the top of the valence bands, as in the case of t4b1, but also causes the band inversion of the Sn spin-down s and p y orbitals around the E F , as in t3b1 and t4b2 (figure 4(h)). It is interesting to explore whether the band inversion in t4b3 plays the same role as those in t3b1 and t4b2. To deeply explore the SOC effect on the topology in t4b3, we calculate the band structures of t4b3 with various SOC strengths. For direct comparison, the corresponding bands of t3b1 are also calculated. Figures 6(a) and (b) give the bands of t3b1 and t4b3, respectively, with SOC strengths of λ=0.6 λ 0 and λ 0 , where λ 0 indicates the real SOC strength of the corresponding system. As λ increases from 0 to λ 0 , the band gaps at the E F of both systems undergo a closing and reopening transition. For t3b1 with λ=0.6 λ 0 , the band inversion has not happened yet. The system is topologically trivial, as confirmed by the calculated Chern number (figure 6(a)). As the SOC is increased further, the band gap closes and then reopens, accompanied by a band inversion. As shown in figure 6(a) for λ=λ 0 , the system evolves into a topologically nontrivial state with C=−1. Thus, the band inversion in t3b1 is the most crucial factor to induce the topological state. The role of the band inversion in t4b3, however, is different from that in the case of t3b1. In figure 6(b) with λ=0.6 λ 0 , the p xy degeneracy at the top of the valence bands has been lifted while the band inversion has not yet appeared. The energy gap opened between the first (the blue curve) and the second (the green curve) valence bands is found to be nontrivial with C=−1. The corresponding Berry curvature distribution is displayed in figure 6(c). The band gap around the E F in the system is, however, still trivial with C=0 (see the left panel of figure 6(b)). When λ=λ 0 , the band inversion occurs and the band gap at the E F becomes topologically nontrivial. The Berry curvature distribution of the valence bands without the topmost valence band for t4b3 with λ=λ 0 (figure 6(d)) also gives C=−1. Hence, the topology of the QAH effect obtained in t4b3 essentially comes from the lifting of the p xy degenerate bands by the SOC, not from the band inversion of the Sn s and p y orbitals. The role of the band inversion in t4b3 is merely to extend the topologically nontrivial energy region and to make the band gap at the E F also present the QAH effect, distinct from the situation in t3b1 and t4b2. Therefore, various topological mechanisms are found in t3b1 (t4b2), t4b1, and t4b3, associated with the lattice symmetry, the induced magnetism strength, and the passivation concentration of the H atoms in the system. For t3b1 and t4b2, the band inversion, one classical topological mechanism, plays a key role in generating the topological state, similar to that of many other 2D topological systems [6,8,11,13,37]. The topological mechanism found in t4b1, however, is completely different from that of t3b1 and t4b2. 
The topology comes from the lifting of the Sn p x and p y band degeneracy around the E F due to the SOC interaction. The nontrivial band gap opened through this mechanism is usually very large (180 meV for t4b1). It is the atomic SOC rather than the Rashba SOC that opens the band gap. This mechanism is very exotic and may promote rapid development of topological states for future applications. The topological mechanism of t4b3 may be regarded as a combination of those of t3b1 (t4b2) and t4b1, since it owns the C 3v symmetry and a band inversion simultaneously. The role of the band inversion in t4b3, however, is only to extend the nontrivial energy region to the area around the E F , instead of producing the topology as in t3b1 (t4b2). With these three types of topological mechanisms, the topology of stanene partly passivated with H atoms can be understood well; they can also be applied to understand the topologies of other systems or to design new topological materials. Note that disordered arrangements of H or Sn atoms in the lattice may affect the band dispersion. If the disorder effect is not drastic, however, the topological behaviors predicted in the studied stanene systems should be preserved. In experiments, the Sn films are usually required to be fabricated on an insulating substrate, which is expected to keep the nontrivial features of the freestanding Sn films intact. For the stanene studied here, we find that the hydrogen-terminated SiC (111) surface is a suitable substrate. As shown in figures 7(a) and (b), the t4b1 sample is deposited on the SiC (111) substrate modeled by a slab of three atomic layers of SiC, with the bottom layer fixed to mimic a semi-infinite solid. To eliminate the dangling bonds, both sides of the SiC slab are hydrogenated. The lattice mismatch between the stanene sample and the substrate is about 1.95%. The relaxed distance between the two neighboring H planes at the interface of the system is about 2.47 Å, indicating that van der Waals bonds are formed between the sample and the substrate. The obtained band structures of t4b1 and t4b3 on the SiC (111) surfaces are displayed in figures 7(c)-(f), from which one can see that the Sn p z orbitals (the flat bands) move down in energy due to interactions with the substrate. The band trends around the E F are, however, kept well. For example, for t4b1 on the SiC substrate, the degeneracy of the Sn spin-down p x and p y orbitals at the E F is lifted after the SOC is included, similar to the case of t4b1 without a substrate. The band gap opened in t4b1 with the substrate is also topologically nontrivial. From the component analysis, the band inversion also occurs in t4b3 with the SiC substrate. Therefore, to experimentally observe the QAH effect in the stanene studied, the SiC (111) slab is a suitable substrate. Conclusion We theoretically investigated the electronic and topological properties of Sn monolayers passivated with H atoms. We find that 33% and 67% of the passivation patterns of t3b1 and t3b2, respectively, as well as all of the t4b1, t4b2, and t4b3 functionalized Sn monolayers, present the QAH effect. Generally, the systems with one of the sublattices fully passivated and the other not tend to present the topological states. Whether the lattice owns the C 3v rotation symmetry plays a key role in forming the topological states. For the systems possessing the C 3v symmetry (t4b1 and t4b3), the nontrivial topology essentially comes from the lifting of the p xy degenerate bands, due to the SOC. 
For the other systems, which lack the protection of the C 3v symmetry, the band inversions induced by the SOC are the origin of the topology. Notably, although a band inversion also appears in t4b3, its role is completely different from the conventional one in the other systems (t3b1 and t4b2). The hydrogen-terminated SiC (111) surface is found to be a suitable substrate on which to experimentally observe the QAH effect in the studied stanene.
6,581.4
2019-02-28T00:00:00.000
[ "Physics", "Materials Science" ]
Transmit Energy Efficiency of Two Cognitive Radar Platforms for Target Identification Cognitive radar (CRr) is a recent radar paradigm that can potentially help drive aerospace innovation forward. Two specific platforms of cognitive radar used for target identification are discussed. One uses sequential hypothesis testing (SHT) in the receiver processing and is referred to as SHT-CRr, and the other uses maximum a posteriori (MAP) processing and is referred to as MAP-CRr. Our main goal in this article is to make a practical comparison between SHT-CRr and MAP-CRr platforms in terms of transmission energy efficiency. Since the performance metric for the SHT-CRr is the average number of illuminations (ANI) and the performance metric for MAP-CRr is the percentage of correct decisions (\(P_{cd}\)), a direct comparison between the platforms is difficult to perform. In this work, we introduce a useful procedure that involves a metric called total transmit energy (TTE) given a fixed \(P_{cd}\) as a metric to measure the transmit energy efficiency of both platforms. Lower TTE means that the platform is more efficient in achieving a desired \(P_{cd}\). To facilitate a robust comparison, a transmit-adaptive waveform that consistently outperforms the pulsed waveform in terms of both \(P_{cd}\) and ANI is needed. We show that a certain adaptive waveform called the probability weighted energy signal-to-noise ratio-based (PWE-SNR) waveform outperforms the pulsed wideband waveform (i.e., flat frequency response) in terms of ANI and \(P_{cd}\) for all ranges of transmit waveform energy. We also note that the \(P_{cd}\) performance of SHT-CRr can be drastically different from the probability threshold (i.e., the probability value that is used to stop radar illumination for the purposes of classification), which is critically important for CRr system designers to realize. Indeed, this fact turns out to be key in accomplishing our goal to compare SHT-CRr and MAP-CRr in terms of transmit energy efficiency. Introduction The use of radar in aerospace engineering is very widespread and has a very long history. It is a futile task to cite all relevant works in various applications, but we will mention a few good ones for the novice and interested reader. For example, radar is used in navigation [1,2]. Of course, one major contribution of radar is in air traffic control [3,4]. Moreover, some planes (commercial or military) have radars used for safety and/or aviation, i.e., radars that warn pilots of other planes, weather, and/or targets [5]. Other radars installed in airborne applications are used for imaging, such as synthetic aperture radar (SAR) [6,7]. Others are used for remote sensing [8,9]. The list goes on and on. Our interest here is a radar application that is used for target identification. More specifically, we are interested in a closed-loop radar that is able to dynamically change its waveform for the purposes of target identification. Such a radar uses information extracted from previously received signals to adaptively modify its waveform to efficiently identify the present target in an identification scenario. This radar is an example of a closed-loop radar system, also known as cognitive radar (CRr) [10,11]. A knowledge-aided approach for CRr is presented in [12], and a knowledge-aided waveform and receiver filter design is presented in [13]. 
In [14], two types of cognitive radar (CRr) platforms were introduced for target identification with the use of adaptive transmit waveforms. These two platforms were extended to stochastic targets in [15] for the purposes of target classification, i.e., the radar is used to classify which target class a particular target belongs to. These platforms were further extended to the case of signal-dependent interference [16,17]. The two CRrs are different (from the receiver signal processing point of view) since each uses a different metric to measure the CRr's performance. The first type of CRr assumes a fixed probability threshold and finds the average number of illuminations (ANI) or transmissions to meet that threshold as a function of transmit waveform energy level (which can easily be translated to the power constraint). Sequential hypothesis testing (SHT) is used in the receiver, and as such, we label this radar as SHT-CRr. The second CRr does not use a probability threshold. Instead, it assumes a fixed number of transmissions. The metric used is the probability of correct identification or the percentage of correct decisions (P cd ). Here, maximum a posteriori (MAP) estimation is used to decide which target is present in a target identification scenario. As such, we call this platform MAP-CRr. Our goal in this article is to make a practical comparison between the SHT-CRr and MAP-CRr platforms. Because the performance metrics are different, it is difficult to make a direct comparison between the two. A recent and important push in electronic and aerospace systems is that a system or subsystem be energy efficient, also known as a green system or technology. Therefore, one of the more important contributions of this paper is to introduce a useful procedure and energy metric, such that the energy efficiency of both platforms can be quantified, and thus a comparison can be made as to which platform is more transmit-energy efficient. However, in most target recognition applications, P cd is an important requirement in practice. Thus, we will compare the two CRrs in terms of energy efficiency given a fixed P cd . In [14-16], various adaptive waveforms performed better in terms of ANI and P cd compared to the wideband waveform, but interestingly, not in all ranges of transmit energy. To facilitate a robust comparison, a waveform that consistently outperforms the classical pulsed wideband waveform as a function of transmit energy per pulse is needed. In [18], a certain probability-weighted energy (PWE) signal-to-noise ratio-based waveform was proposed. This PWE-SNR waveform showed good preliminary results in terms of P cd , but no results were shown for ANI, which we need for our comparison study. Therefore, to compare SHT-CRr and MAP-CRr, another contribution of this paper is to produce performance results in terms of both P cd and ANI for the PWE-SNR waveform and to show that it indeed performs better than the wideband waveform for all transmit energy levels. We also report in this work that the resulting P cd of SHT-CRr can be drastically different from the probability threshold used (as a function of transmit energy). This result is important to report, so that designers in the radar community may know at what energy or power levels the resulting P cd falls much below the probability threshold used. If we indeed require that P cd match (or even exceed) the probability threshold used, then we need the transmit energy levels (i.e., SNR range) in which this is true, such that we can meet this P 
cd requirement. This paper is organized as follows.Section 2 introduces the need, procedure and metric, such that two types of cognitive radar for target recognition may be compared.Section 3 provides a review of the two radars, which are named SHT-CRr and MAP-CRr.Section 3.1 introduces the notion of the matched illumination waveform, called the eigen-waveform.Section 3.2 frames the target recognition problem in terms of multiple hypothesis testing (MHT).Section 3.3 discusses how a transmit adaptive waveform can be formed.Section 3.4 discusses how initial priors are updated via past and current measurements.Section 3.5 discusses how SHT-CRr and MAP-CRr differ in terms of signal processing and performance metrics.Section 4 discusses the two metrics of interest: P cd and ANI.Section 5 shows the performance results and compares the radars in terms of the metric total transmit energy (TTE).Section 6 concludes the paper. Procedure and Metric to Compare MAP-CRr and SHT-CRr While it is clear in [14][15][16][17][18] that the two CRr platforms are better than conventional systems, it is not clear which one of the two CRr platforms is better and how to even compare them.This is because a direct comparison of their metrics is difficult since the performance metrics are inherently different.In a multiple target hypothesis testing problem, the MAP-CRr tries to identify the correct hypothesis after a fixed number of illuminations instead of using a probability threshold.The resulting performance metric is called the probability of correct identification or the percentage of correct decisions (P cd ).To generate performance results, we utilize Monte Carlo simulations to calculate the percentage of correct decisions, and thus, we will refer to the P cd metric for MAP-CRr.The SHT-CRr does not limit the number of illuminations.Instead, it uses a probability threshold to stop the transmissions.The random nature of the noise in the receiver makes the number of illuminations different from experiment to experiment, i.e., random.Therefore, the performance metric is the average number of illuminations (ANI).In a Monte Carlo target identification simulation, ANI refers to the mean number of transmissions it takes for a hypothesis probability to cross a given probability threshold (since there is a vast number of experiments used). The receiver nature of the two platforms may be different, but in the end, an important requirement (if not the most important) in target identification is the eventual P cd .In other words, we need not only find the ANI for SHT-CRr, but the actual P cd corresponding to the ANI.Recall that the SHT-CRr uses a probability threshold to stop illumination and makes a decision as to which target is present.Unfortunately, the probability threshold does not necessarily yield the P cd desired, as will be shown in this work.Thus, to make a fair comparison, we propose to compare the total transmit energy (TTE) spent on yielding that P cd .Total transmit energy is needed, because each platform uses multiple transmissions.In other words, the less total transmit energy spent in producing that P cd , the more efficient that type of radar is.For the MAP-CRr, TTE is the number of transmission times the energy level used (given a certain P cd , number of transmissions and waveform type).For the SHT-CRr, the "average" total transmit energy (TTE) is the corresponding ANI times the energy level per transmission pulse (again for a certain P cd ) given a fixed probability threshold. 
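Since TTE is simply the per-pulse transmit energy multiplied by the number of pulses used (the fixed NTR for MAP-CRr, the ANI for SHT-CRr), the bookkeeping can be captured in a few lines. The following is a minimal sketch, not the authors' code; the function names and the use of Python/NumPy are our own, and energies quoted in dB energy units must be converted to a linear scale before multiplying.

```python
import numpy as np

def db_to_linear(x_db):
    """Convert an energy quoted in dB energy units to a linear value."""
    return 10.0 ** (x_db / 10.0)

def tte_sht(ani, pulse_energy_db):
    """TTE for SHT-CRr: average number of illuminations (ANI) times the
    per-pulse transmit energy, returned in dB energy units."""
    return 10.0 * np.log10(ani * db_to_linear(pulse_energy_db))

def tte_map(ntr, pulse_energy_db):
    """TTE for MAP-CRr: the fixed number of transmissions (NTR) times the
    per-pulse transmit energy, returned in dB energy units."""
    return 10.0 * np.log10(ntr * db_to_linear(pulse_energy_db))

# The platform with the lower TTE at a comparable Pcd is the more
# transmit-energy-efficient one (see the procedure that follows).
```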
Unlike the number of illuminations or the probability threshold, it is difficult to fix P cd in a Monte Carlo simulation. Therefore, we propose the following procedure in order to find TTE and, thus, perform the energy efficiency comparison:

(1) Set up a Monte Carlo target identification simulation using SHT-CRr, given a probability threshold, and plot ANI vs. transmit energy.
(2) Calculate and plot the corresponding P cd vs. transmit energy for that experiment.
(3) Set up a Monte Carlo simulation using MAP-CRr to produce P cd vs. transmit energy plots for various numbers of transmissions.
(4) From the SHT-CRr P cd vs. transmit energy plot, pick a specific transmit energy level (starting, say, with a low energy level); note the corresponding P cd (to be used for comparison); note the resulting ANI; then calculate the SHT-CRr TTE for that P cd.
(5) Using the same energy level as in SHT-CRr, find the number of transmissions whose P cd is closest to the value noted above and calculate the MAP-CRr TTE.
(6) Decide that whichever radar has the lower TTE is the more efficient type.
(7) Repeat the process for medium and high energy levels to see whether one type is consistently more efficient than the other.

To illustrate the procedure, example results are analyzed in Section 5 (Comparing TTE).

Brief Review of the Two Types of CRr

Both SHT-CRr and MAP-CRr are used for the target recognition problem. They share common features, as shown in Figure 1. What makes a radar system cognitive is the closed-loop nature of the system. Both types of radar update the prior probabilities of the target hypotheses. These updated probabilities (calculated from the latest measurements) are passed from the receiver to the transmitter portion of the radar. Because the updating is Bayesian, the probability updates also incorporate prior measurements, which means that prior knowledge is retained, making the system cognitive. The updated probabilities are used to generate the next waveform that illuminates the radar scene, thereby closing the loop. Since the updated probabilities differ from one illumination to the next, the new waveform also differs, making the system truly transmit adaptive. To describe this closed-loop system in detail, we briefly review the signal processing models used in Figure 1.

Matched Waveform to a Target Response

For convenience, we use a discrete-time model with the sampling instant normalized such that T s = 1. It is sufficient to illustrate both types of CRr with a deterministic target response, because the CRr can be extended to other target types (deterministic or stochastic) by consulting [16]; our focus here is not on target types but on CRr types. First, we review the notion of a transmit waveform matched to an extended target response. If a target has a known response, there exists a waveform that maximizes the received signal energy (or power) out of a matched filter receiver given a transmit energy constraint [19]. In other words, this waveform also maximizes the received signal-to-noise ratio (SNR).
Let h be the complex-valued target response, and let x be an arbitrary complex-valued transmit waveform. Let the complex-valued vector w be the additive white Gaussian noise from the receiver hardware. Thus, the received signal plus noise is y = s + w, where s = h * x and (*) denotes convolution. For convenience, we write h = √E h h̄, where E h is the target response energy and h̄ is a unit-energy vector; likewise, x = √E x x̄, where E x is the transmit waveform energy and x̄ is a unit-energy vector. If we let H be the target response convolution matrix [14], then

y = Hx + w,

where w, being a complex-valued white Gaussian noise process, has covariance matrix σ²I; recall that σ² is the variance of one noise sample. Applying the matched filter to the received signal plus noise, i.e., forming (Hx)†y, the received energy due to the echo s = h * x is

E s = (Hx)†(Hx) = x†H†Hx,

where † denotes the conjugate transpose (Hermitian) operation. If we let R = H†H be the autocorrelation of the target convolution matrix, then the received energy due to the target echo for any transmit waveform is

E s = x†Rx = E x x̄†Rx̄.

Using the eigenvalue decomposition of R, a unit-energy eigenvector q with eigenvalue λ yields E s,λ = E x λ. Thus, we can maximize the received energy E s by choosing as our transmit waveform the eigenvector corresponding to the maximum eigenvalue, and the maximum received echo energy is

E s,max = E x λ max,

where the matched transmit waveform that achieves it is x = √E x q max, i.e., x̄ = q max, which is sometimes referred to as the eigen-waveform.

Multiple Hypothesis Testing

While it is useful to know that an eigen-waveform exists for a specific target response, our goal in this work is to determine which target is present from among M known alternatives (deterministic responses). Again, the extension to stochastic targets is straightforward via [15,16]. There are M hypotheses for the target channel, and each hypothesis is characterized by a target response and a prior probability of that hypothesis being true. Our goal is to identify the correct hypothesis as accurately as possible with a single or multiple energy-limited transmissions. We assume equal prior probabilities for each target (initially, when no transmission has been sent), but we show how other priors can be incorporated. A Bayesian representation of the channel is formulated in which the target hypotheses are denoted by H 1, H 2, ..., H M with corresponding prior probabilities P 1, P 2, ..., P M. The i-th hypothesis is characterized by a target response s i with corresponding target convolution matrix H i, i = 1, 2, ..., M. The recognition or identification hypotheses are

H i : y = H i x + w, i = 1, 2, ..., M.

With w ∼ CN(0, σ²I), the corresponding pdfs are

p(y | H i ) = (πσ²)^(−N) exp(−‖y − H i x‖²/σ²),

where N is the length of the received measurement.
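To make the eigen-waveform construction above concrete, the following is a minimal numerical sketch (ours, not the authors' implementation; NumPy/SciPy and the helper names are assumptions). It builds the convolution matrix H for a target response, forms R = H†H, and takes the dominant eigenvector as the unit-energy eigen-waveform.

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

def convolution_matrix(h, waveform_len):
    """Convolution matrix H such that H @ x equals np.convolve(h, x)."""
    col = np.concatenate([h, np.zeros(waveform_len - 1, dtype=complex)])
    row = np.zeros(waveform_len, dtype=complex)
    row[0] = h[0]
    return toeplitz(col, row)

def eigen_waveform(h, waveform_len, E_x):
    """Eigen-waveform scaled to the transmit energy constraint E_x.
    Returns the waveform and the largest eigenvalue of R = H'H, so the
    maximum received echo energy is E_x times that eigenvalue."""
    H = convolution_matrix(h, waveform_len)
    R = H.conj().T @ H
    lam, Q = eigh(R)              # Hermitian eigendecomposition, eigenvalues ascending
    q_max = Q[:, -1]              # unit-energy eigenvector of the largest eigenvalue
    return np.sqrt(E_x) * q_max, lam[-1]
```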
Transmit Adaptive Waveform

In the multiple hypothesis testing (MHT) identification problem described above, the radar tries to determine which target is present among the alternatives. In other words, we cannot simply use one of the eigen-waveforms, since we do not know a priori which target is present. Various adaptive waveforms were used in our previous works [14][15][16], and most of them performed well compared to simply using a (non-adaptive) pulsed wideband waveform. It was difficult to ascertain which specific waveform performed best, in terms of both ANI and P cd, since some waveforms performed well at high SNR but not necessarily at low SNR, while others performed well at low SNR but not necessarily at high SNR. Furthermore, a waveform scheme may perform well in terms of P cd, but not necessarily ANI. In this work, we need a waveform that consistently performs well in terms of both P cd and ANI for all transmit energy constraints, and we propose one. When the target alternatives have stochastic responses, a particular proposed adaptive waveform is based on scaling each eigen-waveform matched to each target alternative and summing them while meeting the transmit energy constraint [18]. This waveform was called the probability-weighted energy (PWE) SNR-based waveform, since the scaling is based on the prior (or updated) probabilities of the hypotheses corresponding to the target alternatives. In [18], preliminary results suggested that the PWE-SNR waveform performs consistently well in terms of P cd against the wideband waveform for all transmit energy constraints. However, that paper did not address how the PWE-SNR waveform performs in terms of ANI, which we need to accomplish our goal of comparing the two radar platforms. Thus, we have extended our simulations and show that it also performs well in terms of ANI versus the wideband waveform for all transmit energy levels. As such, we use this waveform in comparing how SHT-CRr and MAP-CRr perform. Initially, the waveform is formed via Equation (8), where q m is the unit-energy eigen-waveform corresponding to the m-th target hypothesis and P m is the initial probability. Since the waveform in Equation (8) generally does not have unit energy, we meet the transmit energy constraint via Equation (9), where √E xt normalizes the waveform of Equation (8) (whose energy is E xt) to unit energy, and the scaling by √E x ensures that the transmit energy constraint is met.
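Since Equations (8) and (9) are not reproduced here, the sketch below only assumes the weighting described in the text: each unit-energy eigen-waveform is scaled by its (prior or updated) hypothesis probability, the scaled waveforms are summed, and the result is renormalized to the transmit energy constraint. The linear weighting by P m is our reading of the description, not a quotation of Equation (8).

```python
import numpy as np

def pwe_snr_waveform(eigen_waveforms, probabilities, E_x):
    """PWE-SNR transmit waveform sketch: probability-scaled sum of the
    unit-energy eigen-waveforms, renormalized so the transmitted waveform
    has energy E_x (the role of Equation (9))."""
    x_t = sum(P_m * q_m for P_m, q_m in zip(probabilities, eigen_waveforms))
    E_xt = np.vdot(x_t, x_t).real          # energy of the unnormalized sum
    return np.sqrt(E_x / E_xt) * x_t
```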
Probability Updating

One of the components that makes a radar cognitive is its ability to extract and use knowledge from previously received measurements. For the CRr described in our previous works, this takes the form of updated probabilities for all of the target hypotheses via the Bayesian framework. The updated probability after the first transmission, for any of the hypotheses in Equation (7), is

P m,1 = P m p(y 1 | H m ) / Σ j P j p(y 1 | H j ),    (10)

where the "1" in P m,1 signifies the updated probability after the first transmission is received and processed by the receiver. By total probability, the sum of the updated probabilities is one, and thus the denominator in Equation (10) may be replaced by a scaling that ensures the probabilities sum to one. In other words, the updated probability after any number of transmissions k + 1 is

P m,k+1 = β P m,k p(y k+1 | H m ),    (11)

where β ensures unit total probability. Note that the updated probability in Equation (11) depends on the latest measurement y k+1, as it should. However, it also depends on the prior update P m,k, which in turn depended on the prior measurement y k. In other words, the earlier probability updates are retained and still used in the latest probability update. The next, (k + 2)-th, transmit waveform is then formed from the updated probabilities P m,k+1 as given by Equation (12), where Equation (9) is needed to meet the transmit energy constraint.

MAP-CRr and SHT-CRr

Looking at Figure 1, we finally arrive at the point where we can differentiate MAP-CRr and SHT-CRr in terms of how transmission is terminated and how the present target is decided. The MAP-CRr is used when the number of transmissions is constrained. For example, if the number of transmissions is constrained to k + 1, then the previously updated probabilities are updated once more after processing the (k + 1)-th received signal, as dictated by Equation (11). Then the radar makes a decision: the receiver looks at all of the latest probabilities and notes the largest one. The receiver decides that the target present (whether true or not) corresponds to the hypothesis with that highest updated probability, which leads to the name MAP-CRr.

Another way to operate a CRr is to leave the number of transmissions unconstrained. To decide which target is present, the CRr uses a probability threshold which, when met by one of the updated hypothesis probabilities, triggers the CRr to stop transmitting. For example, the desired probability threshold could be 0.9, and the radar does not stop transmitting until this threshold is crossed by one of the probabilities updated via Equation (11), as shown in Figure 1. The radar decides that the hypothesis whose probability crosses the threshold is the target present (whether true or not). We name this radar SHT-CRr.

P cd and ANI

Our focus here is not to design more adaptive waveform schemes. As mentioned before, our goal is to compare the two types of cognitive radar used for target identification. In order to compare SHT-CRr and MAP-CRr, we do need waveforms to illuminate the target scene. As mentioned before, we use the PWE-SNR waveform as well as the wideband waveform (to verify that the PWE-SNR waveform consistently outperforms the wideband waveform in terms of P cd and ANI for all transmit energy levels).
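Before turning to the simulation results, a minimal sketch of the probability update and the two decision rules just described is given below (our illustration, with assumed names; the `echoes` argument holds the noiseless echoes H m x for the current waveform, and the constant factor of the complex Gaussian pdf is dropped because it cancels in the normalization; a practical implementation would work with log-likelihoods to avoid underflow).

```python
import numpy as np

def likelihood(y, s_m, sigma2):
    """Unnormalized complex Gaussian likelihood p(y | H_m) for y = s_m + w,
    with w ~ CN(0, sigma2 * I)."""
    r = y - s_m
    return np.exp(-np.vdot(r, r).real / sigma2)

def bayes_update(priors, y, echoes, sigma2):
    """One step of Equation (11): weight each current probability by the
    latest likelihood, then renormalize so the probabilities sum to one."""
    post = np.array([P * likelihood(y, s, sigma2) for P, s in zip(priors, echoes)])
    return post / post.sum()

def sht_decision(probs, threshold=0.9):
    """SHT-CRr stopping rule: return the decided hypothesis index once any
    updated probability crosses the threshold, otherwise None (keep transmitting)."""
    return int(np.argmax(probs)) if probs.max() >= threshold else None

def map_decision(probs):
    """MAP-CRr rule: after the fixed number of transmissions, declare the
    hypothesis with the largest updated probability."""
    return int(np.argmax(probs))
```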
Let us briefly describe the CRr simulation setup that produces the P cd and ANI needed for the CRr comparison. We set up a target recognition experiment in which a deterministic extended target is assumed to be present from four known alternatives, i.e., M = 4. For the sake of completeness, so that this research can be reproduced by an interested reader, we include the four target responses used in our experiments. Although not necessary, the target responses were generated to have unit energy. Recall that our targets are complex valued; thus, the real and imaginary responses are shown in Figure 2. Various target responses can be used in the Monte Carlo experiment. Here, the targets were generated to be resonant in certain frequency bands (to simulate extended targets exhibiting such properties). The corresponding magnitude spectra are shown in Figure 3, which illustrates the resonant nature of these targets. The CRr forms the PWE-SNR waveform by scaling the four SNR-based waveforms (i.e., the eigen-waveforms) matched to those four targets. The scaling is a function of the four prior probabilities, as dictated by Equations (10)-(12). The waveform-target convolution echo is added to Gaussian noise in the receiver. The measurement is processed and used to update the prior probabilities via Bayes' theorem, Equation (10). For the SHT-CRr, if the probability threshold is not met, then a new PWE-SNR waveform is adaptively designed as dictated by Equations (8) and (9) and transmitted. For the MAP-CRr, if the fixed number of transmissions has not been reached, then transmission continues with a new waveform, again as dictated by Equations (8) and (9). For the results shown in this paper, each ANI or P cd curve is the result of 100,000 Monte Carlo trials in which a target is randomly chosen from the alternatives.

First, let us look at the performance of the PWE-SNR waveform and the pulsed wideband waveform in terms of ANI. In Figure 4, we show the resulting ANI vs. transmit energy for the PWE-SNR and wideband waveforms with SHT-CRr using a probability threshold of 0.90. Notice that the PWE-SNR waveform (labeled "PWE") outperforms the wideband waveform (labeled "WI") for all transmit energy values. In other words, the ANI needed to support the 0.90 probability threshold is smaller for the PWE waveform than for the wideband waveform. In all of our experiments, the PWE waveform consistently performed better than the wideband waveform in terms of ANI across all ranges of transmit energy and for the various probability thresholds used. For brevity, we include only the results for a probability threshold of 0.90 here, since these particular results are used when we finally compare SHT-CRr and MAP-CRr, which is the main objective of this paper.

With the same simulation setup (using MAP-CRr this time), we show the resulting P cd vs. transmit energy for various numbers of transmissions in Figure 5 for the PWE and wideband waveforms. Notice that the PWE waveform consistently outperforms the wideband waveform for all ranges of energy given a fixed number of transmissions (labeled "NTR").

Figure 5. P cd vs. transmit energy in dB units for various numbers of transmissions (labeled "NTR") with MAP-CRr using PWE-SNR (labeled "PWE") and wideband (labeled "WI") waveforms.
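The Monte Carlo setup described at the start of this section can be sketched by tying together the hypothetical helpers from the previous subsections (convolution_matrix, eigen_waveform, pwe_snr_waveform, bayes_update, sht_decision, map_decision). This is an illustrative single SHT-CRr trial, not the code used to generate the figures; the safety cap on the number of pulses is our own assumption.

```python
import numpy as np

def sht_trial(targets, E_x, sigma2, threshold=0.9, max_pulses=1000):
    """One SHT-CRr Monte Carlo trial: returns (number of illuminations,
    whether the declared target matches the true one). Averaging the first
    output over trials estimates ANI; averaging the second estimates Pcd."""
    M, n = len(targets), len(targets[0])
    true_idx = np.random.randint(M)
    H = [convolution_matrix(h, n) for h in targets]
    eigs = [eigen_waveform(h, n, 1.0)[0] for h in targets]   # unit-energy eigen-waveforms
    probs = np.full(M, 1.0 / M)                              # equal initial priors
    for k in range(1, max_pulses + 1):
        x = pwe_snr_waveform(eigs, probs, E_x)
        noise = np.sqrt(sigma2 / 2) * (np.random.randn(2 * n - 1)
                                       + 1j * np.random.randn(2 * n - 1))
        y = H[true_idx] @ x + noise
        probs = bayes_update(probs, y, [Hm @ x for Hm in H], sigma2)
        decision = sht_decision(probs, threshold)
        if decision is not None:
            return k, decision == true_idx
    return max_pulses, map_decision(probs) == true_idx       # cap reached: fall back to MAP
```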
Comparing Total Transmit Energy

For SHT-CRr, it would seem that if we calculate the P cd of a Monte Carlo experiment, it would be close to, if not equal to, the probability threshold used. After all, this is the reason a probability threshold is used. While we were aware that the probability threshold may differ from the eventual P cd, this fact was never explored in our previous works [12][13][14][16]. We found in this work that the probability threshold can be drastically different from the eventual P cd. This is because, while SHT-CRr does ensure that transmission stops when the probability update crosses the threshold, it does not guarantee that a correct decision is made. In other words, although the updated probability of one of the target hypotheses crosses the probability threshold, the target actually present in the experiment does not necessarily correspond to that hypothesis. It turns out that the discrepancy is a function of the transmit energy. Indeed, the P cd corresponding to the results in Figure 4 is shown in Figure 6b, while Figure 4 is repeated in Figure 6a for convenience. We notice that the P cd increases with increasing transmit energy, rather than remaining constant at the 0.90 probability threshold. Conversely, the lower the transmit energy, the lower the P cd, despite the ANI being higher. This is important to recognize: limiting the transmit energy per pulse while leaving the number of transmissions unlimited does not actually raise the P cd, which is a profound observation (at least for the two waveforms used here and the various other waveforms we have tested so far). It is not until high transmit energy is used (here, between 0 and 5 dB energy units) that both the PWE-SNR and wideband waveforms actually meet the 0.90 probability threshold. This is a critical result for system designers to realize.

Recall that SHT-CRr and MAP-CRr have different performance metrics (ANI for SHT-CRr and P cd for MAP-CRr). Thus, in order to compare SHT-CRr and MAP-CRr, we formulate a metric that multiplies the number of transmissions (or the ANI) by the transmit energy level per pulse, producing the total transmit energy (TTE) for a given P cd. We mentioned that fixing a P cd is difficult to implement in the simulation; therefore, we recall the procedure discussed in Section 2 that allows us to perform the comparison. We use both the PWE-SNR and wideband waveforms to illustrate the comparison.
Comparison with the PWE Waveform

From Figure 6, we see that at −15 dB energy units (the low-energy case), ANI ≈ 73.4 and P cd ≈ 0.34 using SHT-CRr. With NTR = 2 at −15 dB energy units in Figure 5 using MAP-CRr, P cd ≈ 0.39. In other words, the same performance is easily met by MAP-CRr by limiting the transmissions to a fixed number of two (instead of an ANI of 73.4). Since we defined TTE to be the number of transmissions times the transmit energy, for SHT-CRr to meet P cd ≈ 0.34, TTE = 3.7 dB energy units is needed. Using MAP-CRr with NTR = 2, TTE = −12 dB energy units (a difference of 15.7 dB), which means MAP-CRr is more energy efficient than SHT-CRr (with threshold 0.90) at low transmit energies. At −5 dB energy units (medium energy), ANI ≈ 8.0 and P cd ≈ 0.59 using SHT-CRr. With NTR = 2 at −5 dB energy using MAP-CRr, P cd = 0.74 (Figure 5). Thus, the performance is again exceeded by MAP-CRr by limiting the transmissions to two. Using SHT-CRr, TTE = 4 dB energy units, and using MAP-CRr, TTE = −2 dB energy units, for a difference of 6 dB. In other words, the performance difference begins to lessen as the transmit energy is increased.

Figure 6. (a) ANI vs. transmit energy (E s ) in dB for PWE and wideband waveforms and (b) P cd vs. transmit energy (E s ) in dB for PWE and wideband waveforms, using a probability threshold of 0.90 with the SHT-CRr platform.

Comparison with the Wideband Waveform

Mirroring the analysis above, from Figure 6 we see that at −15 dB energy units (the low-energy case), ANI ≈ 106.5 and P cd ≈ 0.33 using SHT-CRr with the wideband waveform (labeled "WI"). With NTR = 2 at −15 dB energy units in Figure 5 using MAP-CRr, P cd ≈ 0.33. In other words, the same performance is easily met by MAP-CRr by limiting the transmissions to a fixed number of two (instead of an ANI of 106.5). Thus, the total transmit energy for SHT-CRr to meet P cd ≈ 0.33 is TTE = 5.5 dB energy units (more than the 3.7 dB needed for the PWE waveform, as expected, since PWE actually changes the waveform every illumination, making for a true feedback system). As before, with MAP-CRr and NTR = 2, TTE = −12 dB energy units (a difference of 17.5 dB), which again shows that MAP-CRr is more energy efficient than SHT-CRr (using threshold 0.90) at low transmit energies. At −5 dB energy units (medium energy), ANI ≈ 12.0 and P cd ≈ 0.52 using SHT-CRr. With NTR = 4 at −5 dB energy using MAP-CRr, P cd = 0.6. Thus, the performance is again exceeded by MAP-CRr by limiting the transmissions to four instead of an ANI of twelve. Using SHT-CRr, TTE = 3.8 dB energy units, and using MAP-CRr, TTE = 1 dB energy unit, for a difference of 2.8 dB. Just as observed with the PWE waveform, the performance difference begins to lessen as the transmit energy is increased. At 5 dB energy units (high energy), ANI ≈ 1.42 and P cd ≈ 0.95 using SHT-CRr. In other words, it takes only a little over one transmission on average to achieve a P cd of 0.95. Again, since this is a non-adaptive waveform, its P cd performance is lower than that of PWE, which is close to one. With NTR = 1 at 5 dB energy units using MAP-CRr, P cd ≈ 0.95, which shows that the MAP-CRr TTE is just slightly lower than the SHT-CRr TTE. In other words, even with the wideband waveform, it is clear that from very low to high transmit energy, the total transmit energy is lower for MAP-CRr than for SHT-CRr, which makes it more efficient.
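As a sanity check, the PWE low-energy figures quoted above can be reproduced with the TTE helpers sketched in Section 2 (again an illustration, not the authors' code):

```python
# Per-pulse energy of -15 dB energy units, SHT-CRr ANI of 73.4, MAP-CRr NTR of 2:
print(round(tte_sht(ani=73.4, pulse_energy_db=-15.0), 1))   # -> 3.7 dB energy units
print(round(tte_map(ntr=2, pulse_energy_db=-15.0), 1))      # -> -12.0 dB energy units
# A gap of about 15.7 dB, matching the difference reported above for the PWE waveform.
```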
It should be mentioned that we also used other reasonable probability thresholds (e.g., 0.95) in our Monte Carlo experiments. In the interest of brevity, we simply report that the TTE is lower (better) for MAP-CRr than for SHT-CRr with both waveforms, with a gap that lessens as the transmit energy is increased. In other words, the same conclusions hold despite changing the probability threshold.

Conclusions

In this paper, we set out to compare two CRr platforms for target recognition, called SHT-CRr and MAP-CRr, in terms of transmit energy efficiency. A direct comparison was difficult, since the performance metrics are different: ANI for SHT-CRr and P cd for MAP-CRr. In target identification, however, the probability of correct classification is one of the utmost requirements. Thus, we looked at the eventual SHT-CRr P cd. We reported that the P cd performance of SHT-CRr can be drastically different from the probability threshold used.

We proposed a procedure for comparing the two radars. The procedure essentially finds comparable P cd values from the Monte Carlo results of both radars: we find two P cd values that are as close as possible and then compare the metric called total transmit energy (TTE) for those P cd values. Since multiple transmissions are used, TTE is the total effective energy spent. We needed a waveform that consistently outperforms the wideband waveform in terms of P cd and ANI for all possible transmit energy levels, and we showed that the PWE-SNR waveform does so for all ranges of transmit waveform energy. We used both waveforms (PWE-SNR and wideband) to facilitate the comparison of SHT-CRr and MAP-CRr. It is shown that from very low to high transmit energy, the TTE for MAP-CRr is lower than for SHT-CRr (meaning greater energy efficiency), but the efficiency difference lessens as the transmit energy is increased.

Author Contributions

Dr. Ric Romero authored this article. The paper was based on the Master's thesis work (in Electrical Engineering) of his student LT Emmanouil Mourtzakis of the Hellenic Navy at the Naval Postgraduate School in Monterey, CA.

Figure 1. Block diagram of sequential hypothesis testing (SHT)-cognitive radar (CRr) and MAP-CRr for the target recognition or identification application.

Figure 2. Extended target responses used in the Monte Carlo target identification experiments (real and imaginary portions).

Figure 3. Magnitude spectra of the four target responses showing resonant bands.
Spatial production distribution, economic viability and value chain features of teff in Ethiopia: Systematic review

Abstract

Teff is the most preferred and most commercialized cereal crop in Ethiopia, yet it remains one of the underutilized crops. It is nutrient-dense and well-suited to Ethiopia's growing conditions, but little has been invested to tap into its domestic and international markets. This review focuses on the regional distribution of teff production, its economic significance, and the teff value chain in Ethiopia. The country has an opportunity in teff specialization, and value-added products are likely to generate more income, reduce poverty, and sustain growth. However, Ethiopia has been unable to overcome the constraints and seize the opportunities associated with the value chain and its components, mainly because teff production and value addition are insufficient and generally rely on conventional practices, the market is largely restricted to domestic trade, and the government imposes an export ban to limit upward pressure on domestic grain prices and to protect local food security. In a nutshell, value chain development aimed at stimulating economic growth and increasing competitiveness is therefore a vital issue. To take advantage of growing domestic and international demand for teff, the domestic teff industry must invest heavily in improving production methods and in opening up and expanding its international market, to secure teff's status as a global superfood and a contributor to global food security gains. This requires simultaneous investments and policy support networks coordinated with all relevant stakeholders.

PUBLIC INTEREST STATEMENT

Teff is one of the staple and most important cereal crops in Ethiopia. It is relatively risk-free, resistant to drought, water-logging, and common pests and diseases, and favored by a large number of local smallholder farmers. It is produced both for home consumption and as a prominent source of cash income. Locally, teff is used to prepare foods such as injera (a staple fermented flatbread), tella (a local beer), and bread. Injera is the most common food in Ethiopian dishes. Consumer preference for teff has increased with the expansion of urbanization and the rise of health-conscious consumers, because teff is nutritionally rich and high in carbohydrates. It is gluten-free and can easily be tolerated by patients suffering from celiac disease. However, teff's economic potential in Ethiopia is underexploited because of low productivity resulting from the lack of improved seed varieties, infrastructure, and large-scale processing and purchasing to capture economies of scale.

Introduction and background

Agriculture remains a key to Africa's future (Tadele, 2021). In Ethiopia, cereal crop production and marketing are the main means of livelihood for millions of smallholder households, and teff (Eragrostis tef) is the most important cereal grain, with its origin thought to be Ethiopia. It emerged as a grain crop for human consumption between 4000 B.C. and 1000 B.C. (Getahun, 2018). As the most preferred cereal crop, especially in urban areas, teff fetches a relatively high price in the market, making it an attractive crop for farmers. Marked as one of the most recent superfoods of the 21st century, like the old Andean grain quinoa, its worldwide acceptance is rising quickly (Collyns, 2013). In Ethiopia, teff is a major staple food. It is the most important crop in terms of cultivation area and production value.
Ethiopia is not only the biggest teff-producing nation but also the only nation to have adopted teff as a staple crop. Teff contains a high nutritive value and has unique dietary benefits due to its being gluten-free and is typically preferred by healthconscious consumers. Because of its tiny grain size, the word teff likely originated from the Amharic word "tefa" which means lost, which is difficult to find once it is dropped while other evidence states that it was derived from the Arabic word tahf, a name given to a similar wild plant used by Seimites of south Arabia during the time of food insecurity (Abraham et al., 2018). Now, Ethiopia is the largest teff producing country, and the only country to have adopted teff as a staple crop and the country produces more than 90% of the teff in the world (Anadolu Agency, 2017). Teff is the only cereal crop the country has a comparative trade advantage. According to "FAO" (2018), 6.5 million smallholder farmers grow teff, and it is indispensable to the livelihoods of many Ethiopians. Today, teff fills in as a staple food for more than 50 million individuals in the Horn of Africa (Tadele and Assefa, 2012). Teff is a significant staple food crop in Ethiopia, generally used to get prepare "enjera 1 ", the primary public dish. It is one of the main yields for farm income and food security 2 in Ethiopia. It has high-income elasticity, mirroring that Teff demand rises when income rises (Berhane et al., 2011). As urbanization is expanding and earnings are expected to rise in Ethiopia (as appeared by a national household survey). Notwithstanding being a staple food for some people in Ethiopia and Eritrea for quite a long time, teff has just picked up prominence as a food crop in different parts of the world as of late (Araya et al., 2010;Gressel, 2008;Teferra et al., 2000). Despite teff is produced by millions of producers and the largest production volume, Ethiopia's significant poverty, and food insecurity, along with the fact that agriculture is the primary source of income for the vast majority of Ethiopians, make agricultural transformation a critical development aims for the country. Increased production of teff, a calorie-and nutrient-dense but lowyielding staple, is one prospective improvement . Moreover, the country is not capitalizing on its crop in the international market (FAO, 2015). On the other side, other countries are actively involving in teff production and marketing to capture its global rising markets. This is due to the teff production and value chain largely relies on traditional practices, and the teff market is limited by the government's export ban. Instead, other countries such as USA are increasingly participating in the teff market (Lee, 2018). Tefera,M.M.(2011). Land use/land-cover dynamics in Nonno District, Central Ethiopia. Journal of Sustainable Development in Africa 13, 123-141. One of the pressing issues with this underutilized crop of teff is that, its inadequately recorded and frequently ignored by mainstream research because of way that they are not economically significant in the worldwide market (Naylor et al., 2004). Agricultural research has usually centered around major crops, explicitly, maize, wheat, and rice, with generally little consideration put resources into orphan crops, particularly among researchers in developed nations (Ji et al., 2013). Teff research has been initiated in Ethiopia as early as the late 1950s. 
The research programs focused on breeding teff varieties to enhance production while not much emphasis is given to the value chain analysis and its economic outcomes. Likewise, it is well adapted to the growing conditions in Ethiopia, but little has been invested to improve the crop's productivity or to expand domestic or international markets. Our main concern and initiative on review on value chain, economic viability and its spatial distribution of teff are motivated by the fact that researchers have predicted that this commodity will become a new super-crop, increasing demand on a global scale (Crymes, 2015). Teff may be the next super-grain, and Injera may be the next super-food worldwide . In this view, we emphasize the economic viability of teff and its value chain that can help familiarize with the products feature and the preferences of producers and consumers towards adding value on teff for its economic significance. Hence, this review gives an overview of the foundation and teff production potential, challenges, and overall economic prospective of teff in Ethiopia and demand for future innovative research into the world's tiniest grain, teff; that is right now riding on the gluten-free wave as the next super grain in the market. Specifically, to achieve the following objectives • To review production status and spatial distribution of teff in Ethiopia • To review economic and multiple roles of teff along in the value chain in Ethiopia • To review supply gap and demand simulation of teff in Ethiopia Review Methodology A comprehensive review was carried out of empirical literature on the theories and empirical findings applied for teff production, supply chain, demand, and related actors. It has been used both temporal and spatial dimensions that able to filter information's focused on recent works that reflect countrywide verdicts. Search engine used The systematic review was carried out based on empirical findings. Both temporal and spatial dimensions by filtering information focusing on recent works that reflect countrywide verdicts were used. The study uses an integrative concept-centric technique that relies on the analysis of current literature and deductive logical reasoning to generate a new comprehensive scientific understanding about a topic that can be informative. The information offered by google using major indexers of Web of Science and Scopus and many others typing basic words like "teff, teff value chain, Economics of teff, teff supply chain in Ethiopia, teff demand" etc. More than 105 academic journals, books, reports, proceedings, and thesis work, and international agencies were browsed and around 75 materials were filtered specifically to this title by using keywords fit for purpose of this review study. Different inclusion and exclusion criteria were applied and filtered pertinent to this study. Such as focus on recent studies, and written only English language, focus on Ethiopia, original studies, peer-reviewed, and grey literature. Teff production and spatial distribution Ethiopia has diverse agro-ecology zones; such as from extremely lowland up to highland throughout all regions. As can be seen from Figure 1; Oromia and Amhara regions represent the largest teff producing regions; these accounted for about 87.8% of the national teff production volume and 85.5% of the area cultivated during the 2010/2011 cropping season. The third-largest teff producing region in the Southern Nations; Nationalities; and Peoples' (SNNP) region . 
Teff accounted for about 24% of the nationwide grain-cultivated area, and nearly half of smallholder farmers grew it between 2004 and 2014 (Central Statistical Agency, 2017; F. N. Bachewe et al., 2015). Hence, it is the most important cereal in Ethiopia in terms of agricultural land use and total value. Teff is the most important crop by both volume of production and area planted, and the second most important cash crop after coffee. The economic value of teff is roughly equal to that of the three other main cereals combined (wheat, sorghum, and maize; Abeje et al., 2019). The crop is critical for income and for food and nutrition security across the country, and it is grown by 6.5 million resource-poor smallholder farmers (FAO, 2018). This implies that 43% of all Ethiopian farmers grow teff. This sector is therefore the most important in Ethiopia's agricultural economy, accounting for 72% of all cultivated land. There is significant variation in teff production across Ethiopia. However, even though production volumes vary spatially across the regions of the country, there is no observable difference in productivity across regions. This is because the smallholder production systems and approaches used across the regions are similar, so there is no substantial difference in teff productivity in Ethiopia, as can be seen from Figure 1. Moreover, teff production in the country is spatially distributed; looking in more detail at the regional variation in teff production in Figure 2, one can observe that within each region there are areas with particular potential for teff production. This can be linked to agro-ecological advantages and to the land allocated to teff. Teff is adapted to a wide range of environments and is presently cultivated under diverse agro-climatic conditions. Due to this spatial heterogeneity, there is great variation in teff production within these growing areas. In particular, Amhara and Oromia are the two major regions; collectively, they account for 85.5% of the teff area and 87.8% of the teff production (Lee, 2018). Although teff is most commonly grown in the Ethiopian highlands, it is now being cultivated under a wider range of conditions, from marginal soils to flooded conditions. East and West Gojjam in Amhara and East and West Shoa in Oromia are the best-known potential teff producing areas in the country (Demeke & Di Marcantonio, 2013).

Type and user preference of teff for home consumption

On the consumption side, teff is more readily eaten by urban households (61 kg/person/year) than by rural households (20 kg/person/year; FAO, 2018). Teff has a significant role in Ethiopia's agriculture, food, and trade sectors; therefore, Ethiopia has a great chance to assure food security by boosting teff production and exports. As a result, the demand for teff is increasing markedly throughout the world. Teff also has numerous benefits, but it is the most labor-intensive cereal crop, and its cost of production is relatively high compared to other cereal crops (Abraham, 2015). As a result, a comprehensive strategy for large-scale development, adoption, and maintenance of farm tools must be developed at the national level. Efficient and effective farm technology may be able to address the most difficult and time-consuming aspects of teff farming, as well as the high cost of production.
Over time, the types of teff grown have changed. The increase in white teff at the expense of red and mixed-colored teff is a significant development: in 2012, white teff accounted for 69.6% of total teff production, up from 48.2% in 2002, while over the same period the share of red teff fell from 36% to 19.7%. Teff grains are graded for quality and price based on their type, which ranges from white to mixed to red. White teff commands the highest price, while red teff has the lowest. A sub-type of white teff, the very white, commands an even greater premium price (FAO, 2015).

Consumption and income elasticity of teff in Ethiopia

According to FAO (2015), teff is the only cereal crop in which Ethiopia has a comparative trade advantage. It can be grown in a large part of the country, and Ethiopia grows more than 90% of the world's teff. The role of teff is therefore vital for Ethiopia's agriculture, food, and trade sectors, and Ethiopia stands a good chance of improving food security by increasing teff production and exports. Despite having the largest production volume, the country is not capitalizing on its crop in the international market. Within the country, urban households eat teff more readily than rural households (Minten et al., 2013). Urban consumers use 81 kilograms of teff per year, more than three times the amount consumed in rural areas. This is partly due to the high price of teff relative to other crops; affluent urban consumers consume relatively more teff than the rural poor (Zhu, 2018). Teff is therefore an economically superior good, consumed relatively more by the rich than by the poor. According to F. Bachewe et al. (2019), teff has income elasticities of demand of about 1.2 in rural and 1.1 in urban areas, meaning that a doubling of income is associated with increases in teff expenditure of roughly 120% and 110%, respectively. Other crops, such as sorghum, even show a negative income elasticity in urban areas, indicating an inferior good in the urban environment: when consumers become richer, consumption of such goods falls. The importance of sorghum as food is therefore likely to decline, and the importance of teff is likely to increase, as incomes rise over time and Ethiopia becomes wealthier and more urbanized. In addition, urban consumers prefer mixed teff types (28 kilograms out of 81 kilograms), whereas rural consumers prefer red teff (9 kilograms out of 24 kilograms). In urban areas, red teff accounts for 11 percent of all teff consumption expenditures, compared to 27 percent in rural areas. Consumption of injera and mixed teff is higher in urban regions than in rural regions, with injera accounting for 9.1% of all food expenditures (Hassen et al., 2018). Moreover, besides home consumption, teff has been shipped abroad. Teff exports have varied, with higher volumes shipped in 1995-1997, 2001, and 2005, but exports have decreased since January 2006, owing to high domestic prices and a government ban on exporting unprocessed teff grain. The goal of the prohibition is to keep the domestic price of teff at a level that people can afford and to protect local food security. However, at the time of writing, the average price of 1 kg of raw teff exceeds 1 USD, an indication that prices have risen sharply even without lifting the export ban.
This highly reflects that restricting export of the product alone might not bring such huge impact on the price of the product, rather enhancing production and productivity of the teff would lower domestic price and favor the consumers at different levels. On the other side, scholars argued that, the removing the export restrictions would very certainly raise the price of teff in the local market to a higher international level (Abraham, 2015). Nevertheless, it would be detrimental to domestic customers in the country though not uniform across regions due to spatial production capacity and time value of the product. In addition, export prohibition protects teff producers from price fluctuation in the international market and deters multinational businesses from acquiring the local teff business. Otherwise, like with quinoa in Bolivia, their takeover would certainly drive smallholder farmers out of the teff market and lead to land conflicts. Likewise, exporting teff could compromise the nutritional status of Ethiopia. Poorer Ethiopians may be compelled to switch to less nutritional substitutes such as sorghum, barley, or wheat if teff becomes less plentiful and expensive (Crymes, 2015). There is a possibility to wipe out teff consumption in certain regions of the country (for example, the moist lowlands). This happens because these regions then have an alternative crop like wheat, barley, or sorghum, which contributes to the utility of the consumer without forcing them to pay significantly higher prices. A country grows more than 90% of the teff in the world. Despite its largest production volume, the country is not capitalizing on its crop in the international market as indicated by the Central Statistical Agency of Ethiopia (2015). Economic role Teff is Ethiopia's most important crop by area planted and value of production, and the second most important crop in generating income (after coffee), generating about $500 million per year for local farmers . According to studies, Injera exports in 2015 were estimated to be worth around ten million dollars Hassen et al., 2018). The commercial surplus of teff is equal to the commercial surplus of the three other main kinds of cereal (sorghum, maize, and wheat) combined in the country. Likewise, teff is an economically superior commodity in Ethiopia. It often commands a market price two to three times higher than maize, the commodity with the largest production volume in the country (FAO, 2015) thus making teff an important cash crop for producers (Abraham, 2015). Nevertheless, teff has shortcomings to become an income-generating global commodity for Ethiopian producers. Some of the shortcomings are low yields compared to other major cereals, high labor-input requirement, lack of infrastructure, and limited or inefficient market (Amentae et al., 2016;Cheng et al., 2017). Similarly, the crop is being successfully introduced and cultivated in many other parts of the world including Australia, Cameroon, Canada, China, India, Netherlands, South Africa, the UK, Uganda, and the USA (Abraham, 2015). However, comprehensive statistics on its production, utilization, and trade are little available in those countries. 4 Overall, the teff market lacks large-scale processing and purchasing to capture economies of scale. Little value is added to teff, and a lack of grade standardization causes uncertainty and additional costs at transactions. The existing export policy does not support teff producers to profit from the overseas market. 
The imposed restriction prevents the Ethiopian government, particularly farmers, from participating in and benefiting from rising global trade, which might boost GDP and transform producers' livelihoods. However, according to other reports, demand is expected to be quite high in the United States, the Middle East, and Europe due to the large number of Ethiopian immigrants living there . On the other side, restricted access to this crop product has hampered scientific investigation and the absence of worldwide consciousness of its potential health benefit and has restricted utilization (Fufa et al., 2013). Likewise, the 2006 export ban of the raw Teff grains could limit production of Teff in Ethiopia and even be unable to meet the homegrown demand. As of now, the normal grain yield of Teff in Ethiopia is under 1.0 t/ha. Nutritional value and health role of teff Teff is nutritionally rich and high in carbohydrates. It is gluten-free and can easily be tolerated by patients suffering from celiac disease. It is the most important cereal in Ethiopia and it accounts for 15% of calories consumed in Ethiopia (Central Statistical Agency, 2016;Fufa et al., 2011). Most notably, teff contains a higher quantity amino acid and minerals than other cereals (Abraham, 2015;Hailu et al., 2016). Millions of individuals in industrialized countries suffer from gluten-related disorders (Roseberg et al., 2006), in western markets its demand has risen rapidly (Gebremariam et al., 2014). Teff grains have 357 kcal/per 100 g, which is comparable to rice and wheat (Cheng et al., 2017). Moreover, with 11% of protein, teff is an excellent source of essential amino acids, especially lysine: the amino acid that is most often deficient in grains (Ayalew et al., 2011). Teff grains are low on the glycemic index, which makes them suitable for people with Type 2 diabetes. The grains are also gluten-free (FAO, 2015). Furthermore, teff has equivalent protein level with other common cereals like wheat but it contains greater amount of amino acid lysine than other cereals. Teff is rich in fatty acids, minerals, fiber and phytochemicals including polyphenols and phytates (Baye, 2014). According to (Dekking and Koning, 2005, teff is vital in preventing pregnancy anemia due to its high content of fiber, calcium, and iron. The crop has a longer shelf life, and slow staling of its bread products compared to other cereal crops like wheat, sorghum, rice, barley, and maize. As Gebru et al. (2020) states that the grain is linked to several health benefits including prevention and treatment of diseases such as celiac disease, diabetes, and anemia. As Zhu (2018 reveals that protein, dietary fiber, polyphenols, and certain minerals are all appealing ingredients in teff. The teff grain flour is becoming increasingly popular in the healthy food consumers, and it has been used to make gluten-free pasta and bread. Specifically, its composition from 100 grams of teff flour contains; Carbs: 70.7 grams, protein: 12.2 grams, Fat: 3.7 grams, Fiber: 12.2 grams, Iron: 37% of the Daily Value (DV), Calcium: 11% of the DV. It is important to know that the nutrient makeup of teff appears to vary greatly according to the variety, growing area, and brand (Koubová et al., 2018;Shumoy & Raes, 2017). Moreover, teff is an adequate source of each of the nine basic amino acids, including lysine, which is regularly missing in many kinds of cereal (Baye, 2014;Gebremariam et al., 2014). 
Various used forms of teff The most well-known use of teff in Ethiopia is the fermented flatbread called injera 5 (Assefa et al., 2013;Baye, 2014;Zhu, 2018). Crymes (2015) portrayed this customary flatbread as a delicate, slender flapjack with a sour taste. The most favored type of injera is one produced using pure teff flour (Crymes, 2015). Injera blended in with other flour, for example, wheat or sorghum is viewed as inferior. Different uses of teff incorporate local alcoholic beverages called tela 6 and areke, 7 and porridge (Abraham, 2015). Moreover, teff plant residues such as its straw could be utilized as fodder for animals, and regularly used as construction materials to reinforce houses built from mud or plaster (Cheng et al., 2017;Ketema, 1997;Stallknecht et al., 1993). Likewise, outside Ethiopia, global consumers following the super-food wave, various teff-based products are developed to capture the premium market in the form of bread, porridge, muffin, biscuit, cake, casserole, and pudding. The crops' potential is also explored as a thickener for soup, stew, gravy, and baby food (Zhu, 2018). Marketing performance and teff value chain in Ethiopia Ethiopia has yet to develop an efficient teff marketing and value chain scheme. Its value chain is often described as unsophisticated or untraceable (Amentae et al., 2016;Minten et al., 2016). Currently, little evidence exists for modernized teff trading and retailing practices. For instance, the role of credit is minor, most of the transactions are on a cash basis, and standardization of Teff grading is virtually absent . Along with the teff spatial prices in Ethiopia, the central market is the capital (Addis), given its large size and its central role (Getnet et al., 2005;Tamru, 2013). With adhering to the shadow prices and perfect market postulation of agricultural products, food prices elsewhere in the country move up or down with the prices in Addis Ababa after accounting for transportation costs, a crucial factor for spatial price integration (Jaleta & Gebremedhin, 2012), in which markets are all connected to Addis Ababa with the highest urban consumers and the highest volume of market size. As Minten et al. (2016) examined the share of teff price structure in detail, one notable result is that teff growers obtained on average 79.4% of the final retail price of the raw product. Similarly, Urgessa (2011) reveals that teff producers took 78.7% of the consumer price while the assemblers, wholesalers, and retailers shared the rest of the price. Despite teff, trade is highly profitable; little is known about the farm-level competitiveness of teff production and the distribution of the costs and value-added between the chain participants, which include farmers, traders, and processors. Although past studies in Ethiopia (Fufa et al., 2011;Minten et al., 2013) have looked at the value chain analysis of teff, literature on quantitative value chain analysis that captures the cost build-ups along the chain is scarce. According to Demeke and Di Marcantonio (2013), teff is largely produced for market mainly because of its high price and absence of alternative cash crops (such as coffee, tea, or cotton) in the major teff potential areas of Gojjam and Shoa (Amhara, Oromia respectively). Assemblers in village markets and wholesalers in regional markets give substantial attention to the quality of teff. Its grade is based on its color, and there are three grades based on its color: white, mixed, and red. 
The white is the highest grade and takes the highest prices while the red one is the lowest grade and price. There are also subgrades within each such as Magna (super white) and being sold at a premium price. Why teff value chain is vital? According to Fufa et al. (2011), the teff value chain program supports the doubling of teff production and ensures farmers access sufficient markets to capture the highest value from their production, increase incomes, and reducing the price to consumers within 5 years. In addition, various leverage points drive stakeholders to move into value addition streams. For instance, substantial changes are happening in agricultural and food markets worldwide and particularly in developing countries (Reardon et al., 2009;Tsakok, 2011). In a very sizable manner, supermarkets are taking off quickly (Reardon et al., 2003;Timmer, 2009), in the die of consumers, the share of high-value crops is increasing rapidly (Mergenthaler et al., 2009;Pingali, 2007). In worldwide, quality preferences by consumers are on the rise (Minot & Roy, 2007), for export agriculture the requirement of food safety from developing countries has vital effects on the value chain structure (Maertens & Swinnen, 2009). Across the value chain of teff, it helps rising the adoption of modern farm inputs by farmers, rising willingness to pay for convenience in urban areas, upgrading of the foodservice industry, enhanced marketing efficiency and quality demands (Minten et al., 2013). Teff value chain is involved six key parts: exploration and reproducing; seeds and inputs; production; harvest and processing; exchange and promoting; and value addition and export (Fufa et al., 2013). Teff value chain has not arrived at its maximum capacity due to systematic bottlenecks at each phase of the value chain. In addition, several existing challenges hinder the effective functioning of the teff value chain. These are very limited access to agricultural inputs, including high-quality seeds, fertilizer, or agrochemicals (certified seeds and other inputs are currently not available in sufficient quantity), lack of trade relations with the Addis Ababa market, and drought and erratic rainfall are among the most significant natural hazards and cause risks for smallholder's farmers (Weber & Tiba, 2017) An important reason why growth opportunities are not taken by the country is market failure and dysfunctional business linkages. Market failures are major obstacles to value chain development that explain why the economic potential is not realized as well as coordination failures and asymmetries along the value chain. One way to improve, develop and make a value chain for the pro-poor is to 'strengthen mutually beneficial linkages among firms so that they work together to take advantage of market opportunities, that is, to create and build trust among value chain participants. It is commonly accepted that the inclusion of smallholder farmers and other vulnerable populations in value chain development (VCD) leads to an inclusive and pro-poor value chain (Andreas et al., 2019). Hence, this review gives an overview of the teff production trends, challenges, and teff value chain missing links. Teff supply chain and its problems For most of the individuals in Ethiopia, teff has been and keeps on being a basic means of food that contributes significant levels of nutrients. 
With its more noteworthy resilience to outrageous conditions, for example, dry season, water-logging, and common pests and diseases compared with wheat and maize, teff remains the staple crop in Ethiopia and Eritrea, favored by a large number of local smallholder farmers (Central Statistical Agency (CSA); Central Statistical Agency (CSA)). Despite its considerable nutritional benefits, teff has its limitations as an essential food source. It produces lower yields than other significant grains because of several serious reasons, chiefly originating from its proneness to lodging, and minute seed size, and to an absence of continued innovative work endeavors and irregular social practices. Of these, lodging is viewed as the greatest hindrance to yield improvement in teff (Ketema, 1997). It causes the arrangement of a tall and delicate stem that is vulnerable to harm by wind and rain. The utilization of nitrogen composts further aggravates these delicate stems and therefore prevents mechanized harvesting. Moreover, teff's shortcomings to become an income-generating global commodity are low yields compared to other major cereals, high labor-input requirement, lack of infrastructure, and limited or inefficient market (Amentae et al., 2016;Cheng et al., 2017). Teff's tiny seed additionally present several difficulties to its commercial efficiency, especially for planting (Ketema, 1997). At planting time, the small seed makes it hard to control population density and its appropriation (Juraimi et al., 2009). Moreover, the process of threshing sifting, winnowing, and crushing the seeds can be relentless and tedious, and labor-consuming (Ketema, 1997). Another huge limitation to teff production is its defenselessness to frost at all growth stages. Although teff is moderately liberated from significant pests and disease, the plant is sometimes contaminated by rust (Uromyces eragrostidis; Assefa et al.). These joined, along with the absence of continued innovative work endeavors and improved strategies for research make up the fundamental variables adding to the moderately low quality and quantity of collected teff. Moreover, according to (Abraham, 2015), many factors are associated with this low supply problem, such as restricted utilization of improved seeds resulted from inconsistent production of adequate seeds both in quality and quantity alongside more noteworthy postponements in distribution, supply and storage problems. An inefficient agronomic practice because of technical inability and cost inaccessible of contributions for farmers and fragmented homestead plots further exasperates farmers' production capacity. The utilization of lime, which is utilized to treat profoundly acidic soil in Ethiopia, is restricted in access and expensive to bear for subsistence farming households. The existed farm equipment exploited by producers are the traditional ones utilized for quite a long time without slight change and the accessible improved instruments like row planters, broad bed centuries, and plough are additionally insufficient and not promptly accessible to farmers from one side of the country to the other Accordingly, many problems restrict teff to boost its supply and maintain its quality with affordable price for the consumers. 
Among these, (1) restricted resource for teff research and breeding, (2) low improved seed selection and costly fertilizer, (3) insufficient agronomic practices, (4) high post-harvest handling loss rates, (5) a fragmented value chain that includes numerous players and (6) restricted value chain set-ups (Fufa et al., 2013). Future demand simulation of teff As it has been stated teff plays a vital role for Ethiopian food; and trade and agriculture sectors. As a result, Ethiopia stands a good chance of ensuring food security by increasing teff production and exports. Because of the local price increase and fast-expanding injera exports, the government recently granted authorization to a small number of commercial farmers to begin producing teff to meet this export demand . Similarly, teff's popularity is rising quickly all around the world. Teff may also be the next supergrain, and Injera may be the next super-food in the world, because of its multiple benefits. Teff is the most labor-intensive cereal crop, with a relatively high production cost compared to other cereal crops (Abraham, 2015). As a result, a comprehensive strategy for large-scale development, adoption, and maintenance of farm tools must be developed at the national level. Effective and effective farm technology needs to address the most difficult and time-consuming aspects of teff farming, as well as the high cost of production. According to(F. Bachewe et al., 2019), utilizing the income elasticities of teff demand for its products might involve in the future, integrating the expected population dynamics, differentiating between urban and rural areas, relying on population projection by the world bank, and further assume that uniform annual income growth of 3% and no real price increases; there has been the evolution of teff demand for rural and urban areas. Summary and conclusion Ethiopia has suitable crop-producing ecological zones; such as from extremely lowland up to highland throughout all regions that enable it to produce teff all round. But there was a significant difference in teff production across the region. A country is the largest teff producing country, and the only country to have adopted teff as a staple crop and the country produces more than 90% of the teff in the world. It is Ethiopia's most important crop by area planted and value of production, and the second most important cash crop after coffee. The economic viability of teff is equal to the three other main kinds of cereal combined in the country (sorghum, maize, and wheat). Despite the conducive environment, the return and incentive for growth in teff face several production and marketing challenges. Moreover, Ethiopia's significant poverty and food insecurity has been a serious problem over the decades. This is due to teff production in Ethiopia largely depends on the hands of 6.5 million resource-poor smallholder farmers. Furthermore, inefficient production, climatic factors, presence of low yield varieties are the most significant problems that affect the teff production across different agro-ecology zones. On another hand lack of transport, infrastructure and weak market linkage decrease the profitability of teff in Ethiopia. In addition, the value chain is weak, and the market lacks large-scale processing and purchasing to capture economies of scale. A key for improving, production and economic viability of teff in Ethiopia are to develop the value chains and strengthen mutually beneficial linkages among actors. 
So that they work together to take advantage of economic and market opportunities, can create and build trust among value chain participants. Investments in productivity increase higher up value chain performance, such as through marketing and transportation infrastructure, which would increase prices farmers receive for output while also putting downward pressure on urban food prices. Higher market prices would create incentives for farmers to invest in productivity through increasing technologies usage and improved input used since output increases would offer significant economic gains. Possible suggestions The following policy responses should be taken as a policy response. First and foremost, improving productivity and resilience; by investing in basic research and researchers. As well as selecting and scaling up new technologies that guarantee durable, multipurpose, cheap mechanized planters and harvesters. In addition, conduct rigorous and regular evaluations of outcomes and establishing fit-for-purpose distribution systems; experiment with alternative input delivery mechanisms involving different arrangements, actors, and payment modalities and managing labor demand and postharvest operations, improve monitoring and evaluation of uptake of improved technologies in various feasible means would be vital. To meet the increasing demand for teff it is crucial to increase teff yields. This can only be done by the adoption of improved teff production technologies such as improved seeds, fertilizer, or mechanization. Inorganic fertilizer is a crucial input for teff production and most tef producers are estimated to make use of it. However, due to limited financial means and access to credits, fertilizer uses are often below-recommended rates. Access to credit and other financial services by small-scale farmers has been considered as one promising way to reducing poverty, improving farm productivity, and easing a smooth transition from subsistence farming to large-scale and agribusiness farming. Give emphasis & advocate teff for national as well as international research and development institutions to study further for productivity increment technology. In addition, Market stabilization: a critical factor in ensuring that smallholders benefit from participating in markets is the "rule of the game". When rules are transparent and fair, smallholders can benefit and improve their economic and social situation. Therefore, development organizations and governments tend to focus their interventions in value chains or market systems on increasing competitiveness, performance, and on providing effective technologies to smallholder producers. Present and future analysts and investors should guarantee the coherence of long-term research combined with more successful breeding projects to open the greatest capability of this grain. With its versatility to both dry season and warmth stresses, teff could be the response to the troubling condition of food security while additionally cooking for the dietary requests of a developing populace; these points are following the sustainable global agenda for food and nutrition security. Notes 1. Injera is a spongy, sourdough flatbread prepared from fermented Ethiopian traditional main meal teff flour. This food is consumed by almost everyone in Ethiopia at least once a day. 
Injera preparation consists of numerous phases, beginning with grain preparation and ending with baking; all of which are still carried out utilizing indigenous knowledge and traditional practice. This Ethiopian national super food is gaining popularity in many western nations due to its remarkable nutritional features, particularly gluten-free and good mineral compositions (rich in iron; Neela & Fanta, 2020). 2. Food security is an increasing concern around the world; it is estimated that over a billion people would be deprived of appropriate dietary energy, with at least twice as many suffering from micronutrient deficiencies. It is highlighted on the four pillars: food availability, access to food, and use of food (Barrett, 2010). Indicators on both the supply and demand sides. Food availability is a result of developments in agricultural productivity, and access reflects the demand side of food security, and which foods are compatible with societal tastes and values. All those concepts are inherently hierarchical, with availability necessary but not sufficient to ensure access, likewise, necessary but not sufficient for effective utilization. Utilization reflects whether individuals and households make good use of the food to which they have access. 3. Despite the rising demand, Ethiopia limited the export of raw Teff and flour in 2006 as the cost of the output surged, causing buyers to panic. (Injera is not included in the boycott.) Mama Fresh is one of the firms that has been preparing and trading injera to various parts of the world since 2003.. 4. Historically, teff is grown in developed nations, for example, Australia and the USA have served basically as a forage crop (Stallknecht et al., 1993). 5. Injera (staples for the majority of Ethiopians, a fermented, pancake-like, soft, sour, circular flatbread), sweet unleavened bread, local spirit, porridges and soups (Bultosa, 2007). 6. Tella is a traditional Ethiopian fermented beer-like beverage prepared from a variety of cereals (including teff) and a native herb known as gesho (Rhamnus prinoides). Tella is similar to commercial beer in that it is created from malted barley and other grains, but it also contains gesho, a traditional beer ingredient. It varies in alcohol content usually around 8.1-14.59 (% v/v) [Getaye et al., 2018] 7. Areki is a colorless, clear, and traditional alcoholic beverage distilled from the fermented product. It is prepared in almost the same way as Tella except that the fermentation mass, in this case, is more concentrated [mulaw et al., 2020]. Consent for publication The authors agreed and approved the manuscript for publication. Disclosure statement No potential conflict of interest was reported by the author(s).
9,632
2022-01-10T00:00:00.000
[ "Economics" ]
CSCNN: Lightweight modulation recognition model for mobile multimedia intelligent information processing With the advancement of the Internet of Things, the importance of multimedia intelligent information processing technology is increasing. Faced with massive electromagnetic data and the limited computing resources of terminal equipment, previous technologies cannot meet the real-time decision-making requirements for processing short-term observations or burst signals in deployable systems. In this paper, we propose the Deep Complex Separable Convolution (DCSC) operation by combining the separable convolution operation and the complex convolution operation. At the same time, to better preserve coupling information between channels and minimize the model size, we propose the Multilevel Separable Convolutional Residual Block (MSCRB). Based on these two methods, we constructed the Complex Separable Convolutional Neural Network (CSCNN). This neural network significantly reduces the complexity of the deep learning model. The smallest network we constructed, CSCNN-Tiny, has a model size of 0.760M, which is only 6% of the size of MobileNet. With 0.815M Flops, it is 3.8% of MobileNet. However, it achieves a recognition accuracy of 50.97%, only 0.97% lower than MobileNet. Introduction In recent years, as the world has entered the information age, wireless communication technology has developed rapidly and Internet of Things devices have been widely deployed. Therefore, electromagnetic spectrum resources have become increasingly scarce and the electromagnetic space has become more complex and crowded. The security and efficiency of mobile multimedia intelligent information processing are important prerequisites for ensuring full utilization of spectrum resources. Communication signal modulation recognition is an important part of mobile multimedia intelligent information processing, and naturally faces greater challenges [1]. In the IoT, there are numerous devices of different types and brands, each using various modulation schemes for communication. Modulation recognition technology enables analysis and identification of received wireless signals, allowing for device recognition and classification [2]. IoT devices rely on the wireless spectrum for communication. Modulation recognition technology assists in monitoring and identifying different modulation schemes within the spectrum. This information facilitates spectrum management, including allocation and conflict resolution [3]. Wireless communication in the IoT is susceptible to interference, eavesdropping, and malicious attacks. Modulation recognition technology detects and identifies the modulation schemes of radio signals, enabling timely detection of anomalous signals or malicious activities. It enhances security monitoring and safeguards IoT communication. Modulation recognition is the process of identifying the modulation scheme of a non-cooperative signal, even when initial information is limited or no prior information is available. As wireless communication technology continues to evolve rapidly, the scenarios and methods used for communication are becoming increasingly varied [4]. There is an increasing demand for faster signal transmission speeds and better service reliability, posing substantial obstacles to the practical use of modulation recognition technologies [5]. 
Facing dynamic and complex mobile multimedia devices, traditional modulation recognition technology is limited to shallow learning and relies on manual extraction of signal features, which cannot meet the needs of efficient and safe intelligent information decision-making. In recent years, deep learning technology has gained widespread attention for its excellent performance [6]. Deep learning has been extensively employed, revolutionizing various critical fields, including image recognition and object detection. Moreover, deep learning has shown remarkable performance in modulation recognition by making it possible to extract and express the characteristics of modulation signals. Researchers have proposed innovative approaches to modulation recognition, such as the Contour Stellar Image (CSI) concept introduced by Lin et al. [7]. To enhance the training procedure for modulation recognition, other researchers, such as Ji et al. [8], offered strategies including blind equalization-assisted deep learning networks and made use of transfer learning. Data in deep learning is typically represented using real numbers. Recurrent neural networks (RNNs) and other foundational theories, however, reveal that complex numbers have the potential to provide greater representation capabilities than real numbers. Complex-valued representations may be easier to optimize, possess greater generalization features, learn more quickly, and provide a noise-resistant method of memory retrieval. To take advantage of the phase information, Tu et al. [9] used Deep Complex Networks (DCN) in modulation recognition. Experiments show that complex neural networks perform better than real neural networks in modulation recognition tasks. The progress of deep learning has led to an enormous increase in the accuracy of modulation recognition tasks. This advancement, however, comes at the expense of an increasing number of network layers and more complex models [10]. Although the deep learning method is effective, deep network models involve a large amount of computation and a large model size. Despite the availability of GPU acceleration, deep neural networks still require a significant amount of memory space due to their large-scale model parameters. This heavy reliance on high-performance hardware presents a challenge in practical applications [11]. 
In practical applications, a common computing architecture involves the integration of cloud and edge computing [12], as illustrated in Figure 1. This approach optimally leverages the strengths inherent in both, providing a more flexible, efficient, and rapidly responsive computing capability. Within this computational framework, certain preliminary processing tasks are delegated to edge devices [13,14], while relatively intricate computations are conducted in the cloud. This enables the effective utilization of computational resources on edge devices concurrently with harnessing the formidable computing power of the cloud. Considering the inherent performance limitations of edge terminal devices, coupled with the challenges posed by vast amounts of electromagnetic data and complex electromagnetic environments, conventional technologies fall short of meeting the real-time decision-making requirements of deployed systems that must handle short-term observations or sudden signals [15]. Consequently, there is an urgent need for research focused on lightweight neural network models applicable to modulation recognition tasks [16]. To deploy CNN models more effectively on edge devices, it is common practice to compress the model [17], so that the network carries fewer parameters and can simultaneously solve problems related to memory and computation speed [18]. Various methods have been proposed for model compression, such as the hybrid pruning method combining weight and convolution kernel pruning [19]. These methods have varying degrees of effectiveness in compressing the model, but they can be operationally complex. Another approach is to improve the efficiency of network convolution. Gholami et al. [20] proposed SqueezeNet, which is composed of Fire modules. Han et al. [21] proposed G-GhostNets, which demonstrated excellent performance. In order to achieve model compression without reducing recognition accuracy and to lower the computational complexity of the model, we propose the Deep Complex Separable Convolution (DCSC) operation by combining the complex convolution operation with the separable convolution operation. Simultaneously, to further enhance the recognition performance of the lightweight neural network and avoid information loss caused by residual connections, we design a Multilevel Separable Convolution Residual Block (MSCRB). Based on these two methods, we have designed a novel lightweight neural network, which we refer to as the Complex Separable Convolutional Neural Network (CSCNN). In summary, we make the following contributions in this paper: 1. We designed the DCSC operation by combining the separable convolution operation with the complex convolution operation. This approach enables a reduction in the number of convolutional layers without sacrificing model recognition accuracy, thereby further reducing the model size and computational workload; 2. We introduce a residual architecture, MSCRB, tailored for lightweight neural networks. This architecture alleviates the information loss caused by residual connections. It significantly improves the model's recognition performance without increasing the computational complexity and size of the model; 3. We combined MSCRB with DCSC to construct a new lightweight neural network, CSCNN. Experiments show that this network achieves a significant reduction in model size and computational complexity while maintaining good performance. 
Related Work In this section, we introduce the theoretical knowledge related to complex convolution operations and separable convolution operations and present the design of the classic residual block with a bottleneck structure. Complex Convolution Operation Complex numbers offer a superior way of expressing the relationship between amplitude and phase compared to real numbers, making them more adept at handling phase-related issues. The I/Q data in a communication signal is complex data; therefore, complex neural networks have significant advantages in the field of modulation recognition. Complex neural networks were first researched in the 1990s, and the concept of DCN has since been introduced. In DCN, the most critical operation is the complex convolution operation, which effectively leverages the correlation between the real and imaginary parts of complex data, thereby enhancing the performance of the network. Currently, most deep learning frameworks only support real number operations. When constructing a complex network, it is necessary to construct equivalent real number operations. This part mainly introduces how to perform the convolution operation by simulating complex number operations with real numbers. A complex number matrix Z can be defined through two real number matrices α, β:

Z = α + iβ, (1)

where α is defined as the real part and β is the imaginary part. A complex number vector can be defined through two real vectors γ and δ:

W = γ + iδ. (2)

We can convolve the complex vector W with the complex matrix Z through the complex convolution operation shown in Equation (3),

W * Z = (γ + iδ) * (α + iβ), (3)

where * denotes the convolution operation. Equation (3) can be simplified as

W * Z = (γ * α − δ * β) + i(γ * β + δ * α). (4)

The convolution module of the complex network outputs the result of the same-channel convolution of the feature maps in the real part, and outputs the cross-channel convolution result of the feature maps in the imaginary part. In this way, the phase information hidden in the input feature map can be better utilized. The process of implementing complex convolution is illustrated in Figure 2. 
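To make the recombination above concrete, the following is a minimal PyTorch sketch of a complex convolution emulated with two real-valued convolutions, in the spirit of Equation (4). The class name, 1D layout, and shapes are illustrative assumptions for I/Q signals, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution emulated with two real-valued convolutions.

    The input carries the real and imaginary parts as two separate tensors.
    """
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # conv_r holds the real part of the kernels (alpha), conv_i the imaginary part (beta)
        self.conv_r = nn.Conv1d(in_channels, out_channels, kernel_size, **kwargs)
        self.conv_i = nn.Conv1d(in_channels, out_channels, kernel_size, **kwargs)

    def forward(self, x_r, x_i):
        # (gamma + i*delta) * (alpha + i*beta):
        #   real part:      gamma*alpha - delta*beta
        #   imaginary part: gamma*beta  + delta*alpha
        out_r = self.conv_r(x_r) - self.conv_i(x_i)
        out_i = self.conv_i(x_r) + self.conv_r(x_i)
        return out_r, out_i

# Example: an I/Q signal of length 128 with one complex channel
iq_real = torch.randn(8, 1, 128)   # I component
iq_imag = torch.randn(8, 1, 128)   # Q component
layer = ComplexConv1d(1, 16, kernel_size=3, padding=1)
y_r, y_i = layer(iq_real, iq_imag)
```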
Separable Convolution Operation The separable convolution operation is the convolution pattern used by MobileNet [22]. The MobileNet series has been widely applied in recent years. MobileNet is a lightweight convolutional neural network proposed by the Google team in 2017. As shown in Figure 3, its main feature is that the ordinary convolution operation is divided into two steps: one step is a channel-by-channel depthwise convolution, and the other is a pointwise convolution with a filter size of 1 × 1. Therefore, the number of network parameters and the computational complexity are greatly reduced, enabling real-time operation on mobile devices. The former convolves each input channel separately, while the latter mixes feature maps from different channels. At the same time, it can be used to compute linear combinations of the input channels to construct new features. This approach can significantly reduce computational complexity and the number of parameters while maintaining model recognition performance. The upper part of Figure 4 shows the convolution filters used in the normal convolution operation, and the lower part shows the convolution filters used by the depthwise and pointwise steps in MobileNet. The separable operation splits the ordinary convolution into two convolution operations. Here K_s represents the filter size, K_c represents the number of input channels, K_n represents the number of filters (output channels), and F_s is the size of the feature map. We will use t_1 to represent the computational complexity of traditional convolution and t_2 to represent the computational complexity of separable convolution. The computational complexity of traditional convolution is

t_1 = K_s · K_s · K_c · K_n · F_s · F_s.

The computational complexity of separable convolution is

t_2 = K_s · K_s · K_c · F_s · F_s + K_c · K_n · F_s · F_s.

The number of parameters in separable convolution is

K_s · K_s · K_c + K_c · K_n.

The ratio of separable convolution computation to traditional convolution computation is

t_2 / t_1 = 1/K_n + 1/(K_s · K_s).

When the number of output channels is large, this ratio is approximately inversely proportional to the square of the convolution kernel size (the parameter ratio behaves the same way). When the convolution kernel size is 3 × 3, using depthwise separable convolution reduces the computation and the number of parameters to roughly 1/9 of the original. Experimental results show that the MobileNet series greatly shortens training time and reduces the computational cost of parameter updates while keeping image classification accuracy as stable as possible, providing direction for optimizing subsequent network structures. However, there are still some limitations in the MobileNet structure, such as insufficient feature information extraction leading to low classification accuracy and the loss of feature information in the activation functions of network layers. 
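As a concrete illustration of the savings discussed above, here is a small PyTorch sketch of a depthwise separable convolution next to an ordinary convolution; the channel counts are arbitrary examples, not values from the paper.

```python
import torch
import torch.nn as nn

class SeparableConv1d(nn.Module):
    """Depthwise separable convolution: depthwise conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        # groups=in_channels makes the first convolution channel-by-channel (depthwise)
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels)
        # 1x1 pointwise convolution mixes the channels
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def count_params(module):
    return sum(p.numel() for p in module.parameters())

plain = nn.Conv1d(64, 128, kernel_size=3, padding=1)
separable = SeparableConv1d(64, 128, kernel_size=3, padding=1)
# The separable variant uses several times fewer parameters than the plain convolution.
print(count_params(plain), count_params(separable))
```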
Classic Residual Block The bottleneck structure, originally introduced in ResNet, consists of three convolutional layers: a 1 × 1 convolution for channel reduction, a 3 × 3 convolution for spatial feature extraction, and another 1 × 1 convolution for channel expansion. Residual networks are typically constructed by stacking multiple such residual blocks. Figure 5 illustrates the architecture of ResNet. The bottleneck structure has undergone further refinement in subsequent research, such as the expansion of channels in each convolutional layer, the application of group convolutions to the central bottleneck convolution for more expressive feature representations, and the introduction of attention-based modules to explicitly model inter-dependencies between channels. Certain studies integrate residual blocks with dense connections to boost performance [23]. Although the residual structure performs well in various tasks, it is rarely used in lightweight networks due to the increased model complexity [24]. We now explain the principle by which residual networks address the issue of gradient degradation. For a residual network with identity shortcuts, the output of a deeper layer can be written as

x_L = x_l + Σ_{i=l}^{L−1} F(x_i, W_i),

where x_L represents a convolutional layer and x_l represents the input layer of the shortcut in the residual block. In backpropagation, the gradient formula for a residual network, after taking derivatives, is as follows:

∂Loss/∂x_l = ∂Loss/∂x_L · (1 + ∂/∂x_l Σ_{i=l}^{L−1} F(x_i, W_i)).

In gradient updates within residual networks, an additional constant 1 is introduced, mitigating the vanishing gradient problem. Our Approach Our approach mainly utilizes DCSC and MSCRB to construct a lightweight neural network, CSCNN, for communication signal modulation recognition. Deep Complex Separable Convolution Inspired by MobileNet and DCN, we propose DCSC. This convolution operation separates the real and imaginary parts in the feature maps and convolution kernels of the original complex convolution. It then recombines the real and imaginary parts of the same layer, simulating the separable convolution operation. This approach integrates deep complex operations with the separable convolution operation in a highly effective manner, ensuring the recognition accuracy of the network while greatly reducing its size. In the first step of DCSC, with the number of filters set to 1, we divide the 2M-channel feature maps of the complex convolution operation into M groups, where the i-th group consists of the real-part feature maps from the i-th layer and the imaginary-part feature maps from the i-th layer. The 2M-channel filters are also divided into two groups, where the i-th group consists of the filters of the real-part layer and the filters of the imaginary-part layer; we recombine the real and imaginary parts of the same layer in the feature maps and filters. Then, we perform deep complex convolution operations between these two groups of feature maps and filters. Finally, the resulting M groups of feature maps are concatenated. In the second step, we set the filter size to 1 × 1 and perform normal complex convolution operations. By performing these two steps, the separable complex convolution operation in DCSC is completed. At the same time, to avoid convergence issues in complex network convolution computations, we perform batch normalization and ReLU activation after each complex convolution operation, and then the DCSC is complete. Figure 6 displays the entire process of DCSC. 
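The following PyTorch sketch shows one plausible way of combining the two ingredients just described, i.e., a depthwise complex convolution followed by a 1 × 1 pointwise complex convolution with batch normalization and ReLU. It is only an illustration of the idea and is not claimed to match the authors' exact DCSC design or layer configuration.

```python
import torch
import torch.nn as nn

class ComplexSeparableConv1d(nn.Module):
    """Separable convolution on complex (I/Q) data: a depthwise complex
    convolution followed by a 1x1 pointwise complex convolution."""
    def __init__(self, channels, out_channels, kernel_size, padding=0):
        super().__init__()
        def dw():        # depthwise real-valued convolution (one filter per channel)
            return nn.Conv1d(channels, channels, kernel_size,
                             padding=padding, groups=channels)
        def pw(c_out):   # 1x1 pointwise real-valued convolution
            return nn.Conv1d(channels, c_out, kernel_size=1)
        self.dw_r, self.dw_i = dw(), dw()
        self.pw_r, self.pw_i = pw(out_channels), pw(out_channels)
        self.bn_r, self.bn_i = nn.BatchNorm1d(out_channels), nn.BatchNorm1d(out_channels)
        self.act = nn.ReLU(inplace=True)

    @staticmethod
    def _cconv(conv_r, conv_i, x_r, x_i):
        # (x_r + i x_i) * (w_r + i w_i) = (x_r*w_r - x_i*w_i) + i(x_r*w_i + x_i*w_r)
        return conv_r(x_r) - conv_i(x_i), conv_i(x_r) + conv_r(x_i)

    def forward(self, x_r, x_i):
        x_r, x_i = self._cconv(self.dw_r, self.dw_i, x_r, x_i)   # depthwise complex step
        x_r, x_i = self._cconv(self.pw_r, self.pw_i, x_r, x_i)   # pointwise complex step
        return self.act(self.bn_r(x_r)), self.act(self.bn_i(x_i))

y_r, y_i = ComplexSeparableConv1d(16, 32, 3, padding=1)(
    torch.randn(4, 16, 128), torch.randn(4, 16, 128))
```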
Multilevel Separable Convolution Residual Block Because the ReLU activation function has a gradient of 0 at the value 0, a substantial proportion of depthwise convolution weights tends to be zeroed out during the training phase in separable convolutional neural networks. This phenomenon means that subsequent iterations are incapable of reactivating the corresponding neuron nodes. The residual architecture used in ResNet significantly alleviates the issue of feature degradation. However, the residual blocks in ResNet exhibit higher computational complexity, making them less suitable for lightweight neural networks. In response to this challenge, we have devised the MSCRB specifically for lightweight neural networks. Our idea is to replace the common convolutions in the fundamental blocks with separable convolutions. In MSCRB, we initiate the process with a separable convolution with downsampling. Subsequently, standard convolution operations with dimensionality expansion are performed. The inclusion of a shortcut connection to high-dimensional representations in this design enables the network to preserve more information from the lower layers during gradient propagation towards the upper layers, thereby augmenting cross-layer gradient propagation. Finally, a second round of separable convolution is executed. Within the building block, we conduct separable convolutions twice: the dimension is reduced first, and standard convolutions are inserted between the two separable convolutions to expand the dimension. This approach leads to a substantial reduction in both parameters and computational costs. Figure 8 illustrates the architecture of MSCRB. The architecture retains additional information exchanged between blocks, facilitating improved optimization of network training through the utilization of high-dimensional residuals. Moreover, for enhanced spatial representation, instead of placing spatial convolutions in the bottleneck with compressed channels, we suggest applying them in the expanded high-dimensional feature space to enhance model performance. Additionally, we utilize pointwise convolutions for the channel reduction and expansion process, aiming to maximize the reduction in computational costs. Complex Separable Convolutional Neural Network By leveraging the DCSC and MSCRB, we combine them to form the Complex MSCRBlock, as illustrated in Figure 9. Through multiple instances of the Complex MSCRBlock, we design our lightweight neural network architecture, CSCNN. The network initiates with a normal complex convolutional layer. Subsequently, we add our residual blocks. The last building block's output undergoes a global average pooling layer, converting 2D feature maps into 1D feature vectors. Then, a fully connected layer is added to produce the predictions for the eleven modulation categories. 
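As a rough sketch of the overall pipeline described above (stem convolution, stacked blocks, global average pooling, and a linear head for the eleven classes), the following PyTorch code uses a plain separable unit as a stand-in for the Complex MSCRBlock. The block count and channel width are made-up placeholders, not the configurations reported in Table 1.

```python
import torch
import torch.nn as nn

def separable_unit(channels):
    """Depthwise + pointwise convolution with BN/ReLU, used here as a stand-in block."""
    return nn.Sequential(
        nn.Conv1d(channels, channels, 3, padding=1, groups=channels),
        nn.Conv1d(channels, channels, 1),
        nn.BatchNorm1d(channels),
        nn.ReLU(inplace=True),
    )

class CSCNNSkeleton(nn.Module):
    """Skeleton of the described pipeline: stem, stacked blocks, global average
    pooling, and a linear head for the 11 modulation classes."""
    def __init__(self, num_blocks=4, channels=32, num_classes=11):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(2, channels, 3, padding=1),   # I/Q samples enter as 2 real channels
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[separable_unit(channels) for _ in range(num_blocks)])
        self.pool = nn.AdaptiveAvgPool1d(1)          # global average pooling
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):                            # x: (batch, 2, signal_length)
        x = self.blocks(self.stem(x))
        return self.head(self.pool(x).flatten(1))

logits = CSCNNSkeleton()(torch.randn(8, 2, 128))     # -> (8, 11)
```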
We can build CSCNN models with different model sizes and performance levels by stacking Complex MSCRBlocks with different parameters, where the adjustable parameters of the Complex MSCRBlock mainly include the stride and the ratio between the original layer and the intermediate layer after downsampling. To enhance the generalization capability of CSCNN for deployment on multiple hardware platforms, we designed CSCNN models with four different levels of complexity, namely CSCNN-Large, CSCNN-Middle, CSCNN-Small, and CSCNN-Tiny. Table 1 presents the architecture details of CSCNN-Large. The default downsampling ratio in the middle of our residual blocks is set to 6. In the process of network design, the I-way and Q-way of the signal are connected through a complex convolutional network, and the coupling features between the I-way and Q-way are extracted to improve the recognition accuracy. Then, considering the number of network parameters and the computational complexity, small convolution kernels are used to perform multi-layer convolution to reduce the model parameters. Furthermore, multiple nonlinear activation layers are integrated to replace a single nonlinear activation layer to enhance the discriminative ability. In the process of feature extraction, the use of small convolution kernels brings problems of insufficient capability and an insufficient field of view, so the number of convolutional layers is increased to make up for these problems. Experiment and Dataset In this section, we mainly introduce the experimental details and the dataset used in the experiments. Experiment Setup We conducted experiments under Python 3.8.10 and PyTorch v1.8. The CPU is a Core i5 processor produced by Intel. We use an Nvidia RTX 4070 graphics card with 12 GB of memory to train the model, with CUDA 12.1 and CuDNN 8.9.4. Experiment Dataset The dataset for this experiment is RadioML2016.10a [25], a communication signal dataset specially provided for machine learning and generated with the GNU Radio simulation platform. GNU Radio is an open-source software platform that provides various signal modules to build radio communication systems, for example, signal generation modules, modulation and demodulation modules, filter modules, and communication channel modules, which are convenient for simulating various modulated signals. Although the dataset is generated using a simulation platform, the generated signals are very close to real scenarios. Timothy J. O'Shea explained in the literature that real speech signals are used in the process of generating the public dataset, and that the GNU Radio open-source software simulates real channel scenarios involving many parameters, such as center frequency offset, multipath fading, channel noise, and sampling frequency deviation; the real signals captured in the air are passed through random channels, the obtained output data are resampled at random times, and the final output is saved as vectors. 
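For reference, the public RadioML2016.10a archive is commonly distributed as a Python pickle keyed by (modulation, SNR) pairs with arrays of shape (N, 2, 128); the snippet below sketches how it can be loaded. The file name and key layout are assumptions about that public release rather than details taken from this paper.

```python
import pickle
import numpy as np

# Assumed local path; the archive from deepsig.ai unpacks to a pickle file like this.
with open("RML2016.10a_dict.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")      # dict keyed by (modulation, SNR)

X, mods, snrs = [], [], []
for (mod, snr), samples in data.items():          # samples: (N, 2, 128) I/Q arrays
    X.append(samples)
    mods += [mod] * samples.shape[0]
    snrs += [snr] * samples.shape[0]

X = np.vstack(X).astype(np.float32)
print(X.shape, len(set(mods)), sorted(set(snrs))[:3])
```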
Experiment Details We divide the dataset in a 3:1:1 ratio during the algorithm research phase. To avoid bias induced by imbalanced samples, the signal samples of the different modulation schemes are randomly sampled according to this ratio under the various signal-to-noise ratio (SNR) conditions. During the actual experimental process, we found that all models could converge within 23 epochs on the RML2016 dataset. Therefore, we trained each model for 25 epochs and recorded the loss values and accuracy during the training process. We evaluate the model's performance using four metrics: model size, model parameters, Flops, and accuracy. We calculate the model size, model parameters, and Flops using the Thop library. Ablation Studies This section mainly discusses the effectiveness of the proposed methods. Using AlexNet and MobileNet as baselines, we designed three improved experiments. (1) We replaced the separable convolution operations in MobileNet with DCSC to construct the network C-MobileNet for experimentation, thereby validating the effectiveness of our proposed DCSC method. (2) We used MSCRB as the basic block to construct MSCRB-Net, in order to verify the effectiveness of the novel residual architecture. (3) We conducted experiments using CSCNN. Figure 10 shows the recognition accuracy and training loss values of the different network models across the training epochs. Table 2 presents the metrics of each network. Importance of the residual block. MSCRB alleviates the issue of information loss during the propagation of network information, with MSCRB-Net achieving a 4.77% higher recognition accuracy compared to MobileNet. Effect of using DCSC. DCSC enhances the correlation between the real and imaginary parts of complex data, maintaining consistent recognition rates while reducing model size and FLOPs by 20% compared to MobileNet. Superiority of CSCNN. CSCNN exhibits a large reduction in model size and computational complexity, while still achieving a recognition performance 0.46% higher than AlexNet and 4.77% higher than MobileNet. Performance of Our Nets In this section, we experimentally evaluate all the networks we constructed, demonstrating the various networks designed for different hardware specifications. We constructed four different scales of MSCRB-Net, as we did for CSCNN, namely Net-Large, Net-Middle, Net-Small, and Net-Tiny. We performed trials on the networks we constructed, including the eight networks based on MSCRB-Net and CSCNN, along with C-MobileNet. Figure 11 shows the training loss and accuracy of the networks we constructed on the RML2016 dataset. Table 3 presents the performance of the networks we constructed. The smallest network, CSCNN-Tiny, is only 0.760M, which is just 6% of the size of MobileNet. With Flops of 0.815M, it is 3.8% of MobileNet. However, it achieves a recognition accuracy of 50.97%, only 0.97% lower than MobileNet. The largest network we constructed, CSCNN-Large, exhibits a 65% reduction in model size and a 25% reduction in Flops compared to AlexNet, while achieving a 0.46% improvement in accuracy. Each of the four networks we constructed exhibits a reduction in computational complexity and model size by approximately 2-4 times compared to the preceding network, with a decrease of only around 1% in recognition accuracy. This makes them better suited to a variety of hardware devices with different specifications. 
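The model statistics quoted in this section (parameters, Flops, and model size) can be reproduced along the following lines with the Thop library; the tiny stand-in model below is purely illustrative and not one of the CSCNN variants.

```python
import torch
from thop import profile

model = torch.nn.Sequential(           # stand-in model, not one of the CSCNN variants
    torch.nn.Conv1d(2, 32, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(32, 11),
)
dummy = torch.randn(1, 2, 128)         # one RML2016.10a frame: 2 x 128 I/Q samples
flops, params = profile(model, inputs=(dummy,))
size_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 2**20
print(f"Flops: {flops:.0f}, params: {params:.0f}, size: {size_mb:.3f} MiB")
```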
The scatter plot in Figure 12 displays the performance of all experimental networks, further illustrating the superiority of the networks we constructed. Conclusions This paper proposes DCSC based on the separable convolution operation and the complex convolution operation. Furthermore, this paper introduces a novel residual structure, MSCRB, to combine with DCSC in constructing a new lightweight neural network, CSCNN. The network absorbs the advantages of DCN and MobileNet, which can ensure the recognition accuracy while compressing the model size. It is anticipated that signal modulation recognition will continue to be further improved in the future, particularly as deep learning technology advances. Researchers will likely explore additional methods that strike a balance between model accuracy and lightweight implementation. This could involve developing more efficient network architectures, exploring transfer learning techniques, or leveraging advancements in hardware acceleration. As these advances occur, the field of multimedia intelligent information processing is expected to make significant progress in achieving high accuracy while maintaining computational efficiency. Fig. 1 The combined computing architecture of cloud computing and edge computing. Fig. 11 Training accuracy and loss on our Nets. Table 1 Architecture details of CSCNN-Large. Table 2 Model performance. Table 3 Model performance on our Nets.
5,444.6
2024-04-24T00:00:00.000
[ "Computer Science", "Engineering" ]
Adaptive Importance Sampling for Equivariant Group-Convolution Computation † : This paper introduces an adaptive importance sampling scheme for the computation of group-based convolutions, a key step in the implementation of equivariant neural networks. By leveraging information geometry to define the parameters update rule for inferring the optimal sampling distribution, we show promising results for our approach by working with the two-dimensional rotation group SO ( 2 ) and von Mises distributions. Finally, we position our AIS scheme with respect to quantum algorithms for computing Monte Carlo estimations Introduction and Motivations Geometric deep learning [1] is an emerging field receiving more and more traction because of its successful application to a wide range of domains [2][3][4]. In this context, equivariant neural networks (ENN) [5] have been shown to be superior to conventional deep learning approaches from both accuracy and robustness standpoints and appear as a natural alternative to data augmentation techniques [6,7] to achieve geometrical robustness. One key bottleneck for scaling ENN to industrial applications lies with the numerical computation of the associated equivariant operators. More precisely, two main approaches have been used in the literature, namely a Monte Carlo sampling method [2] (which can be made exhaustive for small finite groups) and a generalized Fourier-based method [4,8,9]. However, these approaches suffer from scalability issues as the complexity of the underlying group increases (e.g., handling non-compact groups such as SU (1,1) or large finite groups such as the symmetric group S n is challenging). Even for groups such as SO (2) for which previous works on the use of spherical harmonics can be leveraged on, the efficient computation of a reliable estimate of the convolution remains a challenge (convergence). In this context, the authors of [10] have proposed an efficient method for building adequate kernel functions to be used within steerable neural networks [11] by leveraging on the knowledge of infinitesimal generators of the considered Lie group and on a Krylov approach for solving the linear constraints. We propose in this paper to cover the specific case of group-convolutional neural networks (G-CNN) [2,12], which in particular, rely on the computation of group-based convolution operators. By leveraging on information geometry as proposed in [13] for quantile estimation, we introduce here an adaptive importance sampling (AIS) variance reduction method based on information geometric optimization [14] to improve the convergence of Monte Carlo estimators for the numerical computation of group-based convolution feature maps, as used in several recent works [2,9,15]. We illustrate our approach on the two-dimensional rotation group SO(2) by regularizing with von Mises distributions [16], a set-up for which the Fisher information metric [17] can be computed using closed form formulas. Finally, we shed some light on the benefits of working toward a quantum version of our proposed AIS scheme in order to reach a quadratic speed-up [18]. Improving quantum Monte Carlo integration schemes is indeed a very active topic of research [19], mainly driven by applications within the financial industry [20]. Benchmarking with group-Fourier transform-based approaches, such as [21], which are more theoretically involved but with a promise of an exponential speed-up, will be of particular interest in this context. 
Group Convolution and Expectation We consider in the following a compact group G with corresponding Haar measure µ_G. As µ_G(G) < ∞, we can choose µ_G so that ∫_G dµ_G = 1 by using an adequate normalization. We are interested in evaluating the group-based convolution operator ψ_G defined below for functionals f, k : G → R and g ∈ G:

ψ_G(g) = ∫_G k(h^{-1}g) f(h) dµ_G(h). (1)

Using a probabilistic interpretation of (1), we can write

ψ_G(g) = E[k(H^{-1}g) f(H)],

where H is a G-valued random variable distributed according to µ_G. The convolution can therefore be estimated with a Monte Carlo method by using the following estimator

ψ̂_G^n(g) = (1/n) Σ_{i=1}^n k(h_i^{-1}g) f(h_i), (3)

where h_i ∼ µ_G, and for which the efficiency could be improved through variance reduction techniques [22]. By anchoring in [13], we describe in the following an adaptive importance sampling approach for the computation of (1). Similar ideas were also used in [23] for financial applications. Adaptive Importance Sampling We consider in the following a set Φ_Θ of parametric probability density functions on G, where Θ represents the parameter space. Each density φ_θ ∈ Φ_Θ is assumed to be absolutely continuous with respect to the Haar measure µ_G of the group G, so that the corresponding probability measure can be written as dµ_θ = φ_θ dµ_G and the Radon-Nikodym derivative ω_θ = dµ_G/dµ_θ = 1/φ_θ can be considered. Using the conventional importance sampling approach, we can then write:

ψ_G(g) = E_θ[k(H^{-1}g) ω_θ(H) f(H)], H ∼ µ_θ.

The idea is then to choose a measure µ_{θ*} for which θ* minimizes the variance v_{k,f,g} of the random variable k(H^{-1}g) ω_θ(H) f(H), which can be written as

v_{k,f,g}(θ) = m²_{k,f,g}(θ) − ψ_G(g)²,

where m²_{k,f,g}(θ) = E_θ[(k(H^{-1}g) ω_θ(H) f(H))²]. Monte Carlo Estimator and Convergence We assume that we can construct a sequence of parameters (θ_i)_{i=0}^{n−1}, together with realizations (h_i)_{i=1}^{n} of the random variables (H_i)_{i=1}^{n}, such that H_i ∼ µ_{θ_{i−1}} and θ_n → θ* ∈ Θ as n → ∞. We can then consider the following Monte Carlo estimator:

ψ̂_G^n(g) = (1/n) Σ_{i=1}^n k(h_i^{-1}g) ω_{θ_{i−1}}(h_i) f(h_i). (10)

Under usual integrability conditions, Theorem 3.1 of [13] states that ψ̂_G^n(g) → ψ_G(g) almost surely as n → ∞. Furthermore, we have the following distributional convergence result,

√n (ψ̂_G^n(g) − ψ_G(g)) → N(0, σ²) in distribution,

where N(0, σ²) refers to the Gaussian distribution with 0 mean and variance σ². Natural Gradient Descent We now discuss how to build the sequence of parameters (θ_i)_{i=0}^{n−1} and corresponding realizations (h_i)_{i=1}^{n} as introduced in Section 3.1, reminding ourselves that we have

m²_{k,f,g}(θ) = E_θ[(k(H^{-1}g) ω_θ(H) f(H))²] = ∫_G (k(h^{-1}g) f(h))² ω_θ(h) dµ_G(h).

Assuming that the parameter space Θ ⊆ R^m is a smooth manifold, we can consider the Fisher information metric g on the density space Φ_Θ, which is defined as follows [17]:

g_{ij}(θ) = E_θ[∂_{θ_i} log φ_θ(H) ∂_{θ_j} log φ_θ(H)].

We then propose using a natural gradient descent strategy to minimize the quantity m²_{k,f,g}, namely

θ_{k+1} = θ_k − α_k F_k^{-1} ∇_θ m²_{k,f,g}(θ_k),

where F_k is the Fisher information matrix, i.e., the representation of the Fisher metric as an m × m matrix, and α_k ∈ R*_+. Assuming that the considered functions are smooth enough, it is possible to write:

∇_θ m²_{k,f,g}(θ) = −E_θ[(k(H^{-1}g) f(H))² ω_θ(H)² ∇_θ log φ_θ(H)].

Using a stochastic approximation scheme such as the Robbins-Monro algorithm [24] then leads to consider the following update rule,

θ_i = θ_{i−1} + α_i F_{i−1}^{-1} (k(h_i^{-1}g) f(h_i))² ω_{θ_{i−1}}(h_i)² ∇_θ log φ_{θ_{i−1}}(h_i), h_i ∼ µ_{θ_{i−1}}. (18)

About IGO Algorithms Information geometric optimization (IGO) algorithms are introduced in [14] as a unified framework to solve black-box optimization problems. IGO algorithms can be seen as performing an estimation of a distribution over the considered search space X leading to small values of the target function Q when sampling according to it. More precisely, the idea is to maintain at each iteration t a parametric probability distribution P_{λ_t} on the search space X, for λ_t ∈ Λ ⊆ R^p, and to have the value λ_t evolve over time so as to shift P_{λ_t} toward giving more weight to points x ∈ X associated with a lower value of Q. 
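Before detailing the IGO update rule, it may help to see the scheme of the previous subsections in code. The following NumPy sketch compares the plain estimator (3) with a von Mises importance sampler on SO(2) whose parameters (µ, κ) are adapted by natural-gradient steps in the spirit of (18), using an empirical Fisher matrix; the toy functions f and k, the step size, and the batch size are assumptions for illustration only, and the closed-form von Mises Fisher metric could be used in place of the empirical estimate.

```python
import numpy as np
from scipy.special import i0, i1

rng = np.random.default_rng(0)

# Toy functionals on SO(2) ~ [0, 2*pi); the group operation is addition of angles
# mod 2*pi, so h^{-1} g corresponds to the angle (g - h).
f = lambda a: np.exp(3.0 * np.cos(a - np.pi / 2))   # von Mises shaped feature function
k = lambda a: np.cos(a) ** 2                        # toy kernel function
g = 1.0                                             # evaluation point

def plain_mc(n):
    """Estimator (3): sample from the normalized Haar measure, i.e., uniformly."""
    h = rng.uniform(0.0, 2 * np.pi, size=n)
    return np.mean(k(g - h) * f(h))

def vm_logpdf(a, mu, kappa):                        # log von Mises density w.r.t. d(alpha)
    return kappa * np.cos(a - mu) - np.log(2 * np.pi * i0(kappa))

def vm_score(a, mu, kappa):
    """Gradient of the log von Mises density with respect to (mu, kappa)."""
    A = i1(kappa) / i0(kappa)
    return np.stack([kappa * np.sin(a - mu), np.cos(a - mu) - A], axis=-1)

def haar_over_vm(a, mu, kappa):                     # Radon-Nikodym weight dmu_G / dmu_theta
    return (1.0 / (2 * np.pi)) / np.exp(vm_logpdf(a, mu, kappa))

# Adaptive loop in the spirit of update rule (18): natural-gradient descent on m^2(theta).
mu, kappa, lr, batch = 0.0, 1.0, 0.05, 64
for step in range(300):
    h = rng.vonmises(mu, kappa, size=batch) % (2 * np.pi)
    w = haar_over_vm(h, mu, kappa)
    s = vm_score(h, mu, kappa)
    grad_m2 = -np.mean(((k(g - h) * f(h)) ** 2 * w ** 2)[:, None] * s, axis=0)
    fisher = np.mean(s[:, :, None] * s[:, None, :], axis=0)   # empirical Fisher matrix
    mu, kappa = np.array([mu, kappa]) - lr * np.linalg.solve(fisher, grad_m2)
    kappa = max(kappa, 1e-3)                                  # keep the concentration positive

# Estimator (10)-style importance-sampling estimate with the adapted proposal.
h = rng.vonmises(mu, kappa, size=2000) % (2 * np.pi)
ais_estimate = np.mean(k(g - h) * haar_over_vm(h, mu, kappa) * f(h))
print(plain_mc(2000), ais_estimate)
```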
The IGO algorithms described in [14] first transfer the function Q from X to Λ by using an adaptive quantile-based approach and then apply a natural gradient descent by leveraging the Fisher information metric of the considered statistical model. The scheme described in Definition 5 of [14] defines the following update rule for the parameter λ_t:

λ_{t+δt} = λ_t + δt I^{-1}(λ_t) Σ_{i=1}^N ŵ_i ∇_λ ln P_λ(x_{i:N}) |_{λ=λ_t}, (19)

where I is the Fisher matrix of the model, x_1, ..., x_N are N samples drawn according to P_{λ_t} at step t, x_{i:N} denotes the sample point ranked i-th according to Q (i.e., Q(x_{1:N}) < ... < Q(x_{N:N})) and ŵ_i = ω((i − 1/2)/N)/N, with ω(q) = 1_{q<q_0} a quantile-based selection function of threshold q_0. IGO algorithms could therefore be used in our context by setting Q = m²_{k,f,g}(θ) and X = Θ to infer the optimal value θ* ∈ Θ. Implementing the update rule (19), however, requires a priori a large number of evaluations of the term Q = m²_{k,f,g}(θ) to derive the sorted samples x_{i:N}, making this approach generally not well suited to our context. Application to SO(2)-Convolutions We give here an application of our AIS approach to the computation of SO(2)-convolutions by using von Mises densities [16] for the weighting. This type of computation is in particular relevant when working with SE(2)-ENN by exploiting the semi-direct product structure SE(2) = R² ⋊ SO(2), as performed in [3]. Numerical Experiments To numerically validate our approach, we have considered von Mises type feature functions f_{κ_0,µ_0} : α ↦ e^{κ_0 cos(α − µ_0)} and kernel functions k : [0, 2π] → R modeled as small fully connected neural networks with one hidden layer of 128 neurons with ReLU activation and uniform random weight initialization. To run our tests, we have used κ_0 = 3 and µ_0 = π/2. Figure 1 shows the comparison between the results obtained with the estimator (10) using the adaptive importance sampling scheme and those obtained with the conventional estimator (3). We can in particular see that the adaptive importance sampling scheme converges faster to the theoretical value (here computed by using (3) with n = 50,000 and displayed in black in Figure 1), while providing much narrower confidence intervals (because of lower variance) than the conventional Monte Carlo estimator. Figure 2 shows the evolution of the parameter θ = (µ, κ) as we iterate through the update rule (18), from which we can also observe a fast convergence. Extension to SO(3)-Convolutions Generalizing the above results to cover SO(3)-convolutions is of particular interest when using ENN for processing spherical data such as fish-eye images [4,25]. The Fisher-Bingham distribution [26], also known as the Kent distribution, can be leveraged in this context. More precisely, we have in this case, for x ∈ S² (the 2D-sphere in R³):

f(x) = (1/c(κ, β)) exp( κ γ_1 · x + β [ (γ_2 · x)² − (γ_3 · x)² ] ),

where γ_i for i = 1, 2, 3 are vectors of R³ such that the 3 × 3 matrix Γ = [γ_1, γ_2, γ_3] is orthogonal and c(κ, β) is a normalizing constant. Although we defer the details of the derivation of the corresponding AIS estimator (10) to further work, we illustrate in Figure 3 that SO(3)-convolutions could also benefit from variance reduction methods by using a simple quasi-Monte Carlo scheme [27] with a three-dimensional Sobol sequence [28]. Monte Carlo Methods in the Quantum Set-Up Monte Carlo computations can generally benefit from a quadratic speed-up in a quantum computing set-up [18], and improving quantum Monte Carlo integration schemes is a very active topic of research [19], mainly driven by applications within the financial industry [20,29]. 
A similar speed-up can therefore be expected in our context by estimating (1) with the quantum amplitude estimation (QAE) algorithm [30]. For g ∈ G, we denote by φ^{f,k}_g : G → R the function such that ∀h ∈ G, φ^{f,k}_g(h) = k(h^{-1}g) f(h). We first construct the operator U_{µ_G} to load a discretized version of µ_G so that U_{µ_G}|0⟩ = Σ_{h∈G'} √(p(h)) |h⟩, with p(h) = ∫_{B(h,r)} dµ_G(g), B(x, r) the ball of radius r > 0 centered in x, and G' a finite discretization of G. We then build another unitary operator U_φ to compute and load the values of φ^{f,k}_g taken on G', defined, after rescaling φ^{f,k}_g to take values in [0, 1], by U_φ|h⟩|0⟩ = |h⟩(√(1 − φ̃(h))|0⟩ + √(φ̃(h))|1⟩), where φ̃ denotes the rescaled integrand. Using the QAE algorithm on U_φ U_{µ_G} gives us access to an estimate of (1) after proper rescaling, with a precision of δ in O(1/δ) calls to U_φ U_{µ_G}. As described in Section 3.1, the AIS estimator (10) leads to a precision of δ for O(1/δ²) samples, which is asymptotically less efficient than the above quantum estimator. However, no quantum advantage has been evidenced on current hardware for general Monte Carlo estimations, and further challenges with respect to the precision of the evaluation of the integrand φ^{f,k}_g are expected in our specific context. Continuing to optimize the estimators in the classical set-up while keeping track of the progress made on the development of quantum hardware therefore appears a reasonable path to follow. Conclusions and Further Work By leveraging the approach proposed in [13] for quantile estimation, we have introduced in this paper an AIS variance reduction method for the computation of group-based convolution operators, a key component of equivariant neural networks. We have in particular used information geometry concepts to define an efficient update rule to infer the optimal sampling parametric distribution and have also shown promising results when working with the two-dimensional rotation group SO(2) and von Mises distributions. Further work will include the study of non-compact groups such as SU(1, 1) so as to improve the efficiency of the computations underlying the ENN introduced in [9]. As shown in [31], Souriau Thermodynamics can be used to build Gaussian distributions over SU(1, 1), which appear as natural candidates for applying the AIS scheme presented in this paper. We have also seen that Monte Carlo computations can generally benefit from a quadratic speed-up in a quantum computing set-up. Further work will include the study of using AIS in this context so as to provide a generic and efficient quantum algorithm for group-convolution computation. Benchmarking with group-Fourier transform-based approaches such as [21], which are more theoretically involved but with a promise of exponential speed-up, will also be of high interest, as will results coming from the emerging field of quantum geometric deep learning [32,33]. Author Contributions: All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript. Funding: This paper is the result of research work conducted by the authors at Thales Group. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
3,304
2022-12-05T00:00:00.000
[ "Computer Science", "Mathematics" ]
Effect of Preparation and Annealing Temperature on the Properties of (Hg,Tl)-2223 Superconductor Samples of the superconducting compound were prepared by a solid-state reaction technique in a sealed quartz tube under normal pressure. The impact of the preparation and annealing conditions on the electrical properties has been studied using electrical resistance measurements of the samples as a function of temperature. The obtained results show an enhancement of the phase formation, and the superconducting transition temperature Tc was improved. This may be due to the decrease of magnetic impurities or the delocalisation of carriers, which results in an enhancement of the density of mobile carriers in the conducting CuO2 planes. INTRODUCTION Most studies on high-temperature superconductivity (HTS) have concentrated on reaching the highest superconducting transition temperature, Tc. The homologous series HgBa2Can-1CunO2n+2+δ [Hg-12(n-1)n] consists of two types of layers, i.e., charge reservoir layers (CRL) HgBa2Ox and infinite layers (IL) Can-1CunO2n, where n is the number of CuO2 planes that exist between two CRLs and supply holes to the above-mentioned CuO2 planes. 1 The concentration of charge carriers in the CuO2 planes plays an important role in high-Tc superconductors. The Tc of the HgBa2Can-1CunO2n+2+δ phases strongly depends on two parameters: the oxygen content (δ) and the number (n) of CuO2 planes in their structures. 2,3 The dependence of Tc on the number of CuO2 planes (n) is an interesting problem that may bring important information for understanding high-Tc superconductivity. Figure 1 shows the dependence of Tc versus n. In this family of materials, the transition temperature sequentially increases with increasing number n of CuO2 planes up to n = 3, and then it is observed to decrease with further increase of n. Each phase has a different transition temperature to the superconducting state: for n = 1 (1201), Tc = 97 K; n = 2 (1212), Tc = 127 K; n = 3 (1223), Tc = 135 K; n = 4 (1234), Tc = 126 K; n = 5 (1245), Tc = 110 K; and n = 6 (1256), Tc = 107 K. Previous studies on high-Tc superconductors have shown that chemical doping or substitution, preparation conditions and hole concentrations play very important roles in high-Tc and conventional superconductors. 4,5 To improve the critical transition temperature of the (Hg,Tl)-2223 compound, samples were prepared under different annealing conditions at different temperatures. It is very important to investigate the impact of varying annealing conditions on the oxygen content of the (Hg,Tl)-2223 samples. Decreasing the oxygen content in the sample leads to an increase in Tc. However, if the oxygen content increases, this allows other phases to form inside the sample due to the increase of the pressure inside the quartz tube. 6 Thus, it is useful to observe the variation of both the superconducting properties and the normal-state properties of the materials as a function of the preparation, annealing temperature and annealing time in order to better understand superconductivity. Therefore, the aim of the current study is to synthesise and characterise the (Hg,Tl)-2223 high-temperature superconductor and to investigate the effects of the preparation, annealing temperature and annealing time on the properties of these superconducting samples. 
The results of the present study may provide useful information to further studies of the properties of (Hg,Tl)-2223 superconductors and the optimisation of the annealing processes for (Hg,Tl)-2223 superconductors. EXPERIMENTAL Samples with the nominal composition of (Hg 0.1 ,Tl 0.9 ) 2 Ba 2 Ca 2 Cu 3 O 8+δ were prepared by the standard solid-state reaction method in only one step. High purity (99.95%) chemicals of HgO, Tl 2 O 3 , BaO 2 , CaO and CuO were used as starting materials. These oxides were mixed using an agate mortar to make fine powder which was sieved in 64 μm sieve to obtain a homogeneous mixture. The powder was pressed in discs (1.5 cm in diameter and about 0.3 cm in thickness). Then, these discs were wrapped in a silver foil with 0.1 mm in thickness, which were put in sealed quartz tubes with a diameter of 1.5 cm and length of 15 cm. Next, the sealed tubes were put in closed stainless steel tubes, and the stainless steel tubes were placed horizontally in a furnace. Next, they were heated at a rate of 4°C min -1 to (700°C/811°C). The samples have been maintained at this temperature for 6 h, and then they were cooled to room temperature at a rate of 0.5°C min -1 . (Hg,Tl)-2223 superconducting samples were annealed in normal atmosphere at 500°C for different time. A closed cryogenic refrigeration system was used to perform the measurements of DC resistance for all samples with the four-probe method. The four contacts on the samples were made by a conductive silver paint. During resistance measurements a constant current of 2 mA has run through the sample, which was provided from a Keithely 2400 current source to avoid heating effects on the samples. Figure 2 shows behaviour of normalised resistance R (T)/R (300K) versus temperature for (Hg,Tl)-2223 samples before and after annealing. It is clear that the annealed sample has a tail and its resistance zero at a temperature of 79 K. This may be due to missing oxygen content. Figure 1 shows that the annealing has improved the phase transition and the semiconducting-like behaviour in the normal state was absent for the sample S 1 . The onset temperature (T c onset ) was increased from 111 K to 117 K. A value of zero electrical resistance (T c offset ) was 63 K before annealing, and became as 79 K after annealing, being higher by the annealing process. Second is possibility to reduce the amount of extra oxygen from the sample to decrease the undesired phases (magnetic, non-superconductor). It was previously reported that T c of T1-2223 samples synthesised under ambient pressure was substantially enhanced by the annealing in an evacuated tube. 7 Figure 3 shows the results of resistance measurements for the as-sintered samples sintered at 700°C (sample S 1 ) and 811°C (sample S 2 ) for 6 h. Clearly, it is noticeable that the sample S 1 has two phases, but with the sample S 2 , it is almost single phase. This means that the second phase has disappeared when the sample sintered at 811°C. It is important to mention that the S 2 sample was improved in connectivity and the improvement is related to the uniform distribution and alignment of superconducting grains. Also, as shown in Figure 3, the resistance decreases with temperature from 300 K like a metal for the S 2 sample, whereas the S 1 sample does not. The results of R (T) show that the sample has semiconducting-like behaviour in the normal state. This may be due to the reduced oxygen concentration in the bulk that may act as effective channelling centres of oxygen vacancies. 
8,9 The values of T c onset and T c offset determined from the electrical resistance behaviour are 111 K and 63 K, respectively, for the S 1 sample, while T c onset and T c offset for the S 2 sample are 107 K and 93 K, respectively. The transition widths ΔTc = T c onset − T c offset, determined from the difference between the onset temperature and the zero-resistance temperature, are listed for the (Hg,Tl)-2223 samples in Table 1. The annealed samples have a linear metallic behaviour in the normal state. By decreasing the annealing time, the metallicity increases. The more metallic behaviour may be attributed to better grain connection or to optimal carrier doping in the CuO 2 conducting planes of the sample under high oxygen pressure. 10 T c onset, determined from the electrical resistance behaviour as the temperature at which the resistance first drops, is equal to 107 K for the S 2 sample. When the sample was annealed at 500°C for 4 h (sample S 21 ), T c increased by about 7 K to 114 K. Further annealing at 500°C for 2 h (sample S 22 ) enhanced T c up to 122 K. It is clear that the transition width decreased with decreasing annealing time, and the annealing improved the coupling characteristics between superconducting grains. The different values for all samples determined from the resistance measurements are listed in Table 1. Annealing the samples in normal conditions leads to a decrease of resistance and to an increase of T c, in agreement with the results of other studies. [11][12][13][14][15][16][17][18][19] The normal-state resistance (R 300 ) and residual resistance (R 0 ) as a function of annealing time are plotted in Figure 4 for the (Hg,Tl)-2223 sample before and after annealing. Here, R 300 is the resistance at 300 K and R 0 is obtained from fitting the resistance data in the temperature range 2T c ≤ T ≤ 300 K, according to Matthiessen's rule. 20 It can be observed from Figure 5 that the sample S 2 has the highest room-temperature resistance R 300 (0.0712 Ω) while the sample S 22 has the lowest R 300 (0.0323 Ω). Post-annealing at 500°C of the sample sintered at 811°C (S 22 ) lowers the value of the resistance at 300 K. One of the possible contributions to the room-temperature resistance is from the defects present in the samples. The defects may be due to the presence of impurities and weak links between the superconducting grains. 21 During annealing, the oxygen content and thus the density of mobile carriers of the superconducting phase could increase, giving rise to a decrease in resistance. 22 After annealing in normal conditions, the S 22 sample had a critical temperature T c onset = 122 K with a transition width ΔT c = 10 K. This may be due to the decrease of magnetic impurities or the delocalisation of carriers, which results in an increase of the hole concentration in the conducting CuO 2 planes. CONCLUSION The current study has investigated the influence of preparation, annealing temperature and annealing time on the properties of the (Hg,Tl)-2223 superconductor (T c onset and T c offset ). The results of the R(T) measurements have shown that the optimised treatment is sintering at 811°C followed by annealing for 2 h. The highest T c of the S 22 sample in this study was T c onset = 122 K with T c offset = 112 K. The increased pressure inside the quartz tube allows the formation of other phases inside the sample when the samples are annealed in air (normal conditions). The inter-grain superconductivity in (Hg,Tl)-2223 samples may be significantly affected by the annealing time and the annealing temperature.
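As a worked illustration of how T c onset, T c offset and ΔTc are read off an R(T) curve, and how R 0 follows from a normal-state fit, a minimal sketch is given below. This is not the authors' analysis code; the 10% onset criterion, the 1% zero-resistance criterion and the high-temperature fit window are assumptions standing in for whatever criteria were actually applied.

```python
# Illustrative post-processing sketch: transition temperatures from R(T) data.
import numpy as np

def transition_temperatures(T, R, onset_drop=0.10, zero_level=0.01):
    """Return (Tc_onset, Tc_offset, delta_Tc) from a resistance-vs-temperature curve.
    Criteria (assumptions): offset = highest T with R below 1% of R(300 K);
    onset = highest T where R has dropped 10% below the normal-state trend."""
    T, R = np.asarray(T, dtype=float), np.asarray(R, dtype=float)
    R300 = R[np.argmin(np.abs(T - 300.0))]
    hi = T > 150.0                               # assumed normal-state fit window
    a, R0 = np.polyfit(T[hi], R[hi], 1)          # R_n(T) = R0 + a*T (Matthiessen-like)
    Rn = R0 + a * T
    tc_offset = T[R < zero_level * R300].max()   # zero-resistance temperature
    tc_onset = T[R < (1.0 - onset_drop) * Rn].max()
    return tc_onset, tc_offset, tc_onset - tc_offset
```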
Understanding HPC Benchmark Performance on Intel Broadwell and Cascade Lake Processors Hardware platforms in high performance computing are constantly getting more complex to handle even when considering multicore CPUs alone. Numerous features and configuration options in the hardware and the software environment that are relevant for performance are not even known to most application users or developers. Microbenchmarks, i.e., simple codes that fathom a particular aspect of the hardware, can help to shed light on such issues, but only if they are well understood and if the results can be reconciled with known facts or performance models. The insight gained from microbenchmarks may then be applied to real applications for performance analysis or optimization. In this paper we investigate two modern Intel x86 server CPU architectures in depth: Broadwell EP and Cascade Lake SP. We highlight relevant hardware configuration settings that can have a decisive impact on code performance and show how to properly measure on-chip and off-chip data transfer bandwidths. The new victim L3 cache of Cascade Lake and its advanced replacement policy receive due attention. Finally we use DGEMM, sparse matrix-vector multiplication, and the HPCG benchmark to make a connection to relevant application scenarios. Introduction Over the past few years the field of high performance computing (HPC) has received attention from different vendors, which led to a steep rise in the number of chip architectures. All of these chips have different performance-power-price points, and thus different performance characteristics. This trend is believed to continue in the future with more vendors such as Marvell, Huawei, and Arm entering HPC and related fields with new designs. Benchmarking the architectures to understand their characteristics is pivotal for informed decision making and targeted code optimization. However, with hardware becoming more diverse, proper benchmarking is challenging and error-prone due to wide variety of available but often badly documented tuning knobs and settings. In this paper we explore two modern Intel server processors, Cascade Lake SP and Broadwell EP, using carefully developed micro-architectural benchmarks, then show how these simple microbenchmark codes become relevant in application scenarios. During the process we demonstrate the different aspects of proper benchmarking like the importance of appropriate tools, the danger of black-box benchmark code, and the influence of different hardware and system settings. We also show how simple performance models can help to draw correct conclusions from the data. Our microbenchmarking results highlight the changes from the Broadwell to the Cascade Lake architecture and their impact on the performance of HPC applications. Probably the biggest modification in this respect was the introduction of a new L3 cache design. This paper makes the following relevant contributions: -We show how proper microarchitectural benchmarking can be used to reveal the cache performance characteristics of modern Intel processors. We compare the performance features of two recent Intel processor generations and resolve inconsistencies in published data. -We analyze the performance impact of the change in the L3 cache design from Broadwell EP to Skylake/Cascade Lake SP and investigate potential implications for HPC applications (effective L3 size, scalability). 
-For DGEMM we show the impact of varying core and Uncore clock speed, problem size, and sub-NUMA clustering on Cascade Lake SP. -For a series of sparse matrix-vector multiplications we show the consequence of the nonscalable L3 cache and the benefit of the enhanced effective L3 size on Cascade Lake SP. -To understand the performance characteristics of the HPCG benchmark, we construct and validate the roofline model for all its components and the full solver for the first time. Using the model we identify an MPI desynchronization mechanism in the implementation that causes erratic performance of one solver component. This paper is organized as follows. After describing the benchmark systems setup in Sect. 2, microarchitectural analysis using microbenchmarks (e.g., load and copy kernels and STREAM) is performed in Sect. 3 to 5. In Sect. 6 we then revisit the findings and see how they affect code from realistic applications. Section 7 concludes the paper. Related Work. There is a vast body of research on benchmarking of HPC systems. The following papers present and analyze microbenchmark and application performance data in order to fathom the capabilities of the hardware. Molka et al. [17] used their BenchIT microbenchmarking framework to thoroughly analyze latency and bandwidth across the full memory hierarchy of Intel Sandy Bridge and AMD Bulldozer processors, but no application analysis or performance modeling was done. Hofmann et al. [9,11] presented microbenchmark results for several Intel server CPUs. We extend their methodology towards Cascade Lake SP and also focus on application-near scenarios. Saini et al. [20,21] compared a range of Intel server processors using diverse microbenchmarks, proxy apps, and application codes. They did not, however, provide a thorough interpretation of the data in terms of the hardware architectures. McIntosh-Smith et al. [15] compared the Marvell ThunderX2 CPU with Intel Broadwell and Skylake using STREAM, proxy apps, and full applications, but without mapping architectural features to microbenchmark experiments. Recently, Hammond et al. [6,7] performed a benchmark analysis of the Intel Skylake and Marvell ThunderX2 CPUs, presenting results partly in contradiction to known hardware features: Cache bandwidths obtained with standard benchmark tools were too low compared to theoretical limits, the observed memory bandwidth with vectorized vs. scalar STREAM was not interpreted correctly, and matrix-matrixmultiplication performance showed erratic behavior. A deeper investigation of these issues formed the seed for the present paper. Finally, Marjanović et al. [13] attempted a performance model for the HPCG benchmark; we refine and extend their node-level model and validate it with hardware counter data. Testbed and Environment All experiments were carried out on one socket each of Intel's Broadwell-EP (BDW) and Cascade Lake-SP (CLX) CPUs. These represent previous-and current-generation models in the Intel line of architectures, which encompass more than 85% of the November 2019 top500 list. Table 1 summarizes key specifications of the testbed. Measurements conducted on a Skylake-SP Gold-6148 (SKX) machine are not presented as the results were identical to CLX (successor) in all the cases. The Broadwell-EP architecture has a three-level inclusive cache hierarchy. The L1 and L2 caches are private to each core and the L3 is shared. BDW supports the AVX2 instruction set, which is capable of 256-bit wide SIMD. 
The Cascade Lake-SP architecture has a shared non-inclusive victim L3 cache. The particular model in our testbed supports the AVX-512 instruction set and has 512-bit wide SIMD. Both chips support the "Cluster on Die [CoD]" (BDW) or "Sub-NUMA Clustering [SNC]" (CLX) feature, by which the chip can be logically split in two ccNUMA domains. Unless otherwise specified, hardware prefetchers were enabled. For all microbenchmarks the clock frequency was set to the guaranteed attainable frequency of the processor when all the cores are active, i.e., 1.6 GHz for CLX and 2.0 GHz for BDW. For real application runs, Turbo mode was activated. The Uncore clock speed was always set to the maximum possible frequency of 2.4 GHz on CLX and 2.8 GHz on BDW. Both systems ran Ubuntu version 18.04.3 (Kernel 4.15.0). The Intel compiler version 19.0 update 2 with the highest optimization flag (-O3) was used throughout. Unless otherwise stated, we added architecture-specific flags -xAVX (-xCORE-AVX512 -qopt-zmm-usage=high) for BDW (CLX). For experiments that use MKL and MPI libraries we used the version that comes bundled with the Intel compiler. The LIKWID tool suite in version 4.3 was used for performance counter measurements and benchmarking (likwid-perfctr and likwid-bench). Note that likwid-bench generates assembly kernels automatically, providing full control over the executed code. Influence of Machine and Environment Settings. The machine and environment settings are a commonly neglected aspect of benchmarking. Since they can have a decisive impact on performance, all available settings must be documented. Figure 1(a) shows the influence of different operating system (OS) settings on a serial load-only benchmark running at 1.6 GHz on CLX for different data-set sizes in L3 and memory. With the default OS setting (NUMA balancing on and transparent huge pages (THP) set to "madvise"), we can see a 2× hit in performance for big data sets. The influence of these settings can be seen for multi-core runs (see Fig. 1(a) right) where a difference of 12% is observed between the best and default setting on a full socket. This behavior also strongly depends on the OS version. We observed it with Ubuntu 18.04.3 (see Table 1). Consequently, we use the setting that gives highest performance, i.e., NUMA balancing off and THP set to "always," for all subsequent experiments. Modern systems have an increasing number of knobs to tune on system startup. Figure 1(b) shows the consequences of the sub-NUMA clustering (SNC) feature on CLX for the load-only benchmark. With SNC active the single core has local access to only one sub-NUMA domain causing the shared L3 size to be halved. For accesses from main memory, disabling SNC slightly reduces the single core performance by 4% as seen in the inset of Fig. 1 Single-Core Bandwidth Analysis Single-core bandwidth analysis is critical to understand the machine characteristics and capability for a wide range of applications, but it requires great care especially when measuring cache bandwidths since any extra cycle will directly change the result. To show this we choose the popular bandwidth measurement tool lmbench [16]. Figure 2 shows the load-only (full-read or frd) bandwidth obtained by lmbench as a function of data set size on CLX at 1.6 GHz. Ten runs per size are presented in a box-and-whisker plot. Theoretically, one core is capable of two AVX-512 loads per cycle for an L1 bandwidth of 128 byte/cy (204.8 Gbyte/s @ 1.6 GHz). 
However, with the compiler option -O2 (default setting) it deviates by a huge factor of eight (25.5 Gbyte/s) from the theoretical limit. The characteristic strong performance gap between L1 and L2 is also missing. Therefore, we tested different compiler flags and compilers to see the effect (see Fig. 2) and observed a large span of performance values. Oddly, increasing the level of optimization (-O2 vs. -O3) dramatically decreases the performance. The highest bandwidth was attained for -O2 with the architecture-specific flags mentioned in Sect. 2. (Figure 2 caption: load-only bandwidth as a function of data set size on CLX, comparing lmbench with likwid-bench, which achieves 88% of the theoretical L1 bandwidth limit of 128 byte/cy; the extreme sensitivity of lmbench results to compilers and compiler flags is also shown; "zmm-flag*" refers to the compiler flag -qopt-zmm-usage=high.) A deeper investigation reveals that this problem is due to compiler inefficiency and the nature of the benchmark. The frd benchmark performs a sum reduction on an integer array; in the source code, the inner loop is manually unrolled 128 times. With -O2 optimization, the compiler performs exactly 128 ADD operations using eight AVX-512 integer ADD instructions (vpaddd) on eight independent registers. After the loop, a reduction is carried out among these eight registers to accumulate the scalar result. However, with -O3 the compiler performs an additional 16-way unrolling on top of the 128-way manual unrolling and generates sub-optimal code with a long dependency chain and additional instructions (blends, permutations) inside the inner loop, degrading the performance. The run-to-run variability of the highest-performing lmbench variant is also high in the default setting (cyan line). This is due to an inadequate number of warm-up runs and repetitions in the default benchmark setting; increasing the default values (to ten warm-up runs and 100 repetitions) yields stable measurements (blue line). We are forced to conclude that the frd benchmark does not allow any profound conclusions about the machine characteristics without a deeper investigation. Thus, lmbench results for frd (e.g., [6,7,20,21]) should be interpreted with due care. However, employing proper tools one can attain bandwidths close to the limits. This is demonstrated by the AVX-512 load-only bandwidth results obtained using likwid-bench [24]. As seen in Fig. 2, with likwid-bench we get 88% of the theoretical limit in L1, the expected drops at the respective cache sizes, and much smaller run-to-run variation. (Figure 3 caption: single-core bandwidth measurements in all memory hierarchy levels for load-only and copy benchmarks with likwid-bench; the bandwidth is given in byte/cy, a frequency-agnostic unit for the L1 and L2 caches; for main memory, the bandwidth in Gbyte/s at the base AVX-512/AVX clock frequency of 1.6 GHz/2 GHz for CLX/BDW is also indicated; different SIMD widths are shown for CLX in L1; horizontal lines denote theoretical upper bandwidth limits.) Figure 3 shows application bandwidths from different memory hierarchy levels of BDW and CLX (load-only and copy kernels). The core clock frequency was fixed at 1.6 and 2 GHz for CLX and BDW, respectively, with SNC/CoD switched on. The bandwidth is shown in byte/cy, which makes it independent of the core clock speed for the L1 and L2 caches. Conversion to Gbyte/s is done by multiplying the byte/cy value with the clock frequency in GHz.
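For reference, the unit conversion and the theoretical L1 limit quoted above can be reproduced with a few lines. This is a back-of-the-envelope sketch rather than anything from the benchmark suite; the 88% efficiency figure is simply taken from the text.

```python
# Bandwidth arithmetic for the single-core measurements discussed above.
def gbyte_per_s(byte_per_cycle, freq_ghz):
    """Convert a cache bandwidth in byte/cy to Gbyte/s at a given core clock."""
    return byte_per_cycle * freq_ghz   # 1 byte/cy at 1 GHz = 1e9 byte/s = 1 Gbyte/s

l1_theoretical = 2 * 64                    # two AVX-512 (64-byte) loads per cycle = 128 byte/cy
print(gbyte_per_s(l1_theoretical, 1.6))    # 204.8 Gbyte/s at 1.6 GHz on CLX
print(0.88 * l1_theoretical)               # ~113 byte/cy, the likwid-bench result quoted above
```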
The effect of single-core L1 bandwidth for scalar and different SIMD width is also shown in Fig. 3(a) for CLX. It can be seen that the bandwidth reduces by 2× as expected when the SIMD width is halved each time. Intel's New Shared L3 Victim Cache From BDW to CLX there are no major observable changes to the behavior of L1 and L2 caches, except that the L2 cache size has been significantly extended in CLX. However, starting from Skylake (SKX) the L3 cache has been redesigned. In the following we study the effects of this newly designed non-inclusive victim L3 cache. L3 Cache Replacement Policy A significant change with respect to the L3 cache concerns its replacement policy. Since SNB, which used a pseudo-LRU replacement strategy [1], new Intel microarchitectures have implemented dynamic replacement policies [8] which continuously improved the cache hit rate for streaming workloads from generation to generation. Instead of applying the same pseudo-LRU policy to all workloads, post-SNB processors make use of a small amount of dedicated leader sets, each of which implements a different replacement policy. During execution, the processor constantly monitors which of the leader sets delivers the highest hit rate, and instructs all remaining sets (also called follower sets) to use the best-performing leader set's replacement strategy [19]. Experimental analysis suggests that the replacement policy selected by the processor for streaming access patterns involves placing new cache lines only in one of the ways of each cache set; the same strategy is used when prefetching data using the prefetchnta instruction (cf. Section 7.6.2.1 in [1]). Consequently, data in the remaining ten ways of the sets will not be preempted and can later be reused. Figure 4(a) demonstrates the benefit of this replacement policy by comparing it to previous generations' L3 caches. The figure shows the L3-cache hit rate 3 for different data-set sizes on different processors for a load-only data access pattern. To put the focus on the impact of the replacement policies on the cache hit rate, hardware prefetchers were disabled during these measurements. Moreover, data-set sizes are normalized to compensate the processors' different L3-cache capacities. The data indicates that older generations' L3 caches offer no data reuse for data set sizes of two times the cache capacity, whereas CLX's L3 delivers hit rates of 20% even for data sets almost four times its capacity. Reuse can by detected even for data sizes more than ten times the L3 cache size on CLX. The fact that this improvement can also be observed in practice is demonstrated in Fig. 4(b), which shows measured bandwidth for the same load-only 0 2 4 6 8 10 12 14 16 18 data-access pattern on CLX. For this measurement, all hardware prefetchers were enabled. The data indicates that the L3-cache hit-rate improvements directly translate into higher-than-memory bandwidths for data sets well exceeding the L3 cache's capacity. L3 Scalability Starting from Intel's Sandy Bridge architecture (created in 2011) the shared L3 cache of all the Intel architectures up to Broadwell is known to scale very well with the number of cores [11]. However, with SKX onwards the L3 cache architecture has changed from the usual ring bus architecture to a mesh architecture. Therefore in this section we test the scalability of this new L3 cache. In order to test the L3 scalability we use again the likwid-bench tool and run the benchmark with increasing number of cores. 
The data-set size was carefully chosen to be 2 MB per core to ensure that the size is sufficiently bigger than the L2 cache however small enough such that no significant data traffic is incurred from the main memory. The application bandwidths of the three basic kernels load-only, copy and update are shown in Fig. 5 for CLX and BDW. As the update kernel has equal number of loads and stores it shows the maximum attainable performance on both architectures. Note that also within cache hierarchies write-allocate transfers occur leading to lower copy application bandwidth. The striking difference between CLX and BDW for load-only bandwidth can finally be explained by the bi-directional L2-L3 link on CLX which only has half the load-only bandwidth of BDW (see Table 1). In terms of scalability we find that the BDW scales almost linearly and attains an efficiency within 90%, proving that the BDW has an almost perfectly scalable L3 cache. However, with CLX this behavior has changed drastically and the L3 cache saturates at higher core counts both with and without SNC enabled, yielding an efficiency of about 70%. Consequently, for applications that employ L3 cache blocking it might be worthwhile to consider L2 blocking instead on SKX and CLX. Applications that use the shared property of L3 cache like some of the temporal blocking schemes [12,25] might exhibit a similar saturation effect as in Fig. 5. The effect of SNC/COD mode is also shown in Fig. 5, with dotted lines corresponding to SNC off mode and solid to SNC on mode. For CLX with SNC off mode the bandwidth attained at half of the socket (ten threads) is higher than SNC on mode. This is due to the availability of 2× more L3 tiles and controllers with SNC off mode. Multi-core Scaling with STREAM The STREAM benchmark [14] measures the achievable memory bandwidth of a processor. Although the code comprises four different loops, their performance is generally similar and usually only the triad (A(:)=B(:)+s*C(:)) is reported. The benchmark output is a bandwidth number in Mbyte/s, assuming 24 byte of data traffic per iteration. The rules state that the working set size should be at least four times the LLC size of the CPU. In the light of the new LLC replacement policies (see Sect. 4.1), this appears too small and we chose a 2 GB working set for our experiments. Since the target array A causes write misses, the assumption of the benchmark about the code balance is wrong if write-back caches are used and write-allocate transfers cannot be avoided. X86 processors feature nontemporal store instructions (also known as streaming stores), which bypass the normal cache hierarchy and store into separate write-combine buffers. If a full cache line is to be written, the write-allocate transfer can thus be avoided. Nontemporal stores are only available in SIMD variants on Intel processors, so if the compiler chooses not to use them (or is forced to by a directive or a command line option), write-allocates will occur and the memory bandwidth available to the application is reduced. This is why vectorization appears to be linked with better STREAM bandwidth, while it is actually the nontemporal store that cannot be applied for scalar code. Note that a careful investigation of the impact of write-allocate policies is also required on other modern processors such as AMD-or ARM-based systems. 4 Figure 6 shows the bandwidth reported by the STREAM triad benchmark on BDW and CLX with (a,b) and without (c) CoD/SNC enabled. 
There are three data sets in each graph: full vectorization with the widest supported SIMD instruction set and standard stores (ST), scalar code, and full vectorization with nontemporal stores (NT). (Figure 6 caption: "NT" denotes the use of nontemporal stores (enforced by -qopt-streaming-stores always), with "ST" the compiler was instructed to avoid them (via -qopt-streaming-stores never), and the "scalar" variant used non-SIMD code (via -no-vec); the working set was 2 GB; core/Uncore clock speeds were set to 1.6 GHz/2.4 GHz on CLX and 2.0 GHz/2.8 GHz on BDW to make sure that no automatic clock speed reduction can occur; the "scattered" graphs start at two cores.) Note that the scalar and "ST" variants have very similar bandwidth, which is not surprising since they both cause write-allocate transfers for an overall code balance of 32 byte/it. The reported saturated bandwidth of the "NT" variant is higher because the memory interface delivers roughly the same bandwidth but the code balance is only 24 byte/it. This means that the actual bandwidth is the same as the reported bandwidth; with standard stores, it is a factor of 4/3 higher. In the case of BDW, the NT store variant thus achieves about the same memory bandwidth as the ST and scalar versions, while on CLX there is a small penalty. Note that earlier Intel processors like Ivy Bridge and Sandy Bridge also cannot attain the same memory bandwidth with NT stores as without. The difference is small enough, however, to still warrant the use of NT stores in performance optimization whenever the store stream(s) require a significant amount of bandwidth. The peculiar shape of the scaling curve with CoD or SNC enabled and "compact" pinning (filling the physical cores of the socket from left to right, see Fig. 6(a)) is a consequence of the static loop schedule employed by the OpenMP runtime. If only part of the second ccNUMA domain is utilized (i.e., between 10 and 17 cores on BDW and between 11 and 19 cores on CLX), all active cores will have the same workload, but the cores on the first, fully occupied domain have less bandwidth available per core. Due to the implicit barrier at the end of the parallel region, these "slow" cores take longer to do their work than the cores on the other domain. Hence, over the whole runtime of the loop, i.e., including the waiting time at the barrier, each core on the second domain runs at the average performance of a core on the first domain, leading to linear scaling. A "scattered" pinning strategy as shown in Fig. 6(b) has only one saturation curve, of course. Note that the available saturated memory bandwidth is independent of the CoD/SNC setting for both CPUs. Implications for Real-World Applications In the previous sections we discussed the microbenchmark analysis of the two Intel architectures. In the following we demonstrate how these results are reflected in real applications by investigating important kernels such as DGEMM, sparse matrix-power-vector multiplication, and HPCG. In accordance with settings used in production-level HPC runs, we use Turbo mode and switch off SNC unless specified otherwise. Statistical variations over ten runs are shown whenever the fluctuations are bigger than 5%. DGEMM - Double-Precision General Matrix-Matrix Multiplication If implemented correctly, DGEMM is compute-bound on Intel processors.
Each CLX core is capable of executing 32 floating-point operations (flops) per cycle (8 DP numbers per AVX-512 register, 16 flops per fused multiply-add (FMA) instruction, 32 flops using both AVX-512 FMA units). Running DGEMM on all twenty cores, the processor specimen from the testbed managed to sustain a frequency of 2.09 GHz. The upper limit to DGEMM performance is thus 1337.6 Gflop/s. Figure 7(a) compares measured full-chip performance of Intel MKL's DGEMM implementation on CLX in Turbo mode (black line) to theoretical peak performance (dashed red line). The data indicates that small values of N are not suited to produce meaningful results. In addition to resulting in suboptimal performance, values of N below 10,000 lead to significant variance in measurements, as demonstrated for N = 4096 using a box-plot representation (and reproducing the results from [7]). Figure 7(b) shows measured DGEMM performance with respect to the number of active cores. When the frequency is fixed (in this case at 1.6 GHz, which is the frequency the processor guarantees to attain when running AVX-512 enabled code on all its cores), DGEMM performance scales all but perfectly with the number of active cores (black line). Consequently, the change of slope in Turbo mode stems solely from a reduction in frequency when increasing the number of active cores. Moreover, the data shows that SNC mode is slightly detrimental to performance (blue vs. green line). Similar performance behavior can be observed on Haswell-based processors, which have been studied in [10]. However, on Haswell a sensitivity of DGEMM performance to the Uncore frequency could be observed [11]: When running cores in Turbo mode, increasing the Uncore frequency resulted in a decrease of the share of the processor's TDP available to the cores, which caused them to lower their frequency. On CLX this is no longer the case. Running DGEMM on all cores in Turbo mode results in a clock frequency of 2.09 GHz independent of the Uncore clock. Analysis using hardware events suggests that the Uncore clock is subordinated to the core clock: Using the appropriate MSR (0x620), the Uncore clock can only be increased up to 2.4 GHz. There are, however, no negative consequences of this limitation. Traffic analysis in the memory hierarchy indicates that DGEMM is blocked for the L2 cache, so the Uncore clock (which influences L3 and memory bandwidth) plays no significant role for DGEMM. SpMPV -Sparse Matrix-Power-Vector Multiplication The SpMPV benchmark (see Algorithm 1) computes y = A p x, where A is a sparse matrix, as a sequence of sparse matrix-vector products. The SpMPV kernel is used in a wide range of numerical algorithms like Chebyshev filter diagonalization for eigenvalue solvers [18], stochastic matrix-function estimators used in big data applications [22], and numerical time propagation [23]. The sparse matrix is stored in the compressed row storage (CRS) format using double precision, and we choose p = 4 in our experiments. For the basic sparse matrix vector (SpMV) kernel we use the implementation in Intel MKL 19.0.2. The benchmark is repeated multiple times to ensure that it runs for at least one second, so we report the average performance over many runs. We selected five matrices from the publicly available SuiteSparse Matrix Collection [5]. The choice of matrices was motivated by some of the hardware properties (in particular L3 features) as investigated in previous sections via microbenchmarks. The details of the chosen matrices are listed in Table 2. 
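Returning to the DGEMM peak estimate above, the arithmetic behind the 1337.6 Gflop/s figure can be written out explicitly. The helper below is purely illustrative; the numbers are those quoted in the text.

```python
# Peak double-precision performance for DGEMM as described above.
def dgemm_peak_gflops(cores, freq_ghz, simd_dp_lanes=8, fma_units=2, flops_per_fma=2):
    """Peak = cores * clock * (DP lanes per AVX-512 register) * (2 flops per FMA) * (FMA units)."""
    return cores * freq_ghz * simd_dp_lanes * flops_per_fma * fma_units

print(dgemm_peak_gflops(20, 2.09))   # CLX, all-core AVX-512 Turbo clock: 1337.6 Gflop/s
print(dgemm_peak_gflops(20, 1.60))   # CLX at the fixed 1.6 GHz all-core AVX-512 clock: 1024 Gflop/s
```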
The matrices were pre-processed with reverse Cuthill-McKee (RCM) to attain better data locality; however, all performance measurements use the pure SpMPV execution time, ignoring the time taken for reordering. L3 Scalability. Figure 8a shows the performance scaling of the ct20stif matrix on CLX and BDW. This matrix is just 32 MB in size and fits easily into the caches of both processors. Note that even though CLX has just 27.5 MiB of L3, it is a non-inclusive victim cache. The applicable cache size using all cores is thus the aggregate L2/L3 cache size, 47.5 MiB. The L3 bandwidth saturation of CLX as shown in Sect. 4.2 is reflected by the performance saturation in the SpMPV benchmark. For this matrix, BDW performs better than CLX since the sparse matrix kernel is predominantly load bound and limited by the bandwidth of the load-only microbenchmark (see Fig. 5a). Despite this advantage, the in-cache SpMPV scaling on BDW is not linear (parallel efficiency ε = 67.5% at all cores), which differs from the microbenchmark results in Fig. 5a. The main reason is the active Turbo mode, causing the clock speed to drop by 25% when using all cores (BDW: 3.6 GHz at single core to 2.7 GHz at full socket; CLX: 3.8 GHz at single core to 2.8 GHz at full socket). L3 Cache Replacement Policy. We have seen in Sect. 4.1 that CLX has a more sophisticated adaptive L3 cache replacement policy, which allows it to extend the caching effect to working sets as big as ten times the cache size. Here we show that SpMPV can profit from this as well. We choose three matrices that are within five times the L3 cache size (index 2, 3, and 4 in Table 2) and a moderately large matrix that is 37 times bigger than the L3 cache (index 5 in Table 2). Figure 8b shows the full-socket performance and memory transfer volume for the four matrices. Theoretically, with a least-recently-used (LRU) policy the benchmark requires a minimum memory data transfer volume of 12 + 28/N nzr bytes per non-zero entry of the matrix [3]. This lower limit is shown in Fig. 8b (right panel) with dashed lines. We can observe that in some cases the actual memory traffic is lower than the theoretical minimum, because the L3 cache can satisfy some of the cache-line requests. Even though CLX and BDW have almost the same amount of cache, the effect is more prominent on CLX. On BDW it is visible only for the boneS01 matrix, which is 1.7× bigger than its L3 cache, while on CLX it can be observed even for larger matrices. This is compatible with the microbenchmark results in Sect. 4.1. For some matrices the transfer volume is well below 12 bytes per entry, which indicates that not just the vectors but also some fraction of the matrix stays in cache. As shown in the left panel of Fig. 8b, the decrease in memory traffic directly leads to higher performance. For two matrices on CLX the performance is higher than the maximum predicted by the roofline model (dashed line) even when using the highest attainable memory bandwidth (load-only). This is in line with data presented in [3]. HPCG - High Performance Conjugate Gradient HPCG (High Performance Conjugate Gradient) is a popular memory-bound proxy application which mimics the behavior of many realistic sparse iterative algorithms. However, there has been little work to date on analytic performance modeling of this benchmark. In this section we analyze HPCG using the roofline approach. The HPCG benchmark implements a preconditioned conjugate gradient (CG) algorithm with a multi-grid (MG) preconditioner.
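Before turning to HPCG in detail, the SpMPV traffic model just quoted (12 + 28/N nzr bytes per non-zero) and the resulting roofline limit can be sketched numerically. These are illustrative helper functions, not part of the benchmark code; the per-row non-zero count used in the example is a placeholder, and the bandwidth value is the load-only socket figure cited for CLX in Sect. 6.3.

```python
# Roofline sketch for CRS SpMV/SpMPV in double precision, as discussed above.
def spmv_code_balance(nnzr):
    """Optimal traffic per non-zero: 8 B value + 4 B column index, plus LHS/RHS/row-pointer
    contributions amortized over the nnzr non-zeros of a row (LRU assumption)."""
    return 12.0 + 28.0 / nnzr              # byte per non-zero (lower bound)

def spmv_roofline_gflops(bs_gbyte_s, nnzr):
    """Memory-bound limit: 2 flops per non-zero divided by the per-non-zero traffic."""
    return 2.0 * bs_gbyte_s / spmv_code_balance(nnzr)

bs_clx = 115.0                              # load-only socket bandwidth on CLX, Gbyte/s
print(spmv_code_balance(50))                # 12.56 byte per non-zero for ~50 non-zeros per row
print(spmv_roofline_gflops(bs_clx, 50))     # ~18 Gflop/s memory-bound ceiling
```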
The linear system is derived from a 27-point stencil discretization, but the corresponding sparse matrix is explicitly stored. The benchmark uses the two BLAS-1 kernels DOT and WAXPBY and two kernels (SpMV and MG) involving the sparse matrix. Algorithm 2. HPCG The chip-level performance of HPCG should thus be governed by the memory bandwidth of the processor. Since the benchmark prints the Gflop/s performance of all kernels after a run, this should be straightforward to corroborate. However, the bandwidth varies a lot across different kernels in HPCG (see Table 3 3 Gbyte/s, respectively. The latter value is substantially higher than any STREAM value presented for BDW in Fig. 6. In the following, we use performance analysis and measurements to explore the cause of this discrepancy, and to check whether the HPCG kernel bandwidths are in line with the microbenchmark analysis. Setup. For this analysis we use the recent reference variant of HPCG (version 3.1), which is a straightforward implementation using hybrid MPI+OpenMP parallelization. However, the local symmetric Gauss-Seidel (symGS) smoother used in MG has a distance-1 dependency and is not shared-memory parallel. The main loop of the benchmark is shown in Algorithm 2, where A is the sparse matrix stored in CRS format. As the symGS kernel consumes more than 80% of the entire runtime, the benchmark is run with pure MPI using one process per core. The code implements weak scaling across MPI processes; we choose a local problem size of 160 3 for a working set of about 1.3 GB per process. The maximum number of CG iteration was set at 25, the highest compiler optimization flag was used (see Table 1), and the contiguous storage of sparse matrix data structures was enabled (-DHPCG CONTIGUOUS ARRAYS). Performance Analysis of Kernels. We use the roofline model to model each of the four kernels separately. Due to their strongly memory-bound characteristics, an upper performance limit is given by P x = b s /C x , where b s is the full-socket (saturated) memory bandwidth and C x is the code balance of the kernel x. As we have a mixture of BLAS-1 (N r iterations) and sparse (N nz iterations) kernels, C x is computed in terms of bytes required and work done per row of the matrix. The reference implementation has three DOT kernels (see Algorithm 2). Two of them need two input vectors (lines 4 and 8 in Algorithm 2) and the other requires just one (norm computation in line 12), resulting in a total average code balance of C DOT = ((2 · 16 + 8)/3) byte/row = 13.3 byte/row. All three WAXPBY kernels need one input vector and one vector to be both loaded and stored, resulting in C WAXPBY = 24 byte/row. For sparse kernels, the total data transferred for the inner N nzr iterations has to be considered. As shown in Sect. 6.2, the optimal code balance for SpMV is 12 + 28/N nzr bytes per non-zero matrix entry, i.e., C SpMV = (12N nzr + 28) byte/row. Note that this is substantially different from the model derived in [13]: We assume that the RHS vector is loaded only once, which makes the model strictly optimistic but is a good approximation for well-structured matrices like the one in HPCG. For the MG preconditioner we consider only the finest grid since the coarse grids do not substantially contribute to the overall runtime. Therefore the MG consists mainly of one symGS pre-smoothing step followed by one SpMV and one symGS post-smoothing step. The symGS comprises a forward sweep (0:nrows) followed by a backward sweep (nrows:0). 
Both have the same optimal code balance as SpMV, which means that the entire MG operation has a code balance of five times that of SpMV: C MG = 5 C SpMV = 5 (12 N nzr + 28) byte/row. The correctness of the predicted code balance can be verified using performance counters. We use the likwid-perfctr tool to count the number of main memory data transfers for each of the kernels. Table 3 summarizes the predicted and measured code balance values for full-socket execution along with the reported performance and number of flops per row for the four kernels in HPCG. Except for DDOT, the deviation between predicted and measured code balance is less than 10%. MPI Desynchronization. Surprisingly, DDOT has a measured code balance that is lower than the model, pointing towards caching effects. However, a single input vector for DDOT has a size of 560 MB, which is more than ten times the available cache size. As shown in Sect. 4.1, even CLX is not able to show any significant caching effect with such working sets. Closer investigation revealed desynchronization of MPI processes to be the reason for the low code balance: In Algorithm 2 we can see that the DOT kernels can reuse data from previous kernels. For example, the last DOT (line 12) reuses the r vector from the preceding WAXPBY. Therefore, if MPI processes desynchronize such that only some of them are already in DOT while the others are still in preceding kernels (like WAXPBY), then the processes in DOT can reuse the data, while the others just need to stream data as there is no reuse. To have a measurable performance impact of the desynchronization phenomenon, a kernel x should satisfy the following criteria: (i) there is no global synchronization point between x and its preceding kernel(s), (ii) some of the data used by x and its predecessor(s) is the same, and (iii) the common data makes a significant contribution to the code balance (C x ) of the kernel. In Algorithm 2, DOT is the only kernel that satisfies all these conditions, and hence it shows the effect of desynchronization. This desynchronization effect is not predictable and will vary across runs and machines, as can be observed in the significant performance fluctuation of DOT in Fig. 9. To verify our assumption we added barriers before the DOT kernels, which caused the measured C DOT to go up to 13.3 byte/row, matching the expected value. The desynchronization effect clearly shows the importance of analyzing statistical fluctuations and deviations from performance models. Ignoring them can easily lead to false conclusions about hardware characteristics and code behavior. Desynchronization is a known phenomenon in memory-bound MPI code that can have a decisive influence on performance; see [2] for recent research. Combining Kernel Predictions. Once the performance predictions for individual kernels are in place, we can combine them to get a prediction for the entire HPCG. This is done by using a time-based formulation of the roofline model and linearly combining the predicted kernel runtimes based on their call counts. If F x is the number of flops per row and I x the number of times the kernel x is invoked, the final prediction is P HPCG = Σ x I x F x / (Σ x I x C x / b s ). Table 3 gives an overview of F x , I x , and C x for the different kernels and compares the predicted and measured performance on a full socket. The prediction is consistently higher than the measurement because we used the highest attainable bandwidth for the roofline model prediction.
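As a numerical sketch of this combination (not a re-implementation of the authors' model): the call counts per CG iteration (3 DOT, 3 WAXPBY, 1 SpMV, 1 MG) and the code balances follow the text, whereas the per-row flop counts below are rough assumptions for illustration, and b_s is taken as the load-only socket bandwidth quantified in the next sentence.

```python
# Time-based combination of per-kernel roofline models for HPCG (illustrative).
def hpcg_model(bs, nnzr=27):
    """bs: attainable memory bandwidth in byte/s; nnzr: non-zeros per row (27-point stencil)."""
    c_spmv = 12.0 * nnzr + 28.0                        # byte/row, optimal CRS SpMV
    #            flops/row (assumed), byte/row,  calls per CG iteration
    kernels = {"DOT":    (2.0,            13.3,         3),
               "WAXPBY": (2.0,            24.0,         3),
               "SpMV":   (2.0 * nnzr,     c_spmv,       1),
               "MG":     (10.0 * nnzr,    5.0 * c_spmv, 1)}
    time_per_row = {k: i * c / bs for k, (_, c, i) in kernels.items()}
    t_total = sum(time_per_row.values())
    flops_per_row = sum(i * f for f, _, i in kernels.values())
    share = {k: round(t / t_total, 3) for k, t in time_per_row.items()}
    return flops_per_row / t_total, share              # composite flop/s, runtime shares

p, share = hpcg_model(115e9)                           # CLX load-only socket bandwidth
print(round(p / 1e9, 1), "Gflop/s predicted;", share)  # MG dominates the runtime (~80%)
```

Even with the assumed flop counts, the runtime decomposition reproduces the observation that the MG/symGS part accounts for roughly 80% of the time, which is why the benchmark is run with pure MPI to parallelize the symGS smoother across processes.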
For Intel processors this is the load-only bandwidth b S = 115 Gbyte/s (68 Gbyte/s) for CLX (BDW), which is approximately 10% higher than the STREAM values (see Sect. 5). Figure 9 shows the scaling performance of the different kernels in HPCG. The typical saturation pattern of memory-bound code can be observed on both architectures. Conclusions and Outlook Two recent, state-of-the-art generations of Intel architectures have been analyzed: Broadwell EP and Cascade Lake SP. We started with a basic microarchitectural study concentrating on data access. The analysis showed that our benchmarks were able to obtain 85% of the theoretical bandwidth limits. For the first time, the performance effect of Intel's newly designed shared L3 victim cache was demonstrated. During the process of microbenchmarking we also identified the importance of selecting proper benchmark tools and the impact of various hardware, software, and OS settings, thereby proving the need for detailed documentation. We further demonstrated that the observations made in microbenchmark analysis are well reflected in real-world application scenarios. To this end we investigated the performance characteristics of DGEMM, sparse matrix-vector multiplication, and HPCG. For the first time, a roofline model of HPCG and its components was established and successfully validated for both architectures. Performance modeling was used as a guiding tool throughout this work to get deeper insight and explain anomalies. Future work will include investigation of benchmarks for random and latencybound codes along with the development of suitable performance models. The existing and further upcoming wide range of architectures will bring more parameters and benchmarking challenges, which will be very interesting and worthwhile to investigate.
Elucidating pharmacodynamic interaction of silver nanoparticle - topical deliverable antibiotics In order to exploit the potential benefits of antimicrobial combination therapy, we need a better understanding of the circumstances under which pharmacodynamic interactions expected. In this study, Pharmacodynamic interactions between silver nanoparticle (SNP) and topical antibiotics such as Cefazolin (CEF), Mupirocin (MUP), Gentamycin (GEN), Neomycin (NEO), Tetracycline (TET), Vancomycin (VAN) were investigated using the MIC test, Combination assay followed by Fractional Inhibitory concentration Index and Agar well diffusion method. SNP + MUP, SNP + NEO, SNP + VAN combinations showed Synergism (SN) and SNP + CEF, SNP + GEN, SNP + TET showed Partial synergism (PS) against Staphylococcus aureus. Four combinations (SNP + CEF, SNP + MUP, SNP + GEN, SNP + VAN) showed SN, SNP + TET showed PS and Indifferent effect (ID) were observed for SNP + NEO against Pseudomonas aeruginosa. SN was observed for SNP + CEF, SNP + GEN, SNP + NEO, SNP + TET and SNP + MUP showed ID, SNP + VAN showed PS against Escherichia coli. In addition, we elucidated the possible mechanism involved in the pharmacodynamic interaction between SNP-topical antibiotics by increased ROS level, membrane damage following protein release, K+ leakage and biofilm inhibition. Thus, our findings support that conjugation of the SNP with topical antibiotics have great potential in the topical formulation when treating complex resistant bacterial infections and where there is a need of more concentration to kill pathogenic bacteria. Pharmacodynamic interactions are those where the effects of one drug are changed by the presence of another drug at its site of action. An interaction is said to occur when the effects of one drug are altered by the co-administration of another drug, herbal medicine, food, drink or other environmental chemical agents 1 . The net effect of the combination may manifest as an additive or enhanced effect of one or more drugs, antagonism of the effect of one or more drugs, or any other alteration in the effect of one or more drugs. Combinatorial antibiotic treatments can have diverse effects on bacterial survival. Antibiotics can be more effective as a combination treatment displaying either an additive effect (an effect equal to the sum of the treatments) or a synergistic effect (an effect greater than the sum of the treatments). The combination can also be antagonistic-that is, the effect of the combination treatment is less than the effect of the respective single-drug treatments 2 . The use of Silver nanoparticles can be exploited in various fields, particularly medical and pharmaceutical due to their low toxicity to human cells, high thermal stability and low volatility 3 . This has resulted in a broad array of studies in which Silver nanoparticles have played a role as drug and as well as superior antimicrobial agent and even has shown to prevent HIV binding to host cells 4 . Researchers have investigated the synergistic effect of SNP when combined with other compounds: a combination of amoxicillin and SNP showed better antibacterial properties against Escherichia coli than when they were applied alone 5 . In addition, our previous study demonstrated that topically delivered biogenic SNPs formulation show superior wound healing efficacy when compared to standard silver sulphadiazine ointment currently available in the market 6 . 
Rather than using other delivery methods (oral or systematic), it is seen that silver is most commonly delivered through topical administration. Topical delivery of Silver metal has been used since ancient times for wound healing due to comparatively less toxic properties. Moreover, its surface properties provide the good conjugation ability with antimicrobial agents and easy impregnation on the cotton dressings. Incorporation of silver nanoparticle with topical antibiotics could provide broad coverage and be a better alternative against antibiotics resistant infection. The present study aims to evaluate any pharmacodynamic interactions such as synergistic, additive, antagonistic effect of SNPs when conjugate with the commonly used topical deliverable antibiotics such as Cefazolin (CEF), Mupirocin (MUP), Gentamycin (GEN), Neomycin (NEO), Tetracycline (TET), Vancomycin (VAN) against major causes of wound, burn bacterial infections. In order to exploit the potential benefits of antimicrobial combination therapy, we need a better understanding of the circumstances under which pharmacodynamic interactions expected. In this study, we elucidated pharmacodynamic interactions by increased ROS level, membrane damage following protein release, K + leakage and biofilm inhibition could allow for a more careful application of antibiotics that maintain the clinical capability and does not sacrifice the future usefulness of these drugs. To the best of the authors' knowledge, this is the first report on the possible pharmacodynamic interaction of several topical antibiotics and SNPs. Results Preparation, characterization of the SNP. The current study utilizes the SNP synthesized biologically using potato plant pathogenic fungus Phytophthora infestans. The results of the SNP synthesis and characterization were described in our previous study 6 Table 1). The MIC value of SNP for S. aureus ATCC 25922, P. aeruginosa ATCC 25619, E. coli ATCC 10536 were 5, 2.5 and 2.5 μ g/ml, respectively. Among the tested topical antibiotics; Cefazolin, Mupirocin, Neomycin and Vancomycin showed similar susceptibility (0.625 μ g/ml) and Gentamycin, Tetracycline activity were also found to be similar level, 1.25 μ g/ml against S. aureus ATCC 25922. Mupirocin, Vancomycin found to be more susceptible than other antibiotics tested and neomycin showed no activity against P. aeruginosa ATCC 25619. Cefazolin and Gentamycin showed similar activity, 0.3125 μ g/ml followed by Tetracycline and Vancomycin, 0.625 μ g/ml against E. coli ATCC 10536 but this strain showed resistant against Mupirocin. Combination assay and FIC Index. Combination assay followed by FIC Index was measured to determine the pharmacodynamic interactions such as synergistic, partial synergistic and antagonistic or additive effects of SNP combined with topical antibiotics. Fractional Inhibitory concentration Index of SNPs and topical antibiotics against S. aureus, P. aeruginosa, E. coli were shown in the Table 2. Among the combinations tested against S. aureus; SNP + MUP, SNP + NEO, SNP + VAN showed synergistic (SN) effect and partially synergism (PS) were observed with SNP + CEF, SNP + GEN and SNP + TET. Interestingly, four combinations (SNP + CEF, SNP + MUP, SNP + GEN and SNP + VAN) showed synergistic efficacy and one SNP + TET showed PS but the FICI of SNP + NEO were found to be 1.75 μ g/ml showed Indifferent (ID) efficacy against P. aeruginosa. 
Similarly, four combinations (SNP + CEF, SNP + GEN, SNP + TET and SNP + NEO) showed SN efficacy, one combination (SNP + VAN) showed PS, and SNP + MUP showed an ID effect against E. coli. Agar well plate diffusion method. Pharmacodynamic interactions of SNP combined with topical antibiotics were also evaluated by the agar well plate method; the zones of inhibition (ZOI) of SNP or antibiotics alone and of the combinations were measured and compared with the control. Figure 1 shows the inhibitory effect of the individual substances and the combinations against S. aureus, and the corresponding ZOI comparison is also depicted. Comparative ROS formation assay. Measurement of ROS generation was utilized to elucidate the pharmacodynamic interaction in this study, since elevated ROS formation leads to an imbalance of the redox system. Comparative measurement of intracellular potassium release. Measurement of intracellular K+ release was carried out to confirm bacterial membrane damage by the individual substances and the combinations tested in this study. All individual substances, including SNP, showed significant levels of intracellular K+ release as compared to the negative and positive controls against the strains S. aureus and P. aeruginosa (Fig. 8). However, Neomycin alone did not exhibit significant (p < 0.05) K+ levels as compared to the negative and positive controls against the strain E. coli. Though Neomycin alone did not cause considerable K+ leakage, a significant level was still observed when it was combined with SNP (SNP + NEO) for the strain E. coli. Antibiofilm assay. The antibiofilm assay was performed for SNP alone and combined with the topical deliverable antibiotics against the three bacterial strains (Fig. 9). The results indicated that SNP mostly had an inhibitory effect on biofilm formation, with an average inhibition of 60.67 ± 2.52 against E. coli, followed by 40.33 ± 5.033 for P. aeruginosa and 34.33 ± 9.71 against the strain S. aureus. The inhibitory effect of the combinations with antibiotics on biofilm formation was compared with that of SNP alone; the results showed that all combinations tested had significant antibiofilm activity (P < 0.05) when compared to SNP alone, except for SNP combined with Neomycin (SNP + NEO). Interestingly, an equal inhibitory effect was observed for the SNP + CEF and SNP + MUP combinations against the strain P. aeruginosa. Discussion The discovery rate of new antibiotics is in decline, while antibiotic resistance in pathogens is rapidly increasing. Drug combinations offer potential strategies for controlling the evolution of drug resistance 2. Despite their growing biomedical relevance, fundamental questions about drug interactions remain unanswered. In particular, little is known about the underlying mechanisms of most drug interactions. A better understanding of the circumstances under which pharmacodynamic interactions occur could allow for a more careful application of antibiotics that maintains their clinical capability and does not sacrifice the future usefulness of these drugs. This study elucidates for the first time the pharmacodynamic interactions of SNP with topically deliverable antibiotics against major causes of wound and burn bacterial infections. To examine the susceptibility patterns of the SNP and the topical antibiotics alone and in conjugation with SNP, the MIC test was performed against S. aureus, P. aeruginosa and E. coli. Shrivastava et al. 7 reported that SNP were more effective against Gram-negative than Gram-positive bacteria 7, and some strains of E. coli have shown resistance against SNP 8.
In this study, SNP showed activity against both Gram-negative and Gram-positive bacterial strains. All topical antibiotics tested showed an inhibitory effect against S. aureus, whereas Neomycin was found not to be effective against P. aeruginosa and Mupirocin showed no efficacy against the E. coli strain. Previous studies have reported that the antibacterial activities of some antibiotics were increased in the presence of SNP 9 , that synergistic effects occur between polymyxin B and silver nanoparticles against Gram-negative bacteria 10 , and that Cephalexin-conjugated silver nanoparticles are active against E. coli 11 . The current study evaluated the overall pharmacodynamic interaction between SNP and topical antibiotics; the results of the combination assay followed by the FIC Index and the agar well diffusion assay showed various degrees of pharmacodynamic interaction, such as SN, PS and ID, against common causes of wound and burn infections. Interestingly, no antagonistic (AN) effect was observed among the combinations against any of the tested bacterial strains in this study. Likewise, no report was found in the literature showing an antagonistic effect of SNP with anti-infectives. Our results suggest that the SNP could be conjugated with anti-infective agents that have little or no efficacy, with agents that require higher concentrations to kill pathogenic bacteria, and to target both Gram-positive and Gram-negative resistant bacterial strains causing complex infections. In order to elucidate the pharmacodynamic interactions observed in this study, we conducted the comparative ROS formation assay, the comparative measurement of intracellular potassium release, the anti-biofilm assay, SDS-PAGE and the microscopic comparison of live/dead cells. Kohanski et al. 12 revealed that ROS formation induced by bactericidal antibiotics is the end product of an oxidative-damage cellular death pathway, and In-sok Hwang et al. 13 reported that two different classes of bactericidal antibiotics, ampicillin and kanamycin, induced ROS formation 13 . Our findings are consistent with this: there was a statistically significant (p < 0.05) increase of ROS in the bacterial strains treated with SNP alone and with the combinations, suggesting that ROS-generated membrane damage could be a major cause of the pharmacodynamic interactions seen. Bortner and Cidlowski 14 reported that, when the concentration of intracellular potassium (K+) is normal, the cell death process is repressed through suppression of the caspase cascade and inhibition of apoptotic nuclease activity, and Tiwari et al. 15 correlated the leakage of intracellular potassium with membrane damage. Our findings revealed that SNP and the combinations caused significant potassium leakage due to membrane damage, compared with the controls, against S. aureus, P. aeruginosa and E. coli. Biofilms are microbial communities consisting of cells attached to biotic or abiotic surfaces, embedded in an exopolymeric matrix. These structures are well known for their remarkable resistance to diverse chemical, physical and biological antimicrobial agents and are one of the main causes of persistent infections 16 and resistance 17 . In this study, significant biofilm inhibition was observed for all combinations tested; SNP alone also showed a good degree of antibiofilm activity, suggesting that SNP conjugation could be used when bacterial resistance is due to biofilm formation and that it provides greater potential to achieve biofilm inhibition at lower antibiotic concentrations.
Membrane damage followed by the release of intracellular proteins was observed by SDS-PAGE; however, the protein bands obtained would need to be subjected to peptide mass fingerprinting analyses to provide more specific clues. Damage of the membrane caused by SNP alone and by the combinations was visualized by fluorescence microscopy. These findings revealed that SNP and the conjugated antibiotics caused membrane damage followed by release of intracellular proteins, and bacterial cell death was visualized by fluorescence microscopy. Conclusion This study presented the pharmacodynamic interactions between SNP and topical antibiotics, such as synergism, partial synergism and indifferent effects, against major causes of wound infections. Moreover, we elucidated possible mechanisms involved in the pharmacodynamic interactions, namely increased ROS levels, membrane damage followed by protein release, K+ leakage and biofilm inhibition. Thus, our findings support that conjugation of SNP with topical antibiotics has great potential in topical formulations for treating complex, resistant bacterial infections and where higher concentrations would otherwise be needed to kill pathogenic bacteria. Materials and Method Agents, bacterial strains and culture conditions. All antibiotics used in this study (Cefazolin, Mupirocin, Gentamycin, Neomycin, Tetracycline, Vancomycin) were purchased from Sigma Chem. Co., USA. Staphylococcus aureus ATCC 25922, Pseudomonas aeruginosa ATCC 25619 and Escherichia coli ATCC 10536 were obtained from the American Type Culture Collection. LB medium purchased from Oxoid, UK was used for bacterial cell growth at 37 °C, and growth was monitored by measuring the optical density at 620 nm. Preparation and characterization of SNP. The current study utilizes SNPs synthesized biologically using the potato plant pathogenic fungus Phytophthora infestans. The method of synthesis and characterization of the SNP is described in our previous study 6 . MIC test. Susceptibility tests with SNPs, Cefazolin, Mupirocin, Gentamycin, Neomycin, Tetracycline and Vancomycin were carried out in 96-well microtitre plates using a standard twofold broth microdilution method in Mueller Hinton (MH) broth (HiMedia), following Clinical and Laboratory Standards Institute guidelines 18 . Briefly, bacterial cells were grown to mid-exponential growth phase in MH medium. 100 μl of the bacterial cells was then added to the wells of a 96-well microtitre plate at a density of 1 × 10⁶ ml⁻¹; 10 μl of each of the serially diluted solutions of the compounds was then added to the bacterial cells. The MIC was defined as the lowest concentration of the agent/drug inhibiting visible growth after overnight incubation at 37 °C. Combination assay. The MICs of each antibiotic substance alone or in combination were determined by a broth microdilution method in accordance with CLSI standards using cation-adjusted MH broth, modified for a broth microdilution checkerboard procedure 18 . For the double treatment, a 2D checkerboard with twofold dilutions of each drug was used to test the different combinations as follows. A checkerboard with twofold dilutions of SNPs and antibiotic (Cefazolin, Mupirocin, Gentamycin, Neomycin, Tetracycline, Vancomycin) was set up as described above for the combined treatment. Growth control wells containing the medium were included in each plate. Fractional Inhibitory Concentration (FIC) and FIC Index.
To determine the effectiveness of the test substances in terms of synergistic, antagonistic or additive effects, the FIC index was calculated using the following formulas 19 : FIC of drug A = MIC of drug A in combination / MIC of drug A alone, and FIC of drug B = MIC of drug B in combination / MIC of drug B alone. The FIC index (FICI), calculated as the sum of the individual FICs, was interpreted as follows: synergistic (≤ 0.5), partially synergistic (> 0.5 to < 1), additive (equal to 1), indifferent (> 1 to < 2) or antagonistic (≥ 2) (an illustrative calculation is sketched after the Methods). Agar well plate diffusion method. Pharmacodynamic interaction was confirmed by the agar well plate diffusion method 20 and according to the guidelines of the National Committee for Clinical Laboratory Standards 18 . Bacterial cells at a density of 1 × 10⁶ ml⁻¹ were treated with each substance or combination so that the final concentration was the MIC or FIC, respectively. Pharmacodynamic interactions, such as synergism, additive, indifferent and antagonistic effects, were defined by comparing the zones of inhibition surrounding the wells containing the topical antibiotic, antibiotic + SNPs and the control. Comparative intracellular ROS formation assay. Bacterial cells at a density of 1 × 10⁶ ml⁻¹ were treated with each substance or combination so that the final concentration was the MIC or FIC, respectively. Samples were incubated for 1 h at 37 °C. The negative and positive controls were maintained with untreated and 70% ethanol (50 μL/mL) treated cells, respectively. To detect ROS formation, we used the fluorescent reporter dye 2,7-dichlorodihydrofluorescein diacetate (Sigma, Bengaluru, India) at a concentration of 5 mM and used a spectrofluorometer (Shimadzu) with excitation at a wavelength of 490 nm and emission at 515 nm. The level of reactive oxygen species increase was then calculated relative to the controls. Comparative measurement of intracellular potassium release. A potassium-sensitive probe (PBFI, Sigma Aldrich) was added, and the cells were treated with a substance or combination followed by incubation at 37 °C for 4 h. The negative and positive controls were maintained with untreated and 70% ethanol (50 μL/mL) treated cells, respectively. K+ levels were measured following separation of cellular debris by centrifugation at 4000 rpm. Antibiofilm assay. 190 μl of bacterial cells at a density of 1 × 10⁶ ml⁻¹ was added to the wells of 96-well polystyrene sterile flat-bottom tissue culture plates and incubated for 18 hrs. 10 μl of SNP alone or of a combination was then added, so that the final concentration was the MIC or FIC, respectively, and the plates were incubated at 37 °C for 6 hrs. The tissue culture plates were washed with sterile phosphate-buffered saline to remove floating bacteria, and sessile adherent bacteria were fixed using 2% sodium acetate and stained with 0.1% crystal violet. Tissue culture plates were washed with deionized water, and 95% ethanol was added to the wells after drying. The absorbance of the stained adherent bacteria was recorded at 595 nm using an ELISA reader (Molecular Devices). The % antibiofilm activity was calculated using the equation: % antibiofilm activity = [1 − (A595 of treated cells / A595 of non-treated control)] × 100, and means ± SD were determined. Intracellular protein measurement. The pharmacodynamic effect of SNP and the combinations on P. aeruginosa cells was observed by SDS-PAGE according to the method in 21 . Gels were run on a Mini Protean III vertical electrophoresis system (BioRad) at 100 V until the tracking dye of the sample buffer reached the other end.
The gels were stained with 0.1% Coomassie blue R-250 in 10% acetic acid and 40% methanol, and destained in 10% acetic acid and 40% methanol. Microscopic comparison of live/dead cells. The pharmacodynamic effect of the SNP or combination, together with the control, was also observed microscopically using a Live/Dead assay kit (Thermo Fisher Scientific). P. aeruginosa cells were treated with SNP or a combination, alongside the control, for 4 h. The cells were then washed, resuspended in sterile phosphate-buffered saline and stained with the LIVE/DEAD BacLight bacterial viability kit as per the manufacturer's instructions. Statistical analysis. Three independent experiments were carried out and the results are represented as mean ± SD. Student's t-test was applied to assess the statistical significance of the experimental data.
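To make the interaction classification described in the Methods concrete, the sketch below implements the FIC index and the % antibiofilm activity calculations exactly as defined above; the numerical MIC and absorbance values in the usage example are placeholders for illustration only, not data from this study.

```python
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """Sum of the fractional inhibitory concentrations of two agents."""
    fic_a = mic_a_combo / mic_a_alone
    fic_b = mic_b_combo / mic_b_alone
    return fic_a + fic_b

def classify_fici(fici):
    """Interpretation thresholds as stated in the Methods."""
    if fici <= 0.5:
        return "synergistic"
    if fici < 1:
        return "partially synergistic"
    if fici == 1:
        return "additive"
    if fici < 2:
        return "indifferent"
    return "antagonistic"

def antibiofilm_activity(a595_treated, a595_control):
    """% antibiofilm activity = [1 - (A595 treated / A595 control)] x 100."""
    return (1 - a595_treated / a595_control) * 100

# Placeholder example (illustrative values only, not measured data):
fici = fic_index(mic_a_alone=5.0, mic_a_combo=1.25,
                 mic_b_alone=0.625, mic_b_combo=0.156)
print(round(fici, 2), classify_fici(fici))        # e.g. 0.5 -> "synergistic"
print(round(antibiofilm_activity(0.4, 1.0), 1))   # e.g. 60.0 (% inhibition)
```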
4,580.8
2016-07-18T00:00:00.000
[ "Biology", "Medicine" ]
Manipulation of encapsulated artificial phospholipid membranes using sub-micellar lysolipid concentrations Droplet Interface Bilayers (DIBs) constitute a commonly used model of artificial membranes for synthetic biology research applications. However, their practical use is often limited by their requirement to be surrounded by oil. Here we demonstrate in-situ bilayer manipulation of submillimeter, hydrogel-encapsulated droplet interface bilayers (eDIBs). Monolithic, Cyclic Olefin Copolymer/Nylon 3D-printed microfluidic devices facilitated the eDIB formation through high-order emulsification. By exposing the eDIB capsules to varying lysophosphatidylcholine (LPC) concentrations, we investigated the interaction of lysolipids with three-dimensional DIB networks. Micellar LPC concentrations triggered the bursting of encapsulated droplet networks, while at lower concentrations the droplet network endured structural changes, precisely affecting the membrane dimensions. This chemically-mediated manipulation of enclosed, 3D-orchestrated membrane mimics facilitates the exploration of readily accessible compartmentalized artificial cellular machinery. Collectively, the droplet-based construct can pose as a chemically responsive soft material for studying membrane mechanics and drug delivery, by controlling the cargo release from artificial cell chassis. Introduction Droplet interface bilayers (DIBs) are bottom-up, cellular membrane-mimicking models used for the in-vitro study of membrane constituents and properties 1 . DIBs are formed when lipid monolayer-coated aqueous droplets come into contact, forming an artificial lipid bilayer membrane. In addition, DIBs can be formed when an aqueous droplet sits on top of a hydrogel substrate 2,3 , where this model has been used in single-molecule imaging for biophysical and biochemical studies 4,5 . The versatility of DIB models enables them to be tailored for different research applications, ranging from the study of transmembrane protein behaviour 2 , to cell-free DNA expression 6 and in-vitro tissue culture development 7 . Sophisticated and functional artificial cellular networks can be constructed using DIBs as building blocks. Multisomes 8 enclose DIB networks within an oil droplet, which can be suspended in air or water [9][10][11] . Various multisome demonstrations have been assembled using liquids only 8,11 , although the encapsulation of DIBs and multisomes within soft hydrogels 12 introduces soft material platforms towards the study of artificial membranes. Hydrogels are attractive because they are used for the immobilization of biological and non-biological matter, including living cells and synthetic cells, respectively 13,14 .
DIBs on hydrogel substrates acquire enhanced mechanical resistance, leading to their prolonged stability and extended lifetime 12,15,16 . Gel-encapsulated droplet interface bilayer constructs (eDIBs) 17,18 are a type of multisome, which depict multi-compartmentalized artificial cell chassis and aim to impart cellular functionalities, such as polarization 19 . Furthermore, DIB systems are usually made by manual pipetting 20 , which limits the production yield rate and structural complexity attained. Recently, multiphase microfluidic droplet-forming devices have been developed to effectively generate DIBs, multisomes and eDIBs, using stepwise emulsification methods 8,17,19 . Such droplet-based artificial membrane networks formed by robust and high-throughput microfluidic techniques have been used in molecular sensing 8 , cell mimicking 19 , and artificial cell membrane studies 17 . The properties of simple and complex DIB systems are determined largely by the lipid and oil composition 21 , membrane chemistry 22 , as well as the droplet arrangement 19,23 . Bilayer mechanics, forces and capacitance are characteristics directly influenced by the conditions of a DIB model [24][25][26] . Various studies have focused on the geometrical parameters of DIBs, e.g., contact angle and bilayer area, which are often manipulated in order to modulate the behaviour of the bilayer and transmembrane proteins 27 . An example includes mechanosensitive protein channels, whose activation relies on the tension across the phospholipid bilayer [28][29][30] . This has been achieved using chemical means, such as the hydrolysis of lysolipids by phospholipase A2, or through physical actuation of the membranes 30,31 . Another bilayer manipulation example includes minimizing the concentration of protein pores and channels in DIBs by directly dragging/pulling the droplets using electrodes or pipettes 27,28,32 . Others have induced liquid volume-assisted pressure changes within the DIB droplet-based compartments, thereby manipulating the droplet size and the bilayer area 33 . Alternatively, DIB manipulation has been achieved via electrowetting methods 24 , or through the incorporation of magnetic particles and exposure to magnetic fields 34 . Electrowetting manipulation of DIBs can be limited by electroporation and bilayer rupture 35 , while mechanical manipulation can be constrained by the contact and movement of invasive pipettes and electrodes, often causing failure of the DIBs.
In this work, we propose a simple chemical approach to alter hydrogel-encapsulated DIB networks, to directly modulate the properties of artificial cells and enable the construct's dynamic response to environmental changes. This concept is demonstrated by constructing eDIBs and observing their interaction with water-soluble lysophosphatidylcholine (LPC) for prolonged periods. These lysolipids are single-tailed phospholipids, which alter the surface tension of lipid monolayers and induce pressure changes along the phospholipid leaflet, as evidenced by artificial cell studies [36][37][38] . We find that at high concentrations (10-fold higher than the critical micellar concentration), LPC ruptures the artificial membranes and promotes rapid release of the enclosed aqueous content. At low, sub-micellar concentrations the droplet network endures physical changes, with significant alterations to the contact angle and bilayer area. Lysolipids were able to provide a facile, indirect-contact approach for determining the fate of enclosed DIBs in aqueous environments. Results and Discussion High-order, gel-encapsulated DIBs using monolithic 3D-printed microfluidic devices. Three, in-series, droplet-forming microfluidic junctions facilitated the formation of encapsulated DIBs in hydrogel capsules (Fig. 1a). For planar microfluidic devices, the wettability is vital for successful and stable emulsion formation, which is usually achieved through channel surface modification, including plasma treatment and coatings 39 . Here, triple emulsion capsules were produced using a 3D-printed microfluidic device made from Nylon and Cyclic Olefin Copolymer (COC) without any surface treatment or other device post-processing. Nylon and COC polymers are known for their hydrophilic and hydrophobic surface properties, respectively 40,41 . The surface water contact angle measurements of 3D-printed Nylon and COC substrates gave water contact angles of 46° and 78°, respectively (Fig. 1b, i.-iii.). The print settings of each material (SI Appendix, Table S1) were kept consistent between all 3D-printed samples and microfluidic devices, as they can affect the final water contact angle of the substrate 42 . The Nylon and COC microfluidic components fused well together with no indication of leaking when the humidity was controlled while printing. It should be noted that Nylon fibres and films have been previously used in digital and paper microfluidics as superamphiphobic and anti-corrosive substrates [43][44][45] ; however, Nylon is not widely used in droplet microfluidics or 3D-printed microfluidics, possibly due to its hygroscopic properties 46 . Here, the 3D-printed Nylon microfluidic component offered a novel and facile method of producing high-order emulsions. In fact, Nylon filament is more suitable for dual-material 3D-printed microfluidic devices, since previously reported PVA devices were soluble in water, which limited the duration of the microfluidic experiments 19 . Earlier established eDIB models have been generated using glass capillary/3D-printed hybrid microfluidic devices 17 , or using double emulsion 3D-printed devices 19 .
The final microfluidic devices consisted of three droplet-forming junctions. The 1st and 3rd junctions were made of COC filament and the 2nd junction was made of Nylon filament. Initially, a water-in-oil (W1/O1) emulsion was formed at the 1st droplet-forming junction, which advantageously exploited the COC filament's hydrophobic properties. In the oil (hexadecane), DOPC phospholipids were dissolved, resulting in the formation of a lipid monolayer around individual water droplets, which, when in contact with each other, formed DIBs (Fig. 1b, i.). Subsequently, the W1/O1 was inserted at the 2nd droplet-forming junction made of Nylon filament (hydrophilic), and was broken by a continuous aqueous alginate phase. Therefore, multiple water droplets in lipid-containing oil (DIBs) were encapsulated in the liquid alginate, forming a water-in-oil-in-alginate (W1/O1/A) emulsion. At the site where an inner droplet comes in contact with the alginate, a droplet-hydrogel DIB is formed (Fig. 1b, ii.). The lack of synthetic surfactants within the alginate resulted in the failure of the complex emulsion W1/O1/A (data not shown). Instead of adding a surfactant into the alginate solution, we explored the addition of multilamellar DPPC vesicles as surface tension-lowering agents 47,48 . This hindered the coalescence between miscible phases. Both DOPC and DPPC phospholipids have been used for the construction of artificial cell membranes (e.g. liposomes); hence either DOPC or DPPC could be used in the alginate phase, however only DPPC vesicles were studied here. Finally, the W1/O1/A was encapsulated by a divalent-infused nanoemulsion, for further emulsification (W1/O1/A/O2) and simultaneous on-chip gelation (Fig. 1b, iii.). The final constructs are referred to as eDIBs, as they are hydrogel-based constructs encapsulating DIBs and can be stored in an aqueous environment. To our knowledge, this is the first report of fabricating monolithic, 3D-printed microfluidic devices that can generate multi-compartment triple emulsion microgels without performing any device post-fabrication treatment or processing. b Schematics of the stepwise generation of eDIBs from a, including filament type, contact angle and emulsion order. i. Water-in-oil (W1/O1) emulsion formed by the 1st hydrophobic (COC, 78°) droplet-forming junction. When the DOPC lipid monolayer-coated droplets come in contact, they form a DOPC droplet interface bilayer (DIB). ii. A close look at a water-in-oil-in-alginate (W1/O1/A) emulsion formed at the 2nd hydrophilic (Nylon, 46°) droplet-forming junction. The DIB is contained by an alginate phase with DPPC vesicles (vesicles are not shown). Where an inner aqueous droplet contacts the alginate, another DIB is formed, defined as a droplet-hydrogel DIB. iii. Finally, the eDIB is formed at the 3rd droplet-forming junction (COC, 78°). The DIB contained by the alginate is engulfed by the Ca2+-infused nanoemulsion (W1/O1/A/O2), where the on-chip gelation starts (scale bar: 1 mm). Free-standing eDIB capsules were produced with varying numbers of inner droplets. By controlling the flow rates of the inner aqueous buffer and the DOPC-containing hexadecane phase we produced eDIBs with either an average diameter of 90 μm ± 1 μm (Fig. 2a) or 190 μm ± 3 μm (Fig.
2b). To reduce the diameter of the inner droplets, the aqueous phase flow rate of the inner droplets was decreased to 0.1 mL/hour, and the lipid-containing oil was increased to 0.5 mL/hour. eDIBs with smaller inner aqueous droplet diameters (ᴓ < 100 μm) have been shown to be notably more robust after centrifugation (SI Appendix, Fig. S1). It should be noted that, with 3D FFF-printed micro-scale components, variabilities may often be introduced in the microfluidic channel dimensions (SI Appendix, Table S2 and Fig. S2), due to different environmental conditions and calibration inaccuracies. Because of these variabilities, eDIBs were formed at multiple phase flow rate combinations across experiments (SI Appendix, Table S3). For subsequent experiments the flow rates were adjusted accordingly, in order to enclose droplets with a large diameter (ᴓ > 100 μm) and a small droplet number (typically fewer than 10), which permitted good visualization of the droplet arrangement and DIBs. eDIBs that survive the initial 2-3 hours of production can be stored for a month in an aqueous buffer with osmolarity that matches their internal droplets. Lysolipid-induced droplet release from eDIBs. Egg lysophosphatidylcholine (LPC) is a water-soluble, cone-shaped, single-tailed phospholipid with a headgroup larger than the tail, which tends to form micellar lipid structures with positive curvature 49 . LPC has been used to alter the membrane pressure and activate mechanosensitive channels in DIB systems 30 , increase the permeability of cell membranes for drug uptake studies 50,51 , and facilitate protein pore insertion into bilayers 52 . Here, the LPC lysolipid was introduced to the physiological aqueous environment surrounding the eDIB capsules and diffused passively to the phospholipid DIB between the inner aqueous droplets and the hydrogel shell (droplet-hydrogel DIB). Prior to imaging, the eDIBs were immobilised at the bottom of a 96-well plate using 1 % w/v agarose, and this was followed by the addition of LPC in buffer at the final concentration of interest (Fig. 3a). The amphiphilic lysolipids diffused to the droplet-hydrogel DIB and at high concentrations (e.g. 100 μM) the inner droplets completely leaked into the surrounding medium, leaving an empty oil core (Fig. 3b). This was further analysed in terms of the fluorescent signal drop over time, across a population of eDIB capsules exposed to various LPC concentrations (1 μM - 1000 μM). The droplet release profile for each concentration over a period of 14 hours is shown in Fig. 3c. After approximately 3 hours of incubation at 37 °C and constant humidity, the intensity of the 0 μM and 1 μM LPC treated eDIBs stabilised with negligible reduction. This reduction of the fluorescent signal was attributed to possible photobleaching and out-of-focus imaging caused by the moving platform. In addition, the inner droplets of eDIB capsules treated with 10 μM and 100 μM LPC were subject to major instabilities approximately 2-3 hours after the introduction of LPC. After the initial 3 hours, the 10 μM LPC-treated eDIBs were able to maintain their stability for longer time periods compared to the 100 μM LPC-treated eDIBs. The logarithm of the intensity revealed exponential decay over time, with fluctuations at concentrations of 10 μΜ and higher, whilst it also uncovered the bursting events at concentrations of 1000 μM.
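As a minimal illustration of how an exponential decay of the normalised fluorescence could be extracted from such time series, the sketch below fits log-intensity against time by least squares; the sample arrays are placeholders for illustration, not measured values from the eDIB experiments.

```python
import numpy as np

# Placeholder time points (hours) and normalised droplet fluorescence (a.u.);
# illustrative values only, not data from this study.
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
intensity = np.array([1.00, 0.93, 0.85, 0.71, 0.60, 0.51, 0.43, 0.36, 0.30])

# Assume I(t) ~ I0 * exp(-k * t): a straight-line fit of ln(I) versus t
# gives the decay constant k (negative slope) and ln(I0) (intercept).
slope, intercept = np.polyfit(t, np.log(intensity), 1)
k = -slope
half_life = np.log(2) / k  # time for the signal to halve, in hours

print(f"decay constant k = {k:.3f} per hour, half-life = {half_life:.1f} h")
```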
The phosphatidylcholine composition used in this study was dominated by approximately 69 % of 16:0 Lyso PC (information provided by the manufacturer), leading to the assumption that the critical micelle concentration (CMC) is close to that of 16:0 Lyso PC (CMCLPC). The CMC value is a variable of temperature, pH and salt 53,54 , and the exact CMCLPC was not measured in this study. However, previous literature reported that the CMCLPC value of 16:0 Lyso PC ranges between 4 μM and 8.3 μM, at temperatures spanning from 4 °C to 49 °C 55,56 . Therefore, only the 10 μM concentration introduced to the eDIBs in this study was considered as a concentration closest to the previously reported CMCLPC. Either individual LPC lipid molecules, monomers (<CMCLPC), or micelles (>CMCLPC) were delivered to the droplet-hydrogel DIB and interacted with the first outer leaflet of the bilayer. This alters the curvature of the membrane, leading to an asymmetric pressure distribution along the bilayer 57 . High micellar concentrations of LPC can lead to the rupture of phospholipid membranes, as a consequence of the translocation of crowded lysolipids to the second inner leaflet of the bilayer, or due to lysolipid-induced perturbations 51,58,59 . Similarly, here the droplets treated with LPC concentrations equal to or greater than 100 μΜ were subject to rapid droplet bursting, due to the failure of the droplet-hydrogel DIB membrane. In comparison to lower concentrations, this active release was attributed to the concentrated LPC micelles being delivered to the targeted site (droplet-hydrogel DIB) and promptly inducing membrane asymmetry. Supplementary fluorescence increase assays showed that 10 μM treated eDIBs underwent a major droplet-hydrogel DIB failure at a later timepoint (~7 hours), compared to higher concentrations which caused instant membrane failure (SI Appendix, Fig. S3). For lysolipids to diffuse and act on the droplet-hydrogel DIB, the monomers and micelles need to diffuse from the aqueous solution and then through the alginate shell. Lysolipids can interact and fuse with the DPPC lipid vesicles embedded in the hydrogel alginate shell, leading to a possible reduction of the lysolipid fraction delivered to the droplet-hydrogel DIB. An underestimated lysolipid concentration can influence the rate of impact on the eDIB constructs, which explains why the effects occur over the course of hours. Furthermore, the micellar size highly depends upon the concentration, where 7-50 μM LPC forms micelles of 34 Å radius, whereas this micellar radius doubles at concentrations exceeding 50 μM 60 . Consequently, concentrations equal to or higher than 100 μM deliver large micelles, which contribute to possible transient pore formation at the bilayer; thus the droplet-hydrogel DIB instantly fails and droplet release into the hydrogel occurs 36,61 . The effect of sub-micellar LPC concentrations on droplet displacement and arrangement. Lipid monolayer-coated aqueous droplets in the form of a water-in-oil emulsion are governed by the interfacial tension. Bilayer and DIB formation is facilitated by Van der Waals forces, as the adhesive monolayer-coated droplets come in contact 62 . Due to the excess of lipids in the hexadecane oil phase, the monolayers of a DIB can expand and contract. There are temporary fluctuations of the disjoining pressure during DIB formation at the various interfaces, but attractive and repulsive forces work towards the equilibrium of the eDIB system [63][64][65] .
Significant fluctuations begin when the lysolipids are externally introduced, as illustrated in Fig. 4a. The equilibrium is destabilized due to the arrangement of the introduced lysolipids into the existing phospholipid bilayer and the subsequent changes in the pressure distribution 66 . The reorganization of the phospholipids within the first encountered lipid monolayer of the droplet-hydrogel bilayer results in a change in the surface tension and subsequent lateral expansion, as lysolipids are being fed into the monolayer (Fig. 4a, ii.). Meanwhile, this forces the excess DOPC phospholipids in the hexadecane oil to compensate from the internal side of the droplet-hydrogel bilayer, towards the LPC-induced monolayer expansion. Together, both leaflets of the bilayer endure tensional changes, which causes the adhesive forces at the droplet-hydrogel bilayer to shift and the whole bilayer to expand along the interface. In the presence of lysolipids, we classify the dominating forces of the eDIB model as the attractive forces at the droplet-hydrogel DIB (Fdh) and the repulsive forces at the droplet-droplet DIB (Fdd). As the droplet-hydrogel bilayer laterally expands in the presence of LPC, Fdh dominates over any other attractive forces and the bilayer becomes thinner (Fig. 4a). This leads to the pulling of the droplets towards the hydrogel shell, due to imbalanced forces. Here, we describe the pulling effect as the retraction of the droplets away from the centre of the middle oil core. At this stage, Fdd dominates and the thickness of the droplet-droplet bilayer increases. Above a characteristic critical bilayer thickness the aqueous droplets will separate, while below a critical bilayer thickness, coalescence occurs 65 . Otherwise, an equilibrium might be reached when the forces balance each other, and a characteristic equilibrium bilayer thickness is achieved. The above molecular dynamic changes after the addition of LPC promote the rearrangement and displacement of the inner droplets and DIBs. The rate and the degree of the destabilization effects depended on the concentration of LPC introduced. Droplet pulling was more explicit in eDIBs treated with 10 μM LPC, as shown in Fig. 4b. eDIBs treated with 1 μM LPC were overall less disturbed, with a mean square displacement similar to the control (0 μM), while the displacement of the droplets exposed to 10 μΜ LPC was more apparent (Fig. 4c). After approximately 8 hours of incubation with the lysolipids, the pulling effect led to droplet merging for eDIBs treated with 10 μM LPC. In fact, during the study period and at this concentration of LPC, it was observed that the inner droplets would initially merge with each other, and not with the hydrogel shell. This was due to the enhanced stability of DIBs formed on hydrogel semi-flat substrates 67 , compared to droplet-droplet DIBs. Once the first merging occurred, a cascade of merging continued where small droplets merged with larger droplets (the product of merging), due to the higher Laplace pressure inside smaller droplets 67 . The delayed droplet shifting and displacement in the presence of 10 μM LPC (Fig. 4c, ii) were attributed to the slower build-up of the lysolipid concentration at the droplet-hydrogel DIBs 68 . The effect of sub-micellar LPC concentrations on DIB bilayer area and contact angle.
High bilayer tension and strong adhesion forces at the droplet-hydrogel DIB contributed to the increased bilayer area and contact angles 23 . The bilayer area and contact angle were not captured at the droplet-hydrogel DIB, due to imaging limitations, and were only measured between the aqueous droplet-droplet DIBs. The three-dimensional micro-architecture of the eDIB capsules benefited the measurements of circular bilayer areas, which reflect the shape of the droplets, throughout incubation, as shown in Fig. 5a. This allowed the quantification of the circular bilayer areas of DIBs between adjacent droplets, which helps assess the bilayer stability and behaviour 16 . The bilayer area of 1 μM treated eDIBs shows delayed effects induced by LPC and a subsequent return to equilibrium, as evident from the bilayer area plateau (Fig. 5a, ii). Moreover, the bilayer area of vertical droplet-droplet DIBs inside eDIB capsules was calculated on the assumption that the droplets on either side of the bilayer were of equal diameter (SI Appendix, Fig. S4). Fig. 5b shows the average bilayer area of 1 μM and 10 μM treated DIBs throughout the incubation period. Whilst negligible bilayer area reduction was observed with 1 μM LPC, the 10 μM LPC caused a significant reduction within the initial 3.5 hours, followed by droplet merging (bilayer area increase) and then, once again, bilayer area reduction. The contact angle of DIBs typically depends on the droplet diameter and the lipid and oil composition, as they can affect the surface tension and consequently the droplet-droplet adhesion 21 . The number of droplets enclosed within a volume forming DIBs can also affect the droplet-droplet contact angle 23 . In most DIB models, the contact angle of DIBs is manipulated prior to DIB formation by varying the lipid and oil composition (SI Appendix, Fig. S5). In this study, the contact angle among the inner compartments can be manipulated post-fabrication through the incubation of the eDIBs with sub-micellar LPC concentrations. This is displayed in Fig. 5c, where the mean contact angle between eDIB droplets was measured before the LPC started to affect the droplet-droplet DIBs to a measurable extent, and at the end of the incubation. In addition to the endpoint contact angle measurements, the contact angle was measured at approximately 9 hours for eDIBs treated with 10 μM LPC only, representing an average timepoint after the first droplet coalescence. Bilayer peeling between two droplets of contact angle θb and monolayer surface tension γm was previously attributed to the critical adhesive bilayer force per unit length being exceeded by a quantity (F⊥) which drives the droplet-droplet DIB separation, F⊥ = γm sin θb 25 (Fig. 4a, ii.). Here, the peeling or pulling at the droplet-droplet DIB is indirectly driven by the dominating monolayer surface tension, attractive forces and droplet shape deformation at the droplet-hydrogel DIB. The number of eDIBs is noted by N, whilst the sample population of the measurable characteristic (bilayer area or contact angle) is noted by n.
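The exact bilayer-area expressions used by the authors are given in their SI Appendix; as a rough, hedged sketch of the underlying geometry, the snippet below treats a vertical droplet-droplet DIB as the circular contact patch between two equal spherical droplets, so that the patch radius is approximately R·sin(θb/2) when θb is the full angle measured between the two droplet surfaces (or R·sin(θb) under the half-angle convention). The numbers in the example are placeholders, not measurements from this study.

```python
import math

def dib_bilayer_area(droplet_radius_um, contact_angle_deg, full_angle=True):
    """Approximate circular bilayer (contact patch) area between two equal
    spherical droplets. If the measured angle is the full angle between the
    two droplet surfaces at the contact line, the patch radius is roughly
    R*sin(theta/2); with the half-angle convention it is R*sin(theta).
    Returns the area in um^2. Geometric sketch only; the study's own
    expressions are in its SI Appendix."""
    theta = math.radians(contact_angle_deg)
    half = theta / 2 if full_angle else theta
    r_patch = droplet_radius_um * math.sin(half)
    return math.pi * r_patch ** 2

# Placeholder values (illustrative only): a 95 um radius droplet, 38 degree angle.
area = dib_bilayer_area(droplet_radius_um=95.0, contact_angle_deg=38.0)
print(f"approximate bilayer area: {area:.0f} um^2")
```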
Overall, we demonstrated the successful encapsulation of droplet interface bilayer membranes into self-supported hydrogel capsules (eDIBs) using COC/Nylon, 3D-printed microfluidic devices. This was aided by utilizing lipid vesicles as interfacial tension-altering particles, which hindered the mixing between miscible phases. In addition, a method was established for inducing and monitoring the release of the inner aqueous compartments from these free-standing, complex emulsion capsules using micellar lysolipid concentrations. Sub-micellar concentrations, on the other hand, induced more refined effects, including 3D reorganisation and changes in the bilayer area and contact angle. Advantages of employing microfluidics in DIB model construction include the high production yield and control over the size and structural order, whilst various features can be introduced, such as phospholipid bilayer asymmetry. The incorporation of phospholipid DPPC vesicles within the alginate phase can contribute towards the formation of asymmetric DOPC/DPPC bilayers, following partial lipid-in and lipid-out DIB formation. In this study, however, we considered a lipid-out symmetric DOPC bilayer constructed at the bilayer interface between droplets, and between any droplet and the hydrogel. An earlier active-release study on hydrogel eDIBs demonstrated the synergy between pore-forming peptides, but no control over the organization or DIB adhesion was reported 69 . Furthermore, when cholesterol molecules insert between the phospholipids of a bilayer, they create a condensed monolayer with restricted motion between the acyl chains of the phospholipids 65 . Similar to cholesterol molecules, lysolipids at non-pore-forming concentrations insert between phospholipid molecules and increase the surface tension of the phospholipid monolayer (for LPC, the tension will be higher between the polar headgroups) and the energy of adhesion of the bilayer. For the duration of the lysolipid LPC treatment, we hypothesized that the LPC molecules introduced to the eDIB system were unable to encounter the droplet-droplet DIB directly. Therefore, the lysolipids only affected the droplet-hydrogel bilayer, where strong adhesion forces pull the droplets and attenuate the droplet-droplet DIB area. These findings present an approach for in-situ and automated organization, as well as the manipulation of the bilayer area and contact angle of encapsulated droplet-droplet DIBs. It should be noted that the duration of phospholipid bilayer exposure to lysolipids can enhance the lipid molecular transfer to the opposite leaflet 70 and hence the degree of impact.
Research in artificial cells and protein reconstitution would benefit from the non-invasive modulation of artificial cellular membranes. Simply by introducing lysolipids we can modulate the spatial organisation and physicochemical characteristics of encapsulated droplet interface bilayers. These findings pave the way for non-invasive transmembrane protein density control studies, as well as for establishing communication between the internal and external environment of artificial cell chassis. Complex artificial membrane models such as eDIBs, aided by droplet microfluidics, offer the benefit of encapsulating and interfacing two or more reagent-carrying compartments. Droplet microfluidic technology provides a versatile tool for developing such increasingly sophisticated droplet structures for artificial cellular models, to study biomolecular interactions and the precision engineering of encapsulated bioinspired membranes. 3D-printed microfluidic device fabrication and operation. The microfluidic device was designed using COMSOL Multiphysics (version 5.6) and fabricated using the Ultimaker S5 Pro Bundle with cyclic olefin copolymer (Creamelt) and Nylon (Ultimaker). The device was sliced using the CURA software with the assigned print settings summarized in SI Appendix. All devices after printing were stored with silica gel sachets. Each liquid phase was delivered to the microfluidic device using SGE gas-tight glass syringes loaded onto positive displacement syringe pumps (KD Scientific). The SGE syringes were connected directly to the 3D-printed microfluidic inlets using PTFE tubing (O.D. ᴓ = 1.58 mm, I.D. ᴓ = 0.80 mm). Optical and Fluorescence Microscopy of eDIBs. eDIBs during on-chip emulsification were imaged using a Dino Lite Edge USB microscope. eDIBs post-production and during the LPC treatment were imaged using an EVOS M7000 Imaging System. Imaging associated with the lysolipid treatment was carried out at 37 °C, where the well plate containing the eDIB capsules was sealed with a tape to prevent evaporation. Bilayer area and DIB contact angle measurements. The bilayer area was measured in three different ways depending on the bilayer orientation and the sphericity of the droplets forming the DIB. See SI Appendix for the full bilayer area calculation. Due to the ability of the inner droplets to maintain their three-dimensionality, the contact angle was simply calculated by measuring the angle between two adjacent inner aqueous droplets using the angle drawing tools in ImageJ. Before measuring the angle, the contrast of the image was adjusted accordingly, in order to remove any noise around the region of interest. The bilayer area and contact angle of eDIBs produced using 4 mg/mL DOPC in 10 % silicone oil were also measured as a reference and comparison to conventionally produced eDIBs (12.5 mg/mL DOPC, 100 % hexadecane). Fig. 1. Monolithic 3D-printed microfluidic device generates triple emulsion capsules of encapsulated droplet interface bilayers (eDIBs). a Schematic of the triple emulsion microfluidic flow and production of eDIBs. The water phase (W1) is broken into droplets by the lipid-containing hexadecane oil (O1), which is then engulfed by a vesicle-containing alginate solution (A). The eDIBs are formed at the final 3rd junction and gelled downstream by the Ca2+-infused nanoemulsion (O2). Fig. 3.
The effect of externally added LPC lysolipids on the release of the inner aqueous droplets from eDIB capsules. a Stepwise schematic of the LPC treatment execution on eDIB capsules. First, a thin layer of 1 % w/v agarose was added to the bottom of the well, followed by the addition of eDIB capsules and then another thin layer of agarose. This facilitated the immobilization of eDIBs at the bottom of the plate during the treatment and imaging with the EVOS automated platform. The temperature of the imaging platform was kept at 37 °C and the humidity was controlled by a well plate sealing tape. b i. Top view and side view schematic of the eDIB capsules, showing the external addition of monomeric and micellar LPC. During the incubation of the eDIBs with concentrated LPC micelles, the lysolipids interact with the DIBs formed between the hydrogel and the inner aqueous droplets (droplet-hydrogel DIB) and subsequently the droplets are released into the hydrogel. ii. Time-lapse of the aqueous fluorescent (sulforhodamine B) inner droplets, showing the rapid release from eDIBs treated with micellar LPC concentrations (100 μM). Scale bar: 200 μm. c Fluorescent signal of the eDIB inner droplets incubated with different concentrations of LPC (fluorescence decrease assay). The intensity reduction for the untreated eDIB capsules (0 μM) is attributed to artefacts of the automated imaging platform and photobleaching. The sample population per concentration for the intensity analysis was as follows: n=11 (0 μM), n=15 (1 μM), n=19 (10 μM), n=17 (100 μM), n=16 (1000 μM). LHS: Normalised intensity versus time. The shaded regions for each line plot correspond to the standard error of the mean (±SEM). RHS: The normalised intensity replotted on the logarithmic (log) scale over time. Besides this exponential decay, there are three consistent fluctuations at concentrations 10 μM, 100 μM and 1000 μM, showing a small delay with decreasing concentration. These fluctuations begin during a secondary process and finally level out. Fig. 4. Inner droplet dynamics and re-arrangement under the influence of sub-micellar LPC lysolipid concentrations. a Schematic diagram of eDIBs and key bilayer interfaces before (-LPC) and after (+LPC) the addition of lysolipids. i. The eDIB system and bilayer interfaces are at equilibrium, as attractive and repulsive forces balance each other. ii. The introduced lysolipids take the eDIB out of equilibrium, as the LPC and DOPC contribute to the lateral expansion of the droplet-hydrogel DIB, by inserting from the external and internal side of the bilayer, respectively. Consequently, the attractive forces at the droplet-hydrogel DIB (Fdh) rise, which in turn drive the repulsive forces at the droplet-droplet DIB (Fdd). These forces, Fdh and Fdd, lead to the thinning and thickening of the droplet-hydrogel DIB and droplet-droplet DIB, respectively. The contact angle (θb) between the droplets is influenced by the increasing repulsive forces and the characteristic surface tension component, F⊥. b Time lapse of the inner aqueous droplets of eDIBs treated with 1 μΜ and 10 μΜ LPC, showing significant pulling and subsequent merging of droplets treated with 10 μM LPC. c Plots of, i. the X and Y position of the inner droplets and, ii. the mean square displacement (MSD) of 0 μM, 1 μM and 10 μM LPC treated eDIBs measured over 11 hours, revealing that 1 μM treated droplets travelled similarly to the untreated construct, while there was significant travel by 10 μM treated droplets. The dots in i.
show the location of the individual droplets at t=0. Error bars in ii. correspond to the standard error of the mean (±SEM). Fig. 5. LPC lysolipid impact on the bilayer area and contact angle of eDIBs. a i. A schematic of an eDIB capsule with two inner droplets and a formed DIB (yellow circular droplet contact area), before and after the addition of LPC. The DIB area is reduced during incubation with LPC, as the adhesive forces of the droplet-hydrogel bilayer and the repulsive forces at the droplet-droplet bilayer begin to dominate (↑↑ Fdh-attractive, ↑↑ Fdd-repulsive). ii. Time-lapse of fluorescent droplets encapsulated within an eDIB capsule treated with 1 μM LPC, showing the reduction of the bilayer area, as indicated by the red dotted circle. To reveal the bilayer between the contacting droplets, the brightness and contrast of the image were adjusted. Scale bar: 200 μm. iii. The measured circular bilayer area from ii. is plotted over time as a scatter plot, whilst the dotted curve shows the linear decrease in the first 8 hours after 1 μM LPC addition; this is followed by a transition to a constant bilayer area (equilibrium reached) until the end of the study. b Average DIB bilayer area over time across a population of eDIBs treated with 1 μM (n=11, N=4) and 10 μM (n=12, N=5) LPC. The DIB bilayer area of 10 μΜ treated constructs displays a drop at 3.5 hours and then an increase at approximately 8 hours, which indicates first the pulling of the droplets and then the subsequent merging, respectively. After that, the bilayer area follows a reduction and begins to equilibrate. A minimal and subtle decrease in the bilayer area was observed throughout the study in 1 μM treated eDIBs. The number of measured vertical bilayers for 10 μM treated DIBs was initially n=12 (N=5), and this dropped to n=4 (N=5) by the final timepoint, due to droplet merging. c Line graph of the average DIB contact angle as a function of time for 1 μΜ (n=55, N=6) and 10 μΜ (n=47, N=9) treated eDIB capsules. An additional timepoint at approximately 9 hours was plotted, which corresponds to the initial merging of droplets treated with 10 μΜ (the best fit for 10 μM treated eDIBs is shown by the dotted line). The line plots are accompanied by linear equations, which reveal the initial average DIB contact angle (38° for 1 μM and 35° for 10 μM). The population number of the measured contact angles for 10 μM was n=47 (N=9), and this dropped to n=22 (N=9) by the final timepoint, due to droplet merging.
Further details regarding the microfluidic device, channel dimensions and flow operation can be found in SI Appendix. Production of Water-in-Oil-in-Water-in-Oil eDIB capsules (W1/O1/A/O2). All reagents were purchased from Merck, unless otherwise stated. The inner water phase (W1) consisted of a buffer solution of 0.05 M HEPES, 0.15 M potassium chloride, and 200 μM sulforhodamine B (SulfB) or 70 mM calcein. The middle oil phase (O1) consisted of 12.5 mg/mL 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) in hexadecane. DOPC was first dispersed in hexadecane following the thin film lipid hydration method. Briefly, the DOPC powder was dissolved in chloroform and evaporated using a gentle nitrogen stream until a thin film of lipids was formed. The DOPC film was subjected to a vacuum for at least 30 minutes to evaporate any residual chloroform and then released under nitrogen gas. The shell phase (A) consisted of 1 % w/v alginate and 0.5 mg/mL 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) vesicles in buffer. The DPPC vesicle solution was prepared using the thin film lipid hydration method, following vacuum overnight. The DPPC film was dispersed in the buffer solution, vortexed for 30 seconds and sonicated in a water bath at 55 °C for 15 minutes. The eDIB capsules' oil carrier phase (O2) consisted of a Ca2+-infused mineral oil emulsion, which facilitated the gelation of the alginate shell. This carrier phase was prepared by mixing an aqueous solution of 1 g/mL CaCl2 and mineral oil at a 1:9 volume ratio, with 1.2 % SPAN 80 surfactant. The mixture was stirred for at least 10 minutes using a magnetic stirrer and plate, creating a Ca2+-infused nanoemulsion. During experiments, the outlet orifice was slightly submerged in 0.2 M CaCl2. The microfluidic setup and execution here aimed at the formation of approximately 1 mm diameter eDIBs, with large water droplet compartments (> 100 μm) segregated by artificial lipid membranes (i.e., DIBs). LPC treatment of eDIBs. eDIB capsules were immobilized with 1 % w/v low-melting-temperature agarose in the wells of a 96-well plate. LPC in buffer was prepared and used appropriately, in order for each well to have a final LPC concentration of 1, 10, 100 or 1000 μM. The droplet release was evaluated by monitoring the decrease in the fluorescence of SulfB (200 μM) from the droplets of individual eDIBs, or the fluorescence increase in the wells with eDIBs encapsulating quenched calcein (70 mM). Details related to the LPC fluorescence increase assay can be found in the SI Appendix. Fluorescence and image analysis.
The droplet release in the fluorescence decrease assay was evaluated by monitoring the fluorescence decrease from the aqueous SulfB droplets of individual eDIBs (the measured region was the area of the fluorescent droplets inside the whole construct). The droplet release in the fluorescence increase assay was evaluated by monitoring the fluorescence increase of the wells containing eDIBs carrying droplets of quenched calcein (70 mM). Image handling and fluorescence analysis were carried out using ImageJ software. The integrated fluorescence intensity was measured at the timepoint of interest, with the ROI minimized to the area of the fluorescent droplets. The intensity plots show the intensity normalised to the intensity extracted from the control (0 μM LPC) fluorescent droplets in eDIBs. The position and displacement of the droplets were recorded using the manual tracking tools within ImageJ. The eDIB samples were monitored for over 10 hours and the position of the droplets was recorded every 5 minutes.
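As a hedged sketch of how a mean square displacement curve like the one in Fig. 4c could be computed from such manually tracked positions, the snippet below averages squared displacements over droplets and start times for each lag; the coordinate arrays are placeholders, not the tracked data from this study.

```python
import numpy as np

def mean_square_displacement(x, y):
    """x, y: arrays of shape (n_droplets, n_timepoints) with tracked positions.
    Returns the MSD for each lag (in frames), averaged over droplets and
    over all start times."""
    n_droplets, n_t = x.shape
    msd = np.zeros(n_t - 1)
    for lag in range(1, n_t):
        dx = x[:, lag:] - x[:, :-lag]
        dy = y[:, lag:] - y[:, :-lag]
        msd[lag - 1] = np.mean(dx**2 + dy**2)
    return msd

# Placeholder tracks: 3 droplets followed over 6 frames (5-minute intervals), in um.
x = np.array([[0.0, 1.0, 2.1, 2.9, 4.2, 5.0],
              [0.0, 0.4, 1.1, 1.8, 2.0, 2.6],
              [0.0, 0.9, 1.5, 2.4, 3.1, 3.9]])
y = np.zeros_like(x)  # motion along x only, for this toy example

lags_minutes = 5 * np.arange(1, x.shape[1])
for lag, value in zip(lags_minutes, mean_square_displacement(x, y)):
    print(f"lag {lag:3d} min: MSD = {value:.2f} um^2")
```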
8,331.4
2023-12-14T00:00:00.000
[ "Materials Science", "Chemistry", "Engineering", "Biology" ]
Homogeneity and Viscoelastic Behaviour of Bitumen Film in Asphalt Mixtures Containing RAP This article discusses the phenomenon of fresh and RAP binder miscibility and presents test results on the properties of the bitumen film from specially prepared asphalt mixtures. The miscibility of a fresh binder and a RAP binder has still not been fully recognised. The aim of this study was to determine the homogeneity level of the bitumen film based on a viscoelastic assessment. In addition, an attempt was made to assess the impact of the fresh binder on the degree of binder blending. The study included an assessment of the homogeneity of bitumen films comprising various types of bituminous binders. The assessment was conducted on the basis of dynamic shear rheometer tests of the rheological properties of the binders recovered from specific layers of the bitumen film using a staged extraction method. The complex shear modulus as a function of temperature, the elastic recovery R and the non-recoverable creep compliance JNR from the MSCR test were determined. The conducted statistical analyses confirmed the significant impact of the type of fresh binder on the blending degree. Regression relationships were established between the differences of the complex shear modulus of the binders subject to mixing and the differences of the complex shear modulus of the binders from the internal and external layers of the bitumen film comprised of those binders. It was found that there is no full blending of the fresh binder with the hard bitumen simulating the RAP binder, which results in non-homogeneity of the bitumen film. Introduction The part of the binder which is involved in the formation of the bitumen film increases with increasing RAP content in the asphalt mixture. The need to use increasing amounts of RAP forces the necessity to recognise the miscibility of binders and the resulting properties of the binder film in asphalt mixtures containing RAP, as well as their impact on asphalt mixture properties. Current studies indicate that there is no full blending of the RAP binder with the virgin binder [1][2][3][4][5][6]. The studies on binder miscibility consider three possible blending scenarios: I. Full blending-when the binder film is characterised by the same viscoelastic and chemical properties along its thickness; II. Partial blending-when the properties of the binder are different depending on the point in the binder film, but there is no strict interface between both binders; III. No blending-when the virgin binder covers the RAP binder without blending and the RAP is then referred to as black aggregate or black rock. The distribution of binders in the binder film for specific blending degrees is presented in Figure 1. The problem of blending between virgin and RAP binders began to be analyzed by researchers in the late 1970s. It was the beginning of intensive development of asphalt recycling technologies [8]. The first works from that period [9,10] nowadays constitute a valuable source of knowledge. Testing methods developed and applied at that time still allow testing of the binder blending phenomenon, but now using much more modern laboratory equipment. The methods of testing binder miscibility can be divided into two groups. The first group consists of the so-called direct methods. They consist of the assessment of the chemical properties of mixtures or the analysis of the microstructure of the bitumen film. Direct methods include, among others, FTIR spectroscopy, GPC gel chromatography, X-ray spectroscopy and microscopy analysis [11][12][13][14][15].
The second group consists of indirect methods. They consist of analysing the viscoelastic parameters of the binder extracted from the bitumen film [16]. The parameters of the binder extracted from the bitumen film are compared with the parameters of mixtures produced with fully blended binders [17,18]. The method that allows binder samples to be obtained in order to assess the miscibility and homogeneity of the bitumen film is the staged extraction method, proposed for the first time by Carpenter and Wolosick in a work published in 1980 [9]. That method, based on the staged extraction of the bitumen film layers, leads to the separation of samples of the dissolved binder for each of the sub-layers [19,20]. Studies concerning the binder blending degree using indirect methods were also conducted by Cooley and Williams with the use of a DSR rheometer to determine the critical temperature at a stress of 2.2 kPa. The current analyses of the blending phenomenon prove that, in most cases, the most realistic variant seems to be the intermediate one, involving a partial blending of both binders [21][22][23][24]. Alongside the discussion of the binder blending phenomenon, the latest scientific research indicates that the activation of the binder in RAP is also a significant and not fully explained problem. The activation of the RAP binder is necessary for its mixing and blending with the virgin binder and virgin aggregate. The degree of binder activation also determines the homogeneity of the asphalt mixture with RAP [25]. Currently, there are no reliable testing methods to predict the RAP binder activation degree. The degree of binder activation and the degree of binder blending are crucial in the design process of asphalt mixtures with RAP. These parameters could also be helpful to determine the required content of virgin binder or rejuvenators [4]. Previous research shows that the degree of binder activation and binder miscibility depends mainly on the mixing temperature, while the mixing time is less important [26,27]. This article presents the results of step-by-step research and analyses of the viscoelastic properties of binders extracted from the bitumen film of a laboratory-prepared asphalt mixture. The aim of this study was to determine the non-homogeneity level of the bitumen film based on a viscoelastic evaluation. In addition, an attempt was made to assess the impact of the fresh binder on the blending degree. The assessment was conducted on the basis of tests in a dynamic shear rheometer (DSR) of the rheological properties of the binders recovered from specific layers of the binder film using the staged extraction method.
Materials Melaphyre aggregate with 2/5 mm granulation was used to produce the asphalt mixture for the staged extraction procedure. The bitumen film on the aggregate was introduced in two layers and was composed of two types of binders: paving bitumen 20/30 simulating the RAP binder, and a virgin binder according to the testing plan. In total, seven types of virgin binders were used, including three paving bitumens, three polymer-modified bitumens (PMB) and one highly modified bitumen (HIMA). In order to ensure the same mixing conditions for all the binders, the mixing temperatures were set to those corresponding to a dynamic viscosity of 2 Pa·s. The basic properties of the binders used for the tests are presented in Table 1. Staged Extraction Method In order to separate the two layers of the binder film, the staged extraction method was used. This method was proposed for the first time by Carpenter and Wolosick [6]. 
That method was based on the staged extraction of the bitumen film layers, leading to separation of samples of the dissolved binder for each of the sub-layers. It should be noted that this method does not allow step-by-step removal of the binder film in layers of the same thickness. However, it can be assumed that the samples obtained by this method from two or three sub-layers allow for an approximate analysis of this phenomenon [28]. In the case of a typical asphalt mixture consisting of aggregates with gradation from 0 to X mm, the fine aggregate particles could clump together in the form of clusters. As a result, a certain amount of binder remains hidden and can be stripped only in complex and time-consuming extraction processes. Considering these limitations, and wanting to access the entire specific surface area of the binder as quickly as possible, a single aggregate fraction with granulation from 2 to 5 mm was used. In addition, the aggregate used in the mixture was previously washed on a 2 mm sieve in order to eliminate dust and undersized grains. Coarse crushed aggregate with the lowest possible granulation was used in order to introduce as much binder into the mixture as possible while keeping the standard thickness of the binder film. The asphalt mixture was produced in two steps. In the first step, hot aggregate was covered by the 20/30 binder at half of the expected total layer thickness. That way, a system simulating RAP particles was obtained. Next, the particles were heated to 140 ± 5 °C, corresponding to the RAP dosage temperature in hot recycling technology [29]. In the next step, the virgin binder was added, which was followed by a second mixing step lasting 40 s. The virgin binder used in the second mixing step was heated to the temperature determined on the basis of the viscosity test (2 Pa·s criterion). After mixing, the binder-covered aggregate particles were scattered on steel trays to cool and to keep the grains in a loose state. It was assumed that the amounts of binders obtained from specific layers of the bitumen film should be similar. On the basis of the initial studies, assuming different times needed for extraction of each specific layer, the extraction was divided into two stages allowing selection of two sub-layers from the whole bitumen film. The first extraction stage, of the external layer of the bitumen film, lasted 10 s. The second stage, which aimed to extract the remaining binder, lasted 120 s. The first extraction stage removed approximately half of the thickness of the bitumen film, while the second stage stripped the remaining part of the binder. The scheme presenting the applied method is included in Figure 2. 
The collected bitumen and solvent solutions were evaporated in a vacuum rotary evaporator to recover the pure binder. The recovery process was conducted in accordance with the procedure described in the PN-EN 12697-3 standard. The use of the hard binder 20/30 required the second recovery stage to be conducted at a higher temperature (180 °C) in order to fully vaporise the solvent. Rheological Tests Samples of the recovered binders were subjected to rheological tests in a dynamic shear rheometer (DSR). A Physica/Anton Paar MCR 101 rheometer (made in Graz, Austria) with a Peltier Thermostated Temperature Device P-PTD200 temperature control system was used. The complex shear modulus G* was determined according to the methodology described in the PN-EN 14770 standard. The tests were conducted in two temperature ranges: from −5 °C to 25 °C with 10 °C intervals using 8 mm diameter parallel plates with a 2 mm gap, and from 30 °C to 100 °C with the same temperature interval using 25 mm diameter parallel plates with a 1 mm gap. In both temperature ranges, the tests were conducted at a constant angular frequency of 10 rad/s (1.59 Hz). The tests were made in oscillation-controlled strain mode in the range from 0.1% to 15%, depending on the test temperature, so that the binder remained in the linear viscoelastic (LVE) range. For mixtures containing polymer-modified binders, a Multiple Stress Creep Recovery (MSCR) test at a temperature of 60 °C was also performed. For the MSCR test, 25 mm diameter parallel plates and a 1 mm gap were applied. Tests were conducted at two stress levels, 0.1 kPa and 3.2 kPa, respectively [30]. Complex Modulus and Phase Angle The degree of blending of the binders was assessed based on an analysis of the differences between the properties of the binder layers extracted and recovered from the bitumen-film-covered aggregates. 
In the first step, the zero-miscibility index ∆G*O was determined for the original binders (O), i.e., the binders before mixing and creation of the layered bitumen system on the aggregate. This index is calculated as the difference between the logarithm of the complex modulus G* of the bitumen simulating the RAP binder (20/30) and the logarithm of the complex modulus G* of the virgin binder, i.e., 35/50, 50/70, 70/100, 10/40-65, 25/55-60, 45/80-55, 45/80-80 (Equation (1)). The ∆G*O parameter characterises the maximum possible difference that can occur in the case of a complete lack of blending between the two bitumen layers. In the second step of the research, a blending assessment was carried out on the specimens with two layers. The binders were recovered by the staged extraction method described in Section 2.2. The results of the complex modulus G* for binders from different layers, extracted step by step from the laboratory-made specimens, are presented in Tables 4 and 5. The results are organised in Tables 4 and 5 into two groups, for the paving bitumens and for the polymer-modified bitumens, respectively. The virgin binder means the binder used to create the outer layer of the bitumen film. Layer C in Tables 4 and 5 denotes the case in which complete blending occurred. Results for data series C were obtained by testing two binders physically mixed in equal mass proportions, i.e., 20/30 with each of the virgin binders. In order to evaluate the differences between the two bitumen film layers, the ∆G*R index was defined. ∆G*R is the difference between the logarithm of the complex modulus G* of layer 1 (outer) and the logarithm of the complex modulus of layer 2 (inner) at a given temperature, i.e., ∆G*R = |log G*w1 − log G*w2|, where: ∆G*R is the absolute value of the difference between the logarithm of the complex modulus of the binder of layer 1 and the logarithm of the complex modulus of the binder of layer 2 at a given temperature; G*w1 is the value of the complex modulus of the binder of layer 1 (outer) at a given temperature [kPa]; G*w2 is the value of the complex modulus of the binder of layer 2 (inner) at a given temperature [kPa]. When the binders are fully mixed, the ∆G*R index will have a value of 0. Example dependencies of the complex modulus G*, with an example of the determination of the ∆G* index, are shown in Figure 3. The comparison of the calculated ∆G*R index for all tested recovered binders is shown in Figure 4. 
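For readers who want to reproduce the index calculations, the short Python sketch below implements the ∆G* definitions given above; the function follows the text, while the numerical values are invented placeholders rather than data from Tables 4 and 5.

# Illustrative sketch (not the authors' code): the zero-miscibility index
# dG*_O and the recovered-binder index dG*_R, both defined as the absolute
# difference of the logarithms of two complex moduli.
import math

def delta_g_star(g_star_a_kpa, g_star_b_kpa):
    """Return |log10(G*_a) - log10(G*_b)| for two complex moduli given in kPa."""
    return abs(math.log10(g_star_a_kpa) - math.log10(g_star_b_kpa))

# Hypothetical example values at one test temperature (kPa)
g_rap_binder = 3.2e4      # 20/30 bitumen simulating the RAP binder
g_virgin_binder = 4.1e3   # virgin binder before mixing
g_layer1_outer = 6.0e3    # binder recovered from the outer layer
g_layer2_inner = 1.8e4    # binder recovered from the inner layer

dG_O = delta_g_star(g_rap_binder, g_virgin_binder)   # zero-miscibility index
dG_R = delta_g_star(g_layer1_outer, g_layer2_inner)  # recovered-binder index
print(f"dG*_O = {dG_O:.3f}, dG*_R = {dG_R:.3f}")
# dG*_R close to 0 would indicate full blending; dG*_R close to dG*_O would
# indicate no blending at all.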
It should be noted that the highest values of the ∆G*R index are achieved for paving binders 50/70 and 70/100; therefore, for these binders there are the greatest differences in the complex modulus over the thickness of the bitumen film. The lowest differences of the complex shear modulus over the thickness of the bitumen film occur in the case of the polymer-modified bitumens, i.e., 10/40-65 and 25/55-60. The values of ∆G*R for the remaining analysed binders fall between these. On the basis of the obtained results, it can be concluded that the difference between the complex modulus of both layers of the bitumen film decreases with an increase of binder stiffness and with the presence of the polymer. In addition, it was found that the value of ∆G*R for all analysed binders increases with an increase of the test temperature in the range from −5 to approximately 40-50 °C. At higher temperatures (50-100 °C), such an increase was observed only in the case of paving bitumens, in particular for 50/70 and 70/100. The values of the indexes for modified binders in the temperature range 40-100 °C are almost constant at a similar level. In addition, it was found that the value of ∆G*R does not have a linear relationship to the layer differences, so the gaps between the curves in Figure 4 at different temperatures correspond to different layer differences. In order to determine the statistical significance of the impact of the binder type, polymer modification and test temperature on the ∆G*R index, an ANOVA analysis was conducted at a 95% confidence level [14]. When the p-value is less than or equal to 0.05, the independent variable is interpreted as having a significant impact on the tested value. The results of the ANOVA are presented in Table 6. The conducted ANOVA analysis proved that the value of ∆G*R significantly depends on the test temperature and the type of virgin binder used in the layered aggregate/bitumen system. Therefore, it can be concluded that in the case when full blending between the two bitumen layers does not occur, the differences of G* of the internal and external binder film layers will approximately correspond to the differences of G* of the two binders being mixed. In order to verify these findings, a correlation analysis was conducted for the original binders and the recovered binders. The analysis used the following assessment criteria: r < 0.5, low correlation; 0.5 ≤ r < 0.7, average correlation; 0.7 ≤ r < 0.9, high correlation; 0.9 ≤ r < 1, almost full correlation. The results of the correlation analysis of the ∆G* indexes are presented in Table 7. Additionally, the results are arranged separately for the group of paving bitumens and the group of polymer-modified bitumens, and jointly for the group of all binders. The correlation analysis did not include the highly modified binder because of the different nature of the change of its complex modulus as a function of temperature in relation to the other polymer-modified binders. The results of the analysis showed that the differences in the value of the complex modulus in the range from −5 °C to +100 °C for the external and internal layers significantly correlate with the differences of the complex modulus for the original binders tested in the same temperature range (p = 0.000). In the case of paving bitumens, an almost full correlation was found, with a Pearson r value equal to 0.948. In the group of all analysed virgin binders, the correlation was determined to be at a high level. 
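The correlation analysis described above can be reproduced along the following lines; this is only an illustrative sketch, assuming paired ∆G*O/∆G*R series measured over the test temperatures, and the numbers below are invented, not the study data.

# Pearson correlation between dG*_O (original binders) and dG*_R (recovered
# layers), classified with the assessment criteria quoted in the text.
from scipy.stats import pearsonr

def classify_r(r):
    r = abs(r)
    if r < 0.5:
        return "low correlation"
    if r < 0.7:
        return "average correlation"
    if r < 0.9:
        return "high correlation"
    return "almost full correlation"

dG_O_series = [0.35, 0.48, 0.62, 0.71, 0.80, 0.86]  # placeholder values
dG_R_series = [0.38, 0.46, 0.55, 0.63, 0.69, 0.74]  # placeholder values

r, p_value = pearsonr(dG_O_series, dG_R_series)
print(f"r = {r:.3f} ({classify_r(r)}), p = {p_value:.4f}")
# p <= 0.05 is interpreted as a statistically significant relationship.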
It can be assumed that the lower correlation in the case of polymer-modified bitumens may be related to the differences in the content and cross-linking degree of the polymer in the tested extracted binders. Moreover, the lower correlation in the case of polymer-modified bitumens may be related to their higher viscosity; as a result, the thickness of the film with polymer-modified bitumen can be more varied, and the consequence is a wider dispersion of the results. Figures 5-7 present a graphical interpretation of the correlation analysis for the ∆G* indexes. The statistical analysis of the correlation strength demonstrated that an increase of the differences of the complex modulus G* of the mixed binders causes an increase in the difference between the complex modulus of the binders extracted from the two selected layers of the bitumen film. In Figures 5-7, the red line represents the case in which ∆G*R would be equal to ∆G*O, i.e., extracted binders showing no miscibility. The position of the regression line in relation to the red line allows us to conclude that, with an increase of the value of ∆G*, the differences between the values of these indexes for the extracted and original binders increase. The smallest differences are observed for the paving binders. 
Based on the statistical analysis, the dependence between the value of the ∆G*O index (original binders) and the value of the ∆G*R index (recovered binders) was determined. The dependence including results for all analysed binders is presented in Equation (3). On the basis of Equation (3), it can be stated that the differences between the logarithms of the complex modulus of the internal and external layers of the bitumen film constitute approximately 66% of the difference of the complex modulus of the original binders used in the binder film. The value of the slope of the linear function being lower than 1 should be interpreted as the effect of partial blending of both binders. As a result of partial blending, the properties of the external layer change in the direction of the properties of the internal layer, and simultaneously the properties of the internal layer change in the direction of the properties of the external layer. In the case of a complete lack of blending of the two binders, the slope value would equal 1. The constant in the equation, of about 0.16, is related to the stiffness aging effect. The binder which simulates the RAP binder has stiffened due to aging. Aging took place when the 20/30 binder was mixed with the aggregate and when the RAP-simulating mixture was reheated during mixing with the fresh binder. Consequently, the greater stiffness of the inner layer causes a greater difference in the case of the recovered binders. In the specific case when both original binders creating the bitumen film are characterised by the same values of the complex shear modulus (∆G*O = 0), the determined value of ∆G* will be approximately 0.16. This could be expected in real conditions, if the properties of the binder recovered from RAP and the original virgin binder were analysed; then, the constant could be equal to 0. 
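A dependence of the form given in Equation (3) can be obtained with an ordinary least-squares fit of ∆G*R against ∆G*O; a minimal sketch with invented input values is shown below, together with the interpretation of the slope and intercept used in the article.

# Least-squares fit dG*_R = a * dG*_O + b (placeholder data, not the study's).
import numpy as np

dG_O = np.array([0.20, 0.40, 0.55, 0.70, 0.85, 1.00])
dG_R = np.array([0.29, 0.42, 0.52, 0.62, 0.72, 0.82])

slope, intercept = np.polyfit(dG_O, dG_R, 1)
print(f"dG*_R = {slope:.3f} * dG*_O + {intercept:.3f}")
# Interpretation used in the article: a slope of 1 would mean no blending at
# all, a slope of 0 full blending, and the intercept reflects the additional
# stiffening of the inner (RAP-simulating) layer caused by aging.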
Equation (4) describes a similar relationship for virgin binders from the group of paving bitumens, while the group of polymer-modified bitumens is described by Equation (5): ∆G*R = 0.697∆G*O + 0.179 (4) and ∆G*R = 0.428∆G*O + 0.193 (5). Based on a comparison of the values of the slopes for paving bitumens (0.697) and polymer-modified bitumens (0.428), it can be stated that a binder film composed of two paving bitumens is characterised by a lower level of homogeneity than a binder film in which one of the layers is made of a polymer-modified bitumen. The specified dependencies can be applied in practice to forecast the non-homogeneity of a bitumen film composed of two types of binders with different complex moduli, especially when reclaimed asphalt pavement is used. Multiple Stress Creep Recovery Test-MSCR In a further stage of the studies, the binders extracted from bitumen film containing PMBs were subjected to multiple stress creep recovery (MSCR) tests, which were introduced to characterise how the elastic properties of PMBs change over the thickness of the bitumen film. Figure 9. Non-recoverable creep compliance JNR at 3.2 kPa stress; 1-outer layer of binder film, 2-inner layer of binder film, C-complete blending. In order to determine the significance of differences in the parameters R3,2 and JNR3,2 for specific layers of the bitumen film, as well as for the types of applied fresh binders, an ANOVA analysis was conducted. The results of the variance analysis are presented in Table 8. While analysing the values of the parameter R3,2 presented in Figure 8 and the results of the variance analysis presented in Table 8, it can be concluded that in the case of regular polymer-modified bitumens there are significant differences between the values calculated for the external, internal and entire layers of the bitumen film. These differences indicate the non-homogeneity of the bitumen film in terms of its elastic properties. It can be concluded that both binders are not fully blended. The external layer of the bitumen film, which contains mainly polymer-modified bitumen, is characterised by a higher elastic recovery. 
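For completeness, the MSCR parameters referred to above are normally derived per creep-recovery cycle; the sketch below shows the usual definitions (as in EN 16659 / AASHTO T 350) with invented strain values, not the measured ones.

# Percent recovery R and non-recoverable creep compliance Jnr for one
# 1 s creep + 9 s recovery cycle of the MSCR test.
def mscr_cycle(strain_start, strain_after_creep, strain_after_recovery, stress_kpa):
    """Return (R [%], Jnr [1/kPa]) for a single creep-recovery cycle."""
    creep_strain = strain_after_creep - strain_start        # strain gained during the 1 s load step
    residual_strain = strain_after_recovery - strain_start  # strain left after the 9 s recovery
    r_percent = 100.0 * (creep_strain - residual_strain) / creep_strain
    jnr = residual_strain / stress_kpa
    return r_percent, jnr

# Example: one cycle at the 3.2 kPa stress level (placeholder strains)
R, Jnr = mscr_cycle(strain_start=0.000, strain_after_creep=0.030,
                    strain_after_recovery=0.012, stress_kpa=3.2)
print(f"R = {R:.1f} %, Jnr = {Jnr:.4f} 1/kPa")
# R3,2 and JNR3,2 reported in the article are the averages of these per-cycle
# values over the ten cycles run at 3.2 kPa.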
It should be noted that, in the case of the tested binders recovered from the mixture containing the highly polymer-modified bitumen 45/80-80, no significant differences were found between the internal and external layers of the bitumen film. It can be assumed that the elastic properties of the internal layer were changed by the modifier. It should be noted that the HIMA binder is characterised by a softer bitumen base and a higher amount of polymer used for its production than traditional polymer-modified bitumens. In addition, some changes in elastic properties were recognised for the internal layers of the bitumen film, depending on the type of binder used. While analysing the values of the creep compliance (JNR3,2) presented in Figure 9 and the ANOVA results, it can also be stated that in the case of regular polymer-modified bitumens there are significant differences between the values calculated for the external and internal layers of the bitumen film. The lowest values of creep compliance occurred for the external layer of the binder film (virgin polymer-modified binder) and the highest for the internal layer (paving bitumen). Similarly, as in the case of the elastic recovery (R3,2 parameter), the found differences prove the non-homogeneity of the bitumen film in terms of its creep properties. Conclusions Summarising the conducted analyses within the scope of the assessment of the homogeneity of the bitumen film in asphalt mixture systems, it should be stated that, as a result of the lack of full blending of the two bituminous binders, the bitumen film covering the aggregates is not homogeneous. These findings result from the variability of the viscoelastic properties along the thickness of the bitumen film. On the basis of the conducted laboratory tests and analyses, the following detailed conclusions can be formulated: • The internal layer of the bitumen film has properties similar to the binder simulating the RAP binder, while the external layer has properties similar to the fresh binder. The demonstrated non-homogeneity of the bitumen film proves that the binders included in it are not fully blended. • The ∆G* index allows for the assessment of the homogeneity of the bitumen film based on the complex modulus G* from DSR tests. The binders from the different bitumen layers should be extracted step by step. Thus, this indirectly allows us to assess the degree of blending of the fresh binder and the binder from RAP. • At medium and high operating temperatures, the differences of the viscoelastic properties along the thickness of the bitumen film increase together with an increase of the differences of the properties of the binders subjected to mixing. A statistical regression relationship was established between the differences of the complex modulus of the binders subjected to mixing and the differences of the complex modulus of the binders coming from the internal and external layers of the bitumen film comprised of those binders. • The MSCR test analysis allowed us to recognise that the bitumen film in an asphalt mixture system varies in terms of elastic recovery and creep compliance properties when polymer-modified bitumens are used. • The staged extraction method proposed in this work can also be used to detect the presence of RAP in an asphalt mixture. Currently, RAP in asphalt mixtures is mainly detected on the basis of petrographic analysis of the mineral aggregates. However, such analysis could be ineffective if the RAP and virgin aggregates were characterised by the same petrographic composition. In such a case, analysis of the binder properties is the only option to detect RAP. 
Because of the proven non-homogeneity of the bitumen film in an asphalt mixture system, the presented method can be an effective tool to detect whether the asphalt mixture contains RAP.
8,782.2
2021-08-01T00:00:00.000
[ "Materials Science" ]
Updated and novel limits on double beta decay and dark matter-induced processes in platinum A 510 day long-term measurement of a 45.3 g platinum foil acting as the sample and high voltage contact in an ultra-low-background high purity germanium detector was performed at Laboratori Nazionali del Gran Sasso (Italy). The data was used for a detailed study of double beta decay modes in natural platinum isotopes. Limits are set in the range O(10^14-10^19) years (90% C.L.) for several double beta decay transitions to excited states, confirming and partially extending existing limits. The highest sensitivity of the measurement, greater than 10^19 years, was achieved for the two neutrino and neutrinoless double beta decay modes of the isotope 198Pt. Additionally, novel limits for inelastic dark matter scattering on 195Pt are placed up to mass splittings of approximately 500 keV. We analyze several techniques to extend the sensitivity and propose a few approaches for future medium-scale experiments with platinum-group elements. Introduction Understanding the properties of the neutrino (e.g. absolute mass scale, Dirac or Majorana particle nature, and scheme of the mass hierarchy) is among the goals of modern particle physics. One way to determine if the neutrino is its own antiparticle (Majorana in nature) is through the observation of neutrinoless double beta decay (0ν-DBD). Unlike two neutrino double beta decay (2ν-DBD), which is a Standard Model process, 0ν-DBD is a lepton-number-violating process in which the two simultaneously-generated neutrinos annihilate, leaving only two outgoing beta particles. DBD processes (double electron emission, 2β−; double positron emission, 2β+; double electron capture, 2ε; electron capture with positron emission, εβ+) can proceed to the ground state or to excited states of the daughter nucleus if energetically allowed. Current experimental efforts focus mostly on transitions to the ground state (see [1] for a review). While the expected half-lives of DBD processes proceeding through transitions to excited states are generally longer than for ground state transitions, some are predicted to be shorter due to resonant enhancement [2]. The possibility of DBD processes with shorter half-lives is an attractive opportunity in the search for 0ν-DBD. 
Current DBD-search experiments have sensitivities of O(10^26) yr, aiming to probe the inverted mass hierarchy, using the technique where the DBD-active isotope is embedded in the sensitive detector volume. Some elements, especially within the platinum group (Ru, Pd, Os, Pt), are interesting for DBD studies but cannot be easily investigated due to the lack of high performance detector materials in which they can be embedded. The chemical properties of platinum-group elements make it difficult to attain a high mass fraction while satisfying the strict radiopurity requirements of low-background experiments. Therefore, these elements have previously been studied with an experimental "source ≠ detector" approach, where the metal sample is placed externally to a large volume low-background high-purity germanium (HPGe) detector. This method is limited to detecting the low-energy deexcitation γ's that accompany DBD transitions to excited states of daughter nuclides [3,4,5,6,7,8]. However, this approach has also met some experimental difficulties due to the reduced efficiency of γ registration from the non-optimized geometry of the experiment and self-absorption in the dense samples [6,7,8]. Recently, a new technique replaces the traditional copper high voltage contact in a high purity germanium detector with a foil contact made of metal containing the isotope of interest [9,10]. Here, the foil thickness (O(0.1) mm) and geometry can be optimized to maximize detection efficiency (i.e. limit self-absorption). Both 190Pt and 198Pt, which possess several DBD modes, have been investigated previously using Pt metal pieces of various thicknesses and geometries [11,6]. While searches using this method were able to set limits on the half-life of O(10^14 − 10^19) yr, they suffer from low detection efficiency due to self-absorption of the low energy γ's within the Pt sample itself. In the work presented here, data from a Pt foil sample using the new thin foil technique was analyzed to search for the various DBD modes. Half-life limits comparable to [6], of O(10^14 − 10^19) yr, are placed with less exposure due to the increase in detection efficiency. While these results, obtained with an O(100) g target mass, are not competitive with those obtained in tonne-scale searches, it is useful to set limits with a diverse range of target isotopes, even through benchtop experiments, to aid in theoretical modeling, as half-life predictions are obtained through model-dependent nuclear matrix element calculations. 
Diversifying the target isotopes can also be helpful in the search for dark matter. Some well-motivated models of particle dark matter [12, and references therein] have effective couplings of the form χ1χ2qq, where χ1 and χ2 need not have the same mass: one state can be seen as the excited state, and the other as the ground state. This leads to inelastic scattering, where the momentum transfer must overcome the mass splitting δ ≡ Mχ2 − Mχ1 between the two states in order for a recoil to occur. The lighter dark matter state χ1, being the dominant dark matter component, scatters inelastically with the nucleus and transitions to a heavier state χ2. Meanwhile, the nucleus is displaced and excited to a higher energy level. The excited nucleus subsequently deexcites to the ground state with the emission of a gamma, which can then be detected. The direct detection of inelastic dark matter (IDM) has been challenging, due to the kinematical suppression of the dark matter scattering rate in the detector arising from the mass splitting. At high mass splitting, the kinematics of the scattering imposes two conditions on the target nucleus: that the nucleus is heavy and that the nuclear transition energy is low. As shown in [10,13,14], typical targets used in dedicated experiments (e.g. xenon, argon) are not heavy enough to allow for such an interaction for realistic halo dark matter velocities. Heavier nuclei such as Ta [15] and Pt offer the possibility of large momentum transfers and open the parameter space to larger mass splittings. In addition to a recoil, dark matter scattering with Pt can yield a nuclear excitation; the absence of a deexcitation line thus yields a limit on the number of interactions with a heavy inelastic dark matter particle. Using 195Pt, with the relatively low first excited state at 98.9 keV, we probe inelastic dark matter up to a mass splitting of approximately 500 keV for a dark matter mass of 1 TeV. The search excludes dark matter-nucleon cross sections above approximately 10^−33 cm^2 to 10^−28 cm^2, depending on the mass splitting. This paper is organized as follows: Section 2 describes the experimental setup and sample radiopurity, Section 3 the search for the DBD decay modes, Section 4 a search for dark matter-induced deexcitations, and a discussion with conclusions in Section 5. Experimental method An ultra-low-background high-purity semi-coaxial p-type germanium detector (GS1) was operated underground at Laboratori Nazionali del Gran Sasso. A platinum foil of 0.12 mm thickness was wrapped around the 70-mm-diameter, 70-mm-high Ge detector, serving as the sample and high voltage contact. An additional circular Pt foil of like diameter was placed on top of the Ge crystal. The sample had a mass of (45.30 ± 0.01) g and was made of commercially-available Pt foil of 99.95% purity, confirmed with a dedicated inductively-coupled plasma mass spectrometry measurement. The detector and sample were placed inside a crystal holder made of oxygen-free high conductivity (OFHC) copper, and further placed in a dedicated passive multi-layer shield of 5 cm OFHC copper, 7 cm of ultra-low background lead, and 15 additional cm of low background lead. Details of the detector performance, data acquisition, and shielding setup are presented in [9]. A schematic of the sample, detector, and shielding is shown in Figure 1. 
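The kinematic argument made earlier in this section, that only heavy nuclei such as platinum open the parameter space to large mass splittings, can be checked with a back-of-the-envelope estimate; the sketch below uses the velocity values quoted in the text and the standard non-relativistic estimate δ_max ≈ μ v_max²/2 (μ being the dark matter-nucleus reduced mass), and is only an order-of-magnitude illustration, not the analysis actually performed.

# Maximum mass splitting reachable for a given target, minus any nuclear
# excitation energy that must also be paid.
C_KM_S = 299_792.458          # speed of light [km/s]
AMU_KEV = 931_494.10          # atomic mass unit [keV/c^2]

def max_mass_splitting_kev(m_chi_gev, mass_number, v_max_km_s, e_excitation_kev=0.0):
    m_chi_kev = m_chi_gev * 1.0e6
    m_nucleus_kev = mass_number * AMU_KEV
    mu_kev = m_chi_kev * m_nucleus_kev / (m_chi_kev + m_nucleus_kev)
    beta2 = (v_max_km_s / C_KM_S) ** 2
    return 0.5 * mu_kev * beta2 - e_excitation_kev

v_max = 600.0 + 240.0  # local escape velocity + Earth velocity [km/s], as quoted in the text
print("Xe (A=131):", round(max_mass_splitting_kev(1000, 131, v_max)), "keV")
print("195Pt (98.9 keV level):",
      round(max_mass_splitting_kev(1000, 195, v_max, e_excitation_kev=98.9)), "keV")
# The heavier Pt nucleus extends the reach to mass splittings of roughly
# 500 keV, beyond what Xe-based searches can probe.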
Approximately 1751 hr of background data using a copper high voltage contact and 12242 hr of data with the Pt foil were taken. The full energy spectra (counts/keV-hour) for the Pt foil sample and background data are shown in Figure 2. No cosmogenic nor anthropogenic radionuclides are observed in the Pt sample spectrum. Furthermore, no evidence of contamination from 192Ir, which was a major contribution to the backgrounds in [11] due to its chemical similarity to Pt and half-life of approximately 73 days, is observed. In the Pt sample spectrum, gamma lines from natural radioactivity (238U, 235U, and 232Th decay chains and 40K), 60Co, and 137Cs are observed. The peaks observed in the Pt sample spectrum, however, are consistent with background rates, leading to the upper limits (90% C.L.) given in Table 1. Table 1: Radiopurity of the Pt sample in mBq/kg, reproduced from [9]. Upper limits are given with 90% C.L. The limit for 210Pb comes from a subset of data runs with a threshold sufficiently low to be sensitive to the 46.5 keV gamma line; the activity of 226Ra is derived from 214Bi assuming secular equilibrium. Search for DBD modes In 190Pt we search for several DBD modes with different signatures. For 2ν2ε modes, the 2ν's escape the detector and one or both of the K/L shell X-rays can be detected. The mode is labeled depending on the K/L shell X-ray (e.g. if two K shell X-rays are emitted, the mode is labeled as 2ν2K). Due to the 50 keV energy threshold, we are sensitive only to X-rays from the K-shell; we look only for the Kα1 emission at 63.0 keV. For the 0ν2ε modes, a bremsstrahlung photon or Auger electron is emitted along with two K or L shell X-rays. The energy of the bremsstrahlung photon is calculated as E_brem = Q_ββ − E_b1 − E_b2 − E_γ, for the maximum available energy of the transition Q_ββ, the binding energies E_b1 (K = 73.871 keV) and E_b2 (L1 = 12.968 keV) of the corresponding K/L shells of the daughter nuclide, and, in the case of a transition through an excited state, the deexcitation energy E_γ [19]. For the 0ν2K mode E_brem = 1253.6 keV, for 0νKL E_brem = 1314.5 keV, and E_brem = 1375.4 keV for the 0ν2L mode. Because of the lower background in the MeV range with respect to the low energy X-ray region, we focus on the detection of the bremsstrahlung photon and do not consider the coincidence between the bremsstrahlung photon and the X-ray due to the low coincidence probability. For β+ modes, 511 keV annihilation γ's are produced from the emitted positron and can be detected. This signature is independent of the 2ν/0ν decay mode. The γ/X-ray energies, E_γ,X, searched for in this work can be grouped into three regions of interest, shaded in Figure 2. Since all double beta decay isotopes have a 0+ ground state, the most likely transitions are to 0+ states of the daughter. Spin suppression occurs for larger angular momentum transfers. Hence, we limit our search for double beta decays to the first excited 0+ state and low-lying 2+ states. Energy level diagrams for the DBD of 190Pt and 198Pt are shown in Figure 3. The Pt sample energy spectrum was fitted in each region of interest with a model describing the background (linear in the featureless case, linear + gaussian(s) in the presence of known background gamma lines) and the effect (gaussian). An example fit is shown in Figure 4. Using the fit results and following the Feldman-Cousins method [20], we obtain a limit on the number of counts, lim S, and calculate the corresponding limit on T1/2 at 90% C.L. No evidence of DBD decays to any studied excited state is observed. An upper limit on the half-life, T1/2, for each DBD mode searched for is calculated as lim T1/2 = ln 2 · N · η · t / lim S, where N is the number of beta decaying atoms in the sample, η is the detection efficiency for the energy of interest, and t is the counting time. We assume the natural isotopic composition of platinum given in Table 2. The detection efficiency is calculated with a GEANT4 [21] simulation and includes the decay scheme of the daughter nuclide. 
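The half-life limit formula quoted above can be evaluated in a few lines; in the sketch below the sample mass, live time and 198Pt abundance are taken from the text, while the efficiency and the Feldman-Cousins count limit are illustrative placeholders chosen only to show that the result lands at the quoted order of 10^19 yr.

# lim T_1/2 = ln(2) * N * eta * t / lim S
import math

N_A = 6.02214076e23
sample_mass_g = 45.3
molar_mass_pt = 195.084          # natural platinum [g/mol]
abundance_198pt = 0.07163        # isotopic abundance of 198Pt
live_time_yr = 12242.0 / (24 * 365.25)

n_nuclei = sample_mass_g / molar_mass_pt * N_A * abundance_198pt

efficiency = 0.02   # assumed full-energy peak efficiency (illustrative)
lim_counts = 13.0   # assumed 90% C.L. limit on signal counts (illustrative)

t_half_limit = math.log(2) * n_nuclei * efficiency * live_time_yr / lim_counts
print(f"lim T1/2 ~ {t_half_limit:.2e} yr")   # of order 10^19 yr, as quoted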
The limits for T1/2 calculated in this analysis are shown in Table 3 and are strongly dependent on the isotopic abundance and the detection efficiency. In general, the results presented here are comparable to the leading limits from [6], albeit with less sample mass. The highest sensitivity we achieved is for 198Pt at ≥ 1.5 × 10^19 yr, due to the larger isotopic abundance of 198Pt over 190Pt. Indeed, the ∼500× increase in half-life sensitivity for 198Pt decay over 190Pt decay modes with positron emission (e.g. 2νεβ+) with a similar region of interest closely matches the ∼500× abundance of 198Pt over 190Pt. However, despite our optimized sample geometry and high detection efficiency, we are limited by the internal background of the detector setup, which results in limits that are not consistently stronger than those derived in [6]. In particular, we set stronger limits for 0ν2K, 0νKL, and 0ν2L to the ground state, 0νKL resonant to the excited state, and 2νKL to the excited state. Table 3: Limits on the DBD half-lives for 190Pt and 198Pt. Given are the transitions, decay modes to the ground state (g.s.) or excited state (exc.), the corresponding γ/X-ray energy, the detection efficiency, and the experimental T1/2 limits (90% C.L.) derived here and compared to [6]. Inelastic dark matter search We focus on the most abundant isotope of natural platinum, 195Pt, to search for inelastic dark matter scattering. The expected dark matter scattering rate is expressed in terms of N_T, the number of target nuclei, Mχ (M_N), the mass of the dark matter particle (nucleus), and μχn, the reduced mass between the dark matter particle and the nucleon. The minimum and maximum recoil energies, E_R,min and E_R,max, respectively, dependent on the velocity v ranging from v_min to v_max, are related to the nuclear excitation energy and the mass splitting, as seen in [13]. We have assumed a Maxwellian dark matter distribution, f(v), with v_0 = 220 km/s, the Earth velocity v_e = 240 km/s, and the local escape velocity at the position of the Earth v_esc = 600 km/s [22]. The local dark matter energy density is assumed to be ρχ = 0.4 GeV/cm^3, consistent with the recent determination of the local dark matter density [23]. The dark matter nuclear scattering cross section is related to the per-nucleon scattering cross section σ_n by Equation (6). In the absence of coherent enhancement, the nuclear response S(q) is estimated in terms of the atomic mass number A, proton number Z, initial nuclear spin J_i, and the momentum-dependent (momentum transfer q and atomic radius R) Bessel function j_2(qR). For 195Pt the value of the reduced transition probability B(E2) is measured to be 11.1 W.u. [24], and the initial nuclear spin is J_i = 1/2. The measured event spectrum is quite flat around the excitation energy 98.9 keV, with a rate s = 654 keV^−1 [9]. Therefore, the dark matter contribution should not exceed the variance of the measured background rate at 68% C.L. Here, with the energy resolution σ = 1.90 ± 0.25 keV, detection efficiency η = 5.5%, and counting time t = 12242 hr, we obtain R_Bkg = 1.45 × 10^−5 keV^−1 s^−1. Fig. 4: Example fit to region III in Figure 2, sensitive to the 2νKL excited state and resonant 0νKL excited state modes at 1326.9 keV and the 0νKL ground state mode at 1314.5 keV of 190Pt. Background peaks are observed for 60Co at 1332 keV and 214Bi at 1378 keV. 
The limit on inelastic dark matter-nucleon scattering is shown as a function of the mass splitting δ in Figure 5, obtained by requiring the dark matter scattering rate not to exceed the measured background rate, i.e., R ≤ 1.645 R_Bkg, with the numerical factor accounting for the conversion from 68% to 90% confidence level. As demonstrated in [13], a larger mass splitting implies a smaller range of kinematically allowed recoil energies, and hence a smaller dark matter scattering rate from Eq. (5), which weakens the bounds on the cross section. The result extends the previous direct detection constraints derived from PICO-60 [25], CRESST-II [26], and XENON1T [27] for the mass splitting from 430 keV to 500 keV. The new limit, however, remains less stringent than constraints based on 189Os [28], CaWO4 [29], and PbWO4 [30] derived in [13]. Fig. 5: Constraints on the inelastic dark matter-nucleon scattering cross section at 90% C.L. assuming a dark matter mass Mχ = 1 TeV. We also show limits based on data from PICO-60 [25], CRESST-II [26], XENON1T [27], 180Ta [15], Os [28], CaWO4 [29] and PbWO4 [30], adapted from [13], and the Hf constraint from [10]. The solid magenta line depicts the limit on the cross section from the detection of the deexcitation gamma when inelastic dark matter scatters and excites the 195Pt nucleus, derived in this work. The dashed magenta line shows the projected limit assuming the cumulative increases in sensitivity described in Section 5. Discussion and conclusions A search for rare nuclear processes that can occur in natural platinum isotopes was performed using an ultra-low-background HPGe detector and a 45.3 g Pt metal foil sample. The search included double beta decays, double electron captures into excited states of the daughter isotopes, and dark matter induced events. No signal was found and 90% confidence level limits on the half-life of the different DBD processes of O(10^14 − 10^19) yr were set. Existing constraints on double beta decay and double electron capture modes were confirmed and partially improved. We additionally place novel inelastic dark matter limits, searching for the deexcitation gamma from the 98.9 keV excited state of 195Pt up to a mass splitting of approximately 500 keV, excluding dark matter-nucleon cross sections above the range 10^−33 − 10^−28 cm^2 for a dark matter mass of 1 TeV. While not as competitive as existing limits, these results diversify the list of target materials used in dark matter searches. General improvement of this measurement can be achieved through further background reduction and increased sample mass and measurement time. Indeed, the content of radionuclides of the market-available platinum metal was already very low, with no significant contribution from the natural decay chains 232Th, 235U, and 238U, nor from 40K, 60Co, and 137Cs. The total acquisition time presented here of approximately 510 days could be significantly improved in future measurements. A 5 year measurement would increase the sensitivity by approximately a factor of 2. 
To increase the mass of the sample, there are several strategies that could be realized within the "source ≠ detector" approach. For example, a promising method is to use a larger Pt sample mass with an HPGe detector array, as adopted by the TGV collaboration [31]. There, a thin foil of the studied material is placed between 16 pairs of Ge detectors (20.4 cm^2 × 0.6 cm) stacked in a large tower to search for different modes of 2ε processes in 106Cd. Assuming the same area (20.4 cm^2) and a Pt foil thickness of 0.12 mm, as used in our study, the total Pt sample mass would be 84 g. This would further increase the sensitivity by approximately a factor of 2. Moreover, by exploiting the coincidence between neighboring, face-to-face detectors, this method can strongly suppress backgrounds (by a factor of 10) and increase detection efficiency (by a factor of 2), leading to an even further increase in sensitivity. Unfortunately, HPGe-based experimental methods have a limitation coming from the finite energy resolution (FWHM ≈ 2.5 keV in the 20 − 100 keV range for both [31] and our study). One possible further modification of the stacked detector array approach would be to use a tower of Ge wafers working as cryogenic detectors, similar to the light detectors utilized in the CUPID-0 [32] or CUPID-Mo [33] experiments. Here, the Pt metal foil samples would be placed between neighboring, face-to-face wafers. Typical energy resolutions for cryogenic light detectors are O(300) eV FWHM [32], which can be further improved to O(50) eV using Neganov-Trofimov-Luke amplification [33]. The improved energy resolution would help to minimize background contributions to the region of interest, leading to a factor of 10 − 50 enhancement in experimental sensitivity. This method may also be useful in future inelastic dark matter searches with mass splittings above 400 keV due to the wide variety of target isotopes that can be studied. Recently, a prototype based on a similar stacked-wafer approach has been developed and tested using eight large-area (∅ 150 mm) Si wafers to study surface alpha contamination [34]. However, to approach a half-life sensitivity of O(10^24) yr, a drastic improvement of detection efficiencies is necessary. Realistically, this can be achieved in several ways through the "source = detector" approach, where target isotopes would be embedded into the detector material. Recently, scintillating crystals of the Cs2XCl6 family have started to be extensively studied because of their high light yield and good energy resolution, as well as their low internal background, see [35]. Compounds of this family are extremely flexible to accommodate different elements (X = Ru, Pd, Os, Pt). This approach would especially benefit searches for low energy γ's, where multiple orders of magnitude improvement in detection efficiency can be achieved. A well-known method to significantly increase the number of decaying nuclei is to use materials enriched in the isotope of interest, as performed in the LEGEND [36], CUPID-0 [37], and CUPID-Mo [38] experiments. However, this becomes difficult for elements of the Pt group as they can only be enriched by electromagnetic separation, a very expensive and time-consuming process. A price evaluation performed on elements of this group with a similar initial isotopic abundance of O(0.01)% gives a value of $1000/mg for a final enrichment level of O(10)%. With adequate funding, one would be able to enhance sensitivity by a factor of 1000. 
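As a rough illustration of how the individual improvement factors quoted in this section combine, the sketch below simply multiplies them; the factors are the ones quoted or implied in the text, the items marked "assumed" are readings of the text rather than derived numbers, and the multiplicative combination is only indicative, not a rigorous sensitivity study.

# Cumulative sensitivity bookkeeping (illustrative only)
factors = {
    "5-year measurement instead of ~510 days": 2.0,
    "84 g Pt foil in a TGV-style detector stack": 2.0,
    "assumed net gain from coincidence background suppression": 2.0,
    "assumed gain from cryogenic-wafer energy resolution (lower end of 10-50)": 10.0,
}

cumulative = 1.0
for name, factor in factors.items():
    cumulative *= factor
    print(f"{name:70s} x{factor:>5.1f}  (running total x{cumulative:.0f})")
# Isotopic enrichment (roughly a factor of 1000, at ~$1000/mg) would add on
# top of this, at a cost that currently makes it impractical.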
Despite initial complications in the study of double beta decay processes in natural isotopes of elements in the platinum group, a number of techniques now exist to perform these investigations. With adaptation and tuning of the methods described above, even further sensitivity is possible. Fig. 1: Left: Section view of the HPGe detector and sample (not to scale) with 1) Pt foils on the top and wrapping the Ge crystal acting as the target and high-voltage contact, 2) HPGe crystal, 3) copper crystal holder, and 4) copper end cap of 1 mm thickness. Right: Photo of the dedicated passive shield, in open configuration, showing 1) the movable part (dimensions (40 × 40 × 40) cm) consisting of 5 cm of OFHC copper, 7 cm ULB lead, and up to 15 cm lead, all enclosed in a polymethylmethacrylate box continuously flushed with boil-off nitrogen, 2) the dewar fixed together with the movable part on a stainless steel platform, 3) the fixed part of the passive shield with overall external dimensions (60 × 60 × 80) cm. Fig. 2: Energy spectra from the Pt foil sample data (red) and background data using a copper high voltage contact (black), normalized per hour of run time. Relevant leading background contaminants are indicated, along with the approximate energy regions of interest for the double beta decay search (shaded). The specific decay mode energies in each region are given in Table 3. Table 2: Studied decay scheme data of the 190Pt decay to 190Os and the 198Pt decay to 198Hg for the energy levels investigated in this work. Isotopic abundance is taken from [16]. The maximum available energy of the transitions, Q_ββ, spin/parity, and excited energy levels of the daughter nuclei are from [17,18].
5,489.4
2022-09-22T00:00:00.000
[ "Physics" ]
Structural Evidence for the Dopamine-First Mechanism of Norcoclaurine Synthase Norcoclaurine synthase (NCS) is a Pictet-Spenglerase that catalyzes the first key step in plant benzylisoquinoline alkaloid metabolism, a compound family that includes bioactive natural products such as morphine. The enzyme has also shown great potential as a biocatalyst for the formation of chiral isoquinolines. Here we present new high-resolution X-ray crystallography data describing Thalictrum flavum NCS bound to a mechanism-inspired ligand. The structure supports two key features of the NCS "dopamine-first" mechanism: the binding of the dopamine catechol to Lys-122 and the position of the carbonyl substrate binding site at the active site entrance. The catalytically vital residue Glu-110 occupies a previously unobserved ligand-bound conformation that may be catalytically significant. The potential roles of inhibitory binding and alternative amino acid conformations in the mechanism have also been revealed. This work significantly advances our understanding of the NCS mechanism and will aid future efforts to engineer the substrate scope and catalytic properties of this useful biocatalyst. Experimental Procedures Protein purification and expression A construct containing a codon-optimised, truncated Thalictrum flavum NCS 1 gene (ΔN33C196TfNCS), with an N-terminal hexahistidine tag and a TEV protease cleavage site, was synthesised and cloned into pD451-SR (ATUM, CA, USA) 2. The plasmid was transformed into BL21 (DE3) cells and a single colony was used to inoculate 100 ml of Terrific Broth (TB) medium for 16 hours. One litre of TB was inoculated with 4% v/v of overnight culture and grown for 2 hours at 37 °C, then 1 hour at 25 °C. The protein was overexpressed by addition of 0.5 mM isopropylthiogalactoside, incubated for 3 hours at 25 °C and then harvested by centrifugation. Cell pellets were suspended in binding buffer (50 mM Hepes, 100 mM NaCl, 20 mM imidazole, pH 7.5) and 10% v/v BugBuster 10X (Merck Millipore, Germany) was used to break the cells. After centrifugation at 25,000 g for 1 hour, the lysate was loaded onto 1 ml of Ni-Sepharose HP resin (GE Healthcare). The protein was eluted from the resin with elution buffer (50 mM Hepes, 100 mM NaCl, 500 mM imidazole, pH 7.5) after washing with binding buffer and washing buffer (50 mM Hepes, 100 mM NaCl, 50 mM imidazole, pH 7.5) for 5 column volumes respectively. The eluted fractions were pooled, 0.1 mg of TEV protease (containing an N-terminal His-tag) was added to the sample, and the sample was dialysed in 4 litres of dialysis buffer (20 mM Tris, 50 mM NaCl, pH 7.5) for 16 hours at 4 °C. The sample was loaded onto 1 ml of Ni-Sepharose HP resin to bind uncut NCS and TEV protease. Cut NCS was washed off the resin with wash buffer (20 mM Tris, 50 mM NaCl, 50 mM imidazole, pH 7.5). Size exclusion chromatography was used to purify the NCS protein further using a Superdex 75 16/600 column (GE Healthcare). The eluents were pooled and concentrated using a 10 kDa cut-off Vivaspin concentrator (Sartorius, Germany) to 12 mg/ml. The protein sample was either used directly to set up crystallization trials or stored at −80 °C. Protein crystallisation and data processing The truncated NCS apo protein crystals were grown by the sitting-drop method in 96-well crystallisation plates (Molecular Dimensions) in 10% w/v polyethylene glycol (PEG) 1000 and 10% w/v PEG 8000. Larger crystals were obtained by the hanging-drop method. 
The protein was incubated with 10 mM of the mimic compound 6 and crystallised in the same condition as the apo protein. The crystals were cryo-protected in crystallisation buffer containing 20% ethylene glycol. Diffraction data for the apo structure were collected at Soleil beamline Proxima 1, whereas the final mimic-bound dataset was collected at Diamond beamline I02. The diffraction images were processed using the xia2 and XDS 3 software packages, then scaled and merged using Aimless in the CCP4 program suite 4 . The initial phases of the apo NCS models were solved by molecular replacement with the program Phaser 5 using the previous apo NCS structure (PDB: 2VNE 6 ) as the search model. Model building was performed with COOT 7 and refinement was done with Refmac5 8 using TLS (one group per chain, including the associated water molecules and ligand) and local noncrystallographic symmetry restraints. The positions of both aromatic rings of the mimic were clear in all three copies in the asymmetric unit of the mimic-bound structure from the initial difference maps. There was a ring-like density next to the dopamine ring despite there being no ring closure in the mimic. When the mimic was placed in the conformation proposed to be productive in the reaction mechanism, refinement gave strong (>5 sigma) difference density where a sixth atom could complete a second ring. Conversely, if the nonproductive conformation, in which the dopamine is flipped and the rest of the molecule comes off the other side of the ring, is refined alone, there is even stronger difference density where the C9 atom sits in the first conformation. Neither of these positions corresponds to water molecules in the apo structure, probably ruling out a mixed apo/ligand structure 9 . We propose that the structure is a mixture of these productive and unproductive ligand conformations. Alternative ligands were tried: neither a five-membered ring oxidation product nor the product of the typical enzyme reaction gave plausible fits, ruling out any structure with the R group coming from an atom adjacent to the dopamine. A tertiary amine fills the density but gives poorer R factors than the two-conformation fit, and such a compound is also chemically implausible in the conditions used. Placing a water in the difference density gives a lower Rfree than the two-conformation model and no difference density. However, the water is too close to the nitrogen (1.6 Å) and the ring (1.8 Å) and lies only on the very edge of the density. The final deposited model used the 'complete' occupancy refinement in Refmac5, such that the combined occupancy of the two ligands is constrained to 1.0 in each copy. This final occupancy is not particularly stable and depends on a slight drift apart of the two ligands during refinement. The unproductive conformation often ends up with a lower occupancy and a higher B factor; it can drift to very low occupancy and high B factor and move quite far out of the density, resulting in a return of the difference density peak. Conversely, a more even occupancy results when the dopamine ring of the unproductive conformation moves away from its optimum individual fit, allowing the ring-linking atoms of the minor conformation to sit closer to the position of the major conformation. This gives less difference density, and this version has been deposited. Other refinement packages did not give better results for the two-ligand model in our hands. Data collection and refinement statistics are summarized in Table S1.
Figures and RMSD comparisons were prepared using UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), except for the electron density figures, which were drawn with CCP4mg. Computational docking Subunit A of the mimic-bound structure 5NON was used for docking experiments. Ligands and water molecules were removed. Ligands were MM2 energy-minimised in ChemBio3D before docking with UCSF Chimera, using the AutoDock Vina plug-in 10 . The protein molecule was centred, and the docking box was positioned at (-17.95, -7.43, 16.19) with size (18.14, 19.02, 28.24). The software was run with the settings: energy range 3, exhaustiveness 8 and number of modes 10. Binding modes relevant to the dopamine-first mechanism were selected (see Table S2). Enzyme assays The time-courses of ΔN33C196TfNCS and Δ29TfNCS (Figure S2) were recorded in triplicate. Each assay contained 10 mM dopamine, 10 mM hexanal, 10% v/v MeCN, 0.1 mg/mL purified enzyme, 5 mM sodium ascorbate and 50 mM Hepes pH 7.5. Samples were quenched with 100 mM HCl, diluted and analysed by HPLC. Enzyme activities (initial rates) for Δ29TfNCS and Δ29TfNCS-A79I (Figure S3) were measured in triplicate as previously reported 11 . Reactions contained dopamine (10 mM) and 4-HPAA or hexanal (2.5 mM) and were quenched after 30 seconds and analysed by HPLC. HPLC analyses were performed on a system consisting of an LC Packings FAMOS autosampler, a P680 HPLC pump, a TCC-100 column oven and a UVD170U ultraviolet detector (Dionex, Sunnyvale, CA, USA), with a C18 (150 x 4.6 mm) column (ACE, Aberdeen, UK). Samples were run with a gradient of H2O (0.1% trifluoroacetic acid)/MeCN from 9:1 to 3:7 over 6 min, at a flow rate of 1 mL/min. The column temperature was 30 °C, and compounds were detected by monitoring A280. Retention times and concentrations were calculated based on chemically verified standards. Synthesis of 4-{2-[(4-Methoxyphenethyl)amino]ethyl}benzene-1,2-diol General. All chemicals were obtained from commercial suppliers and used as received unless otherwise stated. Thin layer chromatography (TLC) analysis was performed on Merck Kieselgel precoated aluminium-backed silica gel plates, and compounds were visualised by exposure to UV light or with potassium permanganate or ninhydrin stains. Flash column chromatography was carried out using silica gel (particle size 40-60 µm). NMR: 1H and 13C NMR spectra were recorded at 298 K at the field indicated using Bruker Avance 300 and Bruker Avance III 400 spectrometers. Coupling constants (J) are measured in Hertz (Hz) and multiplicities for 1H NMR couplings are shown as s (singlet), d (doublet), t (triplet), q (quartet) and m (multiplet). Chemical shifts (in ppm) are given relative to tetramethylsilane and referenced to residual protonated solvent. Mass spectrometry analyses were performed at the UCL Chemistry Mass Spectrometry Facility using Finnigan MAT 900 XP and Waters LCT Premier XE ESI Q-TOF mass spectrometers. 3,4-Bis(benzoyloxy)dopamine 7 was synthesized as previously reported 12 . Preparative HPLC conditions: Varian Prostar instrument with a UV-visible detector (monitoring at 280 nm) and a DiscoveryBIO Wide Pore C18-10 Supelco column (25 × 2.12 cm). A gradient of 5% to 90% acetonitrile/water (0.1% TFA) was used. See Figure S8 for NMR spectra. Tables Table S1. X-ray data collection and refinement statistics. H. Original density after one round of direct refinement of the apo structure (including waters) with Refmac. The two data sets were isomorphous enough to obviate a molecular replacement step.
2Fo-Fc maps are shown in blue at 1 sigma; Fo-Fc maps at +3 sigma (green) and -3 sigma (red). All maps are clipped to the double-mimic coordinates, at 1.5 Å for the Fo-Fc maps and 2 Å for the 2Fo-Fc maps. Drawn with CCP4mg. Curly arrows represent electron movement; block arrows represent physical movement of residues/water.
2,415
2017-09-15T00:00:00.000
[ "Chemistry" ]
Homogeneity Measurements of Li-Ion Battery Cathodes Using Laser-Induced Breakdown Spectroscopy We study the capability of nanosecond laser-induced breakdown spectroscopy (ns-LIBS) for depth-resolved concentration measurements of Li-Ion battery cathodes. With our system, which is optimized for quality control applications in the production line, we aim to detect manufacturing faults and irregularities as early as possible during cathode production. Femtosecond laser-induced breakdown spectroscopy (fs-LIBS) is widely considered to be better suited for depth-resolved element analysis. Nevertheless, the small size and intensity of the fs-LIBS plasma plume, its non-thermal energy distribution and the high investment costs of fs-LIBS make ns-LIBS more attractive for inline application in an industrial environment. The system, presented here for the first time, is able to record quasi-depth-resolved relative concentration profiles for carbon, nickel, manganese, cobalt, lithium and aluminum, which are the typical elements used in the binder/conductive additive, the active cathode material and the current collector. LIBS typically exhibits high pulse-to-pulse variations in signal intensity, so concentration determination is, in general, conducted on the average of many pulses. We show that the spot-to-spot variations we measure are governed by the microstructure of the cathode foil and are not an expression of the limited precision of the LIBS setup. Introduction Due to their high specific energy density and cycling stability, Li-Ion batteries are one of the most popular technologies for storing electrical energy. Although degradation mechanisms are unavoidable in the current state of research, poor operating conditions (low temperature, deep discharging, high charging/discharging currents) as well as unfavorable production circumstances can significantly accelerate aging. A variety of partly complex mechanisms for the aging of Li-Ion batteries have been described in the literature; they manifest themselves electrochemically as capacity fade, impedance rise and overpotential [1]. Although electrochemical models for the half-cells [2], in combination with charge and discharge curves, allow a further classification into different degradation modes, the specific microscopic mechanism can still vary widely [3]. A different approach, focusing on the underlying physical and chemical principles of battery aging, is the elemental analysis of lithium-ion battery constituents and their degradation products. Various X-ray-based analytical methods have been applied and are covered by several reviews [4][5][6]. As summarized in [7], a variety of other analytical techniques, including mass spectrometric methods (ICP-MS, SIMS), Atomic Absorption Spectroscopy (AAS), Raman Spectroscopy and Optical Emission Spectroscopy (GD-OES, ICP-OES and LIBS), have been used to monitor the transition metal dissolution (TMD) into the solvent and its accumulation on the anode side. The electrolyte itself has been extensively analyzed by ion chromatography and gas chromatography-mass spectrometry (GC-MS) for salts and acids which could affect the dissolution of transition metals. ICP-OES and AAS have also been successfully applied to detect Li loss in discharged anode material (graphite) in the form of Li+ vacancies [8] and to monitor the lithium distribution after extensive cycling [9]. LIBS outperforms other analytical techniques, such as X-ray fluorescence, through its higher sensitivity for light elements.
It is fast compared to Raman spectroscopy and GD-OES, and inline-capable in contrast to EDX and SIMS. LIBS furthermore allows the detection of a large number of different elements simultaneously, provided that microscopic damage can be tolerated. In 2012, Zorba et al. [10] studied the chemical composition of the solid electrolyte interphase with respect to lighter elements using fs-LIBS with 7 nm depth resolution. Moreover, Imashuku et al. [11] showed the possibility of quantitative lithium mapping and detection of electrolyte decomposition products on cycled cathodes using time-resolved ns-LIBS. In 2019, depth-resolved fs-LIBS was also demonstrated on a laser-structured 3D cathode architecture for the investigation of the Li distribution [12]. In this study, we apply time-integrated ns-LIBS for quasi-depth-resolved elemental mapping of C and Al with respect to the active material. Emphasis is put on the imaging of conductive additives (graphite and carbon black) for applications in production control. Materials The cathodes used in this study are composed of a lithium nickel manganese cobalt oxide (NMC) active material with a layered atomic structure (LiNixMnyCozO2, x + y + z = 1), which allows for the intercalation of Li+ ions; a current collector made of aluminum; polyvinylidene fluoride (PVDF) as a polymer binder to ensure mechanical stability; and conductive additives (graphite and carbon black) to ensure electrical conductivity to the current collector. The cathode material examined in this paper was produced by Fraunhofer ISIT and consists of an active material with composition x = 1/3, y = 1/3, z = 1/3, abbreviated NMC 111, and different amounts of additives (graphite/carbon black between 0 and 15% and binder between 6 and 11%). To produce the cathodes, PVDF was first mixed with acetone and dissolved. The solution was then mixed with the conductive additive and finally homogenized by ultrasonic dispersion. The concentrations of the different constituents were controlled gravimetrically during the production of the slurry. The slurry was applied to the aluminum foil with a doctor blade at a constant coating speed. The coated current collector was then dried at room temperature. Methods The Zentraleinrichtung Elektronenmikroskopie (ZELMI) of the Technical University of Berlin used electron microprobe analysis (EMPA) to determine the particle geometry of the active material, its distribution and its chemical composition along a cleaved cross-section of an NMC 111 cathode. This measurement was intended to provide information on the spatial arrangement of the NMC particles and their vacancies, complementing the macroscopic concentration information known from the gravimetric control of the slurry. Optical emission spectroscopy, using a self-built, inline-capable ns-LIBS system, was carried out on a variety of cathodes with different compositions. Figure 1 shows a simplified sketch of the setup. A diode-pumped Nd:YAG laser (Quantum Light Instruments Q2-100) with 1064 nm wavelength, a pulse duration of 6 ns, a 1-100 Hz pulse repetition rate, a pulse energy between 0.25 and 50 mJ and a pulse-to-pulse energy stability < 0.5% (RMS) serves as the excitation source and is focused to a spot with a waist size of approximately 50 µm.
The output power of the laser was measured using an optical energy meter with a pyroelectric sensor (Newport 818E-03-12L). In favor of a larger optical throughput, three Czerny-Turner-type Avantes StarLine spectrometers with fixed gratings were preferred over Echelle types, covering the whole region of interest (193 nm-671 nm). The devices are equipped with second-order filters to suppress the secondary spectrum. In all three spectrometers, a slit size of 25 µm was used. For the UV range (193 nm to 262 nm), the spectrometer is equipped with a 2400 lines/mm grating, giving a wavelength resolution < 170 pm. For the UV-VIS range (268 nm-536 nm) and the VIS region (500-736 nm), the spectrometers come with 1200 lines/mm gratings. A resolution of < 400 pm was achieved in both cases. Exposure was synchronized with the laser and started right before the Q-switch was triggered. An exposure time of 2 ms, close to the lower boundary of the device, was chosen. Shorter exposures would have been desirable to reduce noise, since delayed exposures showed that only the first 5 µs after the laser pulse contributed notably to the intensity of the optical spectrum. Optical paths for excitation and detection are mechanically fixed and can be moved along the xy-plane with two motorized stages (OWIS LIMES 150, OWIS HPL 84N), the latter (and faster) with the optional feature to synchronize with a conveyor belt for inline applications. The fixed height of the sample was adjusted with an accuracy of approx. 200 µm over the whole measurement area of 100 mm × 80 mm. A triangulation sensor (Micro-Epsilon ILD 1320), close to the measurement spot, supervised the distance between the measuring head and the sample to simplify the adjustment and exclude measurement errors due to misalignment. Figure 1. Inline-capable ns-LIBS measurement setup. Red: optical excitation path for the 1064 nm Nd:YAG laser with active Q-switch; blue: optical detection path (behind the excitation path) with a two-lens collimating and refocusing system which couples the optical emission into a fiber. Both optical paths share the same focus point.
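For reference, the three detection channels described above can be captured as a small configuration table. This is a minimal, illustrative sketch: the data structure, labels and helper function are ours, not part of any published analysis code.

```python
# Illustrative summary of the three fixed-grating spectrometer channels.
SPECTROMETERS = [
    {"label": "UV",     "range_nm": (193.0, 262.0), "grating": 2400, "res_pm": 170},
    {"label": "UV-VIS", "range_nm": (268.0, 536.0), "grating": 1200, "res_pm": 400},
    {"label": "VIS",    "range_nm": (500.0, 736.0), "grating": 1200, "res_pm": 400},
]

def channel_for(wavelength_nm: float) -> str:
    """Return the first spectrometer channel covering a given wavelength."""
    for spec in SPECTROMETERS:
        lo, hi = spec["range_nm"]
        if lo <= wavelength_nm <= hi:
            return spec["label"]
    raise ValueError(f"{wavelength_nm} nm is outside all channel ranges")

print(channel_for(247.9))  # C I line  -> 'UV'
print(channel_for(670.8))  # Li I line -> 'VIS'
```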
Cathode Characterization with EMPA Backscattered electron microscopy was used to obtain high-resolution images (Figure 2) of the shape and location of the NMC particles. Using OpenCV's threshold and contour methods, the locations and sizes of the particles were extracted. We determined a mean particle radius of 3.2 ± 1.7 µm (Figure 3) and a degree of coverage of 38%. This radius may differ from the real particle radius, since the slice does not cut through every particle in the plane of its maximum cross-section. The statistical information on the NMC particles is essential to interpret spot-to-spot variations in the LIBS measurement. Both the limited LIBS-based precision and the granular structure of the sample contribute to spot-to-spot variations in the measured LIBS signal. A detailed quantification of both contributions is necessary to distinguish between expected fluctuations and production-based irregularities. A detailed analysis will be presented in Section 4.4.
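A minimal sketch of this particle extraction is given below, assuming an 8-bit backscattered-electron image in which the NMC particles appear bright. The file name, the Otsu thresholding choice and the pixel calibration are assumptions, not the original analysis code.

```python
import cv2
import numpy as np

img = cv2.imread("bse_cross_section.tif", cv2.IMREAD_GRAYSCALE)

# Separate bright NMC particles from the darker binder/additive matrix.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

um_per_px = 0.1  # placeholder pixel calibration
# Equivalent-circle radius of each particle cross-section, skipping noise specks.
radii_um = [np.sqrt(cv2.contourArea(c) / np.pi) * um_per_px
            for c in contours if cv2.contourArea(c) > 10]

coverage = mask.mean() / 255.0  # fraction of the section covered by particles
print(f"mean radius: {np.mean(radii_um):.1f} +/- {np.std(radii_um):.1f} um")
print(f"degree of coverage: {coverage:.0%}")
```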
Wavelength-dispersive X-ray spectroscopy (WDX) data supplement the geometrical information with chemical contrast. The WDX signals for Ni, Mn and Co are shown in Figure 4. Whereas the chemical composition appears homogeneous throughout a single particle, slight variations in composition were observed between different particles. In the case of the binder, which can be identified by the WDX signal of fluorine, accumulations were observed around the surface of the NMC particles, forming rims in the raster image (Figure 5). Since both the binder and the graphite/carbon black contain carbon, the distinction between binder and graphite/carbon black is not straightforward. The structure seen in Figure 5 can also be seen in the WDX signal for carbon (Figure 6), but superimposed by stronger contributions from the vacancies between different NMC particles, which we attribute to graphite/carbon black. The LIBS analysis typically covers a round area of 50 µm waist size, and the carbon fluctuations seen in Figure 6 would appear smoothed in a single measurement. Averaging is furthermore beneficial due to the limited precision of LIBS in general, but the destruction of the sample at the measurement spot makes repetition impossible at the same position. LIBS Measurement The Czerny-Turner-type spectrometers with mechanically fixed gratings cover the whole spectrum of interest for C, Ni, Co, Mn and Li (193 nm to 671 nm). Corresponding spectra for NMC cathode material are shown in Figure 7a-c. A small mismatch was observed when comparing the observed lines with the NIST database [13]: from -40 pm (carbon line) to -100 pm (manganese line) for the UV device (186 nm-253 nm), from -500 pm to -200 pm for the UV-VIS device (270 nm-530 nm), and from +400 pm to +700 pm for the VIS device (500 nm-730 nm).
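The mismatch check reduces to a simple comparison of fitted peak centroids against reference wavelengths. In the sketch below, the reference values are taken from the NIST database, while the "observed" centroids are hypothetical, chosen only to reproduce shifts of the magnitude reported above.

```python
# Wavelength mismatch between observed centroids and NIST reference lines.
NIST_REF_NM = {"C I (UV device)": 247.856, "Li I (VIS device)": 670.776}
OBSERVED_NM = {"C I (UV device)": 247.816, "Li I (VIS device)": 671.380}

for line, ref in NIST_REF_NM.items():
    mismatch_pm = (OBSERVED_NM[line] - ref) * 1e3  # nm -> pm
    print(f"{line}: {mismatch_pm:+.0f} pm")
```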
Atomic spectral lines of C were observed from the graphite or binder component. Many strong transitions in the vacuum UV region are listed in spectroscopic databases, but these are not measurable with a fiber spectrometer in a LIBS experiment under atmospheric conditions. Nevertheless, single lines in the measurable spectral region at 193.0 nm and 247.9 nm were assigned to carbon. Molecular bands had been observed in experiments with another laser system in our lab using fs pulses and were mainly attributed to radicals such as diatomic carbon (known from the Swan system [14], first observed in 1856 [15]), cyano radicals and methylidyne radicals. We noticed that molecular bands and spectral lines were in focus for different adjustments of the detection system, indicating a spatial separation of the different species, as already observed by other groups [16]. The major spectral lines of neutral Li are found in the red region of the visible spectrum, namely 670.8 nm (2p → 2s transition) and 610.4 nm (3d → 2p transition) [13]. The former transition occurs between two states of the same shell, which are nevertheless strongly split [17]. This spectral line suffers from self-absorption [18] in the case of high concentrations, which can lead to wrong quantification results if not accounted for [19]. The optical emission spectrum of Ni in the UV is dominated by transitions of the outermost electrons, namely the eight d electrons and the two outer s electrons. The strongest lines involve either the ground-level configuration 3d8(3F)4s2 or the 3d9(2D)4s configuration. A high density of states is found in the range 3.3 eV-4.2 eV (spectral lines between 280 nm and 400 nm), around 5.3 eV-5.6 eV (spectral lines in the range 228 nm-242 nm) and for energies larger than 6 eV [13]. Similar to Ni, the Co emission in the visible spectrum is governed by transitions of the seven d electrons and the outermost two s electrons. Several different terms for the configurations 3p63d74s2, 3p63d84s and 3p63d84s4p are responsible for a great number of transitions, mainly in the UV spectrum shorter than 300 nm. For Mn, excited states lie at least 2.1 eV above the ground level. A high density of states is found in the ranges 2.1-2.3 eV, 2.9-3.4 eV and 3.7-3.9 eV. For energies > 4.2 eV, many states are registered.
Most of the prominent lines involve either the ground level (distinct lines at 279 nm, 322 nm, 403 nm and 539 nm) or a lower level in the range 2.1-2.3 eV (lines in the ranges 304-307 nm, 320-327 nm, 353 nm-373 nm, 379 nm-385 nm and 404 nm-408 nm) [13]. For Al, NIST data [13] show a large separation of 3.1 eV between the ground state and the excited states. Above 4.8 eV, the density of states increases strongly. Several terms at 3.1 eV, 3.6 eV, 4.1 eV and 4.6 eV are responsible for strong lines around 395 nm, 309 nm, 257 nm and 265-266 nm in the UV spectrum. Although Al I lines were detected in the spectrum, the ionic Al II lines turned out to be much more intense and were subsequently used. The most prominent Al II line detected in our measurement lay between 198 nm and 199 nm. It is worth mentioning that spectral features were observed around 185 nm-193 nm, which we ascribe to Schumann-Runge bands of O2 [20]. Concentration Measurement For quantitative analysis, the LIBS system was calibrated on NMC 111 samples with different mass concentrations of graphite/carbon black (0-15%) and binder (6-11%). We observed an increase in the intensity of the carbon lines with respect to the metallic lines for increasing pulse energy in the range below 2 mJ, which potentially indicates non-stoichiometric ablation [21]. Calibration measurements were performed at 6 mJ, in the stable region above 3.5 mJ. The large fluctuation in the intensity of spectral lines between successive laser pulses makes quantitative analysis directly from the line intensity challenging. This problem is often met with normalization methods, e.g., an internal standard [22], the standard normal variate of the whole spectrum, the area under the spectrum, or the background. Several publications treat the problem of selecting appropriate lines, such as avoiding or dealing with resonant [23] and self-absorbing lines of major components. The selection of lines with similar upper energy levels can minimize the influence of changing plasma parameters [24]. For C, both observed lines in the UV spectrum, 193 nm and 248 nm, share the same upper energy level (2s22p3s (1P)) at 7.68 eV. This is comparable with the first ionization energies of Co (7.88 eV), Ni (7.61 eV) and Mn (7.43 eV), and therefore larger than the typical upper energy levels of intensive lines in the spectra from NMC targets [13]. Taking the intensity ratio of two spectral lines with different upper energy levels makes the measurement dependent on the plasma temperature. Nevertheless, reproducible calibration curves were achieved with normalization to the standard normal variate of the spectrum, which we did for all evaluations shown in the following (a minimal sketch of this normalization is given after the list below). A strong correlation between both the 193 nm and the 248 nm carbon lines and the graphite/carbon black concentration was observed (Figure 8a, Pearson correlation coefficient R2 = 0.997), whereas the concentration of binder had no significant influence on the line intensity of C (Figure 9a). This could have several reasons: 1. Non-stoichiometric ablation of the sample, i.e., only a small amount of the binder enters the plasma, whereas graphite/carbon black is easily ablated. 2. The long-chained polymer binder rather forms molecules instead of atomic species [25]. 3. Different plasma conditions, e.g., a lower plasma temperature in the polymer-containing plasma, prevent the upper energy level corresponding to the 193 nm and 248 nm lines from being occupied.
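The sketch below shows one reasonable reading of the normalization used throughout this work: the baseline-corrected area of a C line window is divided by the "standard normal variate" of the whole spectrum, interpreted here as its standard deviation. The linear endpoint baseline is an assumption about the exact procedure.

```python
import numpy as np

def c_line_signal(wl_nm: np.ndarray, counts: np.ndarray,
                  lo: float = 192.8, hi: float = 193.3) -> float:
    """Baseline-corrected C line area, normalized to the whole spectrum."""
    sel = (wl_nm >= lo) & (wl_nm <= hi)
    window = counts[sel]
    # Straight baseline drawn between the edges of the integration window.
    baseline = np.linspace(window[0], window[-1], window.size)
    area = np.trapz(window - baseline, wl_nm[sel])
    return area / counts.std()  # normalization to the full spectrum
```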
For each of the six samples, 25 measurements were taken at each of the 25 measurement spots. The average was taken over all measurements at the same spot. From the resulting 25 averaged spectra, the C signal was evaluated from the spectral region between 192.8 nm and 193.3 nm (247.7 and 248.0 nm) by dividing the baseline-corrected area by the standard normal variate of the whole spectrum. Figure 8b shows calibration curves together with the point-to-point deviation of the LIBS signal illustrated by error bars. Besides the growing standard deviation with increasing graphite/carbon black content, a significant Y offset was observed, which can hardly be explained by interference with other spectral lines. On the other hand, as already mentioned above and shown in Figure 9b, a variation of the binder content shows no significant influence on either carbon signal. This offset could be due to residues from the solvent remaining in the sample. Further investigations are needed to identify the cause.
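To make the calibration step concrete, the sketch below fits a straight line to normalized signals versus the gravimetrically controlled graphite/carbon black content. The six concentrations mirror the sample set, but the signal values are invented for illustration, not the measured calibration data.

```python
import numpy as np

conc_wt = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])     # wt% graphite/carbon black
signal = np.array([0.08, 0.21, 0.35, 0.47, 0.62, 0.74])  # SNV-normalized line area

slope, intercept = np.polyfit(conc_wt, signal, 1)
r = np.corrcoef(conc_wt, signal)[0, 1]
print(f"signal ~ {slope:.3f} * c + {intercept:.3f}, R^2 = {r**2:.3f}")
# A nonzero intercept plays the role of the Y offset discussed for Figure 8b.
```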
Detailed statistics on spot-to-spot signal variations were gained from a 31.5 mm × 20 mm measurement field with 2520 measurement spots and 60 laser pulses at each position (Figure 10). The mean spot-to-spot standard deviation inside each layer is 27.3%. We will show in the next section that we expect a similar value due to the inhomogeneity of our cathode sample. A decrease in the carbon signal was observed systematically for subsequent laser pulses at the same spot (Figure 11). Since this decrease is again independent of the binder concentration, we conclude that it is related to the ablation of graphite/carbon black. We interpret this observation as the result of a lower ablation threshold of graphite/carbon black compared to the other constituents and conclude that uniform depth profiling is challenging with this material composition.
Nevertheless, reproducible depth profiling is possible with some limitations: As illustrated in Figure 12a, the aluminum current collector is uncovered by subsequent laser pulses, leading to a strong increase in the singly-ionized aluminum (Al II) line at 198.8 nm. The ablation rate changes significantly with laser power. The Al II signal is shown in Figure 12b for subsequent laser pulses with 6 mJ energy per pulse at four different positions. Although the evolution of the Al II line showed good agreement for many different measurement points, a comparison with microscopic images revealed that the ablation rate depends significantly on the particle distribution at the specific position. The presence of large NMC particles slows down the uncovering process, as shown in Figure 12b (position 4), while the ablation rate of binder and graphite turns out to be higher. This leads to protruding NMC particles around the already vaporized binder and graphite/carbon black areas at the measurement spot. Additionally, the Gaussian profile of our laser beam, together with the thermal conductivity of the sample, leads to a V-shaped crater, which explains why the profiles shown in Figure 12b deviate from a step function. We conclude that, under ideal conditions, i.e., controlled distance to the sample and constant laser power, our system is able to detect relative depth variations on the order of 20% for depths in the range of 25-100 µm. Cathode Characterization with EMPA Given that a single LIBS pulse averages the concentration inside the focal area of the excitation beam, the measurement can be understood as a smoothing operation on the local concentration with the beam's waist size w. We consider the smoothing kernel

h(x, y) = (2 / (π w²)) · exp(−2 (x² + y²) / w²).

Its spatial Fourier transform corresponds to the transfer function and is given by

H(f_x, f_y) = exp(−(π² w² / 2) (f_x² + f_y²)).

The transfer function quantifies how spatial frequencies of concentrations are "smoothed out" by averaging over the measurement spot.
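The kernel/transfer-function pair reconstructed above can be checked numerically; the short sketch below verifies that the kernel is normalized and that its FFT matches the analytic H. The grid size and the 1 µm spacing are arbitrary assumptions.

```python
import numpy as np

w = 50.0                         # laser waist size in µm
dx = 1.0                         # grid spacing in µm
x = np.arange(-512, 512) * dx
X, Y = np.meshgrid(x, x)

h = 2.0 / (np.pi * w**2) * np.exp(-2.0 * (X**2 + Y**2) / w**2)
print(h.sum() * dx**2)           # ~1: the kernel is normalized

f = np.fft.fftfreq(x.size, d=dx)             # spatial frequencies in 1/µm
FX, FY = np.meshgrid(f, f)
H_analytic = np.exp(-0.5 * np.pi**2 * w**2 * (FX**2 + FY**2))
H_numeric = np.abs(np.fft.fft2(np.fft.ifftshift(h))) * dx**2

print(np.abs(H_numeric - H_analytic).max())  # ~0 up to discretization error
```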
When fluctuations take place on small length scales compared to the laser spot size, the measured variations are smoothed out efficiently by the extent of the laser spot. It is, therefore, instructive to quantify the spatial frequencies of the local concentrations by Fourier transform and compare them to the transfer function H. For the spatial carbon distribution of Figure 6, we obtain the discrete Fourier transform illustrated in Figure 13 on a logarithmic scale. The center of Figure 13 represents the average concentration of the whole measurement area, whereas the surrounding points correspond to the amplitudes of concentration variations at different spatial frequencies. The pixels in Figure 13 are not square due to the different pixel numbers in the x and y directions in Figure 6. The smoothing operation suppresses higher frequencies by multiplying the data pointwise with H (shown in Figure 14). The radius of the transfer function's spot is inversely proportional to the laser spot's radius. After smoothing, the sum of the squared remaining amplitudes for frequencies other than (0 µm^-1, 0 µm^-1) gives the expected variance of the fluctuations.
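The whole procedure condenses into a few lines. The sketch below applies it to a synthetic random concentration map, which stands in for the WDX carbon map of Figure 6 since the real map is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.random((256, 256))                 # local concentration on a 1 µm grid

C = np.fft.fft2(c) / c.size                # C[0, 0] is the mean concentration
fy = np.fft.fftfreq(c.shape[0], d=1.0)
fx = np.fft.fftfreq(c.shape[1], d=1.0)
FX, FY = np.meshgrid(fx, fy)

for w in (10.0, 50.0, 100.0):              # laser waist sizes in µm
    H = np.exp(-0.5 * np.pi**2 * w**2 * (FX**2 + FY**2))
    CH = C * H                             # pointwise smoothing in frequency space
    CH[0, 0] = 0.0                         # drop the mean, keep only fluctuations
    sigma = np.sqrt((np.abs(CH) ** 2).sum())   # std of the smoothed map (Parseval)
    print(f"w = {w:5.1f} µm: expected relative fluctuation {sigma / c.mean():.1%}")
```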
For the chosen dataset, the relative amplitude of fluctuations is compared for different laser spot sizes in Figure 15. We therefore expect a strong dependency between laser spot size and signal variance for this kind of inhomogeneous sample. For a laser waist size of 50 µm, we obtain an expected relative variance of 27-32%, which is in good agreement with the measured mean spot-to-spot standard deviation of 27.3% (Figure 10). Thus, the laser spot radius appears as an important parameter when measuring inhomogeneous samples and needs to be compared to the typical length scale of the structure. Discussion and Conclusions We examined the potential of nanosecond laser-induced breakdown spectroscopy (ns-LIBS) for depth-resolved concentration measurements on lithium nickel manganese cobalt oxide (NMC) cathodes for lithium-ion batteries. We preferred ns-LIBS over fs-LIBS because of the larger plasma plume, higher signal intensity and lower investment costs, which we consider crucial for industrial inline applications. Although ns-LIBS is generally considered less suitable for depth-resolved concentration analysis, our measurements show a repeatable intensity evolution of normalized spectral lines from the current collector for successive laser pulses under well-controlled laser and focusing conditions. Although LIBS always suffers from pulse-to-pulse fluctuations, in the case of NMC cathodes we could show that the observed spot-to-spot fluctuations can be explained by the inhomogeneity of the sample itself. Thus, we conclude that LIBS is capable of spatially resolved concentration measurements even on very inhomogeneous samples. Our study on depth-resolved carbon detection, on the other hand, indicated non-uniform ablation of the different components in the material composition, which makes depth profiling challenging: Large particles at the specific measurement points slow down the ablation process, since the ablation rate of NMC was observed to be lower compared to graphite/carbon black and binder. Additionally, the ablation of carbon was observed to be disproportionately high in the beginning and dropped for successive laser pulses at the same measurement spot.
We noticed relatively large fluctuations of 30% in the C signal between single LIBS spectra at different measurement spots, which we could ascribe to the microstructure of the cathode by comparison with electron microprobe analysis. If concentration fluctuations are on the length scale of the laser spot size, they strongly impact the LIBS measurement. We showed that, despite this effect, an accurate concentration measurement of C is possible by averaging a sufficient number N of single measurements at different positions. This reduces the relative standard deviation with respect to a single measurement according to N^(-1/2), as illustrated in the sketch below. Besides averaging, we suggest increasing the laser spot size for inhomogeneous samples in order to smooth out local concentration variations.
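A quick illustration of the N^(-1/2) argument, assuming a single-spot relative scatter of 30%; the Gaussian model of the spot-to-spot scatter is an assumption made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_rel = 0.30                 # single-spot relative standard deviation

for n in (1, 4, 25, 100):
    # Spread of the N-spot average, estimated from many repetitions.
    means = rng.normal(1.0, sigma_rel, size=(10_000, n)).mean(axis=1)
    print(f"N = {n:3d}: std of mean {means.std():.3f} "
          f"(theory {sigma_rel / np.sqrt(n):.3f})")
```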
9,224.6
2022-11-01T00:00:00.000
[ "Physics" ]
The San Carlo Colossus: An Insight into the Mild Galvanic Coupling between Wrought Iron and Copper The San Carlo Colossus, known as San Carlone, is a monument consisting of an internal stone pillar to which a wrought iron structure is attached. Embossed copper sheets are fixed to the iron structure to give the final shape to the monument. After more than 300 years of outdoor exposure, this statue represents an opportunity for an in-depth investigation of long-term galvanic coupling between wrought iron and copper. Most iron elements of the San Carlone appeared in good conservation condition, with scarce evidence of galvanic corrosion. In some cases, the same iron bars presented some portions in good conservation condition and other nearby portions with active corrosion. The aim of the present study was to investigate the possible factors correlated with such mild galvanic corrosion of the wrought iron elements despite the widespread direct contact with copper for more than 300 years. Optical and electronic microscopy and compositional analyses were carried out on representative samples. Furthermore, polarisation resistance measurements were performed both on-site and in the laboratory. The results revealed that the iron bulk showed a ferritic microstructure with coarse grains. The surface corrosion products, on the other hand, were mainly composed of goethite and lepidocrocite. Electrochemical analyses showed good corrosion resistance of both the bulk and the surface of the wrought iron, and galvanic corrosion is probably not occurring owing to the iron's relatively noble corrosion potential. The few areas where iron corrosion was observed appear to be related to environmental factors, such as the presence of thick, hygroscopic deposits that create localized microclimatic conditions on the surface of the monument. Introduction The Colossus of San Carlo Borromeo (1538-1584), called San Carlone, i.e., Big Saint Charles (Figure 1), due to its large size, located in Arona on Lago Maggiore (Piedmont, Italy), was built between 1614 and 1698. The 22 m high statue (which stands on an 11 m high granite pedestal) was manufactured by a peculiar construction technique: an internal stone pillar supports a wrought iron structure, to which embossed copper sheets are fixed. The iron structure, in direct contact with the copper sheets, provides structural support and shapes the statue. Due to its construction technique and more than 300 years of outdoor exposure, the statue represents an opportunity for an in-depth investigation of the long-term galvanic coupling between wrought iron and copper. Despite the frequent direct contact between the two metals, the wrought iron presents only a limited number of areas heavily affected by corrosion, while the rest looks to be in quite good conservation condition. The aim of the present study is therefore to identify the factors responsible for the mild galvanic corrosion of the wrought iron elements despite the widespread direct contact with copper for more than 300 years. Galvanic corrosion is a specific form of corrosion that occurs between two dissimilar metals that are in contact in the presence of a conductive electrolyte [4]. The driving force for this corrosion process is the difference in free corrosion potential between the two metals: the less noble metal acts as the anode and undergoes oxidation, while the nobler one is the cathode and is protected from corrosion [4][5][6].
A notorious case in which corrosion by galvanic coupling has been a conservation issue is the Statue of Liberty in the harbour of New York. The statue was built with an internal structure of iron elements onto which an external copper "skin" was joined, giving shape to the female figure of Liberty. In fact, the Statue of Liberty has previously been compared to the San Carlone, since they present relevant similarities in terms of materials and manufacturing technologies [1]. Due to the higher nobility of copper with respect to iron, severe corrosion of the iron elements of the Statue of Liberty was observed [7]. In this case, galvanic corrosion was favoured and accelerated by New York's humid and chlorine-rich environment. To understand the galvanic corrosion of iron and its behaviour in different contexts, it is important to know the level of protection provided by the corrosion layers that can form under different exposure conditions. Depending on a series of environmental factors (e.g., oxygen availability, pH, ion concentration of the electrolyte, etc.), several forms of oxides and iron compounds are produced during Fe oxidation, forming a layer commonly referred to as a "patina". A detailed model for the description of the corrosion mechanism of iron was proposed by Evans and Taylor [8] and demonstrated by Stratmann and Streckel [9,10]. Iron corrosion products are often porous, poorly adherent, and frequently show cracks in the most external layers. Therefore, they are not effective in hindering the access of water and oxygen to the metallic surface and in lowering the corrosion rate. In the initial stages of corrosion, a thin oxide/hydroxide film (typically 1-4 nm) is formed, with a passivating effect under nonaggressive conditions [11]. During the intermediate stages of corrosion, the formation of two types of so-called "green rusts" is observed, namely "green rust I" (Fe(II)2Fe(III)Ox(OH)y) and "green rust II" (Fe(II)Fe(III)Ox(OH)y) [11]. In the final stages of corrosion, the corrosion layers of surfaces exposed to atmospheric corrosion mainly consist of iron oxides and oxide-hydroxides. These oxides are characterized by different levels of crystallinity; in particular, several polymorphs of FeOOH can be present [11][12][13]. In general, it was observed that lepidocrocite (γ-FeOOH) is the main phase in the first weeks after the transformation of the green rust, while upon longer exposures the predominant phase becomes goethite (α-FeOOH), as it is the most stable iron oxide-hydroxide. In association with goethite, magnetite (Fe3O4) can also be found. In fact, its development is promoted when the supply of oxygen is limited, and a slight variation in oxygen availability can favour the stability of one or the other of these two compounds [13]. In the case of high concentrations of chlorides in the atmosphere, high relative humidity and low pH, the formation of akaganeite (β-FeOOH) is promoted. The formation of akaganeite is typical in the proximity of coastal areas or in pits. In these cases, β-Fe2(OH)3Cl is the typical intermediate product that can be detected in association with green rusts [13]. Moreover, hematite (α-Fe2O3) and maghemite (γ-Fe2O3) can be detected among corrosion products.
Even if they are seldom produced upon atmospheric corrosion, they have been identified in several cases of cultural heritage artefacts [13]. In fact, hematite occurs in high-temperature corrosion or as a transformation product when other corrosion products are heated. Therefore, its presence on artistic or archaeological surfaces can be due to high-temperature treatments performed in the past [13,14]. Maghemite was, instead, identified in archaeological artefacts and on historical iron in the very first phases of atmospheric corrosion [14]. In addition, low-crystallinity phases can be detected among the corrosion products, especially in the most internal layer of rust. These amorphous phases are usually constituted by feroxyhyte (δ-FeOOH) and ferrihydrite ((Fe3+)2O3·0.5H2O) [15][16][17][18]. The composition of corrosion products can be influenced by the presence and concentration of atmospheric pollutants. When a significant concentration of sulphur dioxide (SO2) is present in the environment, the formation of H2SO4 in the electrolyte film is promoted, with a consequent decrease in its pH and an influence on the corrosion rate of iron. Moreover, sulphuric acid can dissolve oxides to produce FeSO4. FeSO4 is soluble and hygroscopic, thus favouring the formation of an electrolyte film even at low RH values. Even if the iron patinas resulting from atmospheric corrosion are normally constituted by iron oxides and hydroxides, hydrated phases of FeSO4 have also been identified in atmospheric rusts on low-alloy steels [19,20]. Moreover, chlorides can play an important role in the atmospheric corrosion of iron-based alloys. Chlorides can promote the formation of akaganeite instead of goethite in the final stages of corrosion. Additionally, chloride ions have a high transport number in water, thus moving easily in the electrolyte film. Thus, when the concentration of chloride ions is significant, an increase in the corrosion rate can be expected [11]. A critical aspect of the atmospheric corrosion of iron elements is connected with the high increase in volume that may be associated with the formation of corrosion products. A notable case of atmospheric corrosion is the one of Ponte di San Michele (St. Michael's bridge), located in Italy between Paderno d'Adda and Calusco d'Adda and built in 1889. In this case, the microclimatic conditions to which the iron is exposed and the disposition and interconnection of the different iron elements resulted in corrosion at specific locations with highly expansive effects. This peculiar situation induced significant deformation of many elements of the bridge [21,22]. Besides the qualitative identification of the corrosion products, the quantification of their phases has also been reported in the literature as relevant for understanding the level of protectiveness provided by corrosion layers [15,18,[23][24][25]. In particular, some authors suggested calculating a protective ability index (PAI) for the iron patinas based on the ratio between the amount of stable phases and that of reactive ones [26]. The PAI was first suggested by Yamashita et al. [26]. They quantified the amount of goethite (α-FeOOH), i.e., the stable phase, and of lepidocrocite (γ-FeOOH), i.e., the reactive one. Thus, the PAI was defined as PAI = α/γ, where α indicates the mass fraction of α-FeOOH and γ the mass fraction of γ-FeOOH. They demonstrated that the PAI is strongly correlated to the corrosion rate of the surface. When α/γ is higher than 1, the rust layer appears quite stable and protective [14].
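For concreteness, the index can be computed directly from the phase quantification (e.g., mass fractions derived from XRD). A minimal sketch in Python is given below; the input values are hypothetical and not measurements from this study.

```python
def protective_ability_index(alpha_feooh: float, gamma_feooh: float) -> float:
    """PAI = alpha/gamma, where alpha and gamma are the mass fractions of
    goethite (stable phase) and lepidocrocite (reactive phase) in the rust."""
    if gamma_feooh <= 0:
        raise ValueError("lepidocrocite fraction must be positive")
    return alpha_feooh / gamma_feooh

# Hypothetical mass fractions (not data from the San Carlone samples):
pai = protective_ability_index(alpha_feooh=0.45, gamma_feooh=0.30)
print(f"PAI = {pai:.2f}")  # > 1 suggests a rather stable, protective rust layer
```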
According to European standards and several studies, copper alloys have good resistance to corrosion in moderately aggressive environments [27,28]. In particular, copper shows lower corrosion rates than iron and carbon steel in the same environments [27]. Furthermore, copper alloys show different corrosion resistances with respect to the pure metal thanks to the contribution of alloying elements [28][29][30][31][32]. Therefore, copper and copper alloys have been widely employed in the artistic and architectural fields, especially to produce artefacts and architectural elements for outdoor exposure [28,29,33,34]. The aim of the present study is therefore to identify the factors responsible for the mild galvanic corrosion of the wrought iron elements despite the widespread direct contact with copper for more than 300 years. Materials The San Carlone underwent an extensive diagnostic campaign. Representative external and internal areas at different heights, considering both copper and iron elements, were investigated by nondestructive in situ methodologies and by laboratory analyses on microsamples. Three samples of iron fragments were taken due to the peculiar function and geometry of the iron elements in the statue. They were selected as representative of the different conservation conditions observed:
• Sample 44: metallic fragment with slight corrosion despite direct contact with copper, collected from the internal part of the chin of the statue.
• Sample 46: metallic fragment not in contact with any copper element and with very slight atmospheric corrosion.
• Sample 87: metallic fragment collected from an area in contact with a copper element showing severe corrosion.
• Sample A: reference sample of 19th-century puddled wrought iron.
In addition, powder samples were taken from iron bars with active corrosion and incoherent, scarcely adherent corrosion products. Environmental Data Temperature and relative humidity data collected during 2018 and 2019 were reported and discussed in previous work [1]. The collected data showed that different parts of the statue may present different times of wetness (TOW) as a consequence of different solar radiation and ventilation. Based on the BS EN ISO 9223:2012 standard [27], the internal area of the San Carlone should be classified between τ3 and τ4. Annual pollutant concentrations from two monitoring stations located near Arona (Table 1) were downloaded from the regional environmental protection agency (ARPA) website (https://aria.ambiente.piemonte.it/#/qualita-aria/dati, accessed on 20 December 2022). These data show that the area is not characterised by high levels of pollutants compared to urban areas. Table 1. Average pollutant concentrations: data obtained from the monitoring stations near Arona (regional environmental protection agency (ARPA) website). Methods The composition of the iron alloys has been analysed by GDOES and the elemental composition of the inclusions by SEM-EDX. The chemical composition of the corrosion layers has been investigated by means of micro-Raman spectroscopy and XRD. Sample microstructure has been analysed by optical microscopy and SEM, and the corrosion behaviour has been studied by means of electrochemical analysis (corrosion potential, LPR and EIS measurements). Optical microscope observations were performed with a Leica M205C microscope equipped with a Leica DFC 290 camera. SEM-EDX (Zeiss, Jena, Germany) was performed with an ESEM Zeiss EVO 50 EP in extended pressure mode, equipped with an Oxford INCA Energy 200-Pentafet LZ4 spectrometer.
The FTIR analysis was carried out with a Thermo Nicolet 6700 spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA) employing a DTGS detector with a detection range between 4000 and 400 cm−1. The XRD analysis was performed with a PANalytical X'Pert PRO diffractometer with CuKα1 radiation (0.154 nm), operating at 40 kV and 30 mA, over an investigated 2θ range of 3-70°, equipped with an X'Celerator multidetector. LPR and EIS measurements were carried out on-site (Figure 3) by employing the Contact Probe proposed by Letardi [35]. The Contact Probe is constituted by an AISI316L stainless steel counter (CE) and pseudoreference (RE) electrode embedded in a PTFE case. On polished cross-sections of microsamples collected from the same monument, LPR and EIS measurements were carried out with the Minicell by Amel s.r.l. The Minicell is constituted by a platinum counter electrode and an Ag/AgCl reference electrode hosted in a cylindrical plastic case where the electrolyte flows continuously with a minipump. All LPR, EIS, and Ecorr measurements were performed with a portable potentiostat (Ivium Technologies CompactStat) with Ivium® software, using oligomineral water (pH around 8 and conductivity around 200 µS/cm) as the electrolyte. LPR measurements were performed after 10 min of monitoring time (MT) of the open circuit potential (OCP). The potential was scanned ±10 mV with respect to the measured Ecorr at a scan rate of 10 mV/min. EIS measurements were performed after 5 min of monitoring time (MT) and 5 min of stabilization of the surfaces through the application of currents of a few nA. The following protocol was used: frequency range of 100 kHz-10 mHz with a ±10 mV perturbation with respect to Ecorr. The polarization resistance value (Rp) was obtained from both LPR and EIS measurements. In particular, the Rp from EIS measurements was calculated as the difference of the modulus |Z| at low and high frequencies [36,37]. A set of Ecorr measurements was performed on the bulk alloy of the iron fragments of the San Carlone. In these cases, the measurements were performed on the polished cross-sections 24 h after polishing. Iron Bulk Alloy Composition The composition of the subsurface layers was analysed by GDOES (Table 2). Data show that all the samples have a low to very low content of carbon. Sample 46 is almost carbon-free (<0.1%), similar to reference Sample A, while in Sample 44, the content of carbon is ~0.25% by weight. The literature reports that a phosphorus (P) content between 0.1% and 0.6% could be associated with a higher resistance of wrought iron to corrosion [13,23,38,39]. However, the analysis of the samples of the San Carlone did not show a relevant presence of phosphorus. In contrast, for the reference sample of puddled iron, a phosphorus content of 0.3% was detected. The composition of Sample 87, the heavily corroded one, is not reported in Table 2, as its dimensions were too small to perform GDOES analysis. Its elemental composition has been investigated by SEM-EDX on a polished cross-section, revealing only the presence of Fe, C, and O.
In Figures 4 and 5, images of the microstructures observed in cross-sections of the iron samples from the Colossus are reported (bulk in Figure 4, outermost zones in Figure 5). It has been observed that different areas of the samples presented slightly different carbon contents and different microstructures, as could be expected for historical iron samples [13,40,41]. Moreover, the predominance of a ferritic structure can be observed. In general, the core of the samples from the Colossus showed a ferritic microstructure with coarse grains ranging between about 10 and 100 µm (Figure 4a-c). However, the microstructure of the wrought iron samples was not homogeneous. Ferritic-pearlitic microstructures were observed (Figure 5) in discontinuous areas near the surface or at the outer edge of the samples. Moreover, other microstructural heterogeneities were observed across the surface of the polished samples. In Samples 44 and 87 (Figure 5a,c), the ferrite grains showed a rounded shape, and their dimensions became smaller (down to about 10 µm) on going from the bulk towards the surface. In Sample 46, the ferrite grains in ferritic-pearlitic areas just below the surface showed a Widmanstätten structure (Figure 5b) [42], with the exception of the most superficial area (about a couple of µm thick) where ferrite with small and round-shaped grains became predominant again. In hypoeutectoid steels such as those investigated here, Widmanstätten structures may form mainly through rapid cooling combined with a coarse prior austenitic grain size (leading to an insufficient number of nuclei for conventional proeutectoid ferrite crystallization) [42][43][44]. The above-described microstructural heterogeneity in these samples can be ascribed to surface carburization/decarburization cycles during the hot forging process, typical of wrought iron produced in the reference period for this statue [40]. The Widmanstätten structures detected in bands parallel to the outer surface are probably due to localised fast cooling during the forging cycles. Moreover, the puddled iron reference sample showed a banded structure with the alternation of ferritic areas with grain sizes ranging from ~100 µm (Figure 4d) to ~10 µm or lower (Figure 5d). In particular, such a microstructure is caused by the intrinsic heterogeneity of this material, since the puddling process does not allow full homogenization [39]. For historical wrought iron, the corrosion and mechanical behaviour of the metal is strongly influenced by the amount, dimensions, orientation, and composition of the slag inclusions [13,23,39,45,46].
In general, for historical irons, the presence of a high number of inclusions with strongly variable dimensions is expected [13,40,45]. Inclusions are typically constituted by a glass matrix of fayalite (Fe2SiO4) with wüstite (FeO) crystals [13,39] and are usually arranged in parallel bands due to forging [39]. Both in the samples of the San Carlone and in the reference Sample A (puddled iron), slag inclusions with dimensions ranging from about 50 µm to millimetric dimensions (Figure 6) were present. Moreover, in Sample A, smaller etching pits, typical of phosphorus-rich irons [39], could be observed (Figure 6d). In all the samples, small crystals (light areas) are visible in a glass matrix (darker areas in Figure 6), with the typical fayalite-wüstite structure confirmed by micro-Raman spectroscopy. The observed good corrosion behaviour of the iron elements may possibly be associated with the presence of slag inclusions. As discussed by Chang et al. [47] for historic copper, the dimensions and nobility of inclusions may influence the composition of corrosion products and therefore the protection they provide to the underlying alloy. Iron Corrosion Products Composition Corrosion products of the areas adjacent to Samples 44 and 46 looked compact and highly adherent to the surface. They were characterised by a dark brown-blackish colour. Sample 87 was a small lamina of iron surrounded by a thick layer (>1 cm) of reddish-brown corrosion products that peel off and disintegrate easily and that detached during sampling. When investigated by SEM/EDX, all samples presented a compact and adherent corrosion layer (Figures 7 and 8). EDS X-ray maps in Figure 8d also revealed some remnants of a Pb-containing antirust paint layer (the so-called "red lead" paint, which was frequently used a few decades ago) above the corrosion products on the surface of Sample 87. Atmospheric-generated iron corrosion layers are typically constituted by oxides, hydroxides, or oxides-hydroxides [11][12][13]. Depending on their relative abundance, the PAI index can be calculated, thus quantifying the protectiveness of the corrosion layers [14,26].
To assess the protective role of the corrosion layers against galvanic corrosion of the San Carlone, their corrosion products have been investigated using micro-Raman spectroscopy and XRD. Micro-Raman spectroscopy analysis was performed on polished cross-sections to evaluate the stratigraphy of corrosion products, whilst XRD was performed directly on the surface of the fragments. The results of these analyses are summarised in Table 3. In all the examined fragments, XRD analysis detected goethite and lepidocrocite as the main corrosion products. The former is typically considered among the protective corrosion products, and the latter is normally more reactive. Moreover, iron oxides have been identified in traces. In contrast, micro-Raman spectroscopy identified a larger number of corrosion products, among which goethite and lepidocrocite were again the main ones. Goethite and lepidocrocite were found especially in the intermediate layer between the most internal one and the external layer. The internal layer was compact and richer in iron oxides (magnetite, maghemite, and hematite), whereas the composition of the external one was influenced by the presence of environmental contaminants and pollutants. The latter observation could also explain the widespread presence of akaganeite, an iron oxide-hydroxide that usually contains 5-8 wt.% Cl. These data are in good accordance with the rather widespread presence of atacamite observed among the copper corrosion products. There might be three possible explanations for the presence of chlorides in Arona: They could have been transported by the wind to the Colossus; they could have been present in the past in the atmosphere as pollutants; or they could derive from the use of chloride-containing cleaning products during past restoration interventions. In addition, hematite has been detected in Sample 46. However, it should be considered that its presence could be partly due to a transformation of the corrosion products upon heating of the sample by the laser during Raman analysis. Moreover, a high fraction of amorphous phases has been highlighted by XRD analysis (Figure 9). For this reason, a quantification of the relative amount of each phase was impossible, as was the evaluation of the protectiveness of the corrosion layers based on the PAI index. Apparently, therefore, no significant differences can be observed in the chemical composition of the corrosion layers that could explain the different corrosion behaviour of the samples. The same analysis was also performed on powder samples of corrosion products collected from iron bars with evidence of active corrosion (not necessarily in direct contact with copper sheets), which provided similar results.
Both XRD and micro-Raman analyses identified goethite and lepidocrocite as the main corrosion products in association with magnetite, maghemite, and hematite. On the powder samples, akaganeite was also detected. Moreover, for these samples, a lower variety of corrosion products was detected by XRD than by micro-Raman, basically identifying goethite, lepidocrocite, and a low amount of hematite on a few samples. Moreover, on the powder samples, significant amounts of deposits and contaminants were detected. In particular, gypsum was typically identified in the areas with evident active corrosion. Gypsum normally reaches the surfaces by wet and dry deposition, thus explaining the high amounts of deposits on the surface. Furthermore, the presence of gypsum could suggest that the higher corrosion rate of those iron bars might be associated with specific microclimatic conditions promoted by deposition or condensation phenomena. In fact, the presence of more severe corrosion phenomena in association with high amounts of deposits and condensation was already observed during the preliminary inspections of the monument prior to the restoration of 1974-1975 [2]. Interaction of Copper and Iron Elements The electrochemical behaviour of the selected copper and iron elements has been investigated. The aim was to understand the good conservation state of the iron elements from the corrosion point of view and to explain the limited occurrence of galvanic corrosion between copper and iron. In particular, corrosion potential (Ecorr) measurements and polarization resistance (Rp) measurements were performed. Figure 10a shows the Ecorr values measured in the laboratory on the surfaces and polished cross-sections of the iron samples from the Colossus and on the reference Sample A. On Sample 87, only surface measurements could be performed due to its very small dimensions. The bulk of Samples 44 and 46 shows more noble potentials (25 mV and −225 mV vs. Ag/AgCl), with a ΔE of +475 mV and +225 mV, respectively, with respect to the puddled iron (Ecorr = −450 mV vs. Ag/AgCl). Moreover, the corrosion potentials measured on the surfaces of these samples are quite noble. Surprisingly, the highest value (+120 mV vs. Ag/AgCl) was measured on the surface of the most corroded sample (87). Interestingly, the difference in corrosion potential between the surface and bulk alloy of Samples 44 and 46 is very low and significantly lower than the one measured on the reference Sample A. Figure 10b shows the results of on-site Ecorr measurements in the statue on adjacent copper and iron surfaces. In this case, all the iron bars appeared slightly corroded but were covered by compact corrosion layers adherent to the surface. The potential difference between the two surfaces never exceeded 200 mV, and in two cases out of three, it was lower than 100 mV. The three iron fragments from the Colossus and the reference Sample A were electrochemically characterized by measuring the polarization resistance of their surfaces with LPR and EIS. Nyquist plots of Sample A (surface) and Sample 44 (section) are reported in Figure 11a,b, respectively. Average Rp values (Figure 12) were higher than 30 Ω·m2. The surfaces of Samples 44 and 87 showed the lowest resistance to corrosion. Sample 46 and the reference Sample A showed Rp values higher than 150 Ω·m2, suggesting good corrosion resistance.
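As described in the Methods, Rp from EIS was estimated as the difference between the impedance modulus |Z| at the low- and high-frequency limits of the spectrum [36,37]. A minimal sketch of that estimate is shown below; the array names and example spectrum are hypothetical, not data from this study.

```python
import numpy as np

def rp_from_eis(freq_hz: np.ndarray, z_complex: np.ndarray) -> float:
    """Estimate the polarization resistance as |Z| at the lowest frequency
    minus |Z| at the highest frequency of the measured spectrum."""
    mod_z = np.abs(z_complex)
    i_low, i_high = np.argmin(freq_hz), np.argmax(freq_hz)
    return float(mod_z[i_low] - mod_z[i_high])

# Hypothetical spectrum spanning the 100 kHz - 10 mHz range used on-site
freq = np.logspace(5, -2, 50)                 # Hz
z = 40.0 + 160.0 / (1 + 1j * freq * 2.0)      # toy impedance data, ohm·m^2
print(f"Rp ≈ {rp_from_eis(freq, z):.0f} ohm·m^2")
```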
The Rp measurements, performed in situ, of different iron elements of the San Carlone, characterised by different conservation conditions, are displayed in Figure 13. Only a few areas were characterised by the presence of a protective coating ("painted iron-new paint"), while most areas were characterised by a fairly good conservation condition despite the lack of paint ("well preserved"). These "well preserved" areas showed Rp values similar to the ones obtained from the fragments analysed in the laboratory, suggesting a relatively low corrosion rate. Rusted areas ("heavily rusted") were those showing the lowest polarisation resistance. These results suggest that the iron alloy used for the construction of the Colossus generally presents a noble open circuit potential. Indeed, its corrosion potential was only a few tens of mV lower than that of the copper surfaces. Therefore, it could be hypothesized that the free corrosion potential difference between copper and iron was too low for the development of an effective galvanic coupling.
Recently, oxygen depletion measurements [48] allowed for the classification of iron objects cared for by English Heritage into four categories of corrosion behaviour. Surprisingly, a high number of objects were classified into Category 1 or 2. In particular, Category 1 corresponds to material that does not appear to deteriorate even at very high RH values. Some sites reached 85% RH, but no sign of deterioration has been observed visually over 20 years of exposure. These observations are confirmed by oxygen testing of representative samples, which show no detectable deterioration at 75% RH. No clear explanation of the good corrosion behaviour of such iron objects has yet been identified. Unfortunately, due to the millimetric size of the San Carlone samples, it was not possible to perform oxygen depletion measurements. However, the description of Category 1 objects closely resembles the observation of most areas of the iron elements in the San Carlone. As previously discussed, a clear correlation between alloy composition and conservation condition could not be identified. Data and observations suggest that the good corrosion resistance may be mainly ascribed to the presence of slag inclusions and favourable environmental conditions. Further investigations are required to provide a deeper understanding of these phenomena. Conclusions Both the bulk and the surface of the iron elements of the monumental statue of San Carlone of Arona showed quite good corrosion resistance. Galvanic corrosion is not expected to be relevant since their free corrosion potential was only 200 mV (or less) lower than that of the copper surfaces. The reason for the rather noble corrosion potential of the iron elements is still unclear.
The micro-Raman and XRD analyses of the corrosion products did not highlight any significant difference in the chemical composition of the corrosion products among the different samples. Moreover, the investigation of the microstructure of the iron samples did not allow for an explanation as to why active corrosion was observed only in a few limited areas. The analysed samples were characterised by great heterogeneity of the iron-based material, consisting of three main microconstituents: ferrite, pearlite, and multiphase slag. The relatively noble corrosion potential and good corrosion behaviour of the iron elements seem to be associated both with favourable environmental conditions (low chlorine content and low pollution) and possibly with the presence of slag inclusions with dimensions ranging from about 50 µm to ~1 mm. The obtained results allow us to hypothesise that the corrosion phenomena observed only in a few areas may be promoted by specific and localized microclimatic conditions, associated with deposits and condensation phenomena. This hypothesis could be supported by the significant presence in the corroded areas of gypsum, soil, and dust deposits, which may enhance corrosion due to their hygroscopicity. To corroborate the obtained results and better understand the good corrosion behaviour of the San Carlone iron alloys, it would be interesting to analyse other similar case studies with comparable environmental conditions and to perform further laboratory testing with different historical wrought irons. Data Availability Statement: The data that support the findings of this study are available from the coordinator of the project, Sara Goidanich (sara.goidanich@polimi.it), upon reasonable request.
Enabling a Battery-Less Sensor Node Using Dedicated Radio Frequency Energy Harvesting for Complete Off-Grid Applications: The large-scale deployment of sensor nodes in difficult-to-reach locations makes powering of sensor nodes via batteries impractical. Besides, battery-powered WSNs require the periodic replacement of batteries. Wireless, battery-less sensor nodes represent a less maintenance-intensive, more environmentally friendly and compact alternative to battery-powered sensor nodes. Moreover, such nodes are powered through wireless energy harvesting. In this research, we propose a novel battery-less wireless sensor node which is powered by a dedicated 4 W EIRP 920 MHz radio frequency (RF) energy device. The system is designed to enable complete off-grid Internet of Things (IoT) applications. To this end, we have designed a power base station which derives its power from solar PV panels to radiate the RF energy used to power the sensor node. We use a PIC32MX220F032 microcontroller to implement a CC-CV battery charging algorithm to control the step-down DC-DC converter which charges the lithium-ion batteries that power the RF transmitter and amplifier, respectively. A 12-element Yagi antenna was designed and optimized using the FEKO electromagnetic software. We design a step-up converter to step the voltage output from a single-stage fully cross-coupled RF-DC converter circuit up to 3.3 V. Finally, we use the power requirements of the sensor node to size the storage capacity of the capacitor of the energy harvesting circuit. The results obtained from the experiments performed showed that enough RF energy was harvested over a distance of 15 m to allow the sensor node to complete one sense-transmit operation every 156 min. The Yagi antenna achieved a gain of 12.62 dBi and a return loss of −14.11 dB at 920 MHz, while the battery was correctly charged according to the CC-CV algorithm through the control of the DC-DC converter. Introduction In creating a smarter, more connected world, wireless sensor networks (WSNs) are becoming more widespread. The proliferation of the Internet of Things (IoT) has had a positive impact on a broad range of applications, including agriculture, medicine, and supply chain optimization [1]. The large scale and sometimes remote placement of sensor nodes makes the powering of such sensor nodes with wires or batteries impractical [2]. Improvements in communication technologies have made low-power energy harvesting methods a viable solution for wireless power transfer to sensor nodes [3]. The increasing popularity of the IoT has given rise to an increased demand for WSNs. WSNs allow for the placement of sensor nodes in difficult-to-reach remote locations. However, battery-powered WSNs require the periodic replacement of batteries [4]. A less maintenance-intensive, more environmentally friendly and compact alternative to battery-powered sensor nodes is the wireless, battery-less sensor node [5,6]. Such nodes are powered through wireless power transfer. Moreover, such sensor nodes will enable off-grid IoT applications such as smart lighting [7], surveillance of public spaces through battery-free video streaming [8,9], environmental and habitat monitoring and alerting via battery-free cellphones [10], wildfire detection and prevention using smart camera networks, etc. [11]. Several power harvesting methods exist. A common method of power harvesting is the use of solar panels.
This may not always be practical, as the availability of energy is in turn dependent on the availability of solar illumination. Therefore, the focus of this work is the transmission of a dedicated RF signal and the subsequent harvesting of the RF energy of the generated signal to provide wireless power transfer to a sensor node [12]. Radio frequency (RF) energy harvesting is a far-field, radiative wireless power transfer technique that can operate over distances ranging from several meters to kilometers [13][14][15]. Research has been conducted on the use of RF energy harvesting from both dedicated and non-dedicated sources. Non-dedicated sources incur no cost to the RF energy harvesting party and do not affect the functioning of the RF source. Non-dedicated RF sources comprise RF transmitted by television transmitters, AM/FM radio transmitters, cellular base stations, Wi-Fi communication, etc. [16]. Dedicated RF sources have a multitude of frequency options, each with some constraint on the maximum allowable radiated power. However, exposure to RF energy can heat materials, including human body tissues, and thus excessive exposure to RF energy can be unsafe. Therefore, rational safety standards need to be adhered to, while excessive safety margins that could compromise the effectiveness of systems need to be avoided [17]. Furthermore, dedicated RF sources are advantageous when a controllable energy supply is required. Narrowband antennas are suited to harvesting energy from dedicated RF sources. Because the RF power received is often low, the design of the antenna is an important contributing factor to the overall performance of the system [18]. The primary objectives for the antenna design are a high gain and high efficiency. Antennas can be either linearly or circularly polarized. Circular polarization may be advantageous for RF energy harvesting in that it minimizes polarization mismatch losses [19]. However, an array of radiating elements may be set up to shape the radiation pattern as desired. Typically, an array of radiating elements is used to increase the gain, that is, to maximize the radiation in a particular direction and minimize it in others [20]. Examples of the use of arrays of radiating elements are given in [21][22][23][24]. The RF received is converted to DC voltage by a rectification stage, which is often also used as a voltage multiplier. Various rectification topologies have been investigated with differing numbers of stages to produce the required voltage [25]. Fully cross-coupled rectifiers have good sensitivity and a high conversion efficiency. In wireless battery-less sensor nodes deriving power from RF energy harvesting, the energy is typically stored at a voltage of less than 3.3 V, which is not directly usable by microcontroller units or wireless communication modules; therefore, the voltage has to be boosted to a usable level. The challenges of harvesting RF energy become more pronounced as the distance over which energy is transferred increases. The power density of RF waves is inversely proportional to the square of the distance between the RF source and harvester. Furthermore, the Independent Communications Authority of South Africa (ICASA) does not allow more than 4 W effective isotropic radiated power (EIRP) to be radiated by RF sources at 920 MHz, which limits the capability of harvesting energy from dedicated RF sources.
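To make the inverse-square falloff concrete, the power reaching the harvester can be estimated with the Friis free-space relation. The sketch below uses the 4 W EIRP limit, the 920 MHz carrier and the 15 m range discussed in this work; the receive-antenna gain and the ideal free-space assumption (no multipath or polarization loss) are ours, not values from the paper.

```python
import math

def friis_received_power(eirp_w: float, rx_gain_dbi: float,
                         freq_hz: float, dist_m: float) -> float:
    """Ideal free-space received power: P_r = EIRP * G_r * (lambda / (4*pi*d))^2."""
    wavelength = 3e8 / freq_hz
    g_rx = 10 ** (rx_gain_dbi / 10)          # dBi -> linear gain
    return eirp_w * g_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# 4 W EIRP at 920 MHz over 15 m (figures from the text); the 2 dBi receive
# gain is an assumed placeholder, since the rectenna gain is not quoted here.
p_rx = friis_received_power(eirp_w=4.0, rx_gain_dbi=2.0, freq_hz=920e6, dist_m=15.0)
print(f"Received power ≈ {p_rx * 1e6:.0f} µW")
# Tens of microwatts is the right order of magnitude: the reported duty cycle
# (140 mW for 2.14 s once every 156 min) corresponds to ~32 µW average power.
```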
Nonetheless, the proposed system can be deployed in non-critical but difficult-to-reach areas, e.g., to take the temperature reading of a nuclear plant or high-temperature furnace and send the readings to a human operator, who then acts on the reading to make a decision. The problem addressed in this paper is how to develop a system that provides a sensor node with energy through RF energy harvesting. The main contributions of the paper are summarized as follows:
(1) We generate a high voltage output from a low received RF power by designing an antenna with large gain and low return loss at 920 MHz.
(2) We design and implement specific aspects of the proposed system, including the DC-DC converter, CC-CV charging algorithm, microcontroller unit, microcontroller firmware and the sensor node energy storage circuitry.
(3) We design and implement a proportional-integral-derivative (PID) controller for the DC-DC converter to ensure a stable output current and voltage of the step-down DC-DC converter.
(4) We develop a constant current-constant voltage (CC-CV) battery charging algorithm to effectively charge lithium-ion batteries and prevent permanent damage due to overvoltage or overcurrent conditions.
(5) We develop an energy harvesting system that provides 140 mW of power for a duration of 2.14 s once every 156 min.
The rest of the paper is structured as follows: Section 2 presents the related work. Section 3 describes the theory of the design, whereas the design alternatives and implementation are presented in Section 4. Our findings are presented and discussed in Section 5. A summary of the results achieved is presented in Section 6. Section 7 concludes the paper. Related Works Several power harvesting methods exist. In [26], the authors examined the various architectures, energy sources, and storage technologies in energy harvesting sensor systems. In [27], the authors proposed an RF energy harvesting supply that is very efficient and highly sensitive. The objective of the designed harvester, which comprises a single-series circuit with one double diode, is to provide the least reflection coefficient and a high rectification efficiency. This was achieved by considering the rectifier microstrip trace dimensions and load as well as the impedance matching network. In another related work [28], the authors designed an electrically small, efficient and sensitive rectenna for harvesting ultra-low power RF energy. They exploited a rectenna-array configuration to increase the DC output voltage in order to deliver low power density levels to the load. A common method of power harvesting is the deployment of solar panels. In [29], an approach of harvesting solar radiant energy by the use of a nanofluid concentrating parabolic solar collector was proposed. However, this method may not be efficient, as the availability of energy depends on the availability of solar illumination. RF energy harvesting is a solution that eliminates the energy dependency on solar illumination. In [30], an RF energy harvesting system operating from 865.7 to 867.7 MHz using resonant inductive coupling radio frequency identification (RFID) technology was proposed. The limitation of this technique is the large loss of power over distances longer than a few meters. However, RF energy harvesting using the radiative wireless power transfer technique can operate over long distances. In [31], the authors investigated whether wireless energy transfer is possible through the living body.
As a result, they examined energy harvesting through RF, heat and vibration. Afterwards, they presented the system architecture and circuitry of effective energy transfer and harvesting techniques. Similarly, the work in [32] provides justification for energy harvesting from external ambient sources. Furthermore, they proposed an RF energy harvesting technique in the 935-960 MHz frequency range. This objective was achieved by receiving more power from the antenna through impedance matching. Additionally, the incoming RF signal is converted to a DC signal through a rectifier circuit and boosted by a chopper circuit before it is fed to the battery. In [33], the authors showed, by means of a sampling theorem using a statistical model, that the amount of energy harvested is linearly related to the amount of incident energy. Then, they used their findings to develop the statistical characteristics of harvested energy in a series of N harvesting blocks. In another work [34], multiple dedicated RF sources were proposed for an efficient RF energy harvesting system to avert energy holes. The optimum energy transmission challenge was solved as an optimization problem with the minimum energy charge of each node as the constraint. Some of the shortcomings of the proposed techniques are that the sensor nodes are constantly on while receiving the RF energy, and the received RF energy needs to be greater than the energy required by the sensor nodes to operate. Therefore, the main contribution of our proposed system is to mitigate the abovementioned shortcomings. Theory The schematic diagram of the setup functional analysis is shown in Figure 1. The power base station (FU1) is powered via radiant solar energy. Radiant energy is converted to electrical energy by the solar panel (FU1.1). This electrical energy is conditioned by the direct current-direct current (DC-DC) converter as well as the CC-CV charging circuit (FU1.3) and stored in the battery (FU1.4). The stored electrical energy is used to power the RF transmitter (FU1.5) and RF transmitter amplifier (FU1.6). The transmitting antenna (FU1.7) transmits the amplified RF signal. The sensor node (FU2) is powered via RF energy. The rectenna (FU2.1) of the sensor node (FU2) receives the transmitted RF signal and rectifies it. The rectified signal is harvested by the RF harvesting circuit (FU2.2) and is stored by the energy storage component (FU2.3), which is a capacitor. Energy is accumulated over a period of time and is subsequently used to power the temperature sensor and the long range (LoRa) transceiver (FU2.5) [35]. The user base station (FU3) consists of a LoRa transceiver (FU3.1) and a liquid crystal display (LCD) screen (FU3.2) and derives its power from the grid. The user base station LoRa transceiver (FU3.1) receives data from the sensor node LoRa transceiver (FU2.5). The data are displayed on a graphical user interface (FU3.2) in the form of an LCD screen. In Table 1, the technical deliverables of the system are shown.
Design Alternatives and Design Implementation The sequence of the design follows the path of energy transfer from the solar panel to the batteries to the RF amplifier to the receiving antenna and to the energy harvesting circuit. A complete system visualization is shown in Figure 2. Power Base Station (FU1) The reason for implementing a power base station is to use the harvested solar energy to charge lithium-ion batteries which in turn are used to power the RF transmitter. Firstly, the voltage from the solar panel needs to be stepped down to a desired voltage level. To achieve this, either a linear regulator or a DC-DC converter can be utilized. In this work, a DC-DC converter was selected because of its higher efficiency in stepping down voltage. Similarly, the CC-CV charging algorithm used for effectively charging the batteries can either be executed by a current-limiting circuit cascaded with a voltage-limiting circuit or by controlling the DC-DC converter with a microcontroller. Furthermore, a MOSFET instead of an IGBT was used as the switch for the DC-DC converter due to the MOSFET's medium output impedance and fast switching speed. This is because an increase in switching frequency results in a decrease in output current and voltage ripple [36][37][38].
4.2. Step-Down DC-DC Converter Design (FU1.2) The purpose of the DC-DC converter is to step the voltage fed by the 50 W solar panel down to the voltage required to charge the Li-ion batteries. The converter is controlled using a high-frequency pulse width modulated (PWM) signal generated by a microcontroller. The duty ratio of the PWM signal determines the output voltage of the converter. The power base station requires two 2200 mAh Li-ion batteries to operate. It was decided to place the batteries in series. This decision was made to ensure that at any state of charge, the batteries' combined voltage would be greater than 5 V. Additionally, placing the batteries in series requires less current to flow through the step-down DC-DC converter than if they were placed in parallel. A higher output voltage requires a higher duty ratio, which results in a slightly higher overall efficiency. A switching frequency of 50 kHz was determined through practical observation to provide acceptable output ripple and small enough losses. To fulfil the requirements of the CC-CV charging of the Li-ion batteries connected in series, the converter should have an output current I_o of 1.1 A and an output voltage V_o ranging from 5.6 V to 8.4 V (2.8 V to 4.2 V per cell). Ferrite toroidal inductors were selected due to their low magnetic losses. The resistance of one inductor was measured as R_L = 375 mΩ. In the practical implementation of the converter, four inductors were placed in series, yielding a total resistance of R_L = 1.5 Ω. The inductance of each inductor was chosen to be 88 mH for a total inductance of L = 352 mH. An electrolytic capacitor with a capacitance of C = 3300 µF was chosen. From the PR6003-T diode datasheet, the forward voltage drop is given as V_fwd = 1.2 V, and from the IRF3205 datasheet, the drain-to-source current is given as I_D = 1.1 A. In the CSRB20G10L00 datasheet, the shunt resistance is given as R_sh = 10 mΩ. Because the resistance of the inductors is significantly higher, the shunt resistance was taken to be negligibly small, thus R_sh ≈ 0 mΩ. Two cases were considered in the theoretical analysis of the DC-DC converter to determine the minimum size of the storage components. Case 1: when V_o = 8.4 V and I_o = 1.1 A. The voltage across the inductor is obtained by applying Kirchhoff's voltage law to the converter when the MOSFET is on. To ensure that the converter operates in continuous conduction mode, the current ripple (Δi_L,pk-pk) should not exceed a maximum value of Δi_L,pk-pk = 2.2 A; with a switching frequency of f_sw = 50 kHz, the minimum size of the inductor was then calculated. With an output current ripple of Δi_L,pk-pk = 2.2 A, to maintain a maximum output voltage ripple of V_o,pk-pk = 10 mV, the minimum size of the capacitor was likewise calculated, followed by the average input current I_in,avg, the input power P_in, the output power P_o, and the overall efficiency η. Case 2: when V_o = 5.6 V and I_o = 1.1 A. By applying Equation (2), δ = 0.4401, and the voltage across the inductor is obtained using Equation (3) as V_L = 7.950 V. With the maximum current ripple of Δi_L,pk-pk = 2.2 A and a switching frequency of f_sw = 50 kHz, the minimum size of the inductor was obtained using Equation (4) as L = 31.181 µH. Furthermore, with the same current ripple, to maintain a maximum output ripple of V_o,pk-pk = 10 mV, the minimum size of the capacitor was obtained using Equations (5) and (6) as C = 550 µF. The average input current I_in,avg was obtained as 0.4841 A using Equation (7), the input power P_in as 8.7140 W using Equation (8), and the output power P_o as 6.1600 W using Equation (9). Therefore, the overall efficiency was η = P_o/P_in ≈ 70.7%.
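The component sizing above follows standard buck-converter ripple relations; the sketch below reproduces the Case 2 numbers with the textbook formulas L_min = V_L·δ/(f_sw·Δi_L) and C_min = Δi_L/(8·f_sw·ΔV_o), which is our reading of Equations (4)-(6) rather than a verbatim copy of them.

```python
# Buck converter sizing sketch using standard ripple relations (our reading of
# Equations (4)-(6)); the numeric inputs are the Case 2 values from the text.
V_L = 7.950          # V, voltage across the inductor while the MOSFET is on
duty = 0.4401        # duty ratio from Equation (2)
f_sw = 50e3          # Hz, switching frequency
di_L = 2.2           # A, maximum peak-to-peak inductor current ripple
dV_o = 10e-3         # V, maximum output voltage ripple
P_in, P_out = 8.7140, 6.1600   # W, Case 2 input and output power

L_min = V_L * duty / (f_sw * di_L)    # ≈ 31.8 µH (text reports 31.181 µH)
C_min = di_L / (8 * f_sw * dV_o)      # = 550 µF, as reported
eta = P_out / P_in                    # overall efficiency ≈ 70.7%

print(f"L_min ≈ {L_min * 1e6:.1f} µH, C_min = {C_min * 1e6:.0f} µF, eta ≈ {eta:.1%}")
```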
Microcontroller Input Design Due to the solar PV input to the DC-DC converter and the CC-CV Li-ion battery charging output of the DC-DC converter, both the input and the output of the converter are variable. As such, there exists the need for dynamic control of the converter according to its input and output. By extension, there is a need for circuitry that acts as input to the microcontroller. To monitor the voltage output, a simple voltage-dividing network of resistors was used in conjunction with a non-inverting operational amplifier. The circuit that provides voltage information to the microcontroller was simulated using OrCAD and is shown in Figure 3. With an input voltage of V_in = 2.1 V and resistances R_f = 12 kΩ and R_in = 560 kΩ, the output of the non-inverting amplifier was calculated as V_out = V_in(1 + R_f/R_in) ≈ 2.145 V. The calculated output voltage value is congruent with the simulated output voltage value. To monitor the current output, the voltage across a 10 mΩ shunt resistor was fed through a differential operational amplifier. However, due to the large voltage outputs from the difference amplifier when no current flows through the shunt resistor, the output of the difference amplifier was fed to a non-inverting amplifier with a supply voltage of 3.3 V. This ensures that the voltage present on the ADC input pin of the microcontroller does not exceed 3.3 V. The circuit that provides current information to the microcontroller is shown in Figure 4.
The expected input to the differential amplifier is the voltage across the 10 mΩ shunt resistor. Assuming a current of 1.1 A flows through the shunt resistor, the voltage difference will be 11 mV. When the resistors are selected such that R1 = R2 and R3 = R4, the differential amplifier equation simplifies so that, with R3 = 2.7 kΩ, R1 = 27 Ω and V3 − V4 = 11 mV, we obtain V_out,difference = 1.1 V. The output of the subsequent non-inverting amplifier was then calculated. The simulated output values of V_out,difference = 1.1 V and V_out,non-inverting = 1.6029 V are congruent with the calculated values. Microcontroller Firmware Design The microcontroller chosen for the control of the DC-DC converter and the implementation of the CC-CV charging of the Li-ion batteries was the PIC32MX220F032 (Microchip, Chandler, AZ, USA). The microcontroller needs to meet a number of requirements of the CC-CV battery charging system. The resolution of the analogue-to-digital converter (ADC) module affects how accurately the microcontroller reads the voltage and current outputs of the DC-DC converter. The resolution of the PWM module affects how accurately the microcontroller adjusts the duty ratio of the MOSFET in the DC-DC converter. The instruction clock frequency affects how rapidly the control system implemented on the microcontroller responds to differences between the desired set-point and the actual output. The requirements of the microcontroller are summarized in Table 2. In Figure 5, the program flow of the implementation of the microcontroller firmware is presented. PID Controller Design To ensure a stable output current and voltage of the step-down DC-DC converter, proportional-integral-derivative (PID) control was applied to the converter. This is of importance when dealing with lithium-ion rechargeable batteries, which can experience permanent damage if exposed to over-voltage or over-current conditions. Therefore, a continuous cycling test was performed to obtain the ultimate period (P_u) and the ultimate gain (K_u) of the system responsible for controlling the CC-CV charging of the Li-ion batteries. Based on the experiment performed, the ultimate gain associated with the undamped response of the system was recorded as K_u = 0.1525, whereas the ultimate period of the undamped response was measured as P_u = 520 µs. Subsequently, the obtained ultimate gain and ultimate period of the system were used to calculate the tuning parameters of the PID controller. The continuous cycling method used to obtain the ultimate gain and ultimate period is often used in conjunction with the Ziegler-Nichols tuning parameters. However, because the Ziegler-Nichols tuning parameters are undesirable for the CC-CV charging algorithm, the less aggressive Tyreus-Luyben modification of the Ziegler-Nichols tuning rules was applied: the Ziegler-Nichols parameters have a decay ratio that produces a response with overshoot and a short settling time, which often results in over-voltage or over-current. The Tyreus-Luyben tuning parameters are given in Table 3. Based on the experimentally obtained ultimate gain and ultimate period, the Tyreus-Luyben tuning parameters were calculated as follows: K_c = 0.313 K_u = 0.04773, τ_I = 2.2 P_u = 0.01144 and τ_D = P_u/6.3 = 0.00008254.
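A sketch of this tuning step, together with a minimal discrete PID duty-ratio update as the firmware might implement it, is shown below. The sample time, saturation limits and variable names are illustrative, since the actual PIC32 code is not listed in the text, and the computed tuning values may differ from the reported ones depending on the units used for P_u there.

```python
# Tyreus-Luyben tuning from the measured ultimate gain and period, followed by a minimal
# discrete PID duty-ratio update. Loop period and duty limits are illustrative choices.
K_u, P_u = 0.1525, 520e-6            # values from the continuous-cycling test
K_c   = 0.313 * K_u                  # proportional gain
tau_I = 2.2 * P_u                    # integral time
tau_D = P_u / 6.3                    # derivative time

dt = 20e-6                           # s, assumed control-loop period (hypothetical)
state = {"integral": 0.0, "prev_error": 0.0}

def pid_duty(setpoint, measurement):
    """Return the PWM duty ratio for the current set-point error."""
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    u = K_c * (error + state["integral"] / tau_I + tau_D * derivative)
    return min(max(u, 0.0), 0.95)    # clamp to a valid PWM duty ratio
```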
Thereafter, the PID control was implemented on the PIC32MX220F032 microcontroller. Antenna (FU2.2) A stacked micro-strip patch antenna was initially selected as the antenna type. The antenna was simulated using FEKO with various materials; after several iterations of design, a 0.7 mm thick copper conductor was selected and polystyrene was selected for the substrate. However, the stacked micro-strip patch antenna did not meet the requirements of the system. Therefore, the approach was adjusted and a Yagi antenna [39], which consists of an arrangement of half-wavelength (λ/2) dipole elements, was designed. The antenna implementation flow diagram is shown in Figure 6. Yagi Antenna Design The starting point for the Yagi antenna design was obtained from the National Bureau of Standards (NBS) paper [40], with adjustment of the reflector spacing to 0.25 λ, which has been found to be a near-optimum value [41]. The thickness, length and spacing of the antenna elements affect the overall performance of the antenna. It was decided to set a fixed element thickness based on the availability of materials; using this element thickness, the element spacing and lengths were optimized. It was found that aluminum rods were available with an outer diameter of 6.35 mm and copper rods were available with an outer diameter of 6 mm. The active element (folded dipole) was to be constructed with copper rod, and the parasitic elements (reflector and directors) were to be constructed with aluminum rod. The Yagi antenna was simulated using the FEKO electromagnetic simulation software. The Yagi antenna was simulated with perfect electrical conductors (PEC), as the resistance of the aluminum and copper rods is sufficiently small to be considered zero. The antenna was simulated without the conducting aluminum tube boom, since including it was found to be computationally expensive and impractical for optimization searches. As such, the antenna was optimized using free-standing elements, and a boom correction factor obtained from experimental data was applied to the elements before construction of the antenna. An optimization search was set up to optimize the realized gain and return loss at 920 MHz. It was found that optimizing for total gain as opposed to realized gain would compromise the return-loss goal. The length and spacing of the director, reflector and driven elements were entered as parameters in the optimization search. The final iteration of the Yagi antenna optimization had a simulated gain of 15.0108 dBi and a return loss of −21.7774 dB at 920 MHz. The Yagi antenna dimensions are summarized in Table 4; the driven element length is given from tip to tip, which is more relevant than the total length of the folded rod. A table saw was used to cut the aluminum tube and rod into the desired sizes. Pieces of wood were cut and measured to obtain the desired length before cutting the aluminum. The director and reflector elements were filed and measured with a Vernier gauge to obtain an accuracy of within 0.2 mm.
A drill press was used to drill holes into the aluminum tube, and the director and reflector elements were tapped into their respective holes. The copper rod was cut using a hack saw and was bent using a wooden jig. The folded dipole was attached to the aluminum tube using metal corner pieces, hose clamps, and a nut and bolt. Figure 7 shows the designed Yagi antenna. Energy Harvesting Circuit (FU2.2) The intended gain for the antenna was 6 dBi. Assuming that this was achieved, an input power of −11.44 dBm would be available when the sensor node is placed 15 m from the RF source. This input power does not require a highly sensitive RF-to-DC converter; as such, a diode-based Dickson RF-DC converter was chosen. The Dickson RF-DC rectifier was simulated using the HSMS-2860 Schottky diode model and ideal capacitors. When tested, the Dickson RF-DC converter did not have a high enough efficiency. The approach was adjusted, and a transistor-based fully cross-coupled rectifier circuit was designed. The fully cross-coupled rectifier circuit was simulated using HFA3096 transistors (Renesas) and ideal capacitors. Simulations of the energy harvesting circuit were done in LTspice XVII, a high-performance SPICE simulation software with schematic capture and waveform viewer functionality. The RF-DC flow diagram is shown in Figure 8.
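A quick free-space link-budget check of the quoted −11.44 dBm figure can be sketched as follows; the transmit-side power level is an assumption (only the 2 dBm synthesizer level feeding the RF amplifier is mentioned elsewhere in the text), while 12.62 dBi is the measured Yagi gain and 6 dBi is the intended receive-side gain.

```python
import math

# Free-space link budget at 920 MHz over 15 m; p_tx_dbm is an assumed amplifier output.
f = 920e6                 # Hz
d = 15.0                  # m
c = 3e8                   # m/s
fspl_db = 20 * math.log10(4 * math.pi * d * f / c)   # ~55.2 dB free-space path loss

p_tx_dbm = 25.0           # assumed amplifier output power (hypothetical)
g_tx_dbi = 12.62          # measured Yagi gain (from the results section)
g_rx_dbi = 6.0            # intended sensor-node antenna gain
p_rx_dbm = p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db
print(round(fspl_db, 2), round(p_rx_dbm, 2))         # received power close to the quoted -11.44 dBm
```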
Fully Cross-Coupled RF-DC Converter A transistor-based fully cross-coupled radio frequency-direct current (RF-DC) converter circuit was simulated using LTspice XVII. Renesas HFA3096BZ ultra-high-frequency transistor arrays were selected for the fully cross-coupled RF-DC converter. The fully cross-coupled RF-DC converter was simulated with one and two stages; however, it was found that the two-stage circuit was approximately half as efficient and did not provide a significant increase in voltage across the storage capacitor. Therefore, a single-stage fully cross-coupled RF-DC converter circuit with the Renesas HFA3096BZ transistors was implemented. The implemented circuit is presented in Figure 9.
Step-Up DC-DC Converter The boost converter was designed to be controlled by a microcontroller. In order to provide the microcontroller with the necessary voltage, a 555 timer is turned on through the use of a momentary switch. The 555 timer drives the boost converter MOSFET while the momentary switch is on; the boost converter outputs a voltage which is regulated to 3.3 V and fed to the microcontroller. Once the momentary switch is released (off), the MOSFET is driven by the microcontroller. The purpose of the DC-DC converter is to boost the voltage stored on the supercapacitor to a voltage usable by the sensor node (≥3.3 V). The step-up converter was designed to be connected to a 3.3 V low-dropout voltage regulator. The converter is controlled using a high-frequency PWM signal generated by a microcontroller, and the duty ratio of the PWM signal determines the output voltage of the converter. The frequency of the PWM signal has an inverse relationship with the output current and voltage ripple: an increase in the switching frequency results in a decrease in ripple. The frequency is directly proportional to the switching losses of the MOSFET and the hysteresis losses of the inductor. The voltage regulator was assumed to have a 250 mV dropout while delivering the required current. The MOSFET was assumed to have a drain-source voltage drop of V_DS = 42 mV, the diode a forward voltage drop of V_fwd = 320 mV, and the inductor a resistance of R_L = 250 mΩ. The boost converter was simulated with a switching frequency of f_sw = 1 kHz. The step-up DC-DC converter was simulated using LTspice XVII. The simulation was used to evaluate 16 different MOSFET options and nine different diode options. Once the most suitable MOSFET and diode options were chosen, the circuit was simulated using ideal voltage-controlled switches to charge a capacitor representing a supercapacitor, and to subsequently provide the converter with an input voltage from the supercapacitor. The MOSFET was controlled using a 3.3 V PWM voltage source. A constant-power load was powered by the boost converter through a 3.3 V voltage regulator. The power consumption of the load was calculated using the expected power draw of the sensor node as well as the efficiency of the voltage regulator, which lies between the step-up converter and the MDOT module. The duty cycle of the PWM voltage source, the inductance and the capacitance were adjusted to maximize the time for which the load could be supplied with a voltage greater than 3 V. Using the selected components, the step-up converter was simulated to determine the optimum size of the output capacitor, and the boost converter was simulated using the maximum effective constant duty cycle to determine the minimum size of supercapacitor required to provide 140.0 mW of power for a duration of 2.14 s. It was found that the optimum output capacitance was 200 µF and that the minimum capacitance of the supercapacitor was 2 F. Figure 10 shows the sensor node circuit containing the fully cross-coupled RF-DC converter, the step-up DC-DC converter, the MDOT LoRa module [42,43], and the Microchip MCP9700 temperature sensor.
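An energy-balance sketch of the supercapacitor sizing follows: the capacitor has to deliver 140.0 mW for 2.14 s through the boost converter, and the usable voltage window (1.0 V down to 0.6 V) matches the full-system test reported later. The converter efficiency used here is an assumed figure, not one stated in the text.

```python
# Supercapacitor sizing from an energy balance; eta is an assumed boost-converter efficiency.
P_load = 0.140        # W, sensor-node load power
t_on   = 2.14         # s, required operating time
V_hi   = 1.0          # V, voltage at which the step-up converter is switched on
V_lo   = 0.6          # V, voltage after output collapse (from the full-system test)
eta    = 0.5          # assumed low-voltage boost efficiency (hypothetical)

E_load = P_load * t_on                                 # ~0.30 J required at the load
C_min  = 2 * E_load / (eta * (V_hi**2 - V_lo**2))      # minimum capacitance
print(E_load, C_min)                                   # C_min ~ 1.9 F, consistent with the chosen 2 F
```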
Design of the Multitech LoRa Communication To implement LoRa communication, two LoRa MDOT-868 modules (Multitech, Mounds View, MN 55112, USA) were used. The Multitech MTUDK2-ST-MDOT development board was used to program the LoRa modules, and the Arm Mbed development platform's compiler was used to alter and compile code. The MDOTs work in peer-to-peer mode using example code from Multitech; in peer-to-peer mode, the MDOTs are constantly in receive mode unless transmitting data. Sensor Node Apart from the transmission of data to the user, the main objective in the design of the sensor node MDOT firmware was to transmit data in the shortest timeframe possible and thus decrease the overall energy requirement of the sensor node. The join delay was set to 1 ms, whereas the default join delay is 5 s. The transmit power was set to the lowest level of 2 dBm. The data rate was set to the fastest available data rate for use in the Europe region; although the higher data rate consumes more power, the MDOT operates for a shorter duration, which improves overall energy consumption. Two possible temperature sensor ICs were investigated; Table 5 shows a comparison of the features of the Dallas 18B20 and Microchip MCP9700 temperature sensors. The Microchip MCP9700 was selected for use in the sensor node due to its low active current and lack of conversion time. The sensor node MDOT obtains a temperature reading from the Microchip MCP9700 temperature sensor via the Mbed AnalogIn driver Application Programming Interface (API). The temperature float variable is split into two integers, one representing the whole number and the other representing the fraction. The two integers are concatenated into one longer integer and sent via LoRa to the user base station MDOT. For debugging purposes during the development of the MDOT firmware, the temperature value read from the temperature sensor was output to the serial port.
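A sketch of this float-splitting encoding is shown below; the field widths, sign handling and the decoding step are illustrative choices, since the firmware listing itself is not given in the text.

```python
# Illustrative encoding of a temperature reading into one integer for LoRa transmission,
# following the whole/fraction split described above.
def encode_temperature(temp_c: float, frac_digits: int = 2) -> int:
    whole = int(temp_c)
    frac = int(round((temp_c - whole) * 10**frac_digits))
    return whole * 10**frac_digits + frac          # e.g. 23.47 C -> 2347

def decode_temperature(packed: int, frac_digits: int = 2) -> float:
    return packed / 10**frac_digits

packed = encode_temperature(23.47)
print(packed, decode_temperature(packed))          # 2347  23.47
# (a real implementation would also need a sign convention for sub-zero readings)
```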
Results and Discussion This section presents the results of qualification tests performed to validate the performance of the proposed system. Full System Test The main objectives of this test were to test the system as a whole and to determine over what period of time the sensor node can harvest enough energy to obtain and send data to the user base station. The following procedure was used to conduct the test: (1) The storage capacitors of the sensor node were discharged using a 2.7 Ω resistor, (2) The frequency synthesizer was set to output a 920 MHz continuous wave (CW) signal at an output power of 2 dBm, (3) The sensor node was placed within line-of-sight 15 m from the power base station, (4) The direction and polarity of the sensor node antenna were adjusted to be in the direction and polarity of the RF source, (5) The frequency synthesizer was set to output a 920 MHz CW signal at an output power of 2 dBm, (6) The RF amplifier was turned on and the timer was started, (7) The voltage across the terminals of the storage capacitor was measured at 10-min intervals, (8) Once the voltage reached 1 V, the step-up DC-DC converter was switched on using a mechanical switch, powering the MDOT LoRa module and MCP9700 temperature sensor, (9) The voltage across the storage capacitor was recorded after the voltage collapse of the step-up converter. Figure 11 shows the voltage profile over a period of 360 min. The step-up converter was switched on with an input voltage of 1 V. The voltage stored on the storage capacitors after the step-up converter experienced output voltage collapse was 600 mV.
The test confirms that the system is able to operate with a 15 m distance between the sensor node and the RF source, and shows that the sensor node is able to harvest enough energy to obtain and transmit data over a period of 156 min. The sensor node sent data to the user base station successfully, and the data was displayed on the user base station LCD display. Yagi Antenna Gain Test The main objective of this test was to measure the gain (dBi, relative to a lossless isotropic antenna) of the implemented Yagi antenna. The following procedure was used to conduct the test: (1) The antenna was mounted on the aluminium rod, (2) The antenna was connected to the testing apparatus via an SMA cable, (3) The frequency range over which the gain would be measured was set, (4) The direction of the main lobe of the antenna was adjusted to direct the maximum gain at 0.92 GHz towards the center of the reflector, (5) A frequency sweep was performed, (6) The Yagi antenna was removed and a reference horn antenna was connected, (7) The direction of the main lobe of the horn antenna was adjusted to direct the maximum gain at 0.92 GHz towards the center of the reflector, (8) A frequency sweep was performed, and (9) The results were obtained by comparing the Yagi antenna and reference antenna measurements using MATLAB; the results were stored in a text file. Figure 12 shows the measured gain (dBi) versus frequency (GHz) of the Yagi antenna. The gain of the antenna at the desired frequency was measured as 12.62 dBi. This complies with the proposed specification of at least 1 dBi; the test was a success. Yagi Antenna Return Loss Test The main objective of this test was to measure the return loss (dB) and the voltage standing wave ratio (VSWR, unitless) performance parameters of the implemented Yagi antenna. The following procedure was used to conduct the test: (1) The antenna was placed on a wooden stool, (2) The antenna was connected to the testing apparatus via an SMA cable, (3) The frequency range over which the measurements would be taken was set, (4) The spectrum analyzer test was performed, (5) The results were recorded in a text file. Figures 13 and 14 show the return loss versus frequency and the VSWR versus frequency, respectively. The return loss and VSWR measurements show that the antenna has the highest efficiency from 0.92 GHz to 0.9332 GHz. At 0.92 GHz the return loss was measured as −14.11 dB and the VSWR was measured as 1.4906.
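The relationship between these two figures can be checked directly; the sketch below converts the measured return loss into a reflection-coefficient magnitude and VSWR, and reproduces the reported value.

```python
# Return loss to VSWR conversion for the 920 MHz measurement.
rl_db = 14.11                          # magnitude of the measured return loss, dB
gamma = 10 ** (-rl_db / 20)            # reflection-coefficient magnitude
vswr = (1 + gamma) / (1 - gamma)
print(round(gamma, 3), round(vswr, 3)) # ~0.197, ~1.49 (matches the reported 1.4906)
```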
CC-CV Charging of Li-Ion Battery Test The main objective of this test was to determine whether the designed CC-CV charging algorithm, implemented through the step-down DC-DC converter, adhered to the correct voltage and current profiles while charging the Li-ion batteries. The following procedure was used to conduct the test: (1) The Li-ion batteries were discharged to 3.0 V per battery, just above their discharge cut-off voltage of 2.75 V, (2) The step-down DC-DC converter was connected to the DC power supply, (3) The voltage probe was connected across the terminals of the Li-ion batteries, (4) The current probe was set up to measure the current flowing through the shunt resistor, (5) The DC power supply was switched on, and (6) The voltage across, and the current through, the Li-ion batteries were recorded at 2-min intervals. Figure 15 shows the voltage and current profile over 150 min of the implemented CC-CV charging algorithm. The charging system started in constant current mode with a constant current of I_0 = 1.1 A. Summary of Results Achieved This section summarizes all the expectations of the project and the results achieved in Table 6. Table 6. Summary of expected outcomes and achieved outcomes. S/N Intended Outcome Actual Outcome 1 The power base station should be remote (does not have access to the grid) and derive its power via solar energy. The power base station was able to derive its power from the solar panel.
The batteries were charged correctly through the implementation of a CC-CV charging algorithm controlling the DC-DC converter. 3 The power base station should provide RF energy to the sensor node. The power base station provided RF energy to the sensor node. 4 The sensor node should be battery-less and wireless. The sensor node was battery-less and wireless. 5 Energy stored by the energy harvesting system should provide power to the sensor node's temperature sensor, microcontroller, and LoRa transceiver. Energy stored by the energy harvesting system provided power to the sensor node's temperature sensor, microcontroller, and LoRa transceiver. 6 The sensor node should collect data (the sensed temperature) and communicate this data to the user base station using the LoRa LPWAN protocol. The sensor node collected data and communicated the data to the user base station. 7 The system should measure temperature once every 60 min. The system measured temperature once every 156 min. 8 The gain of the rectenna should be at least 1 dBi. The gain of the antenna was 12.62 dBi. 9 The voltage applied to the terminals of the battery while charging should not exceed the rated battery charging voltage; for Li-ion (lithium-ion), this value is 4.2 V. The voltage applied to the terminals of the battery did not exceed 4.21 V per battery. 10 The current delivered to the battery while charging should not exceed the rated battery charging current; for Li-ion batteries this value is half of the rated battery capacity given in mAh. The current delivered to the battery while charging did not exceed 1.11 A. 11 The energy harvesting method should provide a minimum voltage of 1.8 V. The energy harvesting circuit was able to provide 3.3 V. 12 The temperature should be measured once every 60 min. The temperature was measured once every 156 min. 13 The frequency of the RF transmitter should be 920 MHz ± 1.5 kHz, and the transmitted RF should be CW (continuous wave). The frequency of the RF transmitter was 920 MHz ± 1.5 kHz and was CW. 14 The step-down DC-DC converter, CC-CV charging control system hardware and software, sensor antenna, RF energy harvesting circuit, sensor node energy storage circuitry, and sensor node microcontroller firmware should be designed and implemented. The step-down DC-DC converter, CC-CV charging control system hardware and software, sensor antenna, RF energy harvesting circuit, sensor node energy storage circuitry, and sensor node microcontroller firmware were all designed and implemented. Conclusions In this work, a system that provides a sensor node with energy through RF energy harvesting, for a complete off-the-grid IoT solution, has been developed. The hardware and software for the CC-CV charging algorithm, implemented through the control of the step-down DC-DC converter, were designed from first principles. Central to the control of the DC-DC converter was the Microchip PIC32MX220F032 microcontroller. The continuous cycling method was used to determine the ultimate gain and ultimate period of the step-down converter, and Tyreus-Luyben PID tuning parameters were used in the DC-DC converter control system. The Yagi antenna was simulated and optimized extensively using FEKO. The fully cross-coupled RF-DC circuit was simulated using LTspice XVII, and Renesas ultra-high-frequency transistors were used in its implementation.
A low-input-voltage step-up DC-DC converter was designed to provide the Multitech MDOT LoRa module and the Microchip MCP9700 temperature sensor with 3.3 V. The firmware residing on the microcontroller contained PID control to ensure that the output is maintained upon the introduction of a load. Correct and safe charging of the lithium-ion batteries that power the power base station using solar power, and the powering of a sensor node using harvested RF energy over a distance of 15 m, were achieved. The Yagi antenna achieved a gain of 12.62 dBi and a return loss of −14.11 dB at 920 MHz.
13,622
2020-10-16T00:00:00.000
[ "Computer Science", "Engineering" ]
A Novel Method for Night-Time Single Image Dehazing Images acquired under deprived weather environments are frequently corrupted due to the presence of haze, mist, fog or other aerosols in the form of noise. Haze elimination is essential in computer vision and computational photography applications. Numerous approaches towards haze removal exist, but they are mostly meant for hazy images under daytime conditions. Although the potency of these approaches has been comprehensively established on daylight hazy images, they inherit significant limitations on images influenced by night-time hazy environments. Since night-time dehazing remains an ill-posed problem, we propose a novel method for night-time single image dehazing which is efficient under night-time environments. The proposed scheme is a dark-channel-based local image dehazing procedure that locally estimates the atmospheric intensity for each selected mask on a corrupted image independently, rather than for the entire image. This is done in order to overcome the challenge of night scenes that are exposed to multiple/artificial light sources and spatially non-uniform environmental illumination. We performed adaptive filtering on the combined dehazed masks to improve the degraded image. We validated the supremacy of the proposed approach in terms of speed and robustness through computer-based experiments. Conclusively, we display comparison results against the state-of-the-art and emphasize the comparative advantage of our scheme. Image degradation under hazy conditions arises from the absorption and scattering of the light that travels from the point or scene of interest to the acquisition device, caused by the existence of aerosols (direct attenuation). Simultaneously, the aerosols also scatter the airlight along the transmission path from the point of interest to the observer, which subsequently leads to color distortions, inferior visibility and low contrast in captured images. Such factors make capturing a clear image under a deprived environment an ill-posed challenge. Numerous techniques of haze removal in computer vision and image processing have been established over the last decades. These techniques encompass various procedures used to retrieve information such as contrast, features, scene depth, color channels and others. The corruption process associated with images acquired under a hazy environment has been well recognized in computer vision and computer graphics in existing works [1] [2] [3] [4], where the degradation process can be formulated as I(x) = J(x) t(x) + A (1 − t(x)), (1) where I denotes the observed intensity of the hazy image, x denotes the pixel index, J represents the scene radiance of the haze-free image which is to be recovered by dehazing procedures, t represents the medium transmission, i.e., the part of the light that does not scatter and reaches the observer, and A represents the global atmospheric light. The main objective of haze removal is to recover J, A and t from I. For daytime hazy images, the atmospheric light is primarily established by sky light and incidental sunlight that has been scattered by aerosols or clouds. In computer vision, graphics systems and other related applications, image corruption persistently continues to be a significant challenge to be resolved.
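A minimal numerical sketch of this formation model, together with the exponential transmission and the scene-recovery step given in the Background section below, may make the roles of J, t and A concrete; the arrays here are synthetic placeholders, not data from the paper.

```python
import numpy as np

# Synthetic illustration of the atmospheric scattering model (1) and its inversion.
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))                        # haze-free scene radiance in [0, 1]
d = np.linspace(1.0, 50.0, 16).reshape(4, 4)     # scene depth (arbitrary units)
beta = 0.05                                      # scattering coefficient of the medium
t = np.exp(-beta * d)[..., None]                 # transmission, Equation (2)
A = np.array([0.8, 0.8, 0.8])                    # global atmospheric light

I = J * t + A * (1.0 - t)                        # observed hazy image, Equation (1)
J_rec = (I - A) / np.maximum(t, 0.1) + A         # recovery once A and t are known, Equation (3)
print(np.allclose(J, (I - A) / t + A))           # True: exact inversion without the t floor
```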
Outdoor systems such as traffic monitoring [5], autonomous self-driving cars [6], and outdoor security and safety surveillance [7] are commonly affected by image corruption due to adverse weather conditions. Under the uncontrolled weather or environment of outdoor systems, the image acquisition process is inevitably influenced by adverse weather and aerosols such as haze, fog, smoke and other atmospheric particles. For this reason, several dehazing methods have been established that deal with haze removal by applying multiple images or additional information: for instance, images captured with diverse degrees of polarization [8], multiple images of the scene under different weather conditions used to estimate object depth [9], or an additional image of the scene alongside the corrupted image [10]. Background Generally, images taken under night-time conditions are influenced by low natural illumination and non-uniform artificial or incident light resulting from the existence of multiple light sources, and therefore usually exhibit poor properties such as low overall brightness and non-uniform illumination. In addition, haze may degrade the image quality through its scattering and attenuation effects [11]. Consequently, images under such conditions are normally affected by low contrast and data loss. Additionally, the color of the artificial light sources affects the dehazing results and the quality of the recovered image. For haze removal under hazy environments, the atmospheric scattering model provides a well-established way of decomposing and formulating the haze formation process. Primarily as proposed in [11], and in advanced works over the years in [1] and [9], the atmospheric scattering model is generally formulated as denoted in (1). In (1), t(x) denotes the portion of light that reaches the acquisition device and can be expressed as t(x) = e^(−β d(x)), (2) where d(x) denotes the distance between the portion of interest in the target hazy scene and the camera, and β denotes the scattering coefficient of the medium. Although the formulation of (2) implies that I(x) approaches A as the distance d(x) tends to infinity, practical circumstances restrict this distance to finite, albeit large, values. This distance has an inverse correlation with the transmission parameter, which illustrates one of the key challenges in image dehazing: effectively improving image features within distant scene areas. In general, when the atmospheric light A and the scene transmission t are determined, the actual scene is attainable and can be expressed as J(x) = (I(x) − A)/t(x) + A. (3) Tan [2] proposed a procedure to tackle the challenge of corrupted images by increasing the image contrast in a spatially consistent way. This is inspired by the observation that haze-free images have more contrast than hazy ones. Since it assumes the atmospheric intensity is globally uniform, it may fail under night-time conditions due to the presence of incident lights. Tang et al. [12] established a dehazing procedure through a learning framework. This technique assumes that transmittance is independent of scene content and is constant within a small patch. The scheme synthetically constructs hazy patches with several transmittance values from haze-free natural image patches, and a regression model is then learned from this data to estimate transmittance. However, the procedure demonstrates color shifts due to the spatially varying lights present in night-time hazy scenes. Tarel et al.
[13] established an innovative procedure, and variations of it, for restoring scene visibility. The core benefit of this technique is its swiftness; it was the first method to allow visibility restoration in real time, and its complexity scales linearly with the number of pixels in the image. However, the dehazing results demonstrate abnormal coloring or color distortion, with some haze remaining at the edges of the images. Gibson et al. [14] proposed a technique based on the dark channel prior that applies a median filter as a substitute for the minimum filter; this technique is swift in recovering images, but the restored images inherit low luminance and dark halos. Liu et al. [15] proposed a technique with a parameter that can automatically regulate the amount of haze to be removed. Though it has the advantage of fast processing, the restored images are associated with color distortions. Aside from prior-based feature methods, one of the key influences on single image dehazing is Retinex-based theory [16]. This is based on the main assumption that any given image can be divided into reflectance and illumination. These assumptions provide the foundation on which image dehazing schemes are able to regulate the observed quality of the image. Some of the extended works in relation to this theory are the Single-Scale Retinex (SSR) [17] and the Multi-Scale Retinex (MSR) [18]. Although the potency of these proposed approaches has been comprehensively established on daylight hazy images, they inherit significant limitations on images under night-time hazy scenes. Image haze removal tends to be challenging under night-time conditions due to factors such as multiple light sources in the hazy image, inadequate brightness information in the hazy image and, in some cases, the presence of differently colored artificial lights causing non-uniform illumination. Since night-time dehazing remains an ill-posed problem, researchers have expressed a lot of interest in this domain in order to solve this challenge. Quite a few methods have been proposed specifically for night-time dehazing conditions. Pei et al. [19] proposed a night-time image dehazing method that exploits the same imaging model applied in daytime dehazing and introduces a preprocessing phase. This color transfer preprocessing phase attempts to solve the color unfairness due to the presence of artificial light by altering the color information to that of a target image. A modified dark channel prior is then applied to dehaze the image, followed by local contrast enhancement via bilateral filtering, which results in an overall grayish scene. However, the output of this method may differ from the expected illumination-balanced one and will affect the final dehazed result. Zhang et al. [20] established a new imaging model to account for spatially varying atmospheric light. Their preprocessing phase compensates the incident light intensity by applying a Retinex technique and also improves the colors of the incident light before applying the dark channel prior for dehazing. This method is a relaxed model, which estimates the atmospheric light using a local neighborhood instead of computing it globally. However, the dehazing results demonstrate obvious glow effects in the restored image. Li et al. [21] proposed a method for haze removal of night-time images by reducing the halos caused by multiple scattering of light near the light sources. Meng et al.
[22] proposed a method based on color transfer theory, where the illumination level of a night-time hazy image can be artificially improved through adaptively choosing the reference image, in contrast to the classical global-to-global color transfer approach. The modified model highlights the diverse features of various areas of the original image, and it performs well even when the night-time image is constrained by the existence of numerous artificial light sources. Moreover, the enhanced dehazing scheme of [22] is based on guided image filtering, since the significant parameters of a dehazing scheme using the atmospheric degradation model are challenging to obtain in night-time imaging environments. In addition, the significant model parameters of the guided image filter are selected according to the boundary information of the initial image instead of the initial image itself, which makes it more suitable for dehazing images acquired under night-time environments. Proposed Scheme This section introduces the overall framework of the proposed night-time single image dehazing scheme, illustrated in Figure 1. The scheme is capable of overcoming the challenge of night scenes that inherit artificial and spatially non-uniform environmental illumination. The proposed scheme estimates the atmospheric light locally within a selected mask on the hazy image independently. We leveraged the adaptive filtering capacity of the guided filter [23] to acquire a fixed value μ_x at each local region centered at x. More specifically, we partitioned the image into a grid of cells, each cell with a size l × w. Then, guided filtering with a filter size of 3 × 3 was performed on each grid cell with a stepping value n. Following this, we identify the pixels within each local mask of the hazy image that fall within the target range and attribute them to haze. The pixels estimated in the dehazed masks are incremented by the value of 1 at each iteration for each mask. The scheme performs adaptive dark channel filtering on the reconstructed mask, and the result is stored in an iterative manner. Applying the acquired estimated atmospheric intensity map, we estimate the transmission through the dark channel prior, where Ω denotes a small patch and x is the location index inside the patch; the upper RGB threshold is represented as max(RGB). Based on the step value n, we ensure that k + n stays below the maximum threshold values. While this condition holds and once all iterations are completed, the scheme continues with a mask combination operation for all the reconstructed masks of the night-time hazy image to produce an enhanced image under the night-time environment. Moreover, the reconstructed mask has the advantage of estimating the atmospheric intensity evenly across the local mask; hence the combined mask operation yields an enhanced, reconstructed and uniform image under the night-time environment.
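The authors' exact iteration (the thresholds k and n, the per-mask increments and the guided-filter refinement) is not fully specified in the extracted text; the sketch below only illustrates the core idea of estimating the atmospheric light per cell and applying the standard dark-channel transmission estimate locally rather than globally.

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Local (per-cell) dark-channel dehazing sketch under the assumptions stated above.
def dehaze_local(I, cell=64, patch=15, omega=0.95, t0=0.1):
    """I: float image in [0, 1] with shape (H, W, 3). Returns a dehazed estimate J."""
    H, W, _ = I.shape
    J = np.zeros_like(I)
    for y in range(0, H, cell):
        for x in range(0, W, cell):
            block = I[y:y + cell, x:x + cell]
            dark = minimum_filter(block.min(axis=2), size=patch)   # local dark channel
            # local atmospheric light: mean colour of the brightest dark-channel pixels
            idx = np.unravel_index(np.argsort(dark, axis=None)[-10:], dark.shape)
            A = block[idx].mean(axis=0) + 1e-6
            t = 1.0 - omega * minimum_filter((block / A).min(axis=2), size=patch)
            t = np.maximum(t, t0)[..., None]                       # floor the transmission
            J[y:y + cell, x:x + cell] = (block - A) / t + A        # per-cell scene recovery
    return np.clip(J, 0.0, 1.0)
```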
Experimental Verification and Evaluation Results This section discusses the evaluation and verification of the various dehazing methods. In image dehazing, it is difficult to evaluate performance and efficiency simply by human vision; for this reason, it is essential to compile all the characteristics of the various dehazing algorithms and analyze them together. To ensure impartiality, all the corresponding algorithms were realized on the same MATLAB platform. Qualitative visual assessment is a subjective approach based on the visual qualities of dehazed images, whereas quantitative experimental evaluation represents an objective assessment approach based on intrinsic properties or metrics. 1) Qualitative Visual Evaluation In this section, we discuss the qualitative visual assessment of the proposed scheme, which performs efficiently in restoring the visual quality of the image within the RGB channels. Besides, the scheme significantly improves the depth information within the scene, as illustrated in Figure 2, where (a) is the input hazy image, (b) is the depth map of the hazy scene, (c) is the dehazed image and (d) represents the enhanced depth map of the dehazed scene. Based on human observation of these images with their corresponding depth maps, the refined image inherits good quality with a bright corresponding depth map as compared to the hazy image and its depth map. Furthermore, in Figure 3, (a) is the input hazy image, (b) represents He et al. [3], (c) represents Li et al. [20], (d) represents Meng et al. [22], (e) represents Zhang et al. [21] and (f) presents our proposed scheme. We discuss and highlight the visual qualities of the proposed scheme and the corresponding state-of-the-art image dehazing methods based on the visual perception of the refined image by human observation. From Figure 3, the output of (b) He et al. appears to inherit dim properties associated with halos in the dehazed image. The output of (c) Li et al. is quite substantial; however, it is associated with color distortions and the presence of haze on the edges of the dehazed image. The output of (d) Meng et al. [22] presents astonishing results; however, the restored image inherits an overestimation quality, which results in color infidelities. The output of (e) is quite reasonable; however, the recovered image appears to have low illumination and unnatural colors. Finally, the output of (f), the proposed scheme, is astonishing and gains superiority over most of the state-of-the-art. However, for high depth ranges, dehazing results are only effective for close-ranged patches, while a significant volume of haze continues to remain in distant regions. This drawback is associated with the lack of effective depth-modeling strategies capable of intuitively adjusting parameters for the numerous depth patches within the image. This has numerous adverse effects on higher-level machine vision or learning schemes which may rely only on single images from the scene for feature extraction. Finally, due to the stringency of the filtering applied in some of the state-of-the-art, some results may end up with smoothing effects that remove certain crucial edge and boundary features. In contrast with the state-of-the-art, the proposed method achieves results with high levels of visual clarity and color fidelity. 2) Quantitative Experimental Evaluation In this section of the paper, we conduct and discuss an objective evaluation procedure, used to evaluate quantitative data based on the corresponding objective metrics. In contrast to the previous section, where the visual properties of various state-of-the-art dehazing algorithms were compared based on the subjective visual opinion of a user, such an assessment alone is not adequate to draw conclusions from.
Image dehazing algorithms are not constrained to image restoration only; they also have great potential for improving the intrinsic properties of the restored image. For this reason, in order to determine the performance of the intrinsic properties of the output images of the various algorithms, the intrinsic metrics used to compare and evaluate the quantitative data comprise the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio (SNR) and Structural Similarity Index Measure (SSIM), respectively. Nonetheless, the main challenge of objective evaluation or quantitative assessment is the unavailability of a reference image, since the computation of the previously stated metrics demands both the restored image and a reference image from before the corruption by haze. The tables below present the objective metric performances as well as comparisons of the selected state-of-the-art unconstrained night-time haze removal procedures. The quantitative results presented in Table 1 emphasize the performances and the effectiveness of the selected dehazing procedures under night-time hazy conditions by assessing the recovered images based on their intrinsic image properties. Our proposed scheme distinctively achieves superiority and excellent results over the selected state-of-the-art on the MSE and PSNR metrics (indicated in bold), since it achieves a lower mean squared error and a better peak signal-to-noise ratio than the state-of-the-art; the proposed scheme also achieves satisfactory results on the signal-to-noise ratio and structural similarity index measure (SSIM) metrics compared with the existing methods. Additionally, we take into consideration the real-time system requirements of these methods, which makes it necessary to address the computational complexities of the dehazing approaches. We establish comparisons of the state-of-the-art and the proposed scheme based on computational complexity in Table 2 below. The results indicate that Zhang et al. [21] attains superiority over all the experimented approaches, followed by our proposed scheme, which attains impressive computational speeds in second place over the other state-of-the-art. Table 2. Computational comparison: Li et al. [20]: 1079.22; Zhang et al. [21]: 6.11; Meng et al. [22]: 124.76; Proposed Scheme: 50.63.
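These full-reference metrics can be computed with standard library routines; the sketch below uses scikit-image implementations (the authors' own MATLAB evaluation code is not given), and all of them require a haze-free reference image.

```python
import numpy as np
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

# Full-reference image-quality metrics as typically computed for Table-1-style comparisons.
def evaluate(reference, restored):
    """Both images as float arrays in [0, 1] with shape (H, W, 3)."""
    mse = mean_squared_error(reference, restored)
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=1.0)
    snr = 10 * np.log10(np.mean(reference ** 2) / max(mse, 1e-12))
    return {"MSE": mse, "PSNR": psnr, "SNR": snr, "SSIM": ssim}
```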
Conclusion The work in this paper presents an efficient and attractive single image haze removal procedure capable of image restoration under night-time hazy environments. Our proposed scheme begins by generating masks in a given night-time hazy image, inspired by the dark channel prior, which is capable of estimating depth information and restoring depth features within a local mask of a target hazy image. The approach takes into consideration the non-uniform illumination due to the presence of an artificial light source or multiple light sources in images captured under night environments. Besides, the proposed scheme addresses haze effects due to the scattering and attenuation of light, since it is performed iteratively on local masks of a given hazy image. In addition, experimental verifications and comparisons were conducted for the proposed scheme and the state-of-the-art. In the experimental qualitative analysis, the proposed scheme achieves superiority in terms of the clarity and vividness of the image over the selected state-of-the-art. Similarly, in the quantitative experimental analysis, the proposed scheme distinctively achieves superiority over the selected state-of-the-art on the MSE and PSNR metrics; subsequently, it is capable of achieving a lower mean squared error as well as a better peak signal-to-noise ratio than the state-of-the-art. Besides, the proposed scheme has the advantage of achieving satisfactory results on the signal-to-noise ratio and structural similarity index measure (SSIM) metrics over the existing methods. Our proposed scheme demonstrates impressive results in terms of computational speed over the state-of-the-art. Comparatively, the proposed scheme achieves lower computational complexity, which makes it more feasible in the real-time domain than the selected state-of-the-art.
4,551.4
2019-11-04T00:00:00.000
[ "Environmental Science", "Computer Science" ]
A Comparison of the Population Genetic Structure and Diversity between a Common (Chrysemys p. picta) and an Endangered (Clemmys guttata) Freshwater Turtle The northeastern United States has experienced dramatic alteration to its landscape since the time of European settlement. This alteration has had major impacts on the distribution and abundance of wildlife populations, but the legacy of this landscape change remains largely unexplored for most species of freshwater turtles. We used microsatellite markers to characterize and compare the population genetic structure and diversity between an abundant generalist, the eastern painted turtle (Chrysemys p. picta), and the rare, more specialized, spotted turtle (Clemmys guttata) in Rhode Island, USA. We predicted that because spotted turtles have disproportionately experienced the detrimental effects of habitat loss and fragmentation associated with landscape change, these effects would manifest in the form of higher inbreeding, less diversity, and greater population genetic structure compared to eastern painted turtles. As expected, eastern painted turtles exhibited little population genetic structure, showed no evidence of inbreeding, and showed little differentiation among sampling sites. For spotted turtles, however, results were consistent with certain predictions and inconsistent with others. We found evidence of modest inbreeding, as well as tentative evidence of recent population declines. However, genetic diversity and differentiation among sites were comparable between species. As our results do not suggest any major signals of genetic degradation in spotted turtles, the southern region of Rhode Island may serve as a regional conservation reserve network, where the maintenance of population viability and connectivity should be prioritized. Introduction Rhode Island, a small state in the northeastern United States, has experienced intensive and large-scale landscape alteration in the last several centuries. Clearing of the land for timber and agriculture began in the 17th century and peaked in the mid-19th century, when approximately 70% of the state was deforested [1]. Freshwater wetlands have undergone immense alteration during this time, as well. Drainage, filling, damming, and channelization occurred for centuries without regulation, resulting in the loss of an estimated 37% of the wetlands in Rhode Island between 1780 and 1980 [2,3]. Undoubtedly, these human activities have had major impacts on the distribution, abundance, and connectivity of populations of wildlife throughout the state and region, but for most species the legacy of this change remains largely anecdotal or completely unexplored. Populations of freshwater turtles in the region have certainly been impacted by these alterations, but not necessarily in a uniform fashion across species. Some species have experienced declines; in this study, we compare the population genetics of one such species, the spotted turtle, with the more common and abundant painted turtle. We made several predictions based on the insight that spotted turtles occur in smaller, more isolated populations, and that they probably exhibit reduced rates of gene flow compared to painted turtles. We predicted that spotted turtles would have (1) less genetic diversity, (2) higher inbreeding, (3) greater differentiation among sites, and (4) recently undergone reductions in effective population size (i.e., a population bottleneck), as compared to painted turtles. Study Area and Sampling Our study was conducted throughout the state of Rhode Island, located in southeastern New England.
Rhode Island is the smallest state geographically in the United States (approximately 2700 square kilometers, when excluding coastal waterways), but ranks second highest in population density [34]. The highest levels of land development and human population densities occur along the south coast and around Narraganset Bay in the eastern part of the state. Approximately 54% of the state is forested, with pine, oak, and maple forests dominating the western part of the state [35]. Mean elevation is approximately 60 m, with a highest point of 247 m. Rhode Island experienced repeated glaciation during the Pleistocene Epoch, the most recent of which was the Laurentide Glacier. This glacier reached a terminus about 20 km south of Block Island between 21,000-24,000 years ago, and subsequently retreated northward leaving Rhode Island ice free by 16,000 years before present [36,37]. Today, Block Island is a 284 km 2 island located approximately 15 km south of the Rhode Island coast. Block Island has existed as an island for approximately 15,000 years since sea level rises associated with the retreat of the Laurentide Glacier caused the catastrophic drainage of glacial lakes along the southern New England terminal moraine [36]. From 2013-2015, small (0.1-1.8 ha), hydrologically isolated (i.e., discrete, non-riparian) wetlands throughout the state were randomly selected across a gradient of forest cover for a mark-recapture study focusing on occupancy and demography [25] (see reference for additional information on site selection and sampling methodology). Tissue collection for genetic analysis took place concurrently at a subset of these wetlands. Painted turtle tissue was collected from a group of wetlands that was representative of the conditions along this gradient and would ensure an adequate number of individuals for population genetics analysis [38]. One additional wetland was sampled for painted turtles on Block Island to serve as an outgroup. Because spotted turtles were relatively rare, tissue was collected from all individuals encountered during the study, and several additional wetlands known to contain the species were also sampled in order to augment the dataset. Two of these additional wetlands deviated from the other wetlands in notable ways. Site 24 was a slow-moving riparian wetland with peripheral freshwater marshes and adjacent forested vernal pools. Turtles were sampled from within an approximately 15 ha area that contained both the vernal pools and the riparian wetlands. Site 29 consisted of a matrix of permanent bog and forested vernal pools within a 2.5 ha area. Using historic aerial imagery, we determined that 6/33 (~18%) of the wetlands sampled were manmade or heavily modified after 1939, the year of the oldest available imagery [39]. These were sites 4, 7, 12, 15, 18, and 21. These did not include any of the wetlands where spotted turtles were sampled. For all individuals, less than 1 mL of blood was collected from the sub-carapacial vein using a 25 gauge sterile needle and a 3 mL syringe and placed immediately on a Whatman FTA sample collection card (GE Healthcare, Buckinghamshire, UK). These cards were stored at room temperature and used for subsequent DNA extraction. All individuals were released at the site of capture. The Institutional Animal Care and Use Committee of the University of Rhode Island approved our methods (protocol #12-11-005). 
All work was carried out under scientific collecting permits (numbers 2013-12, 2014-25, and 2015-5) of the Rhode Island Department of Environmental Management. Microsatellite Genotyping We used the DNEasy Blood and Tissue Kit (Qiagen Corporation, Valencia, CA, USA) to extract DNA using the standard protocol. For both species, we amplified previously described microsatellite loci [40,41]. We amplified 18 loci for painted turtles and 17 loci for spotted turtles, organizing these into 6 and 5 multiplexes, respectively. We carried out polymerase chain reaction (PCR) using the Qiagen Type-it Microsatellite PCR Kit under the conditions recommended in King and Julian [41] but with a modified initial denaturing step of 95 • C for 5 min. We used negative controls on PCR plates to identify any potential contamination. Fragment size analysis of PCR products was conducted at the DNA Analysis Facility on Science Hill at Yale University on a 3730xl DNA Analyzer with a 96-capillary array, using GeneScan 600 LIZ dye size standard (Applied Biosystems, Foster City, CA, USA). Allele peaks were visualized and called using Geneious 7.0.6 [42]. We used Geneious and MICRO-CHECKER [43] to search for genotyping errors. We re-ran PCR and repeated genotyping for approximately 4% of our samples to calculate a genotyping error rate. Genetic Diversity and Differentiation We used a variety of packages developed for the R statistical platform v.3.3.3 [44] to estimate population genetic statistics. We used the poppr package [45] to quantify missing data and to test for linkage disequilibrium among loci. We used the pegas package [46] to test for deviations from Hardy-Weinberg Equilibrium (HWE) for each locus, and for each combination of locus and sampling site, using an exact test based on 10,000 Monte Carlo permutations of alleles. P-values were assessed after Bonferroni correction in which the alpha level (0.05) was divided by the number of tests. We used the popgenreport package [47] to estimate the frequency of null alleles for each locus [48], private alleles per site, and mean allelic richness using the rarefaction method to correct for variation in sample size [49]. We calculated expected heterozygosity (H e ), observed heterozygosity (H o ), and inbreeding coefficients (F IS ) for each site, and calculated 95% confidence intervals for F IS estimates using 10,000 bootstrap iterations, all using the diveRsity package [50]. We used the diveRsity package to calculate the global measures of F IS and F ST , and to calculate pairwise F ST values for all sites. All F-statistics used the bias-corrected formulation of Weir and Cockerham [51]. As an alternative measure of population differentiation and to maximize comparability with other studies, we also used the diveRsity package to calculate pairwise values of the bias-corrected Jost's D est [52,53]. The diveRsity package was used to estimate 95% confidence intervals for all measures of differentiation using 10,000 bootstrap iterations. We used the poppr package to perform an analysis of molecular variance (AMOVA). We conducted the test with two stratifications such that variance of allele frequencies was partitioned within and among sites [54]. For the global F-statistics and AMOVA analyses, we excluded the Block Island site for painted turtles, and included only the five spotted turtle sites with sample sizes >4 to limit confounding factors, such as outliers and small sample sizes [55], and thereby maximized the comparative inference between the two species. 
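As an illustration of the diversity statistics estimated above, the sketch below computes observed and expected (unbiased) heterozygosity and F_IS from a simple genotype array, with a bootstrap over loci for a rough confidence interval. It is a NumPy-based stand-in for the R packages actually used (diveRsity, popgenreport); the data layout and variable names are hypothetical.

```python
# Illustrative per-site heterozygosity and F_IS from a genotype table.
import numpy as np

def heterozygosity_and_fis(genotypes, n_boot=10000, seed=0):
    """genotypes: array of shape (n_individuals, n_loci, 2) holding allele IDs;
    missing data coded as -1. Returns mean Ho, mean He, F_IS, and a 95% CI."""
    rng = np.random.default_rng(seed)
    n_ind, n_loci, _ = genotypes.shape
    ho, he = np.zeros(n_loci), np.zeros(n_loci)
    for l in range(n_loci):
        g = genotypes[:, l, :]
        g = g[(g >= 0).all(axis=1)]             # drop missing genotypes
        n = len(g)
        ho[l] = np.mean(g[:, 0] != g[:, 1])     # observed heterozygosity
        alleles, counts = np.unique(g, return_counts=True)
        p = counts / counts.sum()
        he[l] = (2 * n / (2 * n - 1)) * (1 - np.sum(p ** 2))  # unbiased He
    fis = 1 - ho.mean() / he.mean()
    boots = []
    for _ in range(n_boot):                     # bootstrap over loci
        idx = rng.integers(0, n_loci, n_loci)
        boots.append(1 - ho[idx].mean() / he[idx].mean())
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return ho.mean(), he.mean(), fis, (lo, hi)
```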
Population Structure We used the ade4 package to perform a Mantel test with 10,000 permutations to test for genetic isolation by distance. We used Nei's standard [56] measure of genetic distance to create the genetic matrix, and geographic locations centered on individual wetlands or on a geographic mean when turtles were sampled from multiple wetlands, to create the Euclidean distance matrix. For painted turtles, we did not include the Block Island site, and for spotted turtles we included only the five sites, with sample sizes >4 to avoid falsely inflating measures of genetic distance. We used the program STRUCTURE v.2.3.4 [57] to characterize the population genetic structure of both species [58] and to test our prediction of a greater degree of subpopulation structure in spotted turtles. For all runs, we assumed an admixture model with correlated allele frequencies and employed the LOCPRIOR parameter using sampling location as the additional sample information. The LOCPRIOR parameter is informative in situations with weak population structure, such as that which may be expected given the spatial scale of our study [58,59]. In all cases, we performed 20 independent iterations of runs consisting of a burn-in of 200,000, followed by 500,000 MCMC repetitions, which was sufficient for all runs to reach convergence. For painted turtles, we ran an initial analysis with all individuals included (hereafter complete analysis) and a second analysis with a maximum of 25 individuals selected randomly (hereafter subset analysis) from each site to ensure that sample size unevenness was not influencing results [60]. We specified the range of K as 1-10 for both runs. For spotted turtles, we ran an initial analysis with all individuals from all sites (i.e., a complete analysis), and a second analysis with only sites with more than 9 individuals, while also limiting site 29 to only 30 randomly selected individuals (i.e., a subset analysis). We specified the range of K as 1-11 for the complete analysis, and 1-4 for the subset analysis. We considered both the ln Pr(X|K) and the ∆K method [61] with STRUCTURE Harvester [62] to evaluate the most likely number of clusters. We used CLUMPP v.1.1.2 [63] and distruct v.1.1 [64] software for post-hoc data processing and visualization. Comparison of Pooled Groups In order to more directly compare the genetic statuses of the two species, we created one geographically defined pooled group consisting of multiple sites, for each species. The geographic extent of the pooled groups was defined such that it would include the vast majority of spotted turtle samples and maximize the parity in sample size between the two species ( Figure 1). For spotted turtles, this included sites 24, 25, 26, 27, 29, and 30. For painted turtles, this included sites 5, 7, 8, 9, 10, and 11. For each pooled group, we used the diveRsity package to estimate H e , H o , and F IS , and used popgenreport to estimate mean allelic richness. For each pooled group, we used the program BOTTLENECK v.1.2.02 [65] to test the prediction that spotted turtles were more likely than painted turtles to have undergone recent reductions in effective population size. To test for the signature of heterozygosity excess, we considered results from both a two-tailed sign test [66] and a one-tailed Wilcoxon signed-rank test using the two-phase mutation model (TPM), with 10,000 iterations used to generate a distribution of expected equilibrium heterozygosity. Following the recommendations of Peery et al. 
[67], we used a value of 3.1 for the mean size of multi-step mutations, which was used to specify a variance for the TPM [68]. We then conducted separate tests using values of 0.05, 0.15, 0.25, and 0.35 for the proportion of multi-step mutations in the TPM. To estimate the effective population size (N e ) for each pooled group, we used the program NeEstimator v.2.1 [69], using the linkage disequilibrium method under the assumption of random mating. We performed estimates using all possible alleles, and excluding alleles with a frequency <0.05. We report both parametric and jackknife 95% confidence intervals for all estimates. Sampling and Genotyping We collected tissue samples from 647 painted turtles from 22 sites (mean = 29.7 individuals/site, SE = 2.2), and 148 spotted turtles from 11 sites, but only five of these 11 sites yielded enough individuals for the majority of population genetics analyses (mean = 27.4 individuals/site, SE = 6.4, n = 5; Figures 1 and 2). We retained 12 of 18 microsatellite loci for painted turtles (Table S1). Excluded loci were GmuB67 and GmuA32, which were monomorphic; GmuD87 and Cp10, which had high levels of missing data (>13%) and high frequencies of null alleles (0.120 and 0.219, respectively); and GmuD121 and Cp2, which had high frequencies of null alleles (0.205 and 0.158, respectively). GmuD87, GmuD121, and Cp10 deviated most consistently from HWE among the sampling sites (Figure S1). For retained loci, the total missing data was 3.6%. We retained 16 of 17 loci for spotted turtles (Table S1). We removed the locus GmuD28, which had a high frequency of null alleles (0.174). For the retained loci, the total missing data was 0.6%. There was no evidence of linkage disequilibrium among retained loci for either species. The genotyping error rate was approximately 2.3%. Comparison of Pooled Groups For the painted turtle pooled group, H e was 0.64, H o was 0.66, F IS was −0.026 (95% CI: −0.051 to −0.001), and mean allelic richness was 10.27. For the spotted turtle pooled group, H e was 0.68, H o was 0.66, F IS was 0.039 (95% CI: 0.015 to 0.064), and mean allelic richness was 8.59 (Table 1). Painted turtles exhibited no evidence of a recent genetic bottleneck, with all tests returning non-significant results. For spotted turtles, both the sign test and the Wilcoxon test at the highest TPM level returned P-values <0.05, suggesting the signal of a recent population decline (Table 2). Estimates of effective population size were higher for painted turtles, especially when all alleles were included in the analysis, but there was substantial overlap in confidence intervals between the two species (Table 3).
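The logic of the heterozygosity-excess tests underlying the bottleneck results above can be sketched as follows. In practice the equilibrium heterozygosities come from BOTTLENECK's coalescent simulations under the TPM; here they are taken as a given input array, so this is an illustrative stand-in rather than the program itself.

```python
# Sketch of the sign and Wilcoxon heterozygosity-excess tests (BOTTLENECK-style logic).
import numpy as np
from scipy.stats import wilcoxon, binomtest

def heterozygosity_excess_tests(he_observed, he_equilibrium):
    """he_observed, he_equilibrium: per-locus heterozygosities of equal length;
    he_equilibrium is the expected value at mutation-drift equilibrium (from TPM simulations)."""
    diff = np.asarray(he_observed) - np.asarray(he_equilibrium)
    # one-tailed Wilcoxon signed-rank test for an excess of heterozygosity
    w_stat, w_p = wilcoxon(diff, alternative="greater")
    # two-tailed sign test: number of loci in excess versus the 50:50 expectation
    n_excess = int(np.sum(diff > 0))
    sign_p = binomtest(n_excess, n=len(diff), p=0.5).pvalue
    return {"wilcoxon_p": w_p, "sign_test_p": sign_p}
```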
The vast majority of genetic variance occurred within sites for both species (AMOVA: painted turtle = 96.5%; spotted turtle = 97.9%), with the remaining variance partitioned among sites, and we found no evidence for isolation by distance in painted turtles (r = 0.097, p = 0.128, 21 sites) or spotted turtles (r = −0.454, p = 0.926, 5 sites). STRUCTURE results for painted turtles clearly distinguished the Block Island site from all mainland sampling locations in all runs. In the complete analysis, the ∆K method suggested two clusters, and the ln Pr(X|K) method suggested six clusters. In the subset analysis, both K selection methods suggested four clusters (Figure 2A,B; Figure S2). In both analyses in which K ≥ 4, the majority of sites showed a lack of definitive assignment of individuals to a particular cluster, but several sites did show a relatively high probability of assignment to a particular cluster. In the complete analysis, sites 2, 12, 15, 18, and 21 exhibited the highest probabilities of belonging to independent clusters. In the subset analysis, sites 12, 15, and 21 exhibited the highest probabilities of belonging to independent clusters. STRUCTURE results for spotted turtles suggested two genetic clusters in the complete analysis, with site 29 distinguished from the other sites. In the subset analysis, this relationship did not persist and the ∆K and ln Pr(X|K) methods suggested different numbers of clusters (Figure 2C,D; Figure S2). The ln Pr(X|K) method suggested no structure (i.e., K = 1), and the ∆K method suggested three clusters with a greater amount of admixture in sites 24 and 27. Discussion Measures of genetic diversity were mixed, with observed heterozygosity similar for both species, but spotted turtles exhibited a lower allelic richness. Population genetic structure was comparable for both species, highlighted by little differentiation among sites and no evidence of isolation by distance for either species. However, it should be noted that the Mantel test for spotted turtles included only five sites, thereby limiting the statistical power of the test and our ability to detect a trend. For painted turtles, both global and pooled group F IS was less than zero, suggesting outbreeding. For spotted turtles, however, global and pooled group F IS was greater than zero, indicating a modest amount of inbreeding. There was tentative evidence of a recent population decline in the spotted turtle pooled group, whereas there was no evidence of a population decline in the painted turtle pooled group. Overall, the results were consistent with some predictions and inconsistent with others. We interpret this as limited evidence that the spotted turtle has experienced some, albeit modest, genetic degradation in our study area. Genetic Diversity Lower mean allelic richness in spotted turtles suggests less genetic diversity compared to painted turtles, yet the observed heterozygosity was identical in the two pooled groups.
For both species, estimates of observed and expected heterozygosity and allelic richness were comparable to those from other studies of turtles using microsatellites [70], which suggests no significant depletion of genetic diversity. However, long-lived species can mask declines in genetic diversity even after prolonged population declines, making interpretation difficult [71]. A comparison of genetic diversity in fragmented populations of spotted turtles and midland painted turtles (C. picta marginata) in Indiana found lower diversity in spotted turtles [72]. The authors identify a smaller habitat patch size, lower population density, and greater isolation of spotted turtle populations as potential factors, but low sample sizes and the possibility of different mutation rates of the genetic markers used for the different species limit strong conclusions from this study. In a Wisconsin study, genetic diversity was highest in painted turtles, intermediate in snapping turtles (Chelydra serpentina), and lowest in Blanding's turtles (Emydoidea blandingii) [73]. This was consistent with the prediction that genetic diversity would decrease with reduced mobility and greater habitat specialization among these turtle species. A study comparing the same three species in Illinois yielded similar results, with populations of Blanding's turtles, snapping turtles, and painted turtles exhibiting increasing allelic richness and heterozygosity, respectively [74]. The study in Illinois did not detect intraspecific differences between fragmented and relatively undisturbed sites, however. While some studies have demonstrated strong empirical evidence of a relationship between genetic diversity, life history, and the landscape, it remains difficult to compare genetic diversity directly between species when different loci are used, as these loci can influence estimates [75,76]. Standardized approaches for comparing genetic diversity among species and studies are needed so that conservation scientists can better resolve causality for this important measure. Population Structure We documented weak, but existing, differentiation among some painted turtle sampling sites. The Block Island site was only moderately differentiated, despite very limited opportunity for gene flow with the mainland since the Pleistocene [77]. The post-glacial colonization of the northeastern United States by painted turtles occurred as populations expanded from southern refugia after glaciers retreated [78]. Painted turtles are physiologically well adapted to cold climates [79,80] and, along with snapping turtles, were the first turtles to expand northward into formerly glaciated areas [81]. The exact time at which these species first colonized what is now Block Island and mainland Rhode Island is not known, but it probably took place between 10,000 and 15,000 years ago [78,81]. A characteristic reduction in genetic diversity associated with this relatively recent post-glacial range expansion [82,83], along with high rates of contemporary gene flow, may be responsible for the lack of pronounced population genetic structure. The STRUCTURE results indicated that the majority of painted turtle sites were assigned to multiple genetic clusters, a common signature of weak population structure [58]. However, sites 12, 15, and 21 did exhibit consistent signals of substructure, both in pairwise measures of differentiation and in STRUCTURE results. 
Under both scenarios where K ≥ 4, these sites contained the highest probabilities of belonging to the distinct clusters (Figure 2A-B). Interestingly, these three sites are all manmade or heavily modified [25]. Sites 12 and 15 were both constructed between 1972 and 1976, whereas site 21 predates the earliest available aerial imagery but is clearly a pool that formed when a former stream was bisected by a road. These three sites also contain plentiful nesting habitats immediately adjacent to the wetland. Recent colonization by a small number of individuals (i.e., a founder effect), followed by a rapid expansion in population size due to recruitment, may be responsible for this marked differentiation. Ultimately, however, we cannot say with certainty what is causing the observed genetic distinctiveness of these populations. Future studies comparing population genetics in manmade and natural wetlands would be instructive. Contrary to our predictions, we detected very little differentiation among spotted turtle sites. In fact, a smaller percentage of sites exhibited significant pairwise differentiation compared to painted turtles, but direct comparison is difficult because of the disparity in sample size (spotted turtle = 20 pairwise comparisons, painted turtle = 420 pairwise comparisons). All significant spotted turtle pairwise comparisons included site 29 and this site was also differentiated in the complete STRUCTURE analysis. Adults from site 29 were radiotracked for two years as part of another study and were found to exhibit limited movements and high levels of home range fidelity [84]. Given that dispersal is a requisite process for gene flow, if dispersal rates to neighboring wetlands are indeed low, limited gene flow could explain the higher differentiation. The spotted turtle STRUCTURE subset analysis resulted in a more ambiguous pattern of differentiation, and the fact that the ∆K and ln Pr(X|K) methods resulted in disparate results makes this difficult to interpret. Population Bottleneck and Effective Population Size We documented tentative evidence for a recent population decline in the pooled group of spotted turtles. We ran multiple tests under a range of different multi-step mutation model proportions to assess the robustness of the results [67]. Statistical evidence for a population bottleneck occurred at the higher proportion of the multi-step model in the TPM, where the test is most vulnerable to Type I error [68]. Thus, our results should be interpreted with caution. Bottleneck tests can be difficult to interpret, but results comparable to ours have been interpreted in a similar way as those for other species of turtles [71]. Due to overlapping and sometimes wide confidence intervals, the interpretation of effective population size estimates for the pooled groups proved difficult. Estimates for painted turtles were higher, especially under the all alleles scenario in which the painted turtle estimate was more than double that of the spotted turtle estimate. However, a jack-knife 95% confidence interval that has no upper limit precludes clear interpretation. Scope and Limitations For both species, the magnitude of the genetic structure that we did detect was very modest. Given the limited spatial scale of our study and the fact that we expected these sampling sites to be of post-Pleistocene origin and feature admixing to some degree, it should be emphasized that we were indeed seeking fine-scale genetic structure. 
Moreover, in our study area, the impact of human activities we intended to explore has occurred in the evolutionarily recent past (~250 years) and intensified only in the last~75 years. The number of painted turtle generations since the more intense period of human influence began is probably 4-7 generations and 12-25 generations for the longer period. The number of spotted turtle generations is probably 2-4 for the shorter period and 8-12 for the longer period. As it can be difficult to detect the effects of genetic drift in long-lived organisms, the spatial and temporal scales (i.e., time since habitat loss and fragmentation) of our investigation may have limited our ability to detect genetic differentiation and demographic events that have occurred in the recent past, particularly for spotted turtles, given their longer generation time and the smaller geographic range from which they were sampled. Simulation studies have demonstrated that F ST is relatively insensitive to disruptions in gene flow, especially when dispersal is limited and that other population-based metrics may be superior in detecting changes that have occurred in the recent past [85]. Compounding the issue, turtle DNA mutates slowly relative to that of other vertebrates [86,87]. Other studies of population genetics in freshwater turtles have failed to detect predicted genetic structure, even when there is strong empirical evidence of the effects of historic habitat fragmentation [74,88]. Nonetheless, the ability to detect strong genetic structure among sites in as few as 1-10 generations after fragmentation has been demonstrated in reptiles [89][90][91]. Given an ample number of generations, the same should be possible in turtles, but it is not yet clear how many generations are necessary, and this number likely varies among species. When working on such limited spatial and temporal scales, adequate sample size, number of markers used, and mutation rates of markers need to be considered to maximize the resolution of analyses [38,55,92,93]. Direct comparisons among studies can also be difficult, and standardized approaches and accepted minimums of markers and sample sizes would be helpful in improving the interpretability and context of individual studies. Conclusions Painted turtles are one of the most well-studied freshwater turtle species, largely because they are widespread and abundant. Our analysis confirms that they exhibit little population genetic structure across Rhode Island, making for an appropriate contrast with a far less abundant species. Some sites did exhibit modest genetic differentiation, but the reasons why remain elusive and warrant further investigation. Our study suggests that spotted turtles exhibit little population genetic structure at the spatial scale explored. These results reinforce that, from a genetic perspective, these species should be managed as contiguous populations at the landscape scale and that future studies of population genetics in freshwater turtles that wish to delineate differentiation should be carried out at an appropriately large scale. Our analysis provides some evidence that spotted turtles have experienced a greater degree of inbreeding and may have experienced population declines in the recent past in our study area. However, overall, diversity and population genetic structure in this species remain comparable to that of painted turtles. 
As we were unable to find strong evidence for genetic degradation in spotted turtles, the southern region of Rhode Island may be well suited to serve as a regional conservation reserve network where the maintenance of populations and connectivity among wetlands should be prioritized [94,95]. Relatively little is known about spotted turtle population genetics and how genetic structure varies range-wide. Much of what has been inferred is derived from studies of different species of freshwater turtles considered ecologically similar. Understanding the legacy, significant or not, of habitat loss and fragmentation on population genetic structure is critical for effective management and conservation of this species. Additional population genetic studies, at both local and regional scales, will help improve our understanding of the potential vulnerabilities to environmental and genetic stochasticity in this species of conservation concern. Supplementary Materials: The following are available online at http://www.mdpi.com/1424-2818/11/7/99/s1, Figure S1. P-values for all loci by sampling site combinations, Figure S2. ∆K and ln Pr(X|K) STRUCTURE Harvester results, Table S1. Summary statistics for all loci for painted turtles and spotted turtles, Table S2. Pairwise F ST (below diagonal) and Jost's Dest (above diagonal) measures of differentiation for all combinations of sampling sites of painted turtles, Table S3. Pairwise F ST (below diagonal) and Jost's Dest (above diagonal) measures of differentiation for all combinations of sampling sites of spotted turtles.
7,023.8
2019-06-26T00:00:00.000
[ "Environmental Science", "Biology" ]
A genome-wide search for gene-by-obesity interaction loci of dyslipidemia in Koreans shows diverse genetic risk alleles Dyslipidemia is a well-established risk factor for cardiovascular disease. Studies suggest that similar fat accumulation in a given population might result in different levels of dyslipidemia risk among individuals; for example, despite similar or leaner body composition compared with Caucasians, Asians of Korean descent experience a higher prevalence of dyslipidemia. These variations imply a possible role of gene-obesity interactions on lipid profiles. Genome-wide association studies have identified more than 500 loci regulating plasma lipids, but the interaction structure between genes and obesity traits remains unclear. We hypothesized that some loci modify the effects of obesity on dyslipidemia risk and analyzed extensive gene-environment interactions (GxEs) at genome-wide levels to search for replicated gene-obesity interactive single-nucleotide polymorphisms (SNPs). In four Korean cohorts (n=18,025), we identified and replicated 20 gene-obesity interactions, including novel variants (SCN1A and SLC12A8) and known lipid-associated variants (APOA5, BUD13, ZNF259, and HMGCR). When we estimated the additional heritability of dyslipidemia by considering GxEs, the gain was substantial for triglycerides (TGs) but mild for low-density lipoprotein cholesterol (LDL-C) and total cholesterol (Total-C); the interaction explained up to 18.7% of TG, 2.4% of LDL-C, and 1.9% of Total-C heritability associated with waist-hip ratio. Our findings suggest that some individuals are prone to develop abnormal lipid profiles, particularly with regard to TGs, even with slight increases in obesity indices; ethnic diversities in the risk alleles might partly explain the differential dyslipidemia risk between populations. Research about these interacting variables may facilitate knowledge-based approaches to personalize health guidelines according to individual genetic profiles. The contribution of each marginal marker to heritability was estimated using 2p(1-p)(log(OR))^2, where p is the MAF of a variant and OR is the estimated odds ratio from a logistic regression model for marginal associations (47). The contribution of each GxE marker was estimated using 2p(1-p)(log(ORG))^2/VP + 2ep((1-p) + 2p(e-1)^2)(log(ORGxE))^2/VP. In this equation, e is the prevalence of an environmental factor, VP is the phenotypic variance, and ORG and ORGxE are the estimated additive and gene-obesity interactive ORs from a logistic regression model for GxEs. We used GenABEL, the R package for genome-wide association analyses (48), to estimate the total heritability of dyslipidemia from the Healthy Twin Study, a family-based cohort study in Korea (Supplemental Table S2.e) (49, 50). GCTA, an analysis tool for genome-wide complex traits (51), was also used to estimate the SNP-based heritability attributable to all GWAS variants genotyped on a microarray. To transform the estimate of variance explained on the observed scale to that on the underlying scale, we assessed the prevalence of dyslipidemia using data from the Korean National Health and Nutrition Examination Survey (KNHANES) (52). Table 1 shows the baseline characteristics of the participants in each Korean genome cohort. We observed the age, sex, obesity-related traits, and age- and sex-standardized plasma lipid levels of the cohorts; all features were stratified by obesity status into subgroups based on BMI, WC, and WHR (Supplemental Table S1 and S2).
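To make the per-marker heritability contributions quoted in the Methods concrete, here is a small numerical sketch implementing the two expressions above. The odds ratios plugged in at the end are of the size reported later in the Results for the HMGCR-WHR interaction, while the MAF and exposure prevalence are invented for illustration.

```python
# Sketch of the per-marker variance-explained expressions quoted in the Methods.
import math

def marginal_contribution(p, OR):
    """Contribution of a marginal association: 2p(1-p)(log OR)^2."""
    return 2.0 * p * (1.0 - p) * math.log(OR) ** 2

def gxe_contribution(p, e, OR_G, OR_GxE, VP=1.0):
    """Contribution of a gene-obesity interactive marker, following the quoted
    expression; e is the prevalence of the environmental factor, VP the phenotypic variance."""
    additive = 2.0 * p * (1.0 - p) * math.log(OR_G) ** 2 / VP
    interactive = (2.0 * e * p * ((1.0 - p) + 2.0 * p * (e - 1.0) ** 2)
                   * math.log(OR_GxE) ** 2 / VP)
    return additive + interactive

# Illustrative numbers only: OR_G = 0.72 and OR_GxE = 1.22 match the magnitude of the
# HMGCR-WHR estimates in the Results; p = 0.25 and e = 0.4 are hypothetical.
print(gxe_contribution(p=0.25, e=0.4, OR_G=0.72, OR_GxE=1.22))
```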
We focused on the adjusted lipid concentrations to assess the trends of lipids in each obesity and abdominal obesity subgroup. As expected, age-and sex-adjusted lipid levels significantly worsened as the degree of obesity status increased in the combined Korean cohort (Supplemental Figure S1). Results We identified 55 SNPs showing genome-wide significant GxE effects on the risk of abnormal lipid profiles with at least one of the six obesity traits (Supplemental Table S3). By conducting LD clumping based on the genetic contribution to the risk of dyslipidemia, we detected 20 gene-obesity interactions due to novel SNPs near SCN1A and SLC12A8 and to lipid-associated SNPs near APOA5, BUD13, ZNF259, and HMGCR that were reported in previous GWASs. Table 2 shows the marginal and gene-obesity interactive effects of the newly identified variants on the risk of dyslipidemia; we summarized the novel GxEs according to the discriminators of obesity traits such as BMI, WC, and WHR. Figure 1 (Supplemental Table S4) describes the risk of abnormal lipid profiles for each genetic and environmental factor; we estimated the OR as the ratio of the probability of dyslipidemia occurring in each exposed group (G≠0 or E≠0) to the probability in a non-exposed group (G=0 and E=0). We identified three novel SNPs interacting with obesity traits to modify the risk of abnormal elevation of Total-C: rs2878417, rs7702895, and rs7733436. In particular, COL4A3BP exhibited synergistic effects with BMI and WC on the risk of abnormalities in Total-C. For the interplay between HMGCR and WHR, the marginal odds ratio (ORD) was 0.81 (95% CI, 0.78-0.84); ORG and ORGxE were 0.72 (95% CI, 0.68-0.77) and 1.22 (95% CI, 1.13-1.30), respectively. As shown in Figure 1.a (Supplemental Table S4.a), the multiplicative effect of abdominal obesity was 1.12 (95% CI, 0.96-1.31) for individuals with two wild-type alleles at rs7702895. The magnitude of the effect of abdominal obesity, however, increased with the number of minor alleles, with values of 1.46 (95% CI, 1.28-1.67) for heterozygous and 1.57 (95% CI, 1.26-1.95) for homozygous minor alleles. Table S4.e) describes the gene-obesity interactive effect on the risk of abnormal HDL-C reduction; the multiplicative effect for common homozygous or heterozygous and rare homozygous genotypes was 1.42 (95% CI, 1.22-1.65), 1.99 (95% CI, 1.57-2.52), and 6.24 (95% CI, 4.03-9.64), respectively. Table S4.b), the multiplicative effect of overweight class 1 was 1.00 (95% CI, 0.54-1.82) for rare homozygous genotypes. For common homozygous or heterozygous genotypes, on the other hand, obesity acted as a risk factor for abnormalities in LDL-C; the multiplicative effect was 1.82 (95% CI, 1.61-2.06) and 1.34 (95% CI, 1.12-1.61), respectively. with our previous findings. On the other hand, we could not detect any interactions of GxE markers with BMI; only the variants, previously detected with multiple analytical methods for testing GxEs (Table 2), were consistently identified. Similarly, we identified only one GxE SNP of dyslipidemia, located on APOA5, by using the alternative definition of obesity, the highest quintile of BMI or WC or WHR (Supplemental Table S8); we could not find any loci interacting with BMI or WC. 
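The odds ratios discussed above (ORD, ORG, ORGxE) come from logistic models containing main-effect and interaction terms; the minimal statsmodels sketch below illustrates such a model. The data frame, column names, and covariates are hypothetical stand-ins and do not reproduce the cohort analysis pipeline.

```python
# Illustrative logistic gene-by-obesity interaction model (not the study's pipeline).
import numpy as np
import statsmodels.formula.api as smf

def fit_gxe_model(df):
    """df holds a binary dyslipidemia status, an additively coded genotype (0/1/2),
    a binary obesity indicator, and covariates such as age and sex."""
    model = smf.logit("dyslipidemia ~ genotype * obesity + age + I(age**2) + sex", data=df)
    result = model.fit(disp=False)
    odds_ratios = np.exp(result.params)      # includes OR_G, OR_E and OR_GxE terms
    conf_int = np.exp(result.conf_int())     # corresponding 95% confidence intervals
    return odds_ratios, conf_int
```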
Table 3 shows the contributions of marginal associations and gene-obesity interactions to abnormal lipid profiles; we present the proportion of total heritability for each lipid explained by GWAS-identified SNPs, novel GxE loci, and the combined set of both lipid-associated and gene-obesity interactive variants (total genetic impact). The total and SNP-based heritabilities of the risk of abnormalities in Total-C were approximately 35.5% and 17.7-24.9%, respectively, after adjusting the risk of dyslipidemia by age, age 2 , and sex. The genetic contributions increased when we considered both marginal associations and geneobesity interactions, with differences between the GWAS-identified and total genetic impact of 1.1-1.9%. The total and SNP-based heritabilities of the risk of LDL-C abnormalities, on the other hand, were approximately 31.7% and 17.2-25.6%, respectively. For each obesity trait, the total genetic contributions including gene-obesity interactions to the risk of dyslipidemia were 0.9-2.4% higher than the marginal impact due only to direct associations. The contributions of the combined set of both GWAS-identified and GxE variants were markedly higher when several independent gene-obesity interactive loci were present for each pair of lipid traits and environmental factors. Figure 3.a, 3.b, and 3.c (Table 3) present the risk of abnormal elevation of TG. Genetic factors accounted for approximately 38.3% of the total variance of the risk of abnormalities in TG after adjusting the risk by age, age 2 , and sex. Genetic markers located on the genome-wide dense SNP microarray accounted for 18.4-26.4% of the overall variance for the risk of hypertriglyceridemia. Approximately 36.6% of the total heritability was due to 40 independent GWAS-identified SNPs only; the genetic contribution increased to 47.1% when we considered the interactions of APOA5 or BUD13 with WC. Similarly, the total genetic impact increased from 39.3% to 58.0% when we considered both marginal associations and newly found genetic interactions attributable to WHR. For Caucasians, the additional TG heritability due to the interactions of GxE variants with WC or WHR was 5.8% and 9.1%; the gain was 10.6% and 18.7% for Koreans, respectively. The genetic contributions to the risk of abnormal elevation of Remnant-C are described in Figure 3.d (Table 3). Genetic factors explained approximately 48.6% of the total variance after adjusting for age, age 2 , and sex. Genotyped loci on the SNP microarray accounted for 11.3-14.2% of the overall variance for Remnant-C. Approximately 38.5% of the total heritability was explained by 59 independent GWAS-identified SNPs only. When both marginal associations and interactions of APOA5 or BUD13 with WHR that modify the risk of abnormal Remnant-C were considered, the genetic contribution increased to 47.8%; the difference between marginal and total genetic impact was 9.3%. For Caucasians, on the other hand, the additional heritability for Remnant-C was just 5.1%. Discussion One of the main purposes of human genome studies is to personalize treatment and health guidelines according to an individual's genetic constitution. GWISs are approaches intended for achieving this end, particularly when genetic loci interacting with modifiable risk factors are examined at a genome-wide level. Such studies permit the identification of higher-or lower-risk individuals depending on changes in known risk factors. 
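The Methods mention transforming variance explained from the observed (0/1) scale to the underlying liability scale using the dyslipidemia prevalence from KNHANES. The sketch below applies the standard threshold-model transformation for a population sample; this is a textbook formula assumed here for illustration, not one quoted from the paper.

```python
# Observed-scale to liability-scale conversion for a random population sample.
from scipy.stats import norm

def observed_to_liability(h2_observed, prevalence):
    """Convert heritability on the observed 0/1 scale to the liability scale,
    given the population prevalence K of the dichotomous trait."""
    K = prevalence
    t = norm.ppf(1.0 - K)      # liability threshold
    z = norm.pdf(t)            # normal density at the threshold
    return h2_observed * K * (1.0 - K) / z ** 2

# Hypothetical example: 25% variance explained on the observed scale, 40% prevalence.
print(observed_to_liability(0.25, 0.40))
```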
In this study, we identified novel and known genes interacting with obesity indices to modify the risk of dyslipidemia. We also replicated our findings using independent genome cohorts and assessed how much phenotypic variance or heritability was additionally explained by considering the gene-obesity interactions. Our study focused on increasing power to detect gene-obesity interactions by applying a variety of strategies for testing GxEs. We carried out emerging exhaustive scans and two-step methods in parallel because each analytical model provided differential power to detect GxEs, mainly according to marginal genetic and GxE effects. We tested interactions of SNPs at a genome-wide scale with several obesity traits, including Koreanspecific parameters defined by additional ranges of BMI and WC. Besides, we adopted liberal cut-offs and stepwise penalties due to marginal p-values as well as the standard genome-wide significance level to find gene-obesity interactive loci influencing the risk of dyslipidemia. Type 1 errors are generally considered to be less problematic than possible underpowered findings (27); it is recommended to use multiple models for GxEs. Further verification can be done by replication and stratified analyses for candidate GxE regions. Our findings reveal a genome-wide set of variants with a wide range of marginal effects on the risk of dyslipidemia. We identified novel GxE markers near SCN1A and SLC12A8 with little or no direct association with lipid parameters as well as gene-obesity interactions related to lipid-associated loci reported in previous GWASs on lipids: APOA5, BUD13, ZNF259, and HMGCR. We identified SCN1A and SLC12A8 through exhaustive CO analyses, while all other GxEs due to lipid-associated loci were detected using two-step methods due to the marginal effects of each locus in the first step. These trends are consistent with the results of an earlier simulation study of statistical power for GxE detection, which showed that exhaustive CO analysis is more powerful than other two-step methods when the marginal effects of genetic variants are small (43). To our knowledge, SCN1A and SLC12A8 have not been previously associated with any lipid parameters. We replicated the novel findings in four Korean genome cohorts; one strength of using cohorts formulated on identical protocols is the ability to examine gene-obesity interactions with high-quality health outcomes, genetic and environmental factors. In addition, conducting meta-analyses with the independent Korean cohorts permitted the estimation of more precise effects of susceptibility loci interacting with obesity traits. We also classified individuals in this study into three groups according to the number of risk alleles at GxE loci and compared the changes in lipid levels when BMI, WC, and WHR increased by one unit between the three groups. This comparison reconfirmed the identified gene-obesity interactions from different points of view, as the changes in lipids due to the elevation of obesity indices worsened as the number of risk alleles increased. Although generating interesting findings, our approaches for testing gene-obesity interactive effects on lipid profiles are not free of limitations. Our study did not include GxEs due to loci marked by rare variants (MAF<0.01) and other essential obesity indices, such as body fat percentage and visceral fat level. 
We focused only on the GxE effects due to a set of common variants, since our study populations did not include enough information for rare genetic variants. In addition, current analytical methods do not provide GxE analyses using TG levels quantitatively (Supplemental Table S6). For GWISs, it is important to use as many analytical models as possible to generate consistent and powerful results. For categorical analyses, the cut-offs were decided by clinical guidelines for managing hyperlipidemia and obesity. For HDL-C, on the other hand, we used quantiles because the clinical cut-off points (HDL-C<1.03 mmol/L for males, 1.29 mmol/L for females) resulted in too many dyslipidemia cases (46.8%) in our study population; the arbitrary cut-offs could affect the results for interactions. To clarify the issue, we conducted GWISs using two different methods of categorization: by clinical guidelines and by the quantile distribution in our datasets. Compared with using quantiles to define dyslipidemia and obesity traits, the categorical GxE analyses using clinical thresholds detected a more extensive range of gene-obesity interactions (Supplemental Tables S7 and S8). The clinical cut-off points, which are commonly accepted and have been ascertained by several epidemiological studies, were more appropriate for detecting interactions at a genome-wide scale than quantiles dividing the study population into equal-sized bins for each phenotype. Our ability to extend these novel findings from Korean populations to other ethnic groups is limited by differences in the MAFs of genetic markers, distributions of obesity traits, and prevalences of each lipid abnormality. Our results were estimated and reconfirmed in four independent Korean cohorts sharing phenotyping and genotyping protocols, and the identified gene-obesity interactions in the risk of dyslipidemia might not be supported when racial differences in lipid profiles and the distribution of genetic and environmental factors are considered. ZNF259, marked by rs2075291, for example, could be a useful therapeutic target for managing TG in the Korean population; the ZNF259-WHR interactive impact on the risk of hypertriglyceridemia was 3.4%. This genetic variant, however, is not a suitable target for other ethnic groups; the minor allele of rs2075291 is infrequent in South Asians, and too rare in Europeans, Americans, and Africans (Table 3; Supplemental Table S4).
3,320.8
2019-10-29T00:00:00.000
[ "Biology" ]
Simple parametrization for the ground-state energy of the infinite Hubbard chain incorporating Mott physics, spin-dependent phenomena and spatial inhomogeneity Simple analytical parametrizations for the ground-state energy of the one-dimensional repulsive Hubbard model are developed. The charge-dependence of the energy is parametrized using exact results extracted from the Bethe-Ansatz. The resulting parametrization is shown to be in better agreement with highly precise data obtained from fully numerical solution of the Bethe-Ansatz equations than previous expressions [Lima et al., Phys. Rev. Lett. 90, 146402 (2003)]. Unlike these earlier proposals, the present parametrization correctly predicts a positive Mott gap at half filling for any U>0. The construction is extended to spin-dependent phenomena by parametrizing the magnetization-dependence of the ground-state energy using further exact results and numerical benchmarking. Lastly, the parametrizations developed for the spatially uniform model are extended by means of a simple local-density-type approximation to spatially inhomogeneous models, e.g., in the presence of impurities, external fields or trapping potentials. Results are shown to be in excellent agreement with independent many-body calculations, at a fraction of the computational cost. Introduction The archetypical strong-correlation phenomenon is the Mott insulator [1]. It is well known that the first-principles description of such systems by means of densityfunctional theory (DFT) encounters severe difficulties. The single-particle (Kohn-Sham) gap calculated with standard DFT methodology is not the same as the many-body gap, even in principle and if no approximations at all are made during the calculation. In the particular case of the Mott insulator, it is known that a proper description of the Mott gap is obtained by adding to the single-particle Kohn-Sham gap a correction arising from the derivative discontinuity of the exchange-correlation functional [2][3][4][5][6][7]. Many modern density functionals do have an implicit derivative discontinuity due to their orbital dependence, but this affords at best an incomplete description of the Mott state and the Mott gap [3,6,7]. Only very few density functionals have an explicit discontinuity as a function of the density, among them the 2-electron functional of Mori-Sanchez, Cohen and Yang [3] which is based on the so-called flat-plane condition, devised by the same authors, [3][4][5] and the Bethe-Ansatz LDA (BALDA) of Lima et al. [8,9], which is based on an approximate analytical parametrization of the Bethe-Ansatz solution for the ground-state energy of the one-dimensional Hubbard model that becomes exact in several important limits. Both of these functionals do properly account for the Mott insulator. Only the BALDA, however, has been parametrized in a way that allows its application to a wide range of physical systems. As a consequence, BALDA and variations thereof has been applied to inhomogeneous correlated manyelectron systems as well as to correlated many-atom systems in optical lattices and traps [9][10][11][12][13][14][15][16][17][18]. However, in these applications it has become clear that the parametrization by Lima et al., also referred to as LSOC parametrization, after the initials of its developers, still does not provide a fully correct description of the Mott gap. 
In particular, at small U, the LSOC expression for E c has a derivative discontinuity, but the resulting Mott gap is negative, effectively predicting the Mott state to be a (strange) metal instead of an insulator. It is not trivial to correct this behaviour, as any change to the LSOC expression must preserve the exact limits and properties already built into it. Thus, instead of merely algebraic adjustments, it becomes necessary to understand and improve the physics missing from the LSOC parametrization in this regime. A key aspect of the Mott insulator is that it is a nonmagnetic state of matter, i.e., the insulating nature is not the result of antiferromagnetism. Therefore, the solution to the problem just described must be expressed, within DFT, in terms of the charge-density only, and cannot make use of spin densities and spin-density-functional theory (SDFT). On the other hand, many correlated systems do have magnetic phases. Therefore, a more complete description of strong correlations must properly account for both, the insulating state and various types of magnetic states, as well as their possible coexistence. While such description is possible within Bethe-Ansatz based SDFT by performing a fully numerical solution of the spin-resolved Bethe-Ansatz equations and interpolating between the resulting data points every time the exchange-correlation energy needs to be evaluated, this procedure is very inconvenient for Kohn-Sham calculations, where the functional must be evaluated hundreds or thousands of times during the iterations towards selfconsistency. A simple parametrization of the spin dependence would allow straightforward application of BA-DFT methodology to spindependent phenomena in electronic systems and hyperfine-label-dependent phenomena in optical lattices. The present paper reports progress along both of these lines. In Section 2 we identify a shortcoming of the standard parametrization used in BALDA and propose a simple, ad hoc but physically motivated, variation of it that is more accurate and whose Mott gap properly remains positive for all values of the interaction U. The revised parametrization also considerably improves the description of the metallic (e.g., Luttinger liquid) phases. In Section 3 we employ exact analytical results extracted from the Bethe Ansatz to further generalize this revised parametrization to spin-dependent situations. In Section 4 we use a simple local-density type approximation to extend our results to spatially nonuniform systems. Density-Matrix Renormalization Group (DMRG) and Lanczos calculations are performed to test and validate this approximation. Section 5 contains a brief summary. Improved description of the Mott gap The task at hand is to obtain a simple and accurate analytical approximation for the ground-state energy E 0 of the inhomogeneous one-dimensional Hubbard model (1DHM), wheren iσ =ĉ † iσĉ iσ is the spin-resolved particle-density operator at site i,ĉ † iσ andĉ iσ are fermionic creation and annihilation operators, t is the hopping parameter (taken to be the unit of energy), U the on-site interaction and V i an on-site potential that makes the system spatially inhomogeneous. In applications to electrons in crystal lattices, V i can describe inequivalent atoms in the lattice, while in applications to cold atoms in optical lattices it accounts for the trapping potential. 
The commonly used expression for the per-site ground-state energy (e 0 ) of the 1DHM is the LSOC parametrization, given by [8] e 0 (n, U) = −(2β(U)/π) sin(πn/β(U)) for n ≤ 1, where n = n ↑ + n ↓ is the charge density and the interaction U enters e 0 (n, U) through the interaction function β(U), which is determined from the requirement that this expression reproduce the exact Bethe-Ansatz ground-state energy at half filling (n = 1). By construction, this expression becomes exact for U → 0 and any n (where β = 2), for U → ∞ and any n (where β = 1), and for n = 1 and any U (where 1 ≤ β ≤ 2), and provides a reasonable approximation to the full Bethe-Ansatz solution in between. In the LSOC parametrization, the interaction function β(U) is independent of the particle density and of spin. This independence is algebraically very convenient, as it allows one to determine β(U) outside the self-consistency cycle of DFT instead of having to recalculate it any time the charge (or spin) density changes. However, it is physically incorrect, as the relation between the bare interaction parameter U and the correlation energy E c must depend on the charge density, e.g. through screening. As a consequence of the density-independence of β, LSOC retains the sinusoidal density-dependence of the energy, which is correct only at U = 0 and U → ∞, for all intermediate values of U as well. Thus, it becomes necessary to modify the interaction function in a way that allows it to change with the density. The additional density dependence, however, must not spoil the behaviour in the three limits exactly recovered by LSOC. This condition severely restricts the modifications that could possibly be made to the LSOC parametrization. The form we adopt, Eq. (4), is obtained from the LSOC expression through the replacement β(U) → β(n, U), where β(n, U) = β(U)^α(n,U) and α(n, U) = n^(3√U/8). We stress that this particular form has not been derived from first principles but is a physically motivated ad hoc modification designed to restore the density-dependence of the interaction function through the replacement β(U) → β(n, U), while preserving all exact limits obeyed by the LSOC expression. The specific form chosen for the exponent α(n, U) is a consequence of the later generalization to spin-dependent phenomena, as explained in Sec. 3. Figure 1 compares the present parametrization (4) to data obtained from a fully numerical (FN) solution of the Bethe-Ansatz integral equations. For comparison purposes, the earlier LSOC parametrization (2) is also included. To distinguish the present from the LSOC expression, we label the curves corresponding to the former by FVC. As the insets show, the relative deviation of FVC data from FN data is typically less than 2%, and at most ∼ 4%. This is the same size of error as that of the local-density approximation itself (quantified by comparing BALDA/FN data for small Hubbard chains to results from exact diagonalization), so that to within the accuracy of the LDA the present parametrization is a faithful representation of the full BA solution for the entire parameter range, including values of U ≫ 6t that cannot be realized in solids but occur in systems of trapped cold atoms. Our main motivation for developing Eq. (4), however, was the incorrectly negative Mott gap predicted by the LSOC expression between U = 0 and U = 2t. The Mott gap E gap can be evaluated analytically from the expression for e 0 (n, U), either by explicitly calculating the derivative discontinuity or by taking total-energy differences [9]. From the LSOC parametrization one obtains [9] E_gap^LSOC(U) = U + 4 cos(π/β(U)), whereas the present parametrization leads to the modified expression of Eq. (6), which contains an additional logarithmic term. As illustrated in Fig.
As illustrated in Fig. 2, the additional terms arising from the present expression push the gap upward, preventing it from becoming negative between U = 0 and U = 2t, as was, incorrectly, the case within the LSOC expression [9]. For U > 2t both the LSOC and the present (FVC) gap are positive, the latter being significantly closer to the numerical BA results than the former, although neither reproduces the subtle nonperturbative behavior for U → 0, in spite of the logarithmic term in Eq. (6). Only within the latter, however, is the Mott gap everywhere positive.

Extension to spin-dependent phenomena

In spin-polarized situations, the energy depends on the spin density m = n_\uparrow − n_\downarrow, in addition to the charge density n = n_\uparrow + n_\downarrow and the interaction U. This dependence is not included in the LSOC parametrization, which can therefore not be applied to study spin-dependent phenomena for electrons in solids or hyperfine-polarization-dependent phenomena for atoms in optical lattices. We note that BALSDA calculations for the 1DHM have already been performed in the context of cold atoms in optical lattices [19][20][21]. In lieu of an analytical parametrization, these works resorted to a fully numerical (FN) solution of the BA integral equations. An analytical parametrization would substantially simplify this approach, making the BALSDA as easily implementable as the BALDA one. An additional advantage of the analytical approach over the numerical one is that in the solution of the BA integral equations one cannot specify from the outset the density and magnetization of the system one is interested in. Rather, one has to specify the upper and lower limits of the integrals, and obtains the densities as part of the solution. This is clearly inconvenient for DFT, where the energies have to be evaluated as functions of the densities. More generally, it is desirable to be able to specify the system under study in terms of physical observables, such as the densities n and m, instead of in terms of auxiliary quantities, such as the limits of the BA integrals. Analytical expressions also permit one to derive further analytical results for other quantities. Our present derivation of closed expressions for the Mott gap is one example, and the analytical derivation and solution of Euler equations determining the phase diagram of harmonically trapped fermions on optical lattices [17] is another. For all these reasons, a simple but reliable parametrization of e_0(n, m, U) can be useful for various types of calculations, within SDFT and beyond. Therefore, we next present an analytical parametrization of e_0(n, m, U) and use it to construct a Bethe-Ansatz local-spin-density approximation (BALSDA) for the 1DHM. In order to generalize the LSOC and FVC expressions to spin-dependent situations, we once more follow the basic philosophy of constructing an analytical interpolation function recovering exactly known limits as a function of the physically relevant variables. Several such exact results for e_0(n, m, U) are known [22][23][24]. For non-interacting systems (U = 0),

e_0(n, m, U = 0) = -\frac{2}{\pi} \left[ \sin\!\left(\frac{\pi (n + m)}{2}\right) + \sin\!\left(\frac{\pi (n - m)}{2}\right) \right].    (7)

For infinite interaction (U → ∞),

e_0(n, m, U \to \infty) = -\frac{2}{\pi} \sin(\pi n).    (8)

For half-filled unpolarized systems (n = 1, m = 0),

e_0(n = 1, m = 0, U) = -4 \int_0^{\infty} dx\, \frac{J_0(x)\, J_1(x)}{x \left[ 1 + \exp(U x / 2) \right]}.    (9)

Finally, for maximum magnetization (m = n),

e_0(n, m = n, U) = -\frac{2}{\pi} \sin(\pi n).    (10)

The LSOC and FVC parametrizations take m = 0, and express e_0(n, U) as a controlled analytical interpolation between (8), (9) and the m = 0 limit of (7).
The construction of a more complete interpolation, recovering all four limits as functions of n, m and U, is strongly constrained by these limits, but still not unique. We therefore impose five additional common-sense criteria: (i) avoid high-order polynomials, which can produce unphysical wiggles, (ii) avoid unusual special functions, and (iii) keep the form similar to the LSOC parametrization. This third condition is useful because by now the LSOC parametrization has been implemented and used by many groups [9][10][11][12][13][14], so it will be easier to update to a new parametrization which has a similar form to the old one. However, as we have argued above, the LSOC expression does not properly describe the density-dependence of the ground-state energy at intermediate U. Therefore, we also require, as condition (iv), that our spin-dependent generalization reduce to the present Eq. (4) for m = 0 instead of to Eq. (2). The final, fifth, additional condition is motivated by computational efficiency and is explained in Sec. 4 below. Even with these additional common-sense criteria, the form of the parametrization is not uniquely determined, and a very large function space can still be explored. The particular choice made below was obtained by starting from the LSOC form and then building in, one-by-one, the additional exact limits and criteria. Many different variations have been explored, but we stopped when arriving at one whose deviation from the fully numerical solution of the Bethe-Ansatz equations was less than the typical error bar of the local-density approximation for this type of system. At this point, further improvements in the form of the parametrization become indistinguishable, in applications to inhomogeneous systems, from the intrinsic error of the LDA. All four exact limits and five supplementary conditions are incorporated by the choice of Eq. (11), in which β(n, m, U) = β(U)^{α(n, m, U)} and the exponent α(n, m, U), given in Eq. (14), involves the combination n² − m². Here, β(U) is the same quantity employed in the LSOC parametrization. For zero magnetization, α(n, m, U) reduces to α(n, U) used in Eq. (4). Equation (11) is valid for U ≥ 0 and n ≤ 1, but it can be extended to n > 1 and to U < 0 by standard particle-hole transformations [9,17,22,24]. The limit (10) is of particular interest because this condition has no counterpart in the spin-independent situation. Moreover, it establishes a coupling between the charge and the spin dependence. Physically, maximum spin means that the Pauli principle keeps all fermions maximally apart for any U, just as infinitely repulsive interactions (U → ∞) do for any degree of spin-polarization. For n = m the spin-dependent parametrization must thus reduce to the same limit as for U → ∞ and recover the value β = 1. On the other hand, for m = 0 the expression should reduce to the earlier form (4). This double requirement is the explanation of the particular form chosen for α(n, m, U) of Eq. (14) and, consequently, for α(n, U) of Eq. (4). Figure 3 presents a comparison of the spin-dependence of this parametrization with data obtained from a fully numerical solution of the BA integral equations for intermediate parameter values, where the expression is not already exact by construction. Clearly, the spin-dependence of the homogeneous system is recovered to within the same precision as the charge dependence.
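Since the full form of Eq. (11) is not reproduced in this text, a practical way to work with any candidate spin-dependent parametrization is to test it against the exact limits (7)-(10). The sketch below, under the assumption that these limits are the standard Bethe-Ansatz results written above (with t = 1), encodes them as reference functions; `candidate_e0` is a hypothetical placeholder for whatever parametrization is being checked.

```python
# Sketch under stated assumptions: the limits (7)-(10) are encoded as the
# standard Bethe-Ansatz results written above (t = 1); `candidate_e0` is a
# hypothetical stand-in for the spin-dependent parametrization of Eq. (11).
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def e0_limit_U0(n, m):                       # Eq. (7): non-interacting, U = 0
    n_up, n_dn = 0.5 * (n + m), 0.5 * (n - m)
    return -(2.0 / np.pi) * (np.sin(np.pi * n_up) + np.sin(np.pi * n_dn))

def e0_limit_U_inf(n, m):                    # Eq. (8): U -> infinity, m-independent
    return -(2.0 / np.pi) * np.sin(np.pi * n)

def e0_limit_half_filling(U):                # Eq. (9): n = 1, m = 0
    f = lambda x: j0(x) * j1(x) / (x * (1.0 + np.exp(0.5 * U * x)))
    return -4.0 * quad(f, 0.0, np.inf, limit=200)[0]

def e0_limit_full_polarization(n):           # Eq. (10): m = n
    return -(2.0 / np.pi) * np.sin(np.pi * n)

def check_limits(candidate_e0, U=4.0, tol=1e-6):
    """Return the deviation of a candidate e0(n, m, U) from each exact limit."""
    deviations = {
        "U = 0":      abs(candidate_e0(0.7, 0.3, 0.0) - e0_limit_U0(0.7, 0.3)),
        "n = 1, m=0": abs(candidate_e0(1.0, 0.0, U) - e0_limit_half_filling(U)),
        "m = n":      abs(candidate_e0(0.6, 0.6, U) - e0_limit_full_polarization(0.6)),
    }
    return {name: (dev, dev < tol) for name, dev in deviations.items()}
```

A check of the U → ∞ limit (8) can be added in the same way by evaluating the candidate at a large finite U.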
Local approximation for inhomogeneous systems

The main use of DFT and SDFT is in calculations for spatially inhomogeneous systems, where translational symmetry is broken and the density becomes position dependent. In the case of lattice models, inhomogeneity means that not all sites are equivalent. By means of the L(S)DA prescription, expressions for the energy of the homogeneous system, such as those described in the preceding section, can be used on a site-by-site basis to approximate the corresponding energy of the inhomogeneous system. Explicitly, the local-spin-density approximation to any energy component E is given by

E^{LSDA}[\{n_i, m_i\}] = \sum_{i=1}^{L} e(n, m, U)\big|_{n \to n_i,\, m \to m_i},

where L is the number of lattice sites, and e = lim_{L→∞} E(L)/L is the per-site energy of the homogeneous system. This expression approximates the energy of the inhomogeneous system, where n_i and m_i vary from site to site, by evaluating the energy density of the homogeneous system site by site at the densities of the inhomogeneous one. In DFT, including model DFT, this prescription is usually applied to the correlation energy, which for the Hubbard model can be defined as E_c = E_0 − E_MF, where E_MF is the mean-field approximation to the ground-state energy E_0. Since E_MF is simple to obtain, the task of approximating the correlation energy E_c[n_i, m_i, U] of the inhomogeneous Hubbard model is thus reduced to that of approximating the per-site ground-state energy of the homogeneous one, e_0(n, m, U). This is the quantity that we extracted above from the Bethe-Ansatz equations. The minimization of the resulting energy functional is conveniently carried out via selfconsistent Kohn-Sham (KS) calculations of the charge and spin densities and of the resulting ground-state energies. In such KS calculations the correlation potentials (obtained by differentiating the correlation energy with respect to n and m, or, equivalently, to n_↑ and n_↓) are evaluated once in each of the iterations of the selfconsistency cycle. In these iterations, the densities n_i and m_i change, but the interaction U, being a parameter of the Hamiltonian, does not. We have therefore expressed the integral in Eq. (3) and the resulting transcendental equation for β exclusively in terms of U, in order to guarantee that the only slightly time-consuming steps of the calculation take place just once in each calculation, outside the selfconsistency cycle. This is the fifth additional condition on the spin-dependent parametrization, alluded to above. As a simple example of such KS calculations, which serves to illustrate all essential aspects, we consider open Hubbard chains, where the spatial inhomogeneity stems from the boundaries, which give rise to charge and spin-density oscillations in the bulk. Representative results for a chain with L = 100 sites are displayed in Fig. 4, where we compare the ground-state density profile (Fig. 4a) and spin-density profile (Fig. 4b) obtained from DMRG to LSDA profiles obtained from using the fully numerical solution to the BA integral equations and from the present parametrization. The BALSDA/FVC and BALSDA/FN ground-state energies of the same system deviate from the DMRG energy by 0.01% and 0.64%, respectively. The local densities follow the same trend, and deviate from DMRG by 0.42% and 0.58%. For the local magnetization, the corresponding numbers are 2.20% for BALSDA/FVC and 5.86% for BALSDA/FN. Remarkably, for all three quantities the parametrized results are closer to the DMRG benchmark data than the numerically defined BALDA.
This shows that the particular form chosen for the proposed parametrizations allows for considerable error cancellation. On 32 processors the DMRG calculation took approximately 17 hours, while the BALSDA calculations required approximately 40 seconds. Of course, for high-precision calculations, as well as for the calculation of quantities that are not easily extracted from densities and energies, DMRG is still essential. A more complex case is depicted in Fig. 5, which shows density and magnetization profiles for parabolically confined systems in a periodic chain. For two different values (k = 0.05 and k = 0.5) of the curvature of the confining potential (whose form is schematically indicated by the dashed (green) curve), the data points show the site-resolved particle density and spin density obtained by exact (Lanczos) diagonalization, fully numerical BA-LSDA and our presently proposed parametrization. To within the accuracy that can be expected from a local-density approximation for this type of system (a few percent) the agreement between all three sets of calculations is excellent, for both charge and spin distributions. The amplitude of the magnetization-density oscillations is overestimated by both flavours of local approximations, which is consistent with previous observations for similar approximations and systems [20]. Next, we compare, in Fig. 6, the numerically exact ground-state energy obtained from Lanczos diagonalization of a small open Hubbard chain with L = 15 sites to the LSDA ground-state energies obtained from using the fully numerical solution to the BA integral equations and from using the present parametrization. Up to U ∼ 4t the BALSDA/FN data are almost identical to the exact data, which attests to the quality of the local approximation. For U larger than ∼ 5t, the present parametrization is, once again, better than the conceptually superior fully numerical LSDA, due to error cancellation. The inset shows that for larger systems the behaviour is qualitatively the same. Finally, we point out that successful applications of our (then unpublished) spin-dependent expression (11) to the study of spin-polarized transport across a correlated nanoconstriction [11] and to the calculation of occupation probabilities of exotic superfluids in spin-imbalanced systems [25] have already been reported.

Summary

In summary, we have constructed a simple and reliable parametrization for the ground-state energy of the homogeneous one-dimensional Hubbard model, for arbitrary fillings, spin-polarizations and interactions. For the first time, a qualitatively and quantitatively correct description of the Mott gap is obtained from a simple density functional with a proper explicit derivative discontinuity. This parametrization can be used in its own right, for the homogeneous model, whenever simple expressions for the ground-state energy and for the resulting Mott gap are required. However, its main application is as input for local-density and local-spin-density approximations, which allow one to efficiently minimize the energy and extract energies, density profiles and related quantities for spatially inhomogeneous models. Since in KS calculations one never diagonalizes the interacting Hamiltonian, but only the auxiliary noninteracting one, systems with thousands of sites can be dealt with, even in the absence of any simplifying symmetry.
5,303.2
2011-02-24T00:00:00.000
[ "Physics" ]
Evaluation of a Rehabilitation System for the Elderly in a Day Care Center : This paper presents a rehabilitation system based on a customizable exergame protocol to prevent falls in the elderly population. The system is based on depth sensors and exergames. The experiments carried out with several seniors, in a day care center, make it possible to evaluate the usability and the efficiency of the system. The outcomes highlight the user-friendliness, the very good usability of the developed system and the significant enhancement of the elderly in maintaining a physical activity. The performance of the postural response is improved by an average of 80%. Introduction The average age of the global population continues to grow thanks to a longer life expectancy.The World Health Organization (WHO) calculates that between 2015 and 2050, the world's population over 60 years of age will double [1].At the same time, the global healthcare system has to adapt to the new demographic conditions.Falls are the leading cause of accidental injury, particularly when it comes to elderly people.Around 28-35% of people, over 65 years of age, fall each year increasing to 32-42% for people over 70 years of age [2].Falls are responsible for approximately 40% of all accidental deaths [2].Moreover, falls represent an important economic impact that affects directly the healthcare system and indirectly individuals, family, and communities.For example, the direct healthcare system costs are $1049 in Finland and $3611 in Australia per one fall [1].In order to reduce falls in elderly people physical exercises to improve strength and balance are of fundamental importance. This study aims to present and to investigate a rehabilitation system based on a customizable exergame protocol to improve joint flexibility, coordination, balance, and muscle strength of elderly people.In this study, the main contributions are the following: 1. it proposes a new physical rehabilitation exergame protocol for the elderly; 2. the exergame protocol is multilevel and customizable according to the physical capabilities and clinical needs of the elderly by physiotherapists; 3. the system is based on low-cost depth sensors and no calibration, or training phases are required making the system feasible in home settings; 4. the proposed system was qualitatively evaluated on healthy adult volunteers; 5. the proposed system was qualitatively and quantitatively evaluated on elderly people; 6. the evaluation of the system was carried out in a real scenario; 7. long-term follow-up (six months); 8. this study provides the clinical perspectives on exergames in rehabilitation systems for the elderly. The rehabilitation system and the customized exergame protocol is a part of the European project called KINOPTIM project.The KINOPTIM project aims at developing an innovative system for fall prevention and rehabilitation for the elderly.This paper is structured as follows.Section 2 presents an overview of the KINOPTIM system.Section 3 provides a description of the implemented exergames and details each posture required by the system.Section 4 describes the experiments and Section 5 illustrates the experimental results.Finally, discussion and conclusion are given in Section 6. 
Related Work Recent studies showed that physical exercise programs reduce the risk of falling and improve appreciably the motor functions of the elderly [3][4][5][6] However, motivation to engage in physical exercises is scarce in the elderly [7] and new methods to engage elderly people in physical exercises should be used.Currently, new measurements technologies (portable, inexpensive, with network connectivity, and also for complex body movement [8]) are applied to the field of game and augmented reality [9].These new devices in the context of augmented reality suggest that these tools can be used to prevent falls in the elderly population thanks to the regular monitoring of fall risk [10][11][12][13].Physical exercise protocols carried out by using video games combining exercise, known as exergames [14,15], generate a lot of interest in the field of fall prevention especially when it is based on low-cost devices such as Nintendo Wii, Microsoft Kinect, or Asus Xtion.The prevention of fall, carried out by using exergames at home, has several advantages over the conventional protocols.It makes the prevention accessible to more seniors.Exergames enable the seniors to take part of their own health management.They are motivated through an enjoyable game interface that trains motor abilities but also cognitive skills.Moreover, exergames could be implemented with different levels of difficulty, allowing each senior to begin with a comfortable level and then proceed gradually to a more difficult level [16]. In very recent years, numerous study addressed solutions relating to the use of the exergames in fall rehabilitation and prevention [17].Smeddinck et al. [18] conducted a study of five weeks where exergames have been integrated into a traditional rehabilitation protocol.Results have indicated an increase in motivational aspects, engagement, and autonomy.The system of Brox et al. [19] proposed mini exergames that can motivate the elderly to exercise and increase their self-efficacy.They reported an increase of the balance capabilities and a significant improvement of the fun in the exergames.In [20] authors also highlighted the creation of the games can be difficult since no specific guidelines exist.Eltoukhy et al. [21] evaluated the use of the Kinect during a static single leg stand exercise and a dynamic balance tests for elderly people.Kawamoto and da Silva [22] used a depth sensors to develop games that promote quality of life among the elderly.In [23] a study aimed at determining use and perceptions of exergames during a 5-week experiment is presented.The outcomes stated that exergames are useful for improving engagement in physical activity.Ejupi et al. [3] examined the feasibility of a Kinect based fall risk system.The findings showed the feasibility of their system for fall risk assessment but the authors underlined that further investigations should be necessary.As reported in the literature, even if there is great interest in the use of exergame to improve the motor functions of the elderly, further and deeper studies are still necessary to assess the true potential of exergames addressing fall prevention and rehabilitation issues. 
System Overview

The KINOPTIM system is comprised of multiple modules in order to provide a holistic fall management service. The KINOPTIM project is not a simple collection of technologies but an integration of highly innovative solutions that supports the elderly in a friendly manner. The proposed system uses a depth sensor, an RGB camera, a display, and a workstation. The system is composed of the following modules: 1. a Tele-Monitoring (TM) module; 2. a Rehabilitation and Gaming (RG) module; 3. a Medical Business Intelligence (MBI) module. A detailed description of the algorithms used in each module of the KINOPTIM system is out of the scope of this paper, but details can be found in [24][25][26][27].

Tele-Monitoring Module

The TM module uses video frames from the webcam to achieve several functionalities. The first one is to evaluate the motion kinematics and the temporal evolution of the skeletal articulation structure of a senior in order to extract gait features. Two sub-modules based on image processing algorithms are necessary to extract gait features [24,25]: a detection and tracking sub-module and a gait feature extraction sub-module. The second functionality is to provide a Fall Risk Index (FRI) [26]. The FRI is used to help the physiotherapist to define a customized exergame protocol for the seniors.

Rehabilitation and Gaming Module

This module implements the KINOPTIM exergame protocol. Once the elderly have accessed their individually customized exergame protocols, they play exergames and at the same time they visualize in real time the posture performance and the score. The RG module uses depth frames to estimate the body posture of the senior and compares it to the posture required by the exergame. A score (RG_SCORE), which indicates how well a person plays an exergame, is assigned to each user at the end of the exergame [17]. The Rehabilitation and Gaming module calculates the RG_SCORE using the body posture of the participant (coordinates x, y, z of each skeleton joint) and its matching (joint by joint) with the posture required by the exergame, with respect to the exergame level chosen by the physiotherapist.

Medical Business Intelligence Module

The MBI module assists the physiotherapist in creating the exergame protocol for the elderly [27]. This module provides several services through a web interface called the KINOPTIM web portal. The latter is a graphical user interface that makes it possible to access and query the MBI. The MBI contains database and data analysis functionality useful for the follow-up of the training.

KINOPTIM Exergame Protocol

Implemented in the RG module, the protocol is based on exergames that engage people in an entertaining way while offering motivation for physical exercising [25]. An exergame is split into a set of static postures to be performed. The senior has to reach the desired posture in a given time defined according to the difficulty level of the exergame. The quality of the match between the desired postures and the ones realized by the senior is scored (RG_SCORE) and ranked in stars. Stars are a graphical evaluation of the matching quality of the postures. The exergames are conceived, by the physiotherapist, to improve balance/coordination, gait, muscle strength and joint flexibility of the senior. The exergames are inspired by published literature regarding physical activity and exercise for older adults [28][29][30] (http://www.nhs.uk/exercises-for-older-people).
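The paper does not give the explicit formula used by the RG module to turn the joint-by-joint comparison into the RG_SCORE and the star ranking; the sketch below is therefore purely illustrative. The distance-based score, the 0.25 m tolerance and the star thresholds are assumptions made for this example, not the KINOPTIM implementation.

```python
# Purely illustrative sketch: the paper states that RG_SCORE (maximum 1000) is
# obtained from the joint-by-joint match between the tracked posture and the
# posture required by the exergame, but does not give the formula. The
# distance-based score, the 0.25 m tolerance and the star thresholds below are
# assumptions made for this example, not the KINOPTIM implementation.
from typing import Dict, Tuple

Joint = Tuple[float, float, float]      # (x, y, z) coordinates of one skeleton joint
Skeleton = Dict[str, Joint]             # joint name -> position

SCORE_MAX = 1000.0
TOLERANCE_M = 0.25                      # assumed per-joint tolerance, in metres

def rg_score(tracked: Skeleton, required: Skeleton) -> float:
    """Score the match between the tracked and required postures (0..SCORE_MAX)."""
    common = tracked.keys() & required.keys()
    if not common:
        return 0.0
    # mean Euclidean distance over the joints present in both skeletons
    mean_dist = sum(
        sum((a - b) ** 2 for a, b in zip(tracked[j], required[j])) ** 0.5
        for j in common
    ) / len(common)
    return max(0.0, SCORE_MAX * (1.0 - mean_dist / TOLERANCE_M))

def stars(score: float) -> int:
    """Map a score to a 1-5 star rating (illustrative thresholds)."""
    return 1 + min(4, int(score / (SCORE_MAX / 5)))
```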
The KINOPTIM exergame protocol consists of five exercises (three kinds of exercises) with different levels of difficulty: easy, medium, and hard. The five exercises are: arms extension, left knee extension, right knee extension, sideways left leg lift, and sideways right leg lift. The aim of the first exercise is to improve the arm extension. The other exercises focus on improving the flexibility of the knee and leg joints. Considering a set of multilevel exercises and according to the senior's physical condition, the physiotherapist can create a customized exergame protocol for the senior. For example, a senior showing difficulties with moving the left leg will need to follow a protocol that does not include left leg exergames. All exercises start with a rest posture. The senior stands in front of the system, feet flat on the floor and shoulder-width apart. If needed, the senior may use the back of a chair to improve their balance.

Extension of the Arms

The arms extension exergame is conceived to achieve better muscle strength and coordination. The senior slowly breathes out when he/she raises both arms to the side, keeping the shoulders as low as possible (see Figure 1). Then the senior holds the position for 5 s (easy level) and breathes in while slowly lowering their arms to the sides. The senior repeats this (easy level) arm movement five times. In the medium level the senior has to hold the position for 10 s and has to repeat the posture five times (10 times in the hard level).

Extension of the Knees

The knee extension exergame is conceived to maintain a sufficient active range of motion for the knee joint and to improve the ability to stand and balance. This exergame is very important to strengthen the lower part of the body. The senior bends the (right or left) knee backward (45 degrees at the easy level, up to 90 at the hard level) and returns to the starting position (see Figure 2-Left). He/she repeats this five times (easy level). In the medium level the senior has to repeat the extension 10 times (20 times in the hard level).

Sideways Lift of the Legs

The sideways leg lift exergame is conceived to strengthen muscles, in particular the quadriceps, hip flexors, and abdominals. The senior raises the (right or left) leg to the side (15 degrees at the easy level, up to 30 degrees at the hard level), while maintaining the back and the hips straight (see Figure 2-Right). Then, the senior returns to the starting position and repeats five times (easy level). In the medium level the senior has to repeat the lift 10 times (20 times in the hard level).

Experiments

The evaluation of the rehabilitation system consists of: (i) a qualitative pilot study with healthy adults; and (ii) a quantitative and qualitative evaluation in order to assess the usability, the interaction quality and the efficiency of the system regarding the physical activity of the elderly.
Experimental Setup The experimental room is located at the "La Maison Felippa", an elderly day care center based in Paris.It is 4 m long, 3 m wide and 3 m high.It has been equipped with six power outlets.Lighting was adjusted according to the recommended values to ensure the proper functioning of the depth sensor.Figure 2 shows the experimental room.The posture of the senior is captured by two devices: an Asus Xtion PRO LIVE (Taipei, Taiwan) and a Full HD 1080p Logitech C920 webcam (Lausanne, Switzerland) which were respectively selected as depth sensor and RGB camera.A 27 LCD monitor is used to display exergames to the elderly with a good quality.The monitor also integrates speakers useful during the exergames.The workstation, a Dell Precision Tower 7910, is equipped with an Intel Dual Xeon (3.5 GHz) (Santa Clara, CA, USA), with 64 GB of RAM, 2TB of storage and a NVIDIA Quadro M4000 with 8 GB of memory as graphic card.The workstation uses the OpenNI SDK to communicate with the Asus Xtion PRO LIVE in order to grab video frames from the camera.The OpenNI SDK acquires 640 × 480 depth stream at 30 fps.The monitor has been placed 0.2 m behind the depth sensor (Asus Xtion PRO LIVE).The distance between the display and the elderly person has been fixed at 2 m from the depth sensor.The KINOPTIM system runs on different virtual machines of the same physical workstation using a virtual local network, therefore no Internet connection or wireless local area network is required, according to the ethical and legal guidelines of the KINOPTIM project. Procedure The experiment procedure has been submitted and approved by the French National Commission on Informatics and Liberty (CNIL) with the registered number 1967550v0.The personal data related to the elderly have been anonymized so that the individual identity can not be revealed.The anonymization provides a safeguard against accidental or mischievous release of confidential information.The name of the seniors and other personal information appeared just on the consent forms, of which one copy has been kept by the authors and the other one by the participant of the experiment. 
The instructions have been written out in full in order to ensure that all the participants have the same information about the experiment.Each participant signed an informed consent, written in French (the participants' mother tongue).Participants have been asked to perform nine exergame sessions (S1, S2, S3, S4, S5, S6, S7, S8, and S9) with the rehabilitation system.Each participant attends sessions approximately every 20 days in order to avoid muscular memory effect.Participants are free to leave the experiment whenever they want.At the beginning, in the first session, two questionnaires have been administered to the participants enrolled in the experiment: the questionnaire about the Activities of Daily Living (ADL) [31] and the Nottingham Health Profile (NHP) questionnaire [32].The 6-item Katz Index of ADL scale is used in healthcare to refer to people's dependence in daily self care activities.The Katz ADL range is 0-6, a higher score shows a greater degree of independence (6 completely independent).The NHP is designed to measure how a patient views his/her health status.It consists of 38 items dealing with physical abilities, social isolation, pain, emotional reaction, energy level and sleep.The NHP range varies from 0 (good quality of life) to 100 (worst quality).According to the ADL and NHP questionnaire results of the senior, the physiotherapist defined the exergame protocol activity.Each senior played five exergames (arms extension, knees extension (2) and sideways leg lift (2)) for each session at the suited level.The physiotherapist defined for all participants to use the easy level of each exergame.All the seniors participated together in a 5 min long familiarization session.The physiotherapist introduced the rehabilitation system providing a simple physical description of it.The participants were admitted one at a time in the experimental room.Once they entered the experimental room, they were placed in front of the display.Subsequently, once the elderly person felt comfortable, they started the first exergame session (S1) under the supervision of a physiotherapist or an ICT professional [17].At the end of each session (lasted 15 min), a debriefing was given to each participant.At the end of the S9, the participant answered to a questionnaire aiming at qualitatively evaluating the system (usability and interaction quality).All sessions have been performed under the supervision of a physiotherapist or an ICT professional. Participants Participants were recruited at the "La Maison Felippa" where the experiment has been carried out.Six voluntary seniors (n = 6, three women and three men to obtain a balanced sex group) aged 80.33 (SD = 4.27) (for details see Table 1) were selected.To be included in the study, participants had to be in the range 65-85 years of age, be able to walk without any support, with no cognitive impairments and no falls episodes in the last month.Participants enrolled in the experiment were administered the NHP questionnaire.The scores reached by each participant for the different parts of the NHP questionnaire are reported in Table 2. 
Pilot Study The elderly are not familiar with technologies such as depth sensor or exergames.These difficulties in the use of technologies lead to a preliminary pilot study.The system usability of the proposed system, according to the user-centered design methodology, has been evaluated with a formative usability test.The test consisted in carrying out two exergame sessions (S1, S2) with the system.Each session has been composed of all the five exergames of the KINOPTIM protocol.The scope of the pilot study is to collect feedback on the exergames usability from participants of this phase.The feedback provided the possibility to improve the final version of both exergames and the overall system.In order to perform the pilot experiment, users without experience in the use of exergames and depth sensor have been selected to use the system.The proposed system has been tested with eight persons, divided into two groups.The first one was composed by six graduate participants, the second group was composed by two healthcare professionals.All the participants were trying the system for the first time, after being properly informed, they had familiarity with IT systems and none of them had interacted earlier with depth cameras.Results showed that all participants have completed the two sessions successfully.At the end of the test, a questionnaire based on 5-point Likert scale (from 1 to 5) has been administered to each participant.The questions are reported below: Q1: Overall, I am satisfied with how easy it is to use this system Q2: It was simple to use this system Q3: I can effectively complete my physical exercises using this system Q4: I am able to complete my physical exercises more quickly using this system Q5: I am able to efficiently complete my physical exercises using this system Q6: I feel comfortable using this system Q7: It was easy to learn to use this system Q8: The interface of this system is pleasant At the end of the questionnaire, a debriefing was given to each participant.Participants were asked to express their opinion about the system and to make proposal on how to improve it.All the participants confirmed that they were able to easily use the KINOPTIM system and that they felt comfortable enough.However, some improvements in terms of graphical interface have been necessary [33].Therefore, the overall average and standard deviation scores for the system usability (M = 4.06, SD = 0.55) have been calculated and can be considered as very satisfactory for the system evaluation. 
Quantitative Evaluation of the System

In order to perform a quantitative evaluation of the system, nine experimental sessions have been performed by the elderly. Generally speaking, the system was able to operate well in the experimental environment for all the participants. The system performed all exergames without stopping, and the workstation recorded all the elderly's postures correctly. To evaluate the responses of the elderly when they were interacting with the system, the focus has been put on the score (RG_SCORE) reached when performing the required postures correctly. The RG_SCORE reached in each exergame indicates the matching between the requested posture and the posture realized by the senior. A RG_SCORE equal to 1000 (ScoreMAX) indicates that the posture has been perfectly imitated. Conversely, a low score indicates that the exergame has been poorly executed. With respect to the RG_SCORE, it has been observed that all the participants have increased their scores after following the KINOPTIM exergame protocol and they have improved the performance of their postural response. In Figure 3 the RG_SCORE achieved by each senior on the whole set of exergames (ALL) during nine sessions (S1, S2, S3, S4, S5, S6, S7, S8, and S9) is reported. The hypothesis is that the RG_SCORE is higher in S9 than in S1. This means that, after a repeated exposure to the system, the elderly improved their postural responses. The t-test was used to explore changes in the RG_SCORE from S1 to S9 for each patient; results are reported in Table 3. In Table 4 there is a comparison of the RG_SCORE in S1 and S9 for each exergame. All participants, except P2, showed a significant difference between the RG_SCORE in S1 and S9. P2 did not show a significant difference. According to healthcare professionals of the day care center, P2 was afraid of performing exergames due to the number of falls in the last 12 months. Linear regression analyses (Figure 4) were used to determine the slope of improvement in RG_SCORE over six months for all participants: β = 0.54 and correlation coefficient r = 0.57, which showed a moderate correlation. All the participants showed an improvement in the average RG_SCORE among the sessions, as reported in Table 5. Moreover, with respect to the RG_SCORE, it has been observed that, in percentage, the improvement in postural response of the participants had a rate equal to 63.14% in P1, 32.12% in P2, 80.92% in P3, 95.98% in P4, 33.00% in P5, and 180.31% in P6. During and after six months from the end of the program none of the participants reported falls. According to the user-centered design methodology, the usability, the user-friendliness and the interaction quality of the proposed system have been evaluated thanks to a questionnaire [34]. This questionnaire has been administered to all the participants at the end of S9. The questionnaire consisted of two parts. The first five questions aimed at evaluating the general usability, the user-friendliness of the system and the ability of the elderly to use the system in an autonomous way. The second five questions of the questionnaire were related to the evaluation of the overall quality of the human-computer interaction (HCI), considering the interest enhanced in the elderly, or the benefit of using the rehabilitation system instead of physical exercises. The questionnaire reported below consists of 10 Likert-type items, with a score ranging from 1 to 5.
Q1: The KINOPTIM system is easy to use
Q2: The KINOPTIM system is comfortable
Q3: The KINOPTIM system is easy to learn
Q4: The KINOPTIM system is efficient to complete the exercises
Q5: The graphical interface of this system is pleasant
Q6: The system stimulates the level of involvement
Q7: You are interested in the KINOPTIM system
Q8: The system helps in the physical activity
Q9: The system enhances physical activity
Q10: The presence of the score helps to improve the postures

Means and standard deviations have been used to describe the data. All the participants involved in the experiment assigned a high score to the usability items and to the interaction quality items. Therefore, the overall average and standard deviation scores for the usability (M = 4.6, SD = 0.10) and for the interaction quality (M = 4.9, SD = 0.11) have been calculated and can be considered as very satisfactory for the system evaluation. The answers to the questionnaire are depicted in Figure 5.

Discussion

Although the seniors are not familiar with technologies such as depth sensors, cameras or graphical user interfaces, they found the usability and the quality of the interaction with the system very satisfactory. We excluded quite a large number of participants due to neurological and physical diseases, as stated in the exclusion criteria. Since the intervention was considered to be new, participants were asked to play under supervision to reduce the risk of injury. Qualitative results showed that the KINOPTIM exergame protocol generates a lot of interest in the elderly and it has multiple advantages over the conventional protocols in the literature. The KINOPTIM exergame protocol: (i) enables the seniors to take part in their own health management; (ii) increases the seniors' motivation through an enjoyable game interface that trains motor abilities; (iii) makes the seniors focus their attention on the fun part of the rehabilitation protocol (the score of the exergames) and not on the postures themselves. Concerning the efficiency and the quantitative results of the KINOPTIM system, it was shown that all the participants increased their scores after following the KINOPTIM exergame protocol, whatever the kind of exercise (arms, knees or legs). It has been observed that the performance of the postural response, over six months of training, has been improved by around 80% for each participant. All participants showed statistically significant differences between the RG_SCORE in S1 and S9. The only exception is P2 during the leg exercises, which, according to the physiotherapist, is explained by her falling episodes in the last 12 months and her fear during the exergames. The progressive improvements in the RG_SCORE over the six-month period may be due to a gain in experience with the exergames. During and after six months from the end of the training none of the participants reported falls.
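As a hedged illustration of the S1-versus-S9 analysis reported above (paired t-test per participant, percentage improvement of the mean RG_SCORE, and the across-participant linear regression of Figure 4), the following sketch uses placeholder scores rather than the study data.

```python
# Hedged sketch of the analysis reported above; the numbers are placeholders,
# not the study data (the per-exergame and per-participant scores below are
# invented for illustration only).
import numpy as np
from scipy import stats

# per-exergame RG_SCOREs of one participant in the first and last sessions
s1 = np.array([420.0, 510.0, 480.0, 450.0, 500.0])
s9 = np.array([760.0, 820.0, 790.0, 700.0, 840.0])

t_stat, p_value = stats.ttest_rel(s1, s9)                  # paired t-test, S1 vs S9
improvement_pct = 100.0 * (s9.mean() - s1.mean()) / s1.mean()

# across participants: mean RG_SCORE in S1 (x-axis) vs S9 (y-axis), cf. Figure 4
x = np.array([450.0, 500.0, 430.0, 470.0, 520.0, 400.0])
y = np.array([730.0, 660.0, 780.0, 920.0, 690.0, 860.0])
slope, intercept, r, p, stderr = stats.linregress(x, y)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"improvement of the mean RG_SCORE: {improvement_pct:.1f}%")
print(f"S1 vs S9 regression: slope = {slope:.2f}, r = {r:.2f}")
```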
Due to the great difference between studies on exergames in the literature in terms of methodology, intervention protocols and measures, it is quite difficult to compare our results to previous work. However, in [35] knee extension exergames did not reach a statistically significant improvement, whereas our system was able to show a statistically significant improvement. In [36] two exergames, one for the arms and one for the legs, have been developed, as in our study, but no statistically significant results have been presented. In [37] experiments on a leg lift exergame provide significant improvements for the right and the left leg, but no results are provided on knee or arm exergames as in our study. Sato et al. [38] showed leg muscle improvement but they did not report any improvement in upper-body and arm exergame training as in our study. In [39] results indicated that the arm gesture to initialize the tracker module is too complex for the elderly, whereas in our study the arm extension exergame is correctly performed from the beginning (S1). In the exergames used in the study of Konstantinidis et al. [19], outcomes showed significantly increased balance capabilities, but there are no details on leg or knee extension improvements.

Conclusions

The use of new and low-cost technologies that allow the implementation of exergames for the elderly population is spreading worldwide. In the context of fall prevention for the elderly, the KINOPTIM system developed in this study proposed an efficient solution based on a customized exergame protocol that can be applied in a range of therapeutic environments. The system does not require complex materials or extended lengths of time for administration, and it is low-cost. The KINOPTIM exergame protocol is defined by the physiotherapist using multilevel exercises according to the elderly's physical status. The system has been deployed in a real context: an elderly day care center in Paris. This study suggests that it is safe and feasible for the elderly to participate in a 6-month exergame protocol using a low-cost device and the KINOPTIM system. We were able to detail information on each exergame session: date, time, duration, and score of exergames. These data are inaccessible from other commercial devices, which causes problems with data analysis. In future investigations, we will propose to the physiotherapist to dynamically customize the level of the exergame. Globally, the rehabilitation system met with great success among the elderly, both for its user-friendliness and for its efficiency in maintaining a physical activity. Further investigations will involve a greater number of seniors in order to establish the efficiency of the rehabilitation system by comparing the obtained performance with a control group. An important point will also be to create and evaluate empathic behavior of the participants [40,41]. In order to achieve better qualitative and quantitative scientific evidence, it is necessary that future studies use statistical analyses and sample size calculations to assess the minimal clinically important difference.

Figure 1. Arm extension of the KINOPTIM exergame protocol. The figure depicts the frontal view, the lateral view, and the top view of the body posture. The posture of the participant is shown in red and the required posture in gray.

Figure 2. Overview of the KINOPTIM system during two exergames (Left: knee extension; Right: sideways right leg lift). The experimental room is equipped with technical devices.
Figure 4. Linear regression for all participants and for all exergames. The x-axis shows the RG_SCORE obtained by all participants during S1 (first exergame session); the RG_SCORE obtained by all participants during S9 (last exergame session) is on the y-axis.

Figure 5. Results of the questionnaire for the elderly. Top: answers to the questions on general usability (first five questions). Bottom: answers to the questions on quality of interaction (second five questions).

Table 1. Details on the participants involved in the experiment.

Table 5. RG_SCORE for each session of the whole set of exercises (ALL).
6,734
2018-12-22T00:00:00.000
[ "Medicine", "Engineering" ]
Progesterone Has No Impact on the Beneficial Effects of Estradiol Treatment in High-Fat-Fed Ovariectomized Mice In recent decades, clinical and experimental studies have revealed that estradiol contributes enormously to glycemic homeostasis. However, the same consensus does not exist in women during menopause who undergo replacement with progesterone or conjugated estradiol and progesterone. Since most hormone replacement treatments in menopausal women are performed with estradiol (E2) and progesterone (P4) combined, this work aimed to investigate the effects of progesterone on energy metabolism and insulin resistance in an experimental model of menopause (ovariectomized female mice—OVX mice) fed a high-fat diet (HFD). OVX mice were treated with E2 or P4 (or both combined). OVX mice treated with E2 alone or combined with P4 displayed reduced body weight after six weeks of HFD feeding compared to OVX mice and OVX mice treated with P4 alone. These data were associated with improved glucose tolerance and insulin sensitivity in OVX mice treated with E2 (alone or combined with P4) compared to OVX and P4-treated mice. Additionally, E2 treatment (alone or combined with P4) reduced both hepatic and muscle triglyceride content compared with OVX control mice and OVX + P4 mice. There were no differences between groups regarding hepatic enzymes in plasma and inflammatory markers. Therefore, our results revealed that progesterone replacement alone does not seem to influence glucose homeostasis and ectopic lipid accumulation in OVX mice. These results will help expand knowledge about hormone replacement in postmenopausal women associated with metabolic syndrome and non-alcoholic fatty liver disease. Introduction Metabolic syndrome (MetSyn) can be characterized by a complex pathophysiological state which originates from several imbalances associated with caloric intake and energy expenditure. However, it is also affected by genetic/epigenetic factors and the predominance of a sedentary lifestyle over physical activity, among other factors such as food quality and composition, intestinal microbiota composition, and quality of life [1][2][3]. Met-Syn can be described as a group of metabolic conditions that occur together and promote the development of physiological and pathophysiological disorders such as atherogenic dyslipidemia (elevated serum triglycerides (TAG), reduced high-density lipoprotein (HDL), and increased cholesterol), high blood pressure, cardiovascular diseases, and type 2 diabetes mellitus (T2DM) [1,2]. Generally, the critical component of this syndrome is the development of insulin resistance associated with obesity [1,2]. Global data on MetSyn are considered difficult to measure, and prevalence estimates variability based on the criteria used to define MetSyn [1]. However, since MetSyn is about three times more common than diabetes, it is estimated that the global prevalence may be about a quarter of the world's adult population [1]. The prevalence of MetSyn has increased during the last three decades [1], but the understanding of the biology of the syndrome has also been expanded. Currently, several biological mechanisms are considered to cause insulin resistance in MetSyn [3]. Among the proposed mechanisms are endoplasmic reticulum stress [4], inflammation [5], and mitochondrial dysfunction [6], in addition to abnormal lipid metabolism [7] and ectopic accumulation [8]. MetSyn is mainly associated with the development of T2DM and is linked to characteristic dyslipidemia. 
Most obese individuals with T2DM and insulin resistance have an abnormal accumulation of TAG in their livers, characteristic of non-alcoholic fatty liver disease (NAFLD) [7,8]. NAFLD can be determined as the hepatic manifestation of MetSyn and is commonly associated with adjacent metabolic risk factors [7]. The progression of NAFLD can lead to hepatic steatosis, generally recognized as a benign disease, but can progress to non-alcoholic steatohepatitis (NASH). In addition to the presence of steatosis, NASH is typically characterized by lobular inflammation, hepatocyte ballooning, and perisinusoidal fibrosis [9,10]. Additionally, NASH can be a precursor to more severe liver diseases, such as cirrhosis and hepatocellular carcinoma [11]. Therefore, new approaches to the prevention and treatment of MetSyn are necessary to reduce the consequences of MetSyn, improving the quality of life of these individuals. NAFLD is more prevalent in men than women of reproductive age [12]. However, in postmenopausal women, the prevalence of the disease becomes similar to that in men of the same age [12,13]. In recent decades, data from both clinical and experimental studies have revealed that endogenous steroid hormones such as estradiol (E2) contribute enormously to glycemic homeostasis [14,15]. Clinical studies also suggest the pivotal role of E2 in energy metabolism, since decreased estrogen levels during menopause are associated with increased visceral fat and, in turn, metabolic diseases such as insulin resistance, T2DM, and cardiovascular disease. In addition, women after menopause have an increased risk of glucose intolerance, insulin resistance, hyperlipidemia, and visceral fat accumulation [16][17][18]. Clinical studies have also shown that estrogen replacement therapy in postmenopausal women reduces the incidence of T2DM [19]. All of this evidence is closely related to NAFLD [12,20]. Progestogens are also a class of steroid hormones that bind to the nuclear progesterone (P4) receptor (PR) [21]. In addition to this, putative P4 membrane receptors PGRMC (P4 receptor membrane component) 1 and 2 have been identified in various human tissues, including the liver [22]. P4 is the body's primary and most crucial progestogen and is essential in both female and male reproductive systems [23]. However, the literature regarding the effects of P4 on metabolic homeostasis is scarce, and few studies have focused on understanding such effects. It is also known that combined estrogen + progestogen therapy [24] is very effective in controlling the effects of estrogen deprivation during menopause. Because the excessive proliferation of endometrial cells leads to endometrial hyperplasia and cancer, which can result from estrogen-only therapy, progestogens have been administered continuously or sequentially in combination with estrogen to inhibit unwanted endometrial growth [25]. Based on what was observed in clinical studies with postmenopausal women and experimental studies with animals, and knowing that there is a significant gap in the literature on the knowledge of P4 effects on energy metabolism and insulin resistance and that most hormone replacement treatments in women are conducted with conjugate hormones E2 and P4, the main objective of this study was to investigate the potential effects of P4 on energy metabolism and insulin resistance in an animal model of menopause (ovariectomized mice) fed a high-fat diet (HFD), mimicking the effects of most risk factors associated with MetSyn and especially NAFLD. 
Animals

Female mice with a C57BL/6J background were used. They were kept in a temperature-controlled room at 22 ± 2 °C with ad libitum access to food and water and submitted to a 12 h light-dark cycle (light from 6 a.m. to 6 p.m.). At eight weeks of age, the female mice were anaesthetized with isoflurane (~3%) and were ovariectomized (OVX). Then, they were randomly divided into 4 groups: OVX, OVX treated with E2 (OVX-E2), OVX treated with P4 (OVX-P4), and OVX treated with both E2 and P4 (OVX-E2-P4). To study the effect of chronic administration of E2 and/or P4, OVX mice were implanted subcutaneously with pellets releasing placebo or E2 or P4 (or both hormones) (0.05 mg/pellet of E2 for 60 days; 15.0 mg/pellet of P4 for 60 days; America's Innovative Research, Sarasota, FL, USA) at the same time as the ovariectomy. A high-fat diet (HFD) (45% fat, D12451; Research Diets, New Brunswick, NJ, USA) was provided after ovariectomy and pellet implantation and continued for 6 weeks before the experiments. Since it was previously described that ovariectomy and E2 treatment do not affect whole-body insulin sensitivity in regular chow-fed mice, it was decided to study only mice fed an HFD in this work [25]. All experiments carried out here were previously approved following the guidelines of the Ethics Committee of the Ribeirao Preto Medical School, University of São Paulo (CEUA-038/2021).

Glucose Tolerance Test

After 6 h of food restriction, mice were injected intraperitoneally (i.p.) with glucose (1 mg/kg body weight, 10% dextrose). Blood samples for measuring glucose and plasma insulin were taken by tail bleeding at 0, 15, 30, 45, 60, 90, and 120 min after injection, as previously described [26]. Plasma insulin was measured using a commercial ELISA kit (Mercodia, Winston Salem, NC, USA). The area under the curve (AUC) was calculated using the statistical software GraphPad Prism 9.0 in order to use it for statistical analysis.

Liver and Skeletal Muscle Lipid Measurement

After 6 h of food restriction, the animals were euthanized and the tissues were removed for lipid content analysis (liver and skeletal muscle: gastrocnemius). Tissue TAGs were extracted using the method of Bligh and Dyer [27] and measured using a TAG reagent (Bioclin, Brazil).

Aspartate Aminotransferase and Alanine Aminotransferase Measurement

Plasma was removed for analysis of liver enzymes (aspartate aminotransferase (AST) and alanine aminotransferase (ALT)) using the commercial LabTest kit (LabTest, Belo Horizonte, Brazil).

RT-qPCR

The liver tissue was used for RT-PCR. The tissue was removed and 50 mg of the sample was homogenized in 1 mL of TRIzol (Life Technologies) for mRNA extraction. The sample was incubated for 5 min at room temperature (25 °C), then 200 µL of chloroform was added, and the sample was incubated for 15 min at room temperature and centrifuged for 15 min at 2 °C at 12,000 rpm. The aqueous phase containing the RNA was separated; then, 500 µL of isopropanol was added and the sample was placed in a −20 °C freezer for 1 h. The sample was centrifuged for 10 min at 4 °C at 12,000 rpm and then underwent a washing process: the supernatant was discarded, 1 mL of 75% alcohol was added, and the sample was centrifuged for 10 min at 4 °C at 12,000 rpm; this step was performed 2 times in a row. The supernatant was discarded and the RNA went through a dissolution step, in which 50 µL of RNase-free water was added. The RNA concentration was evaluated at 260 nm, and the purity, from the 260/280 nm ratio, was analyzed with the nanodrop device (DeNovix).
Then, the cDNA was prepared through a reverse transcription reaction (High-Capacity DNA kit, Applied Biosystems). A mix containing 10× RT buffer, 25× dNTP mix (100 mM), 10× RT primers, reverse transcriptase, RNase inhibitor, and RNase-free water was prepared and added to the sample. The sample was then run in the thermal cycler. Gene expression was analyzed by RT-PCR (Rotor-Gene Q, Qiagen) with the SYBR Green fluorescent probe (Platinum® SYBR® Green qPCR Supermix UDG, Invitrogen).

Statistical Analysis

Results were analyzed using GraphPad Prism version 9.0 (GraphPad Software, La Jolla, CA, USA). The minimum number of samples per group was defined by an n sufficient to perform the sample distribution analysis using the D'Agostino-Pearson omnibus normality test recommended by the GraphPad Prism 9.0 program. Results were expressed as means ± SD. Each experimental group had between 8 and 10 animals. Statistical analyses were performed using Bartlett's test for homogeneity of variances followed by one-way analysis of variance (ANOVA) and the Bonferroni multiple comparison test. The minimum acceptable significance level was p < 0.05.

Results

The ovariectomy was preceded by sedation and anesthesia through the monitored inhalation of isoflurane. After surgical recovery, the animals were fed an HFD for six weeks. The success of the ovariectomy was verified during the euthanasia of the animals, due to the absence of the ovaries and the intense atrophy of the uterine tubes (Figure 1) [24]. The animals were weighed before starting the diet and after six weeks on an HFD. Our results revealed that in the initial body weights, before the ovariectomy surgery, there were no statistically significant differences between the groups (p > 0.05) (Figure 2A). At the end of 6 weeks on an HFD, there were statistically significant differences in body weight in the OVX + E2 and OVX + E2 + P4 groups (p < 0.05) when compared with the OVX control group and the OVX + P4 group (Figure 2B). To characterize the metabolic phenotype, we performed the glucose tolerance test (GTT). This test measured changes in glucose levels in fasted animals (six hours) within a two-hour interval after the administration of 1 mg/kg of glucose, and the area under the curve (AUC) calculation was considered for statistical analysis. Measurements of insulin levels were also performed during this test, as well as the corresponding AUC calculation. Our results show that there were no statistical differences in baseline glucose levels between the groups (Figure 3A). The GTT revealed differences between the groups (Figure 3B,C; OVX vs. OVX + E2; OVX vs. OVX + E2 + P4; OVX + P4 vs. OVX + E2; OVX + P4 vs. OVX + E2 + P4), showing that the groups treated with E2 displayed improved glucose tolerance. Basal insulin did not show statistical differences between the groups (Figure 3D). Finally, plasma insulin levels during the GTT were also significantly lower in the OVX + E2 and OVX + E2 + P4 groups, indicating greater insulin sensitivity when compared with the OVX control and OVX + P4 groups. This was reflected in the AUC data for insulin (Figure 3E). There were no statistically significant differences between the OVX + E2 and OVX + E2 + P4 groups during the GTT (Figure 3E). There are several hypotheses about the mechanisms leading to insulin resistance. Among these hypotheses, the accumulation of lipids is one of the most important [8]. Thus, we understand the significance of evaluating the TAG content in the liver, skeletal muscle, and plasma.
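Before turning to the tissue lipid results, the following is a hedged sketch of the two calculations described in the Methods: the glucose AUC during the GTT (trapezoidal rule over the sampling times) and the Bartlett / one-way ANOVA / Bonferroni group comparison. The study used GraphPad Prism 9.0; the values below are placeholders, not the experimental data.

```python
# Hedged sketch of the Methods calculations (glucose AUC during the GTT and the
# Bartlett / ANOVA / Bonferroni comparison); the authors used GraphPad Prism 9.0,
# and the values below are placeholders, not the experimental data.
from itertools import combinations
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

time_min = np.array([0, 15, 30, 45, 60, 90, 120])          # GTT sampling times

def gtt_auc(glucose):
    """Area under the glucose curve over the GTT, trapezoidal rule."""
    return trapezoid(glucose, time_min)

example_glucose = np.array([110.0, 280.0, 330.0, 300.0, 260.0, 190.0, 150.0])  # placeholder, mg/dL
print("example AUC:", gtt_auc(example_glucose))

# placeholder glucose AUCs (arbitrary units) for the four groups of mice
groups = {
    "OVX":       np.array([41000.0, 43500.0, 40200.0, 44800.0, 42100.0]),
    "OVX+P4":    np.array([40800.0, 42900.0, 44100.0, 41500.0, 43300.0]),
    "OVX+E2":    np.array([31000.0, 32500.0, 30100.0, 33800.0, 31900.0]),
    "OVX+E2+P4": np.array([30500.0, 33000.0, 31800.0, 32200.0, 30900.0]),
}

print("Bartlett:", stats.bartlett(*groups.values()))
print("ANOVA:   ", stats.f_oneway(*groups.values()))

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)                         # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}  significant: {p < alpha_corrected}")
```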
Our results demonstrated that the OVX + E2 and OVX + E2 + P4 animals presented a significantly lower TAG content in the liver compared with control OVX and OVX + P4 animals (Figure 4A). In skeletal muscle, OVX + E2 and OVX + E2 + P4 mice also displayed reduced TAG content (Figure 4B). There were no statistically significant differences in the plasma TAG concentration among the groups (Figure 4C). AST and ALT are enzymes found primarily in the liver and are used, along with others, to monitor the course of various liver disorders [28]. Our results did not indicate any significant difference between the groups in the analysis of these enzymes (Figure 4D,E). Another hypothesis about the mechanisms related to insulin resistance is inflammation. In our study, we evaluated by RT-qPCR the expression of anti-inflammatory cytokine markers such as transforming growth factor β (TGF-β) and pro-inflammatory cytokines such as interleukin 1β (IL-1β) and tumor necrosis factor-alpha (TNF-α), as well as F4/80, to check for evidence of macrophage infiltration and increased recruitment [29,30]. Our results did not show significant differences between the groups for these markers (Figure 5).

Discussion

Several studies have shown the effects of steroid hormones on metabolic homeostasis, most notably the effects of E2 [14,15,19,31-33]. However, the effects of P4 remain relatively controversial, since the literature on the topic is quite sparse [34,35]. In this work, we report that E2 treatment exerted effects on body weight control in female OVX mice fed an HFD, confirming the effects of this hormone and corroborating the existing literature [13,15,31,36,37], as well as the recognized role of estrogens in glucose metabolism and insulin sensitivity [14,15,19,31-33]. Progesterone administered to mice fed an HFD for six weeks exerted no influence on body weight gain on its own, whereas less weight gain was observed when it was combined with E2. Our results showed an improvement in glucose tolerance and insulin sensitivity and a reduction in TAG accumulation in the liver and muscle with the combined replacement of E2 + P4. This was not observed in animals with P4 replacement alone, which were similar to the OVX control group, showing that P4 had little or no effect on glucose metabolism and insulin sensitivity in these animals. Menopausal women may have increased body fat mass and visceral fat [34,36]. According to Lovejoy et al. [34], the increase in fat mass can be explained by reduced energy expenditure in postmenopausal women. These observations in women also fit experimental models of female OVX mice. In other studies, such as Rogers et al. [35] and previous work by our group [13], OVX animals showed reduced whole-body O2 consumption, CO2 production, and energy expenditure, leading to increased body weight associated with increased fat mass. This work also revealed that, although OVX mice were obese compared with OVX + E2 mice, their food intake was slightly lower than that of OVX + E2 mice, suggesting greater energy efficiency in these animals [13]. Chronic E2 replacement therapy corrected the reduction in energy expenditure and whole-body O2 consumption, and the same study indicated a negative overall energy balance for the treated group, in contrast to a positive overall energy balance for the OVX animals. Camporez et al.
[13] also showed reduced oxidative metabolism of white adipose tissue in OVX animals, associated with reduced expression of UCP-1, Cidea, and PRDM16, which may explain the decreased energy expenditure in these animals, since these proteins are known to improve whole-body energy metabolism when highly expressed in this tissue [13]. All these published data can explain what was observed in our study, with OVX animals treated with E2 displaying reduced body weight and ectopic lipid content. One of the major concerns with NAFLD is that ectopic lipid accumulation has been clearly linked to the development of hepatic insulin resistance and T2DM. Our results demonstrated that the OVX + E2 and OVX + E2 + P4 animals presented a significantly lower TAG content in the liver compared with the OVX and OVX + P4 groups. In skeletal muscle, treatment with E2 and E2 + P4 also reduced the TAG content. Increased ectopic TAG content has been consistently related to increased diacylglycerol (DAG) content in the same tissues [8,33,38,39], which in turn has been consistently associated with the activation of certain novel protein kinase C (PKC) isoforms in muscle and liver in states of insulin resistance. The activation of PKCθ associated with increased DAG content in skeletal muscle has been observed both in insulin-resistant rodents [32,40] and in humans [41]. In the liver, increased DAG content has been associated with the activation of the PKCε isoform both in experimental models of hepatic insulin resistance [40,41] and in humans with hepatic steatosis and insulin resistance [41], leading to impaired proximal insulin signaling. In hepatocytes, PKCε phosphorylates the insulin receptor (IR) on a threonine residue, reducing its ability to autophosphorylate on tyrosine and trigger downstream insulin signaling, leading to hepatic insulin resistance [40]. Previous studies have already demonstrated the ability of E2 to reduce the accumulation of lipids and the activation of PKCε, as observed in an animal model of menopause [15] and in HFD-fed male mice treated with E2 [31]. The results obtained in this work indicate that hormonal replacement with P4 did not interfere with the ability of E2 to reduce ectopic lipid accumulation in muscle and liver. Despite the observed differences in ectopic TAG concentrations (muscle and liver), we did not observe differences in plasma TAG levels. This is in line with what we observed earlier, where OVX mice fed an HFD did not show differences in plasma TAG levels when compared to SHAM and E2-treated OVX animals [13]. This was also observed in other works using different rodent models of obesity and insulin resistance [32,33,42]. These data are also in agreement with previously published work demonstrating that wild-type mice, unlike humans, are resistant to HFD-induced hypertriglyceridemia, and that these animals may even present a reduction in plasma TAG levels with very prolonged HFD feeding [43-45]. For this reason, knockout animals (ApoE−/− and LDLR−/−) are widely used for the study of atherosclerosis [46]. Changes in the profiles of pro-inflammatory and anti-inflammatory cytokines, such as IL-1β, TNF-α, and TGF-β, have also been observed in NAFLD [47-49]. In our study, in addition to the markers mentioned here, we evaluated other markers of inflammation, such as F4/80, to check for signs of macrophage infiltration and increased recruitment and/or possible liver injury, as well as analyzing the liver enzymes AST and ALT [30,50].
The results revealed no changes between groups. It is important to emphasize that, even in clinical studies, overweight or obese patients with NAFLD are often asymptomatic and have normal liver function tests [50]. The biochemical pattern of increased inflammation and elevated liver enzymes is more commonly associated with NAFLD progression and NASH [50]. The obesity observed in OVX mice was also associated with whole-body insulin resistance and impaired glucose tolerance [13,26]. Previous studies by Riant et al. [25] and Camporez et al. [13] already showed glucose intolerance and insulin resistance in OVX mice fed an HFD, and E2 replacement reversed these effects. Studies regarding P4 and insulin resistance are scarce; however, studies by our group [13] showed that intact female mice were protected by endogenous E2 from HFD-induced insulin resistance only in skeletal muscle, while exhibiting hepatic insulin resistance, as did OVX mice [13]. SHAM animals clearly exhibit higher P4 concentrations than OVX and OVX + E2 mice. Camporez et al. [13] proposed that higher plasma P4 levels in SHAM mice could nullify the beneficial effects of endogenous E2 on insulin action in the liver. A possible mechanism by which P4 could prevent the effects of E2 is an increase in the expression of estrogen sulfotransferase, a primary enzyme responsible for estrogen inactivation, induced by elevated P4 [51,52]. However, our observations showed that combined E2 + P4 replacement gave an indication of improved glucose tolerance and insulin sensitivity, similar to replacement with E2 alone. Another work by Lee et al. [51] showed that P4 could suppress gluconeogenesis after plasma insulin induction under normal conditions in a mouse model. However, P4 can increase blood glucose via gluconeogenesis in parallel with increases in the expression of Pgrmc1, a novel membrane receptor for P4, and of phosphoenolpyruvate carboxykinase (PEPCK), the key enzyme mediator of gluconeogenesis, in mice under conditions of insulin deficiency and insulin resistance, which may exacerbate hyperglycemia in diabetes, where insulin action is limited [51]. Nonetheless, it was not possible to observe any effect of P4 in OVX mice in our study. Hormone replacement therapy is the most effective way to alleviate menopausal symptoms, such as vasomotor symptoms and the genitourinary syndrome of menopause, and to prevent bone loss and fracture [53]. However, the treatment must be individualized and the patient's history considered, weighing the benefit-risk ratio of the treatment. According to The American Menopause Society, combined treatment of E2 with P4 should always be indicated in the case of women with a uterus, mitigating the possible carcinogenic effects of E2 on this organ [53]. In addition, there is no indication for hormone replacement therapy with progesterone alone, such as the treatment performed in our study. Our work aimed solely at a mechanistic study of the possible effects of progesterone on glucose metabolism and insulin resistance, and under no circumstances does it indicate that the treatments carried out in our work should be applied to humans. Therefore, our results revealed that in HFD-fed mice, P4 replacement alone does not seem to influence glucose homeostasis or ectopic lipid accumulation, showing results similar to those of OVX animals. Combined E2 + P4 hormone replacement improved glucose tolerance and insulin sensitivity and reduced the accumulation of ectopic lipids, similarly to replacement with E2 alone.
P4 showed little or no additional effect in these tests when combined with E2. Consequently, we believe these results will help expand knowledge about hormone replacement in menopausal women and its effects on the metabolic homeostasis implicated in pathophysiologies such as MetSyn and NAFLD. Institutional Review Board Statement: The animal study protocol was approved by the Institutional Ethics Committee of the Ribeirao Preto Medical School, University of Sao Paulo (protocol CEUA-038/2021). Informed Consent Statement: Not applicable. Data Availability Statement: The data are not available in a public archive or database. Conflicts of Interest: The authors declare no conflict of interest.
Applying the Rasch Growth Model for the Evaluation of Achievement Trajectories

Considerable interest lies in the growth in educational achievement that occurs over the course of a child's schooling. This paper demonstrates a simple but effective approach for the comparison of growth rates, drawing on a method first proposed some 80 years ago and applying it to data from the Australian National Assessment Program. The methodology involves the derivation of a 'meta-metre', a quantitative mode of variation in growth, which permits comparison between groups defined by time-invariant characteristics. Emphasis is placed upon the novel characteristics of the method and the valuable information it can provide. Unlike complex modelling procedures, the approach provides a parsimonious model of growth suited to comparisons between groups.

Introduction

Recent government, public policy and academic recommendations have highlighted the need to measure student achievement over the course of schooling, rather than concentrating on a specific timepoint (e.g. Department of Education and Training, 2018; Goss et al., 2018; Masters, 2020; McGaw et al., 2020). Within the Australian educational landscape, the focus of school-age reporting of achievement has traditionally centred on performance at a single timepoint as measured by the National Assessment Program - Literacy and Numeracy (NAPLAN) (Department of Education and Training, 2018). While such approaches provide meaningful information on educational achievement relative to the population, they fail to address variation in the level of growth for students with differing levels of academic aptitude. They also mask the extent to which student proficiency increases relative to previous performance, particularly for students entering school with low levels of literacy and numeracy. Interest thus lies in a shift away from static measures to a focus on achievement over the course of a child's schooling. Such views are highlighted in several recent reports (e.g. Goss et al., 2018; Masters, 2020; McGaw et al., 2020) and emphasise a need to focus on variation in levels of achievement in the early years of schooling and the subsequent progress that takes place over time, features respectively referred to as a student's 'initial status' and 'growth rate'. One area of interest raised by Masters (2020) is variability in growth between students from different backgrounds, with the possibility of increasing equity and inclusivity through targeted interventions. To implement such recommendations, whether through policy or pedagogy, methods appropriate for analysing growth between groups are required. Growth modelling approaches that accommodate varying starting points and trajectories, such as the Rasch Growth Model (RGM), are a logical consideration. To effectively measure the rate of growth across schooling, appropriate methods are required. Growth-oriented methodological approaches accommodate varying starting points and trajectories but are often complex in their specification and interpretation (Curran et al., 2010), requiring specialist statistical software. An alternative methodology, first proposed some 80 years ago, involves the derivation of 'salient features of growth' (Rao, 1958, p. 1). This paper demonstrates how the application of this approach to a set of longitudinal student assessment data can provide statistics that lend themselves to a presentation of easily comparable linearised results.
The approach provides valuable information regarding the overall trajectories of groups of students and demonstrates utility in the evaluation of achievement over time. The visual representation of the model results, as well as their summative properties, makes the RGM particularly useful for communication of information regarding student growth. The RGM proposes a mechanism for evaluating differences in growth across a variety of phenomena (Olsen, 2003). One application for which this approach may be appropriate is the modelling of achievement trajectories for school-age children, an area with which Rasch's probabilistic models are most commonly associated (e.g. Rasch, 1961). Since children's literacy and numeracy abilities vary when they enter schooling and as a result of learning through schooling, it is feasible to measure and evaluate differences in these two features. To do so, one typically seeks to manifest a construct of interest through a set of assessment items (i.e. to elicit a behaviour which demonstrates the psychological phenomenon, such as a performance of reading ability demonstrated through a reading test) and subsequently models the variation in change over time by applying a set of methodological approaches commonly referred to as growth modelling (Williamson, 2016). In doing so, the trajectories of achievement for individuals and groups may be compared. The following paragraphs provide a summary of the interdisciplinary features of the RGM, its relationship to educational assessment and measurement, and its embedding within Rasch's more widely recognised work. This approach is then described and applied, with the statistical and descriptive features of the model demonstrated using examples from NAPLAN.

The Rasch growth model

The RGM was first proposed by Georg Rasch in 1940 and subsequently articulated in a series of lectures presented at the 1951 meeting of the International Statistical Institute (Olsen, 2003). In both instances, Rasch's focus was on the physiological growth of animals and an attempt to derive a 'simple elementary growth law… [with] time expressed in the physiologically adequate unit of time' (Olsen, 2003, p. 65). Rasch later applied the same methodology in the analysis of economic data, at which time he emphasised the key statistical features associated with the model (Rasch, 1972). In each application of the RGM, irrespective of the domain to which it was applied, the core feature lay in the capacity to derive the primary features of growth for the purpose of efficient comparisons between groups (Rao, 1958). The RGM approach attempts to derive an estimated rate of growth for each individual, proportional to the increase over time for the population (Rao, 1958). Underlying this is an assumption that observations represent functional changes alongside a continuous variable, time. Methods of this type provide a parsimonious summary of individual differences (McArdle & Nesselroade, 2003), albeit with known limitations in measurement at the individual level relative to more complex methodologies. The RGM aims to identify the principal sources of variation in growth, conveniently expressed as a set of derived variables. Such an approach is consistent with time-ordered analyses that incorporate and acknowledge the role of both individual and group-level differences (Duncan & Duncan, 1995).
This has relevance within the context of evaluating differences in developmental trajectories, a point highlighted by Meredith and Tisak (1990) in their call for wider recognition of such procedures by both biological and behavioural researchers.

The Rasch measurement model

The Rasch Measurement Model (RMM) serves a comparative function in which data derived from a testing instrument (e.g. an assessment) can be compared to expectations set under fundamental principles of measurement (Andrich, 2004). These fundamental principles relate to the specific structure of relations between attributes and those relations being wholly attributable to the measurement act itself (i.e. not derived from other measures). Such properties permit a comparison in the degree of differences between two measures (i.e. additivity), but not in the ratio of them (i.e. multiplicativity). In this way, such relations may be considered similar to what Stevens (1946) described as 'interval scale measurement'. Such features are common to the physical sciences (e.g. temperature as measured in Fahrenheit or Centigrade) and readily permit analysis using linear statistics. The RMM imposes a priori restrictions on both the model and parameters used to account for the observed structure of data (Andrich & Marais, 2019). In this way, a data set (the numerical summarisation of the qualitative aspects of a testing instrument) can be said to meet the requirements for measurement when it conforms to the structure specified by the RMM, thus permitting quantitative conclusions (Duncan, 1984). It can be argued that the methodology applies an approach consistent with that espoused by Kuhn (1977), in which the merit of the procedure lies not in appraising the appropriateness of a set of models to fit the data, but instead in assessing whether observed data suitably represent features expected under fundamental measurement through evaluation of conformity with a pre-specified model. In the dichotomous RMM, the probability that a person will respond correctly to an item is dictated by the interaction between the person taking the test and the item used to measure the underlying construct. The person's estimated level of ability determines the likelihood they will respond correctly to the item, given the item's level of difficulty. This is achieved by calculating the difference between ability and difficulty under certain algebraic constraints. Such an estimate of ability can be conceptualised as the individual's location along a trait continuum, varying according to their capacity. A test conforming to the RMM can be used to ascertain an estimate of ability on a construct that is independent of the specific set of items used to make that assessment. This fundamental feature of the RMM, known as specific objectivity, states that the comparison of two people should be independent of the items used to assess them, and similarly that the comparison of two items should be independent of the individuals used for the comparison (Rasch, 1977). The algebraic separation of person and item parameters underlies this notion, ensuring that each parameter can be eliminated in its counterpart's estimation. Interestingly, this same feature, consistent with requirements for objective measurement, is present within the RGM (Olsen, 2003).

Measuring growth in educational achievement

The RMM can be implemented to evaluate whether quantities under investigation conform to fundamental properties of measurement when growth trajectories are estimated (Williamson, 2017).
By applying the RGM to valid and reliable measures of educational achievement that meet the requirements of the RMM, an attempt can be made to derive a set of summative parameters that characterise the growth trajectories of both individuals and groups. This is undertaken under the assumption that performances observed from psychometrically sound measures of educational achievement represent functional changes associated with skill acquisition through structured learning (i.e. time spent at school). Such an assertion is consistent with developmental theories that posit an asymptotic decrease in the rate of educational achievement over time (Francis et al., 1996). However, like all approaches that are guided by substantive theory, proposed models require subjection to empirical evaluation through data analysis (Williamson, 2016).

Aim, objective and research question

This article addresses the following question: Can Rasch's Growth Model be used to measure differences in achievement trajectories by representing growth as a function of time? Meaningful application of the RGM will be demonstrated through the use of an example comparing the initial status and growth rates of students in separate Australian states and territories for the reading domain of NAPLAN (i.e. the domain with the highest latent correlation between the NAPLAN assessment domains). Emphasis is placed upon the valuable information that can be derived about achievement trajectories using this model, as demonstrated by visualisation and comparison of growth parameters that are likely to be interpretable by a range of audiences.

Methodology

Data sources

NAPLAN provides annual, point-in-time information regarding student achievement across four domains (reading, numeracy, conventions of language and writing) at the level of the student, school, states and territories (referred to collectively as jurisdictions), and Australia as a whole. NAPLAN assessments are completed by students in Grades 3, 5, 7 and 9. The information collected as part of the program facilitates the monitoring and reporting of performance of specific groups and across all six states and two territories. While NAPLAN assesses performance across both primary and secondary education, the focus currently lies on reporting levels of achievement based on observations at a single timepoint, with limited supplementary reporting at two timepoints, between grades three and five and between grades seven and nine (Australian Curriculum, Assessment and Reporting Authority [ACARA], 2013). By adopting a longitudinal approach to data analysis that incorporates individual-level data measured across all timepoints, a more cogent understanding of educational achievement trajectories in each jurisdiction can be developed. Authorisation for additional analysis and research using this data was provided through formal ACARA channels.

Measures

The measure of interest, student achievement in reading (which typically demonstrates the highest latent correlation with other NAPLAN domains and similarly correlates with other standardised reading assessments, such as PISA; see e.g. Lumsden et al., 2015), took the form of grade-specific weighted-likelihood estimates (Warm, 1989) of reading ability measured in logits, derived using the RMM, as applied across multiple years of NAPLAN assessment (ACARA, 2020). This fulfilled the requirement stipulated by Williamson (2017) that fundamental properties of measurement be attributable to base quantities when growth trajectories are estimated.
These estimates were equated onto the NAPLAN reporting scale, which became the unit of analysis. The NAPLAN reporting scale spans all tested grades (i.e. Grades 3, 5, 7 and 9) and is standardised using a mean of 500 and standard deviation of 100, with scores ranging from approximately 0 to 1000 (ACARA, 2020). This process places reported results across assessment years (e.g. 2013, 2015, 2017, 2019) on the same scale, thus permitting comparability.

Data preparation and processing

Prior to model implementation, data appropriate for longitudinal analysis was sourced and prepared. Separate data sets containing matched 'gain' data from the 2013, 2015, 2017 and 2019 NAPLAN assessments were provided to the author for the purpose of data linkage and analysis. As current reporting and analysis techniques do not necessitate the linkage of student data across NAPLAN assessments, a considerable degree of variability exists in the consistency and use of common student identifiers. As a result, a small selection of the large number of variables pertaining to each student was used in the matching process. This was undertaken with the view to maximising the likelihood of a correct correspondence in assessment information across timepoints. Data was first partitioned such that only cases from the appropriate grade and year were retained (i.e. only Grade 3 data was retained from the 2013 dataset, only Grade 5 data was retained from the 2015 dataset, etc.). The following variables were subsequently used in the matching process:
- Student Identifier;
- Date of Birth;
- School Identifier associated with the 2017 (Grade 7) and 2019 (Grade 9) datasets;
- Previous NAPLAN Reading score.
As the RGM requires complete data across all timepoints of interest, cases with missing data were removed. There was considerable variability in the number of students with complete data across all four timepoints in each jurisdiction. To address these imbalances and potential privacy-related issues, a senate-weighted, random sampling procedure was then undertaken in which 1000 students from each jurisdiction with complete data were selected. Due to issues associated with data access and its linkage, data from one jurisdiction was excluded, resulting in a total of 7000 students being included in the analysis sample. A single time-invariant variable was selected for further analysis: jurisdiction. Jurisdictional data was re-categorised using an anonymised, numerical identifier. Comparisons were subsequently made of descriptive statistics for each of the variables relative to the 2019 'gain' data set, with approximate equivalence found between the two datasets. Computation of the level of negative 'gain', in which student progress decreases over the measures of interest, was also undertaken. During this process, distribution checks for concordance with assumptions of analysis of variance (ANOVA), including normality of distributions, were undertaken.

Analytic methods

The RGM proposes the existence of a quantitative mode of variation in growth, referred to as the 'meta-metre', that is common to all individuals, providing relevant information for the comparison of average growth curves (Rao, 1958). In deriving this 'age transforming function', as Rasch first described it (Olsen, 2003), we assert that the rate of growth of an individual is directly proportional to the meta-metre, thus allowing comparisons characterised by linear relationships (Rao, 1958).
Such relationships can be expressed in terms of a single growth parameter, retaining dynamic consistency, whereby individual and group-level differences can be expressed by the same mathematical function (Keats, 1980). In this way, while there exists a trajectory for the population, each individual retains an estimate that varies relative to the group as a whole (McNeish & Matta, 2018). These variables are defined by the data, not a priori, allowing for the modelling of individual growth trajectories. In the context of educational achievement, the RGM asserts that

Y_nt = a_n + b_n τ(t),

where Y_nt is domain-specific achievement for student n at timepoint t; a_n is the overall achievement over the time period of interest; b_n is the rate of growth in achievement for student n; and τ(t) is the time transforming function, referred to by Rasch as the meta-metre (Olsen, 2003). This can be conceptualised as a single structural equation expressing growth rate, with coefficients a_n and b_n specifying the growth rate parameters (Stone, 2020). Unlike behavioural models which utilise multiple structural equations, typically requiring specialised software and knowledge for their specification and interpretation (Curran et al., 2010), the RGM models the group-level effect on the dependent variable, given the meta-metre (Stone, 2020). Through this approach, the RGM asserts that the derived meta-metre should be equivalent for all individuals, allowing one to generalise growth in academic achievement and thus meaningfully represent deviations from it (Stone, 2020). Estimation of the meta-metre and of the parameter estimates used for comparing relative rates of growth is presented in Appendix 1. These estimates permit the comparison of individuals and groups of interest. The independent separation of these parameters, which incorporate the repeated measurements, allows the use of univariate procedures for significance testing. Independent samples t-tests, one-way ANOVA and Tukey's honestly significant difference (HSD) post hoc test were used to reveal the significance and degree of differences between parameter estimates across jurisdictions. These procedures provided tests of the null hypothesis that rates of growth and initial status were the same (Greenland et al., 2016). The linear relationship between growth and time, conditional on the meta-metre, allowed growth to be represented as straight lines on a plot. These straight lines are easily interpretable as continuous changes, serving useful purposes in the identification of trends (Peebles & Ali, 2015). This approach avoids the issue of non-developmental behaviour (i.e. negative growth) characterised in quadratic representations (Williamson, 2016). As per the recommendations outlined in Nese et al. (2013), sets of growth plots, which permit more readily detected differences in growth, were subsequently used to present variation between groups. Differences in achievement may also be represented as a function of time, providing another effective and interpretable way to visualise growth trajectories (Singer & Willett, 2003). A feature consistent with the RGM is the capacity to transform time such that, by a common transformation, individual growth curves are linearised, with slight variations attributable to error (Rao, 1958). This can be achieved by taking each timepoint to its natural logarithm (i.e. the logarithm of each timepoint taken to the base of the constant e).
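To make the estimation route just described concrete, the following is a minimal Python sketch (Python is used here purely for illustration; the original analysis was run in SPSS). It assumes the meta-metre is instantiated as the vector of per-timepoint sample means and that each student's initial status and growth rate are obtained by regressing that student's scores on the meta-metre; the scores below are invented toy values, not NAPLAN data.

```python
import numpy as np

# Toy data: rows = students, columns = timepoints (e.g. NAPLAN Grades 3, 5, 7, 9).
# The values are hypothetical scale scores used only to illustrate the computation.
Y = np.array([
    [350., 470., 530., 560.],
    [420., 520., 580., 610.],
    [300., 410., 480., 520.],
])

# Meta-metre instantiated as the per-timepoint sample means (Rao's estimator).
meta_metre = Y.mean(axis=0)

# Per-student regression of Y_nt on the meta-metre:
#   slope     -> relative growth rate B_n
#   intercept -> initial-status parameter A_n
B = np.empty(Y.shape[0])
A = np.empty(Y.shape[0])
for n, y_n in enumerate(Y):
    B[n], A[n] = np.polyfit(meta_metre, y_n, deg=1)

print("meta-metre:", meta_metre)
print("growth rates B_n:", B.round(3), "mean:", round(B.mean(), 3))
print("initial status A_n:", A.round(1), "mean:", round(A.mean(), 1))
```

Averaging the per-student slopes obtained this way yields exactly 1.0, and the intercepts average to 0.0, which matches the expectations stated for the normalised estimates in the results below.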
To paraphrase Rasch, this process, utilising the meta-metre, allows one to measure time in a particular way, allowing a uniform description of growth curves for all individuals considered (Olsen, 2003). Estimates of initial achievement Â_n and rate of growth in achievement B̂_n were calculated for all individuals within the sample, as well as for the overall sample and each jurisdiction. These calculations were performed using SPSS (IBM Corp., 2019); however, any non-specialist statistical software may be utilised. As outlined in the Analytic Methods section, the estimated normalised relative rate of growth for the overall sample had a mean of 1.0 (SD = 0.45) and the estimated normalised achievement over time (i.e. initial status) had a mean of 0.0 (SD = 251.41), consistent with expectations. Overall grade means evidenced an asymptotic decrease in the rate of growth over time (i.e. a decrease in the level of growth occurring between grades over time), prototypical of educational achievement trajectories, as shown in Table 1.

Differences in jurisdictional growth

Comparison of jurisdictional growth parameters. Grade means for each jurisdiction evidenced a similar asymptotic decrease in the rate of growth over time as that evidenced overall, as shown in Table 1. The normalised relative growth rate and initial status estimates for jurisdictions are presented in Table 2. These provide an indication of the comparability of jurisdictions in their level of growth and the level from which growth commences, which were subsequently tested for significance. Noting the linearisation of growth trajectories and the independent separation of parameters, one-way ANOVA was used to ascertain whether observed differences in jurisdictional growth rates and initial status were statistically significant. There was a statistically significant difference observed between jurisdictional growth rates (F(6,6993) = 14.16, p < .05). Tukey's HSD post hoc test was applied to reveal the jurisdictions to which the statistically significant differences in growth rate applied and their degree. As shown in Table 3, 11 of the 21 paired comparisons between jurisdictions showed significant differences in growth rate, the largest being 0.14 units. There was also a statistically significant difference observed between the initial statuses of jurisdictions (F(6,6993) = 29.43, p < .05). Tukey's HSD post hoc test was applied to reveal the jurisdictions to which the statistically significant differences in initial status applied and their degree. As shown in Table 4, 13 of the 21 paired comparisons between jurisdictions showed significant differences in initial status, the largest being 122.15 units. Visual representation of differences between jurisdictions. It is conceivable to represent differences in means simply by regressing the grade means for each jurisdiction on the overall sample means, as displayed in Figure 1. These correspond to the values obtained from Table 1. Alternatively, the linear relationship between achievement and time, conditional on the meta-metre, permits a representation of growth for separate groups as straight lines. Figure 2, Figure 3, Figure 4 and Figure 5 show the comparison of grade means for select pairs of jurisdictions, plotted against the transformed means.
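Before turning to the plots in more detail, the significance testing just described can be sketched with non-specialist tools. The following hedged example runs a one-way ANOVA and Tukey's HSD on per-student growth-rate estimates for three hypothetical jurisdictions; the group labels, sample sizes and values are illustrative only, with SciPy and statsmodels standing in for the SPSS procedures used in the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical per-student growth-rate estimates (B_n) for three jurisdictions;
# real values would come from the RGM fit described above.
rates = {
    "J1": rng.normal(1.05, 0.45, 1000),
    "J2": rng.normal(0.95, 0.45, 1000),
    "J3": rng.normal(1.00, 0.45, 1000),
}

# One-way ANOVA across jurisdictions
f_stat, p_val = stats.f_oneway(*rates.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD post hoc test to locate the significant pairwise differences
values = np.concatenate(list(rates.values()))
groups = np.concatenate([[name] * len(v) for name, v in rates.items()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```

The same two steps would be repeated for the initial-status estimates, mirroring Tables 3 and 4.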
The line representing growth for each jurisdiction can be characterised by two parameters, identified within the equation on each of the plots, which are consistent with the growth rate and initial status estimates displayed in Table 2, with the corresponding x-axis values obtained as the transformation of the sample means from Table 1. By presenting the comparison of a pair of jurisdictions in each plot, the individual trajectories of growth for each jurisdiction may be compared. Growth trajectories can similarly be represented as a function of time. Attempting to characterise the grade means of jurisdictions from Table 1 as a function of linear time (i.e. across the four timepoints) resulted in poor model fit, evidenced by large residuals and shown in Figure 6. An alternative approach is to fit a quadratic model representing a curvilinear functional form, thereby minimising residuals and increasing the variance explained by the model due to the monotonically decreasing rate of growth. This representation is shown in Figure 7.

Characterising growth in two parameters

By applying the RGM to a longitudinal data set containing individual measures of educational achievement in the form of NAPLAN scale scores and using time-invariant variables pertaining to student jurisdiction, it was possible to derive a pair of parameters by which trajectories could be compared: the growth rate and the initial status. These parameters are single-value summaries of the quality of the trajectories, allowing for a relatively straightforward approach to comparison and interpretation. Utilising the independent separation of these parameters, univariate tests of significance and multiple comparisons (i.e. post hoc tests) were applied to determine the degree to which differences were statistically significant. These two parameter estimates were subsequently used to visually represent the variation in growth trajectories between each jurisdiction. The degree of variability in the initial status and growth of jurisdictions was considerable. By combining the use of statistical methods and visual representation, it was possible to observe the degree to which differences exist simply by comparing the linear trajectories. Results, presented graphically, visually emphasised areas of variability while also demonstrating qualities associated with the RGM. For instance, Figure 2 clearly portrays that while jurisdiction 2 starts with a higher level of mean achievement, as reflected in the grade three mean, the rate at which achievement grows in jurisdiction 1 was considerably greater. Confidence in this assertion was provided via the use of ANOVA. Such findings can be contrasted against those shown in Figure 3, in which the rates of growth between jurisdictions 1 and 6 do not differ (i.e. no statistically significant difference in growth rate was found), as exemplified by the parallel trajectories. Such descriptive measures provide a visually appealing method that can be used to effectively communicate key observations to a range of stakeholders. Furthermore, while descriptive in their presentation, the results point to areas for further investigation using both qualitative and quantitative methods. It is noted that there exists a wide variety of current growth models that incorporate individual variability and permit the modelling of complex developmental theories (Duncan & Duncan, 1995), including those embedded within an IRT framework (e.g. von Davier et al., 2011).
Such approaches often draw on the traditions of hierarchical linear modelling (HLM; Raudenbush & Bryk, 2002) and structural equation modelling (SEM; Meredith & Tisak, 1990). While the RGM serves as an effective, niche method for summarising and describing results, providing what Goss et al. (2018) describe as a needed approach that accommodates non-linear rates of growth for the purpose of comparing relative student progress, additional value could be sought through the comparison of group estimates derived under different methodologies. In the case of the RGM, the linearisation of growth rates via the instantiation of the meta-metre, conforming to properties consistent with objective measurement, provides a benefit that may not be realised in alternative approaches. Noting the use of the meta-metre in deriving comparative estimates, a focus for further research lies in the impact of violations of this time transforming function, and the conditions under which these are likely. While Rao (1958) describes a process in which the existence of a common transformation can be empirically tested, the degree to which such violations may impact parameter interpretation has not been investigated. Due to the novelty of these methods, further inquiry into such conditions would be an appropriate next step in evaluating the application of the RGM. Future applications would also benefit from exploration of the error terms associated with the RGM parameter estimates. Citing Brody (1993), Stone (2020) highlights that the approach used in the model essentially averages over incidentals (i.e. individual variation and error) with the view to overwhelming individual levels of variation and moving to a superordinate level of aggregation. While this may provide appropriate summative properties, the implications of doing so, as well as the consequences of incorporating measurement error into specific measurement types such as weighted-likelihood estimates (von Davier, Gonzales, & Mislevy, 2011), require evaluation.

Limitations of the RGM

While the RGM provides an effective method through which group growth can be described and compared, the model itself is not explanatory. The purpose of the model is largely descriptive; therefore, no attempt is made to provide clarity on the multitude of possible factors contributing to observed differences. While providing an effective method for reporting and lending itself to use as a preliminary step in research activities, the testing of complex theories is reserved for alternative models. As broad sets of such methodological approaches exist, it is recommended that further research compare estimates derived from such models against those of the RGM, while also considering the properties associated with their outcomes. A further limitation of the RGM is the requirement for complete longitudinal data. While the model retains dynamic consistency (Keats, 1980) as a series of differential equations modelling rate of change as a function of the state of a variable, the incorporation of individual differences in the estimates of group-level summative parameters results in a requirement for non-missing data. One outcome of this is the possibility of correlations between rates of attrition and other variables of interest. While the current application pertains to a census-based assessment with very high participation rates (i.e.
greater than 95%; ACARA, 2019), the possibility remains that high rates of withdrawal or attrition could feasibly give rise to systematic biases within sub-groups that may be explored, thus limiting the interpretability of results. Similarly, while the present analysis was undertaken predominantly for demonstrative purposes, it should be noted that it was applied to a limited data set. While efforts were made to retain consistency with the original data through a comparison of the distribution of variables of interest, the analysis and results may include sampling biases that obscure true results. For example, as no data was available for one jurisdiction, the present derivations are based on a sub-set of the total population. Equally, there were data linkage challenges that resulted in a loss of cases through the matching of students across each NAPLAN cycle (2013, 2015, 2017 and 2019). To resolve this, the introduction of universal student identifiers for the purposes of data linkage would likely be required, particularly if such approaches were to be implemented at scale. While recommendations put to government have advocated for the implementation of these to serve the needs of longitudinal approaches (e.g. Department of Education and Training, 2018), such decisions may require further ethical examination prior to implementation (Arnold, 2013).

Conclusion

This research project investigated the application of the RGM for evaluating differences between jurisdictions. This came in response to calls for reporting of progress in student achievement throughout the course of schooling. It utilised the summative properties of the RGM and its capacity to represent growth in a manner interpretable to a range of audiences (e.g. educational researchers, statisticians and policy makers) to evaluate variation in NAPLAN reading achievement between jurisdictions. Drawing on the desire to measure growth over the course of schooling, this process of presenting results visually, supported by significance testing, permits ease of use for both government and academic audiences. Despite the limitations associated with the uniqueness of this under-utilised method, the present investigation has provided support for the effectiveness and efficiency of the approach to growth modelling first proposed by Rasch. It has demonstrated that the RGM allows for the efficient comparison of jurisdictions in a manner that is both statistically rigorous and accessible. Such an approach provides both a novel and effective means by which growth may be reported and an avenue for further investigation into group-level differences via qualitative and quantitative research.

Appendix 1

Importantly, this can be expressed in terms of Â_n, the estimated normalised, relative achievement over time (i.e. initial status), and B̂_n, the estimated normalised, relative rate of growth. It is noteworthy that it is feasible to estimate Â_n and B̂_n either by regressing Y_nt on Ŷ_·t, in which case Â_n and B̂_n are dependent, or by estimating B̂_n independently of Â_n and then estimating Â_n, as expressed by

y_nt = Y_nt − Y_n(t−1) = [A_n + B_n Ȳ_·t] − [A_n + B_n Ȳ_·(t−1)] = B_n (Ȳ_·t − Ȳ_·(t−1)), t = 1, 2, 3, …, T (11)

and the estimate ŷ_nt given by ŷ_nt = B̂_n (Ȳ_·t − Ȳ_·(t−1)).
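A minimal sketch of the second, independent estimation route implied by equation (11): since individual gains are proportional to the gains in the meta-metre, B̂_n can be estimated from the ratio of a student's gain to the gain in the sample means, after which Â_n follows from the level of the scores. The values and the specific ratio estimator used here are illustrative assumptions, not the exact computation reported in the paper.

```python
import numpy as np

# Toy scores: rows = students, columns = Grades 3, 5, 7, 9 (illustrative values only).
Y = np.array([
    [350., 470., 530., 560.],
    [420., 520., 580., 610.],
    [300., 410., 480., 520.],
])

meta = Y.mean(axis=0)                          # estimated meta-metre (per-timepoint means)

# Independent route: growth rate from gains (equation (11)), then initial status.
indiv_gain = Y[:, -1] - Y[:, 0]                # each student's total gain
meta_gain = meta[-1] - meta[0]                 # total gain of the meta-metre
B_hat = indiv_gain / meta_gain                 # relative rate of growth, mean = 1.0
A_hat = Y.mean(axis=1) - B_hat * meta.mean()   # initial-status parameter, mean = 0.0

print("B_hat:", B_hat.round(3), "A_hat:", A_hat.round(1))
```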
Monitoring and meaning of vibrations in robot polishing

Robot polishing is increasingly used in the production of high-end glass workpieces such as astronomy mirrors, lithography lenses, laser gyroscopes or high-precision coordinate measuring machines. The quality of optical components such as lenses or mirrors can be described by shape errors and surface roughness. Whilst the trend towards sub-nanometre-level surface finishes and features progresses, matching both form and finish coherently in complex parts remains a major challenge. With larger or more precise optics, process instabilities have a greater impact on the quality of the optics to be polished. Vibrations at a polishing head have a negative influence on the polishing result. These vibrations are caused by bearing damage, motors and other excitations. The present work combines the key technologies of the 21st century "Optical Technologies", "Condition monitoring" and "Sensor use for 100% control" with the objective of contributing to an increased understanding of the polishing process for optical surfaces and to process control. The experimental and theoretical investigations carried out within the framework of this work on the mechanisms of action of the process, as well as on the technological interrelationships of the parameters influencing vibrations, are intended to provide further fundamentals for the robotic polishing of glass. Special attention is paid to the application of condition monitoring to the largely empirical process technology of glass polishing. On the one hand, the use of sensors serves business motivations such as plannable maintenance measures, shortened downtimes and cost minimisation, especially predictive maintenance and condition monitoring. On the other hand, the sensors and actuators can be used to make scientific statements and achieve repeatable results, including the observation of new effects that otherwise remain hidden due to process divergence.

Introduction

Today, polishing is still a very skilled process based mainly on experience and empiricism. Due to its complexity, the mechanism of action of the process is still not fully understood today [1-7]. Compared to other ablative processes, such as grinding and milling [8], the influence of machine dynamics in polishing has been insufficiently scientifically investigated. According to the "Steering Committee Optical Technologies" (original: Lenkungskreis Optische Technologien), process control in polishing is a challenge for the 21st century. Due to the large number of process-relevant influencing variables, process control is difficult. The Steering Committee recommends, at least for preferred glasses, the investigation of the parameters, the monitoring of the polishing agent, as well as the integration of sensors and measuring technology for online surface assessment [9]. The American counterpart "Harnessing Light" describes Computer Controlled Polishing (CCP) and the production of high-precision optics, for example for EUV technology (extreme ultraviolet radiation), as one of the key technologies of the 21st century. Here, roughnesses of 0.1 nm rms and 1 nm Peak-to-Valley (PV) are achieved [10]. For sustained repeatability, the performance of the manufacturing processes must be increased [11].
The main component of the polishing process is a polishing tool that is passed over the glass surface. All material removal takes place in the polishing gap, the area between the polishing tool and the glass surface. The polishing tool usually consists of an elastomer and a polishing film, the viscoelastic polishing agent carrier. Due to the elastic behaviour of the material, the polishing tool clings to the glass surface, even if it is uneven. The polishing gap usually contains a polishing suspension of water and polishing grains: the amount of polishing grains in the gap determines the process, both for roughness and for material removal [12]. Process vibrations lead to a pump-like effect, resulting in an increased polishing agent feed in the gap. In addition, vibrations lead to higher local temperature input and greater process fluctuation of the normal force. Contact between the polishing tool and the workpiece can also occur: ideally, both should only touch the polishing grains. "Very slow and vibrationless speeds" are recommended for high-quality surfaces with less roughness [13]. Accordingly, controlled vibration support is recommended to maximise material removal. Due to the structure of moving elements, every optical production machine has characteristic frequencies and vibrations. Due to different construction types and manufacturing tolerances, these are individual for each machine. Similar frequencies are calculated for similar machines in the Fast Fourier Transformation (FFT).

The objectives of this work are to strengthen process understanding, to enable wear detection on the polishing head and to generate better workpiece surfaces. Wear detection enables maintenance planning and the prediction of process variations and failures. Rolling bearings are widely used standard components in the mechanical implementation of rotating machinery. If one element of the bearing is damaged and comes into contact with another element of the bearing, an impact force is generated which leads to an impulsive reaction of the bearing. A defect on one of the elements transmits vibrations to all other rolling bearing components. Therefore, a vibration analysis of the process is useful for condition monitoring in order to detect damage and failures on the polishing head at an early stage [14]. Vibration spectrum analysis is a popular technique, alongside others such as time-domain and time-frequency-domain analysis, for tracking machine operating conditions. There are already some publications on condition monitoring using accelerometers [15-17].

The wear of the components and the bearings has an influence on the vibrations in the process. In polishing, the damage types that primarily occur are washing out of the bearing grease, rust, wear, increased wear, and additional wear caused by polishing agents. In this paper, damage on the bearing outer ring, rust and a bearing without lubrication are compared. A rusting bearing in particular has an influence on the polishing process: rust, Fe2O3, also known as »polishing red«, is used in polishing as an independent polishing agent. If SiO2 recondenses on the glass during the polishing process, iron particles can be enclosed. This happens with all polishing grains, is a conventional process, and is one explanation for the smoothing in the polishing process. In laser optics, these iron particles heat up more than the glass itself and thermal stress cracks occur. Therefore, it is recommended not to use polishing red for laser optics [18].
Vibrations are used strategically during grinding or polishing, for example with ultrasonic assistance. Akbari investigated the effects of ultrasonic vibration on the grinding of aluminum ceramic. Surface roughness was improved by 8% and the grinding forces by up to 22% [19]. Jianhua reduced the normal grinding forces on SiO2 by up to 65.6%, and a larger feed rate could be adopted, which increases the material removal rate and machining efficiency [20]. For a number of optical materials, a higher material removal rate and a better surface quality, i.e. less sub-surface damage, could be achieved [21]. This results from the superposition of the oscillating amplitude with the rotation of the tool. The total grinding/polishing speed can be calculated from the cutting velocity v_c, the cutting velocity due to ultrasonic tool oscillation v_cUS, the tool diameter d, the rotational speed n, the amplitude of ultrasonic oscillation X_max and the frequency f [22]. The ultrasonic support has mainly a positive effect for small to medium-sized tool geometries and higher rotational speeds. By reducing the process forces, lower stresses are introduced, which means less sub-surface damage in the glass surface [20,23].

In the literature, there are already initial publications on primarily unwanted vibrations in glass polishing: slow and »vibrationless« speeds are the requirements placed on machines for the production of high-precision surfaces by polishing [13]. In contrast to the process vibrations in glass polishing (<10 nm [12]), frequencies with amplitudes of several hundred nanometres up to 40 µm occur in ultrasonic-assisted polishing [24,25]. Possible consequences of vibrations during the polishing process are a better supply of polishing slurry, larger loads and higher local temperature increases due to the dissipation of vibration energy. In addition, there is intermittent contact between the polishing tool and the glass surface due to strong vibrations. Due to theoretically possible large vibration amplitudes, pressure distributions can occur on the workpiece surface. Machines with greater vibration amplitudes induced higher material removal due to enhanced localised micro-pumping of the slurry. On the other side are higher surface roughness values caused by macro-scale intermittent workpiece-tool contact [12].

Many parallels can be found between the polishing of optical surfaces and the chemical-mechanical planarisation of wafers, especially in the area of material removal hypotheses. In Chemical Mechanical Polishing (CMP), also called chemical mechanical planarisation, wafers of different materials (including monocrystalline silicon or silicon carbide) are polished to a thickness accuracy of ±0.5 µm [26]. By planarising, multi-layer microelectronic circuits can be realised on wafers. Due to the higher economic importance and the larger research community, there are a larger number of publications for chemical-mechanical planarisation than for glass polishing. This polishing process differs from the glass polishing process in geometry (exclusively planar workpieces), workpiece size (a workpiece contour), relative speed, the movement system and the number of pieces. CMP of wafers is understood to be similar to the polishing of glass workpieces and is primarily heuristic, i.e.
researched via trial and error [27]. As long as the process can be held constant, the removal prediction is accurate. If parameters are changed in the process, no detailed statement about the process can be made [28]. Another reason for the low understanding in CMP is the low use of in situ sensors [29]. Accelerometers are used in CMP wafer polishing for condition monitoring and predictive maintenance. Currently, the condition of the polishing pad and the failure of individual polishing slurry nozzles can be determined predictively. Machine learning algorithms and big data analyses are used in the evaluation. These sensors are also used in the robot handling of wafers. This detects whether the robot indirectly damages the wafer due to incorrect handling [30].

The bearings provide relative positioning and rotational freedom, usually transferring a load between the shaft and housing. The geometry of such a rolling bearing, which consists of an outer and an inner ring, as well as the rolling elements and a cage, is shown in Figure 1. The latter positions the rolling elements at an even distance around the inner ring. Tolerances and damage of the rolling bearing generate a periodically occurring series of impacts, which are known as bearing frequencies. A total of four frequencies are usually distinguished; six frequencies are given on commercial websites. These result from the relative movements (v_rel) of the individual ball bearing components: outer ring, inner ring, rolling elements and cage. Due to their respective distances from the axis of rotation, the ball bearing inner ring, the ball bearing outer ring, the respective balls and the cage each have a different rotational speed. The frequencies depend on only three factors: the rolling element diameter, their running diameter and their number. For the frequencies, it is assumed that only one ring is moved and the other remains static. The rolling elements roll on the outer ring and provide the Ball Pass Frequency Factor Outer (BPFFO), or the Ball Pass Frequency Factor Inner (BPFFI) on the inner ring. The third important frequency is the Ball Spin Frequency Factor (BSFF), which is the rotational speed of the respective rolling elements around their own axis of rotation. As the rolling element rolls on both the inner and outer ring, the frequency Ring Pass Frequency Factor on Rolling Element (RPFFB) is generated, which has twice the value of BSFF. The frequency of the cage with respect to the inner ring is called Fundamental Train Frequency Factor Inner (FTFFI) and with respect to the outer ring Fundamental Train Frequency Factor Outer (FTFFO). The cage frequency will almost always fall between 35% and 45% of the bearing inner ring rotational speed. FTFFI and FTFFO together give the value 1, and the addition of BPFFO and BPFFI gives the number of rolling elements in the bearing. The frequency values can be derived via the respective relative velocities and are referenced as follows:
- Ball Pass Frequency Factor Outer [31] (2)
- Ball Pass Frequency Factor Inner [31] (3)
- Ball Spin Frequency Factor [32] (4)
- Ring Pass Frequency Factor on Rolling Element [33] (5)
- Fundamental Train Frequency Factor Inner [32] (6)
- Fundamental Train Frequency Factor Outer [33] (7)
with n denoting the number of rolling elements.
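The expressions for these factors take the standard form found in the bearing-diagnostics literature. The following sketch implements those standard formulas, with the contact angle retained as an optional parameter (it is zero for a deep-groove bearing such as the 6001, consistent with the text's statement that only three geometric quantities enter). The example geometry values are assumed for illustration and are not taken from Table 1.

```python
import math

def bearing_frequency_factors(n_balls: int, d_ball: float, d_pitch: float,
                              contact_angle_deg: float = 0.0) -> dict:
    """Standard per-revolution bearing defect frequency factors.

    n_balls           : number of rolling elements
    d_ball            : rolling element diameter
    d_pitch           : pitch (running) diameter of the rolling element centres
    contact_angle_deg : contact angle (0 for a deep-groove bearing)
    """
    r = (d_ball / d_pitch) * math.cos(math.radians(contact_angle_deg))
    ftffo = 0.5 * (1 - r)                            # cage relative to static outer ring
    ftffi = 0.5 * (1 + r)                            # cage relative to rotating inner ring
    bpffo = n_balls * ftffo                          # ball pass factor, outer ring
    bpffi = n_balls * ftffi                          # ball pass factor, inner ring
    bsff = (d_pitch / (2 * d_ball)) * (1 - r ** 2)   # ball spin factor
    rpffb = 2 * bsff                                 # ring pass factor on rolling element
    return {"BPFFO": bpffo, "BPFFI": bpffi, "BSFF": bsff,
            "RPFFB": rpffb, "FTFFI": ftffi, "FTFFO": ftffo}

# Example with assumed (illustrative) geometry for a small deep-groove bearing.
factors = bearing_frequency_factors(n_balls=8, d_ball=4.76, d_pitch=20.5)
# Multiplying each factor by the shaft rotational frequency (Hz) gives the defect
# frequencies expected in the FFT of the vibration signal.
print({k: round(v, 3) for k, v in factors.items()})
```

Note that the stated consistency checks hold by construction: FTFFI + FTFFO = 1, BPFFO + BPFFI = n, and RPFFB = 2·BSFF.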
Proceeding

This section deals with the general basics of this work. The individual steps that are necessary for data acquisition are shown. These include the process overview, the measurement technology and the experimental design. In addition, the selection of sensors and the set-up are discussed.

The motion system for this research project is an industrial robot ABB IRB 4400 with an S4C+ controller. A polishing head is attached to the robot, which has a rotation motor for the rotary movement and a linear drive (pneumatic or electric) for the z-stroke. The polishing tool consists of a workpiece carrier, an elastomer for height compensation and the polishing pad. The robot cell is enclosed in a protective cage. The polishing tray collects the polishing agent and feeds it back into the polishing agent reservoir. A peristaltic pump returns the polishing suspension to the polishing head and supplies the process with new polishing agent. The polishing suspension is filtered to remove dirt particles before the nozzle on the polishing head and before the reservoir. The set-up is shown schematically in Figure 2.

Polishing head

Figure 3 shows the sectional view of a polishing head, the core of the setup. A conventional polishing head usually consists of a rotational and a vertical axis. To achieve this, a polishing head contains a number of bearings, each of which excites frequencies. The illustration shows the attachment of the polishing tool and the attachment of the vibration sensor. This setup is representative of comparable applications or machines. In principle, the bearing types can be exchanged and/or scaled.

The bearing that is mainly considered is the one closest to the polishing tool, the S6001. It is the bearing most likely to come into contact with the polishing medium and has the greatest influence on the running properties and the polishing result. A polishing head usually consists of a rotation axis to realise the typical rotary movement and a vertical axis that regulates the contact pressure. Two different sensors are used in the experiments: an intelligent sensor that can be connected to the PLC via IO-Link, which is inexpensive and largely self-explanatory in its handling, and an integrated acceleration sensor as the second sensor. Both are dealt with in detail in a later subchapter. Both sensors can be attached to the adapters for the polishing head as well as to the adapters of the test stand presented later. The effective zone of the respective sensor is always directly above the centre of the bearing.

Figure 4 shows the three 6001 test bearings: one new (similar to the bearing later used with approximately 50 h running time), one with milling damage and one that is rusty and grease-free due to polishing slurry. The new bearing and the bearing with 50 h running time are visually indistinguishable from each other. The milling damage is only on the outer ring and was repeatedly introduced into the bearing. The three bearings shown here, and the four bearings used later, represent the different bearing conditions: factory new, run in, with milling damage, and rusted (e.g. by polishing suspension).

The frequencies are calculated according to formulas (2)-(7), can also be calculated automatically on the website of the respective bearing manufacturer, and are listed in Table 1.
Test rig

As there are many damping elements in a polishing head, such as the elastomer polishing body and the belt drive, and comparatively many exciting components, such as motors, belts (natural frequency), the robot, various bearings, signs of wear, etc., initial tests are made on a test rig with an S6001 bearing. The structure of the test rig is shown schematically in Figure 5. The bearing is attached to the test stand with a flange. The inner ring of the bearing rotates while the outer ring remains static. The bearing is loaded radially in the z-axis. The test stand reproduces an idealized state in order to make frequencies visible without interfering elements. The knowledge gained from this can be used for the further experiments.

Metrology

Two different measurement set-ups are used to measure the vibrations. Due to the different objectives, two different sensors are used. Both sensors are triaxial accelerometers; the main differences can be seen in Table 2. For example, the first measurement set-up is intended for later use in production, while the second serves to qualify whether the first measurement set-up is sufficient for the requirements of bearing monitoring.

For the first setup, the sensor Balluff BCM0001 [34] is used, which was specially developed for condition monitoring in industrial use. This is an intelligent sensor with an IO-Link interface. With this type of sensor, the process data recorded by the sensor are processed directly in the sensor, and the results of this data processing are transmitted digitally to an IO-Link master. In this setup, the IO-Link master AL1060 [35] from the company IFM is used, which can be connected to a computer. Besides their cost-effectiveness, these sensors are very user- and maintenance-friendly, because they are, for example, easy to integrate into a new or existing system and mostly have additional built-in sensors, which can signal a malfunction of the sensor to the user. These advantages make this sensor well suited for the vibration analysis of a polishing head in an industrial environment. The biggest disadvantage of the Balluff condition monitoring sensor is that, due to the low update rate and the necessary pre-processing of the data in the sensor, no comprehensive data analysis can be carried out by the user. For example, only maximum/minimum or averaged values over a large measurement period (here: ~100 ms) can be obtained. Thus, it is not possible to qualify whether the signals obtained are composed of the vibrations caused by the bearings or whether the signal consists of other contributions caused, for example, by the motor, belt or gear. Therefore, for the second measurement chain, the sensor OS-325MF-PG [36] from the company ASC is used, which is an integrated sensor, meaning in this case that the measured acceleration values are converted into a normalized electrical signal (here: ±10 V). This signal can then be digitised by any ADC operating in this signal range. For this experiment, the device NI USB-6001 [37] is used, a data acquisition device that can be controlled by a computer. This measurement system allows data to be logged at the needed rate of 1400 Hz. This rate was chosen to fulfil the Nyquist-Shannon theorem, because the maximum frequency response of the sensor is 700 Hz. With the logged data, the vibrations can be evaluated in more depth. For this work, a frequency analysis is made to evaluate whether the measured signal is mainly produced by the bearings. If so, the frequency analysis allows one to obtain
information about the condition of the bearings (see Chapter 1).

Design of experiment

Experimental design is known to help in precisely understanding the relationship between input parameters and target parameters without excessive time consumption, without trial-and-error iterations and with as few experiments as possible. In order to clearly understand the influence of the applied force and the rotation speed during the polishing process on the measured vibrations, a Design of Experiment (DOE) was set up. With the DOE it is then possible to understand how strongly the vibrations vary when the applied force and the rotation speed are changed, without making unnecessary trials.

For the generation of the DOE, the commercial software for statistical experimental design Design-Expert from the company STAT-EASE was used. The parameters applied force (N), rotation speed (min⁻¹) and runtime (min) are discrete input parameters. The bearing condition (new, milled, rusty) serves as a nominal input parameter. The target parameters are the output parameters from the Balluff sensor, which in this case are the Root Mean Square (RMS) of two axes and the Peak-to-Peak (PtP) of the same two axes. An optimal custom design was chosen for the generation of the DOE, and a total of 25 individual experiments were necessary to complete it. Table 3 shows the input parameters necessary for the generation of the DOE as well as the target parameters. The generated DOE was conducted on the test rig with both vibration measuring sensors. The results of the conducted DOE are discussed in the next chapter.

Programming

The sensor data of the polishing head, including the vibration sensor data, are evaluated with the Python programming language or used further with Python for machine learning. The latter is used to make statements about the material removal on the workpiece surface and/or the condition of the polishing head. For this data processing, the data are monitored and stored.

Test rig

In the following subchapter, the results obtained on the test rig using the Balluff intelligent sensor and the ASC sensor are presented. The statistical software creates a deterministic model with a prediction accuracy of the vibrations of 34.60% (RMSX, Pearson correlation, R²). This allows conclusions to be drawn about the accuracy of the process prediction and the influence of the individual parameters. The statistical software cannot accurately represent random scatter and falls short of expectations. Figure 6 shows the results from the conducted DOE, where a correlation is observed between the applied force and rotation speed and the measured vibration. On the left side of Figure 6, it can be seen that with an increase of the force applied to the bearing the measured vibration increases, and vice versa. The same effect occurs when the applied rotation speed of the bearing is increased: the measured vibrations also increase, and vice versa, as shown in the middle of Figure 6.
The software can also assign the bearing conditions to the vibration data. However, the bearing conditions in the production environment are target parameters and not input parameters. This means that the bearing condition is an unknown variable and should be determined by means of the vibration sensor. The deterministic model can be determined with a prediction accuracy of 99.02% when the bearing condition is given as an input parameter into the DOE. The right side of Figure 6 shows the correlation between the bearing condition and the measured vibration. It is seen that the vibration sensor can assign the vibration to each different bearing condition.

Figure 7 shows the vibration measurements of the conducted DOE, comparing the bearings with three different conditions: new, milled and rusty. It is seen from the figures on the left (RMS) and right side (PtP) that the produced vibrations differ according to the tested bearing. The new bearing (blue curve) shows lower vibrations compared to the milled bearing and the rusty bearing, for both analysed parameters, RMS and PtP. Vibrations of around 1 g RMS are measured for the new bearing, while vibrations of more than 3 g RMS are measured for the milled bearing, and for the rusty bearing approximately between 4 g and 6 g RMS. For the parameter PtP, the new bearing shows vibrations around 5 g, the milled bearing shows almost 20 g and the rusty bearing approximately 31 g, hitting the sensor saturation at 32 g.

Figure 8 shows the vibration measurement of four bearings: one new bearing with a runtime of 50 h and three new bearings with a runtime of 0 h. From these results it is seen that the bearing with a runtime of 50 h (blue curve) shows a higher deviation in terms of RMS and PtP values compared to the other, completely new bearings.

During the experiments conducted on the test rig using the Balluff sensor, different mounting methods were tested: air, beeswax, soap and oil. The goal of this investigation was to check whether the different coupling media would cause an impact, damping or altered transfer of the vibrations from the bearing to the sensor. From the conducted trials, no significant differences could be seen between the four different tested mounting methods.
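The quantities reported in these trials are RMS and peak-to-peak values per measurement window. As a point of reference for the raw-data chain evaluated in Python (see the Programming subsection above), a minimal sketch of how such features can be derived from an acceleration trace is given below; the non-overlapping window length is an assumption (the Balluff sensor aggregates internally over roughly 100 ms, e.g. 140 samples at 1400 Hz).

```python
import numpy as np

def vibration_features(signal, window_len):
    """RMS and peak-to-peak per non-overlapping window of a raw acceleration trace."""
    samples = np.asarray(signal, dtype=float)
    n_windows = len(samples) // window_len
    windows = samples[: n_windows * window_len].reshape(n_windows, window_len)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))        # Root Mean Square per window
    ptp = windows.max(axis=1) - windows.min(axis=1)     # Peak-to-Peak per window
    return rms, ptp
```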
In Figure 9, the results obtained on the test rig using the ASC sensor are presented. While the Balluff sensor provides processed data from the raw signal, the ASC sensor provides the raw signal of the vibration measurement on three different axes. The resultant vector from the three axis signals was calculated and further used to conduct a Fast Fourier Transformation, converting the signal from the time domain into the frequency domain. Figure 9 shows the FFT of the vibration measurement for each bearing: new, milled and rusty. On the one hand, it is observed from the FFT of the new bearing that no frequencies with high intensity are measured. On the other hand, in the FFT of the milled and the rusty bearing additional frequencies are measured. Table 1 shows the basic frequencies, i.e. the 1st harmonic frequencies, that are expected to be measured on the S6001 bearing. If a high peak is measured at any of these frequencies, it is possible to attribute a defect to a specific component of the bearing, according to the measured frequency. For the milled bearing, frequency peaks are observed at approximately 30 Hz, 60 Hz and 90 Hz. These values correspond to the 1st, 2nd and 3rd harmonic of the Ball Pass Frequency Factor Outer (BPFFO) for a rotation speed of 600 rpm. From this diagram it is observed and proven that the defect on the outer ring of the bearing could be measured by the ASC sensor. For the rusty bearing, more frequencies are observed. The 1st and 2nd harmonic frequencies (30 Hz and 60 Hz) of the BPFFO are also measurable on the rusty bearing. At a frequency of 20 Hz a peak is observed, which corresponds to the Ball Spin Frequency Factor (BSFF). The same happens at 40 Hz, where a small peak is measured for the Ring Pass Frequency Factor on Rolling Element (RPFFB). Finally, frequencies of approximately 4 Hz and 6 Hz are measured, which correspond to the Fundamental Train Frequency Factor Inner (FTFFI) and Outer (FTFFO), respectively. The rotational speed is 600 rpm or 10 s⁻¹, which is also visible in the spectrum as the peak at 10 Hz.
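The processing chain described above can be summarised in a short Python sketch: the resultant vector of the three axes is transformed into the frequency domain and the amplitude near an expected bearing frequency is read out. The 1 Hz search tolerance and the example BPFFO value of 30 Hz are assumptions taken from the discussion above, not fixed settings of the actual evaluation.

```python
import numpy as np

FS = 1400.0  # sampling rate of the NI USB-6001 measurement chain, Hz

def fft_spectrum(ax, ay, az, fs=FS):
    """Magnitude spectrum of the resultant acceleration vector from three axes."""
    resultant = np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2 + np.asarray(az) ** 2)
    resultant = resultant - resultant.mean()            # remove the static (gravity) offset
    spectrum = np.abs(np.fft.rfft(resultant)) / len(resultant)
    freqs = np.fft.rfftfreq(len(resultant), d=1.0 / fs)
    return freqs, spectrum

def peak_near(freqs, spectrum, target_hz, tol_hz=1.0):
    """Largest spectral amplitude within +/- tol_hz of an expected bearing frequency."""
    mask = (freqs >= target_hz - tol_hz) & (freqs <= target_hz + tol_hz)
    return spectrum[mask].max() if mask.any() else 0.0

# Example: 1st to 3rd harmonic of a BPFFO defect frequency assumed at ~30 Hz (600 rpm)
# freqs, spec = fft_spectrum(ax, ay, az)
# amplitudes = [peak_near(freqs, spec, 30.0 * k) for k in (1, 2, 3)]
```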
Polishing head

In the following subchapter, the results obtained on the polishing head using the Balluff intelligent sensor and the ASC sensor are presented. Since there are many more vibration sources with the robot and the polishing head compared with the test rig, the saturation of the ASC sensor was reached during some trials at 600 rpm and 1200 rpm. For this reason, the rotation speed was set to 300 rpm for all of the following results. At the beginning, the Balluff sensor was tested in order to see whether the different bearing conditions could also be distinguished in the polishing head, reproducing the results shown in Figure 7. Figure 10 shows the vibration measurements conducted on the polishing head when bearings in different states were used: new, milled and rusty. It is seen that the condition of the bearings during the conducted trials could be monitored and associated with the respective bearing status. The new bearing (red curve) and the new bearing with 50 h runtime (red curve) show lower vibrations compared with the milled bearing (green curve) and the rusty bearing (orange curve). The new bearings show vibrations around 0.25 g RMS, while the milled bearing and the rusty bearing show vibrations around 0.6 g RMS. In terms of PtP, the new bearings are at approximately 3 g, and the milled bearing and the rusty bearing at around 15 g. It can be concluded that with the Balluff intelligent sensor strongly differing bearing conditions can also be monitored during the robot polishing process.

For the ASC sensor, the goal was to obtain results similar to those of the tests on the test rig. The aim was to also detect the individual frequencies of the bearings, as shown in Figure 9, and to detect the damage in the individual components of the bearings. For the analysis of these results, the measurements of the three different axes were taken into consideration and the resultant vector was calculated, as in the previous results from the test rig. Afterwards, an FFT was conducted to convert the signal into the frequency domain. Since there are many more frequencies during the trials on the polishing head, overlapping each other, it is hard to draw conclusions from the entire frequency spectrum. For this reason, the individual frequencies were analysed.

Before comparing the individual single frequencies, it is important to explain that the reproducibility of the rotation speed of the polishing head was not always given. Figure 11 shows the frequency of the rotation speed of the motor, which was set at 300 rpm, i.e. 5 Hz. It is seen that this frequency has a small offset from trial to trial and that the amplitudes also differ. For the new bearing, a higher amplitude is observed compared with the rusty and the milled bearing. This has an influence on the following results, since this frequency reproduces itself in its harmonic frequencies, making the difference between the bearings less visible.
With this being said, Figure 12 shows the six basic frequencies of the bearings for three different states: new 50 h, milled and rusty. It was expected that the rusty bearing would show a higher amplitude on almost all basic frequencies and that the milled bearing would show a higher amplitude for the BPFFO (outer ring damage) compared to the new bearing. It is seen that for all frequencies, except for the RPFFB, the rusty bearing shows higher amplitudes compared to the new bearing. This means that the damage of the bearing can be detected on its individual components during the robot polishing process. It is also seen that the milled bearing does not show the expected high amplitude at the BPFFO frequency despite the heavy damage on the outer ring. The reason for this may be that the force applied by the polishing head on the bearing acted axially and not radially. If the force had been applied radially, as in the test rig, the bearing balls would have been pressed into the milled groove, producing a higher amplitude at this frequency.

Discussion

Vibrations play a major role in polishing: on the one hand, they lead to a supply of polishing agent in the effective area and to an increase in mechanical removal, but on the other hand, they also lead to local temperature increases, intermittent contact between the optics and the polishing tool and to fluctuations in force. Depending on the intensity of the frequencies, the material removal can be increased, while the roughness can be improved by minimising the frequencies.

For vibration sensors, the mounting of the sensors plays a significant role: the air gap between the sensor and the mounting surface must be eliminated. Preliminary tests have shown that screwing on and using grease or oil in the air gap is suitable. In the field of vibration measurement, beeswax, glue or magnets are also used.

At the beginning, frequency analyses were carried out on a test rig to exclude interfering elements. A DOE was created to generate a parameter selection and a test sequence. In comparison to the speed, the influence of the force is more significant. Already in the statistical evaluation of the data, a distinction can be made between the bearing conditions new, rusted and damaged on the running surface. The rusted bearing simulates a grease-free bearing with increased bearing play due to wear. The differentiation of the states can take place in real time in the process without previous training, and no machine learning or big data is necessary for this.

A distinction can be made with both sensors. It is worth noting that the Balluff sensor allows fewer conclusions to be drawn about the bearing frequencies, but is much more economical and provides a simple plug-and-play solution through the use of IO-Link. The intelligent sensor processes the data already during recording and therefore does not allow any conclusions to be drawn about the raw data and thus about the individual contributions contained in it. The results are validated in several tests for repeatability.

Figure 10 suggests, however, that the bearings must first run in on the polishing head, as bearings with a running time of 50 h show lower frequencies. A bearing with 50 h was used, as it can be assumed that this bearing has run in. The bearing manufacturer SKF specifies a running-in time of at least 48 h [38]. Here, it would make sense to consider separately and retrospectively from when a bearing is run in and from when it is worn out.
Figure 8 shows that the repeatability of the frequencies of a brand-new bearing cannot be guaranteed. Running in the bearing allows a detailed prediction of these frequencies. This leads to a limited usable running time and to a change in machining: the bearing is first run in, and the optic is later polished with the pre-conditioned bearing.

Sensor data can even be used to determine where the damage is located: inner or outer ring or on the rolling elements, in the latter case even exactly which rolling element.

Since the frequencies of a polishing head can be measured on the surface of the optics, and these are demonstrably due to the rolling bearings, it is advisable to use aerostatic or hydrostatic bearings. The virtually non-existent wear on these bearings will also result in longer service lives. Vibrations should not be completely avoided in the process; only disturbance frequencies and wear vibrations should be avoided or eliminated. Such vibrations are also helpful in supplying polishing suspension to the polishing gap.

Summary and outlook

The polishing of glass, glass ceramic and ceramic components will play an increasingly important role in the production of high-precision parts in the future. Due to the many process parameters in the polishing process and their insufficient research, the process is not as stable as comparable mechanical material removal processes. One of these hardly considered fields of research is vibrations and bearings in the polishing process. Bearings are subject to wear with increasing running time and generate six frequencies plus their respective multiples. The bearing frequencies are clearly visible in the vibration sensor data for a worn bearing and a damaged bearing. It is assumed that these frequencies are also visible on the glass surface after polishing. With increasing wear or damage, the frequencies also increase accordingly.

In this publication it was shown that the bearing frequencies influence the polishing tool and can be measured. The individual bearing frequencies are not visible with new bearings, but become increasingly visible with increasing damage (wear, damage on the running surface or rust). Statements can even be made about which component (outer or inner ring, cage or which rolling element) carries the damage.
A correlation between the polishing force normal to the surface and the vibrations could be shown. With increasing damage, the process divergence increases and thus the deviation from the target. Some precautions need to be taken for the future, and the recommendations are addressed below.

In order to reduce disturbance frequencies of the bearings or their wear, there is the possibility of replacing the bearings with the main influence. There is a choice of magnetic or aerostatic bearings or flexure hinges. Magnetic and aerostatic bearings are more cost-intensive and require more maintenance than conventional roller bearings, but have a higher efficiency and hardly any wear or interference frequencies. Another option on the eccentric is to use a flexure hinge as a bearing. Flexure hinges have only material friction and can also be operated without problems below the polishing slurry liquid level. The disadvantages are the large construction and the cost-intensive manufacturing [39]. Another possibility for reducing the individual interference frequencies is the use of Active Vibration Controlling (AVC). Based on the measured vibration, the same frequency with a phase shift is created and coupled into the component. This results in destructive interference, and both vibrations cancel each other out (schematic view: Fig. 13). Nowadays, this is possible thanks to high-performance calculations and powerful computers. There are a number of different electrical circuits, each of which follows the principle of phase shifting the voltage of the vibration sensor by almost 90° in order to drive a piezo transducer. This is called a gyrator circuit. Various effects can be used for the actuators, as follows. Piezoelectric effect: the application of an electrical voltage results in a change in length of a material (e.g. α-quartz SiO2) [40]. Magnetostrictive effect: the application of a magnetic field leads to a change in the length of a material (e.g. Permendur Fe49Co49V2) [41]. Similar thermal or fluid-based systems are only suitable for passive damping due to their system inertia. Because the vibration is already measured, an acceleration sensor is already present, so that part of the setup for AVC is already in place. Such an approach is already being used in automotive applications (example: exhaust technology) [42], for 3D printers [42], measuring equipment [43], etc. By using such systems, the frequencies can be shifted into a range that is no longer relevant for the application. This is of particular interest for EUV lithography.

The vibration generated by the polishing head is constant because of the constant speed during polishing. AVC, which involves offsetting a frequency with its counter-frequency, requires generating the corresponding counter-frequency. The actual frequency can be recorded in real time via an airborne or structure-borne sound microphone directly at the polishing head. The recorded acoustic signal can then be analysed with software (for example the Python package librosa [44]) and a phase-shifted signal can be output to a structure-borne sound actuator. This process also achieves Active Vibration Controlling.

In the future, the path will be towards more controlled process fluctuations, which in terms of vibrations means: flexure joints, air bearing spindles and AVC, the latter for the remaining process vibrations.
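As an illustration of the counter-frequency idea, the following minimal Python sketch estimates the dominant vibration component of a recorded signal and returns a 180°-phase-shifted copy of it. Amplitude tracking, actuator dynamics and latency compensation, which a real AVC loop needs, are deliberately omitted, and the plain FFT used here is a simplification of the librosa-based analysis mentioned above.

```python
import numpy as np

def counter_signal(measured, fs):
    """Phase-inverted reconstruction of the dominant vibration component.

    measured: recorded acceleration or sound samples; fs: sampling rate in Hz.
    Returns a sine of the same frequency, amplitude and phase, shifted by 180 degrees,
    which would cancel that component by destructive interference (cf. Fig. 13).
    """
    x = np.asarray(measured, dtype=float)
    x = x - x.mean()                              # remove the DC offset
    coeffs = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    k = np.abs(coeffs[1:]).argmax() + 1           # dominant non-DC bin
    amp = 2.0 * np.abs(coeffs[k]) / len(x)
    phase = np.angle(coeffs[k])
    t = np.arange(len(x)) / fs
    return amp * np.cos(2.0 * np.pi * freqs[k] * t + phase + np.pi)
```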
Figure 2. Schematic illustration of the robot polishing cell.
Figure 3. Sectional view of a robot polishing head with vibration sensor attachment and all ball bearings.
Figure 4. The test bearings, new and damaged.
Figure 6. DOE results on the test rig using the Balluff sensor. Left: correlation between the applied force and the measured vibration. Middle: correlation between the applied rotation speed and the measured vibration. Right: correlation of the vibration of each different bearing condition with the applied force.
Figure 7. Vibration measurements, using the Balluff sensor, in the axial direction of the three different bearings: new, milled and rusty. Left: Root Mean Square (RMS); Right: Peak-to-Peak (PtP).
Figure 8. Vibration measurements, using the Balluff sensor, in the axial direction of four bearings with different conditions: new 50 h runtime and new 0 h runtime. Left: Root Mean Square (RMS); Right: Peak-to-Peak (PtP).
Figure 9. Vibration measurement, using the ASC sensor, of three different bearings: new, milled and rusty.
Figure 10. Vibration measurements of the four bearings with different status (new 50 h, corrosion, milled and new 0 h), measured in the axial direction during the polishing head trials. Left: Root Mean Square (RMS); Right: Peak-to-Peak (PtP).
Figure 11. FFT of the conducted trials on the polishing head at 5 Hz.
Figure 13. Schematic illustration of the process vibrations and a possible frequency shift with a counter signal for cancellation, which works as Active Vibration Controlling.
Table 1. Basic frequency factors of the S6001 bearing.
Table 2. Comparison of the two sensors used.
Table 3. Design of experiments input parameters and target parameters.
9,011.8
2023-02-12T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Research on the Reform of University Computer Curriculum Based on the OBE Concept -- Taking Data Structure as an Example

The computer major in application-oriented undergraduate colleges focuses on cultivating students' practical ability to solve complex engineering problems in the field of computer science. Data structure, as a core foundational course in computer science, plays an important role in connecting the past and the future in the curriculum system. This article adopts the OBE concept with a student-centered, output-oriented approach. It therefore utilizes the OBE teaching concept to implement a teaching reform of the data structure course, which can improve students' ability to apply theoretical knowledge to solve practical application problems and achieve the goal of optimizing teaching.

2. The overall architecture of the data structure reform based on the OBE concept

The overall architecture of the data structure curriculum reform based on the OBE concept is shown in Figure 1. Firstly, in accordance with the certification standards for engineering education [5], combined with social needs, the latest developments in the industry, and the talent training plan for computer science and technology [6], determine the graduation requirements. Identify the graduation requirement indicators that need to be supported by the data structure course in the computer major system by comparing the 12 graduation requirements for engineering certification. Furthermore, based on the OBE concept, establish the teaching objectives of the data structure course, and establish the correspondence between the course objectives and the graduation requirement indicator points. Next, carefully analyze the set course goals, clarify the knowledge points required to achieve each course goal, determine the teaching content, and refine the course goals supported by each chapter. Then, in the process of teaching implementation, various teaching methods and means should be reasonably utilized, adopting a mixed online and offline teaching mode. In addition to text, a multimodal teaching mode combining sound, animation, and audio-video should be introduced, and various teaching methods such as heuristic and case studies should be used to guide students to learn actively, thereby achieving the initially set course objectives. Finally, develop corresponding assessment and evaluation systems based on the course objectives to measure students' ultimate learning outcomes; analyze the results, identify issues, and continuously improve. Course objectives - teaching content - teaching implementation - assessment and evaluation thus form a complete closed-loop teaching process, with training objectives, monitoring mechanisms for the teaching process, and evaluation mechanisms for the achievement of course objectives, forming a virtuous cycle of continuous improvement.
3.1 Reverse design of teaching content based on the OBE concept

According to the certification standards for engineering education, the graduation requirements in the talent training program for computer majors are refined into 34 secondary indicator points. The data structure course provides support for four indicator points, focusing on cultivating students' engineering knowledge, analytical ability, design ability, and research and application ability. After determining these indicator points, we set 4 corresponding course objectives (as shown in Table 1). We deeply analyze the knowledge points required for each course objective, revise the teaching outline, and update the knowledge system; in addition to describing the knowledge points, key points, difficulties, and class hours of each chapter, the outline clarifies the knowledge requirements, ability requirements, quality requirements, teaching methods used, and the course objectives supported by each chapter.

During the teaching process, the outcome and pre-test stages are arranged in the online environment of the Wisdom Tree platform; the two stages of bridge-in and participation are conducted in offline classrooms; the post-test and summary are conducted through a combination of online and offline methods, introducing the KeTangPai and online judge platforms for chapter content testing and online experiments. Students are subjectively evaluated through Questionnaire Star surveys, complemented by offline assignments and summaries. The teaching mode framework is shown in Figure 2. The implementation of course teaching is divided into three processes: before class, during class, and after class.

Before class: Teachers develop course teaching objectives based on the OBE concept and determine the graduation requirement indicator points supported by each course objective; they upload online learning resources on the Wisdom Tree platform, including teaching outlines, teaching plans, courseware, videos, exercises, etc.; they upload experimental questions on the online judge platform and then publish the task. Students independently engage in online video learning according to the task requirements, understand the course content, and document existing problems.

In class: Theoretical teaching condenses knowledge points through centralized offline classroom teaching, provides precise lectures on key and difficult points, and offers centralized Q&A on common problems in the students' learning process, inspiring students to deepen their thinking further. Basic experiments and innovative experiments are adopted in experimental teaching, and this hierarchical and progressive experimental model is used to meet students' personalized needs and strengthen the cultivation of their practical application abilities. After each chapter, online testing is conducted on the KeTangPai platform and the experimental questions assigned on the online judge platform are completed.

After class: For complex engineering problems, the method of assigning large assignments is adopted, and students actively review literature and submit solutions. Finally, students effectively combine online and offline learning, both in and out of class, to summarize and review; teachers conduct teaching evaluations and reflect on their teaching in order to continuously improve.
The entire teaching process organically integrates offline and online classrooms and constructs a blended online and offline teaching model, combining case-based, heuristic, exploratory, discussion-based, and participatory teaching methods. In terms of teaching means, more vivid multimodal teaching materials are used, utilizing multimedia resources such as images, animations, and videos to stimulate students' multisensory reactions, making dull and abstract teaching content vivid, helping students better understand obscure algorithms, and enhancing their learning interest and enthusiasm.

3.3 Constructing a multi-dimensional curriculum evaluation system

Traditional data structure course evaluation methods mostly use the final exam score to evaluate the teaching quality of the course, which makes it difficult to dynamically reflect the specific performance of students in the learning process. This article proposes a multidimensional curriculum evaluation system that integrates exam evaluation and process evaluation [8]. Process evaluation mainly assesses students' performance in online and offline learning activities, including attendance, classroom tests, homework completion, experimental completion results, video viewing time, and participation in discussions; it accounts for 50% of the total score. The exam evaluation is completed through the final exam, accounting for 50% of the total score. The evaluation methods are diversified, including student self-evaluation, intra-group mutual evaluation, inter-group mutual evaluation, teacher evaluation, questionnaire surveys, and other methods. The entire evaluation system focuses on controlling and evaluating the students' learning process, allowing for a timely understanding of each student's learning status and problems in course learning, and enabling personalized teaching. The assessment and evaluation methods for the data structure course are shown in Table 2.

4. Conclusions

This article is based on the OBE concept, with students as the main body and an output orientation. Based on the certification standards of engineering education, social needs, and computer professional talent training plans, it comprehensively explores and practices the teaching reform of the data structure course from various aspects such as the teaching syllabus, course content, teaching mode, and assessment evaluation. The focus is on cultivating students' ability to comprehensively apply the basic principles of data structures to solve complex engineering problems in the computer field, enabling students to gradually develop professional qualities such as seeking truth, practicality, excellence, and innovation, and laying a solid foundation for future work.

Figure 1. The overall architecture of the data structure reform based on the OBE concept.
Figure 2. Teaching design based on the BOPPPS method.
Table 1. Graduation requirement indicators supported by the data structure course objectives. Be able to understand the logical structure and storage structure of basic data structures such as linear tables, stacks, queues, strings, arrays, trees, and graphs, as well as their advantages and disadvantages. 2. Problem analysis: Able to apply the basic principles of mathematics, natural sciences, and engineering sciences, as well as professional knowledge in computer science, to express and analyze complex engineering problems in computer application systems in order to obtain effective conclusions.
Table 2.
The evaluation methods for the data structure course.
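As a small illustration of the weighting described in Section 3.3, the following Python sketch blends process evaluation and exam evaluation into a final grade; the 50/50 split follows the text, while the internal component weights are illustrative assumptions rather than the actual scheme of Table 2.

```python
def final_grade(process_scores, exam_score, process_weight=0.5):
    """Combine process evaluation (50%) and final-exam evaluation (50%).

    The individual process components and their internal weights below are
    illustrative assumptions, not the actual scheme of Table 2.
    """
    internal_weights = {
        "attendance": 0.10, "classroom_tests": 0.20, "homework": 0.20,
        "experiments": 0.30, "video_viewing": 0.10, "discussion": 0.10,
    }
    process = sum(w * process_scores[name] for name, w in internal_weights.items())
    return process_weight * process + (1.0 - process_weight) * exam_score

# Example on a 0-100 scale
grade = final_grade(
    {"attendance": 95, "classroom_tests": 80, "homework": 85,
     "experiments": 90, "video_viewing": 100, "discussion": 70},
    exam_score=75,
)
```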
1,855
2023-01-01T00:00:00.000
[ "Computer Science" ]
Newton's Shear Flow Applied to Infiltration and Drainage in Permeable Media The paper argues that universal approaches to infiltration and drainage in permeable media that pivot around capillarity and that led to dual porosity, non-equilibrium, or preferential flow need to be replaced by a dual process approach. One process has to account for relatively fast infiltration and drainage based on Newton's shear flow, while the other one is responsible for storage and relatively slow redistribution of soil water by focusing on capillarity. Already Schumacher (1864) postulated two separate processes. However, Buckingham's (1907) and Richards' (1931) apparent universal capillary-based approach to flow and storage of water in soils dominated. The paper introduces the basics of Newton's shear flow in permeable media. It presents experimental support for the four presumptions of (i) sharp wetting shock fronts; (ii) that move with constant velocities; (iii) atmospheric pressure prevails behind the wetting shock front; (iv) laminar flow. It further discusses the scale tolerance of the approach, its relationship to Darcy's (1856) law, and its extension to solute transport. Introduction Infiltration is the transgression of liquid water from above the surface of the permeable lithosphere to its interior, while drainage refers to liquid water leaving some of its bulk. Infiltration and drainage still bear unsolved problems. For instance, Blöschl et al. (2019), in a most thorough, exhaustive, and detailed survey among hundreds of active hydrologists, compiled 23 Unsolved Problems in Hydrology (UPHs). The 7 th UPH asks "Why is most flow [in the unsaturated lithosphere P.G.] preferential across multiple scales and how does such behaviour co-evolve with the critical zone?" The critical zone in hydrology delineates the upper most layer of the lithosphere that is in direct contact with the atmosphere. It typically carries the terrestrial ecosystems and thus simultaneously provides water, air, nutrients, and mechanical support to roots of most terrestrial plant communities. In general, the critical zone is congruent with soil. This contribution presents a solution of the 7 th UPH. The second section reviews the evolution of infiltration concepts in partially saturated soils since the second half of the 19 th century. The next one summarizes Newton's shear flow applied to flow in permeable media, while the other ones provide support to, and various applications of the approach. Review of infiltration concepts In the mid-19th century, there was an increasing interest in flows in saturated soils and similarly permeable media. Hagen, a German hydraulic engineer, and Poiseuille (1846), a French physiologist, independently analyzed laminar flow in thin capillary tubes. Darcy (1856), in the quest of designing a filtration system for the city of Dijon, empirically developed the concept of hydraulic conductivity as proportionality factor of flow's linear dependence on the pressure gradient. Dupuit (1863) expanded Darcy's law to two dimensions as perpendicular and radial flow between two parallel drainage ditches and towards a groundwater well, respectively. Schumacher (1864), a German agronomist, was probably the first who considered capillarity as the cause for simultaneous flows of water and gas in partially water-saturated soils. 
He qualitatively compared the rise of wetting fronts in soil columns with the rise of water in capillary-sized glass tubes, and concluded that the wetting fronts rise higher but slower in finer textured soils compared with coarser materials. He also infiltrated water in columns of undisturbed soil and found that infiltration fronts progressed much faster than the rising wetting fronts. He suggested two separate processes for the two flow types: (i) slower capillary rise and (ii) faster infiltration, however, without further dwelling on infiltration. Lawes et al.(1882) concluded from the chemical composition of the drain from large lysimeters at the Rothamsted Research Station that "The drainage water of a soil may thus be of two kinds (1) of rainwater that has passed with but little change in composition down the open channels of the soil; or (2) of the water discharged from the pores of a saturated soil." Lawes et al. (1882) prioritized two separate flow paths to explain the observations. During the second half of the 19 th century irrigation agriculture spread in semi-arid areas and so increased the demand for better understanding of the soil-water regime. Buckingham (1907), working on an universal approach to the simultaneous storage and flow of water and air in soils, postulated the relationship between the capillary potential ψ (Pa) and the volumetric water content θ (m 3 m -3 ), also known as the water retention function, retention curve, or water release curve. The capillary potential follows from the Young (1805)-Laplace relationship, stating that the pressure difference between a liquid and the adjacent gas phase increases inversely proportional to the radius of the interface. In addition to the specific weight of the soil water, Buckingham (1907) introduced the spatial gradient of ψ as the other major driving force, thus allowing for the redistribution of soil water in all directions, evaporation across the soil surface, transpiration via roots, and capillary rise from perched water including groundwater tables. In analogy to Fourier's (1822) and Ohm's (1825) laws for heat flow and electrical current, and Darcy's (1856) law for water flow in saturated porous media, Buckingham (1907) also proposed the hydraulic conductivity for flow in unsaturated porous media as function of either K(θ) or K(ψ) (m s -1 ). According to , the British meteorologist Richardson (1922) was most likely the first who introduced a diffusion type of K-ψ-θ-relationship in the quest of quantifying water exchange between the atmosphere and the soil as lower boundary of the meteorological system. A second-order partial differential expression became necessary because ψ depends on θ, and both their temporal variations on flow, while flow is driven by the gradient of ψ. The race was on to the experimental determination of the K-ψ-θ-relationships. For instance, Gardner et al. (1922) applied plates and blocks of fired clay with water-saturated pores fine enough to hydraulically connect the capillary bound water within soil samples with systems outside them. Richards(1931) applied the technique to the construction of tensiometers that directly measure ψ within an approximate range of 0 > ψ >≈ -80 kPa (ψ = 0 corresponds to the atmospheric pressure as reference). With the pressure plate apparatus he measured ψ-θrelationships and determined hydraulic conductivity K(ψ or θ). Similar to Richardson (1922), he presented a diffusion-type approach to the transient water flow in unsaturated soils. 
Numerous analytical procedures evolved for solving the well-known Richards (1931) equation. Van Genuchten (1980, for instance, developed a closed form of K-ψ-θ-relationships that provide the base for the many hues of HYDRUS, a numerical simulation packages dealing with flow and storage of water and solutes in unsaturated soils (e.g. Simunek et al., 2008). Veihmeyer (1927) investigated water storage in soils in the quest of scheduling irrigation schemes. He proposed the water contents at the field capacity FC and at the permanent wilting point PWP as upper and lower thresholds of plant-available soil water, where FC gets established a couple of days after a soil was saturated under exclusion of evaporation (also referred to as 'drainable or gravitational soil water'). Various methods appeared on how to establish PWP that is accepted today at -15 bars. It became unavoidable that concepts based on Buckingham's (1907) fundamental and seminal work contradicted with practical and fieldoriented research. Veihmeyer (1954), for instance, stated "Since the distinction between capillary and other 'kinds' of water in soils cannot be made with exactness, obviously a term such as non-capillary porosity cannot be defined precisely since by definition it is determined by the amount of 'capillary' water in the soils". Progress in field instrumentation and computing techniques allowed for producing and processing large data sets including the numerical solution of Richards' (1931) equation. In the late 1970s, the development increasingly unveiled substantial discrepancies between measurements and the numerous approaches to water movement in unsaturated soils based on Richards' (1931) capillarity-dominated theory. Particularly disturbing were observations on wetting fronts advancing much faster than expected from the Richards approach. Concepts like macropore flow (e.g., Beven and Germann, 1982) and flow at non-equilibrium with respect to the ψ-θ-relationship appeared. Jarvis et al. (2016) summarized as preferential all the flows in unsaturated porous media not obeying Richards' (1931) equation. See also Morbidelli et al. (2018) for a recent review on infiltration approaches. Beven (2018) argued that, for about a century, the hardly questioned preference given to capillarity denied recognition of concepts considering flow along macropores, pipes, and cracks. Indeed, there is an increasing number of contributions focusing on the dimensions and shapes of flow paths, their 3-d imaging, and trials to derive flows from them (e.g., Abu Najm et al., 2019). However, there is hardly an approach capable of applying the wealth of information about the paths to the quantification of flow. Ignoring Veihmeyer's (1954) warning, the attraction of research on flow paths is so dominant that, for instance, Jarvis et al. (2016) flatly denied the applicability of Hagen-Poiseuille concepts to flow in soils. (See Germann, 2017, andJarvis et al. 2017). Moreover, advanced techniques of infiltration with non-Newtonian fluids led so far just to the description of path structures rather than more directly to the flow process (Atalah and Abou Najm, 2018). Wide-spread research in the types, dimensions, and shapes of 'macropores' and their apparent relationships to flow and transport mostly pivot around Richards (1931) equation that is numerically applied to either macropore-/ micropore-domains or by modelling flow and transport in the macropore domain with separate rules yet still maintaining a Richards-type approach in the micropores. 
Both approaches allow for due exchange of flow and transport between the two domains. Imaging procedures visualize flow in 2-d and 3-d in voids as narrow as some 10 µm, rising hope that the wealth of information gained at the hydro-dynamic scale will eventually lead to macroscopic models at the soil profile scale of meters (see, for instance, Jarvis et al., 2016). Thus, Beven's (2018) denial of progress in infiltration research is here carried a step further. The obsession with pores, channels, flow paths, and their connectivity, tortuosity, and necks actually retards research progress towards more general infiltration that should be based on hydro-mechanical principles as the 7 th Unresolved Problem in Hydrology demands. A second thread, leading to the alternative infiltration approach presented here, is traced back to Schumacher's (1864) dual-processes. He suggested that infiltration follows rules, though unspecified at that time, that markedly differ from the capillary rise out of water tables. Moreover, the alternative approach should be based on the same principles as Hagen-Poiseuille's (1846) and Darcy's (1856) laws, thus closing the gap of one to two orders of magnitude of hydraulic conductivity between saturated flow and flow close to saturation (Germann and Beven, 1981a). In his quest of demonstrating the benefits of forests and reforestations on controlling floods and debris flows from steep catchments in the Swiss Alps and Pre-Alps, Burger (1922) measured in situ the time lapses Δt100 for the infiltration of 100 mm of water into soil columns of the same length. In the laboratory, he determined the air capacity AC (m 3 m -3 ) of undisturbed samples taken near the infiltration measurements, where AC is the difference of the specific water volume after standardized drainage on a gravel bed and complete saturation. Germann and Beven (1981b) found an encouraging coefficient of determination of r 2 = 0.77 when correlating via a Hagen-Poiseuille (1846) approach 76 pairs of Δt100and AC-values. Following Lawes et al. (1882), who distinguished between fast and slow drainage, Germann (1986) assessed the arrival times of precipitation fronts in the Coshocton lysimeters. Accordingly, rains of 10 (mm/d) sufficed to initiate or increase drainage flow within 24 hours at the 2.4-m depth if the volumetric water content in the upper 1.0 m of the soil was at or above 0.3 (m 3 m -3 ). The observations result in wetting front velocities greater than 3 x 10 -5 (m s -1 ). Beven and Germann (1981) modelled flow in tubes and planar cracks, and proposed kinematic wave theory according to Lighthill and Whitham (1955) as analytical approach to Newton's shear flow. Germann (1985) applied the theory successfully to data from an infiltration-drainage experiment carried out on a block of polyester consolidated coarse sand. The paper is considered a precursor of the following section that treats infiltration and drainage in permeable media as exclusively gravity driven and viscosity controlled, while capillarity may adsorb water from flow to the sessile parts of the system. Theory a) Basic relationships The approach is laid out at the hydro-mechanical scale of spatio-temporal process integration, allowing for its easy handling with analytical expressions, yet under strict observance of the balances of energy, momentum, and mass (i.e., the continuity requirements). 
The approach builds on four presumptions that are not necessarily common to soil hydrology: (i) infiltrating water forms a sharp wetting shock front; (ii) the wetting shock front moves with constant velocity; (iii) atmospheric pressure prevails in the mobile water between the wetting shock front and the surface; and (iv) flow is laminar (i.e., Reynolds numbers may not exceed values close to unity). The interior of a permeable solid medium contains connected flow paths that are wide enough to let liquids pass across the volume considered. The definition purposefully avoids further specification of the flow paths' shapes and dimensions. Water supply to the surface is thought of a pulse P(qS, TB, TE), where qS (m s -1 ) is constant volume flux density from the pulse's beginning at TB to its ending at TE (both s). (The subscript S refers to the surface of the permeable medium). The pulse initiates a water content wave WCW of mobile water that is conceptualized as a film gliding down the paths of a permeable medium according to the rules of Newton's shear flow. The parameters film thickness F (m) and specific contact length L (m m -2 ) per unit cross-sectional area A (m 2 ) of the medium specify a WCW. Regardless of the thickness of F, atmospheric pressure prevails in the film. Figure 1 illustrates the concept. A WCW supposedly runs along the flow paths while forming a discontinuous and sharp wetting shock front at zW(t). The WCW partially fills the upper part of the medium within 0 ≤ z ≤ zW(t) with the mobile water content w(z,t) (m 3 m -3 ), where w < εθante with porosity ε and antecedent soil moisture θante, both (m 3 m -3 ). The lower part z > zW(t) remains at θante. The coordinate z (m) originates at the surface and points positively down. Newton (1729) where η (≈ 10 -6 m 2 s -1 ) is the temperature dependent kinematic viscosity of water, ρ (1000 kg m -3 ) is the water's density, v(f) (m s -1 ) is the velocity of the lamina at f in the verticaldown direction, and dv(f)/df is the velocity gradient in the horizontal direction. The expression (N). Integration of Eq. [2] from the SWI, where v(0) = 0 (the non-slip condition), to f yields the parabolic velocity profile from the SWI to f as (3) The differential volume flux density at f is (m s -1 ). Its integration from the SWI at f = 0 to the air-water interface AWI at f = F produces the volume flux density of the film as (m 3 s -1 ), while the volume of mobile water per unit volume of the permeable medium from the surface to zW(t) amounts to (m 3 m -3 ). The constant velocity of the wetting shock front follows from the volume balance amounting to while the position of the wetting shock front as function of time becomes The terms relating to velocity depend exclusively on F 2 , Eqs. [3,7,8], while those relating to mobile water and its volume flux density also on L, Eqs. [5,6]. Under consideration of zW(t), L expresses the specific vertical contact area of the WCW per unit volume of the permeable medium as the locus where momentum, heat, capillary potential, water, solutes, and particles get exchanged between the WCW and the sessile parts of the medium. Equations [3 to 8] hold during infiltration i.e., TB ≤ t ≤ TE. Input ends abruptly at TE and at z = 0 i.e., qS → 0, when and where the WCW collapses from f = F to f = 0. All the rear ends of the laminae are released at once at z = 0. They move downwards with v(f) according to Eq. [3]. 
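Before the drainage-phase celerities are derived below, the infiltration-phase relations can be made concrete in a short numerical sketch. The explicit forms used in it, v(f) = (g/η)(Ff − f²/2), q = L·g·F³/(3η), w = L·F and vW = g·F²/(3η), are the standard results of integrating Newton's shear flow for a free-surface film with no slip at the solid-water interface; they are stated here as assumptions consistent with the surrounding definitions (velocities scale with F², fluxes additionally with L), and the example values of F and L are purely illustrative.

```python
ETA = 1.0e-6   # kinematic viscosity of water (m^2 s^-1), as given above
G = 9.81       # gravitational acceleration (m s^-2)

def infiltration_phase(F, L):
    """Infiltration-phase quantities of a water content wave.

    F: film thickness (m); L: specific contact length (m per m^2 of cross-section).
    The explicit forms are assumed standard gravity-driven film-flow relations,
    not a reproduction of the displayed equations of the original paper.
    """
    w = L * F                          # mobile water content (m^3 m^-3), cf. Eq. [6]
    q = L * G * F**3 / (3.0 * ETA)     # volume flux density (m s^-1),     cf. Eq. [5]
    v_w = G * F**2 / (3.0 * ETA)       # wetting shock front velocity (m s^-1), cf. Eq. [7]
    re = v_w * F / ETA                 # a film Reynolds number (one common definition)
    return w, q, v_w, re

# Illustrative values only: F = 3 micrometres, L = 3.3e4 m m^-2
w, q, v_w, re = infiltration_phase(F=3.0e-6, L=3.3e4)
# -> w ~ 0.10 m^3/m^3, q ~ 2.9e-6 m/s, v_w ~ 2.9e-5 m/s, re << 1 (laminar)
```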
The outermost lamina moves the fastest with the celerity of the draining front as Celerity refers to the velocity of a flow property change. The slower moving wetting front intercepts the faster draining front at time TI (s), that follows from setting vW x (TI-TB) = cD x (TI-TE), as ( ) Thus, TI is an exclusive expression of the pulse duration. The wetting front intercepts the draining front at depth The rear ends of all the other laminae move with decreased celerities. According to Eq. [9], the celerity cRE (f) of the rear end of a lamina between 0 < f < F is Rearranging the last two terms in Eq. [12] and solving for f leads to the temporal position of the film thickness as ( ) Multiplication of FRE (z,t) with L leads to the spatio-temporal mobile water content of the WCW as trailing wave according to ( ) After TI and beyond ZI the draining front disappears and vW(z,t) decreases with time and depth. However, the shape of the WCW remains according to Eq. [14] over the depth range of 0 ≤ z ≤ zW(t). The volume balance of the WCW amounts to where VWCW (m) represents the water volume of the WCW that has infiltrated during TE-TB. [15] for zW(t) leads to the temporal position of the wetting shock front as The first derivative of Eq. [16] produces the wetting shock front velocity as Re ≤ 1 strictly defines laminar flow; however, depending on the application, Re > 1 might be tolerable, yet within an undisclosed range. The following paragraph b) provides cuts of the WCW in the z-w(z,t)-plane of Fig. 2 while paragraph c) introduces cuts in the t-w(z,t)-plane. b) Variation of a WCW with depth The following inspects the spatial variation w(z,τ) of the WCW's mobile water content during the three intervals of (i) TB ≤ τ1 ≤ TE, (ii) TE ≤ τ2 ≤ TI , and (iii) TI ≤ τ3 < ∞ . (i) TB ≤ τ1 ≤ TE: Position of the wetting shock front, mobile water content, and volume flux density are Thus, piston flow occurs during infiltration, TB ≤ τ1 ≤ TE. (ii) TE ≤ τ2 ≤ TI: The position of the wetting shock front is the same as in Eq. The mobile water content remains constant at w between zD(τ2)≤ z ≤ zW (τ2) c) Variation of a WCW with time The following inspects the time series of a WCW's mobile water content w(ζ,t) at the three depth ranges of (i) 0 ≤ ζ1 < ZI , (ii) ζ2 = ZI , and (iii) ζ3 ≥ ZI . (i) 0 ≤ ζ1 < ZI: The arrival times of the wetting shock and draining fronts at ζ1 are while the mobile water content assumes the following values during the respective time intervals: (ii) ζ2 = ZI : At depth of front interception and after t ≥ TI the mobile water content becomes and the mobile water content as a function of time becomes d) Routing pulse series From mass balance requirement follows the celerity of an abrupt pulse increase from P1 to P2 with q2 > q1 and w2 > w1, as Experimental support for the four presumptions This section experimentally supports the four presumptions that provide the base for Eqs. [1 to 36]: (i) infiltrating water forms a sharp wetting shock front; (ii) the wetting shock front moves with constant v; (iii) atmospheric pressure prevails in the WCW; and (iv) flow is laminar (i.e., Tank (alHagrey et al., 1999) produced the data that support presumptions (i) to (iv). infiltration (Flammer et al., 2001), Germann (2018a) concluded that atmospheric pressure prevails in the WCW between the wetting shock front and the soil surface at least during TB ≤ t ≤ TI. e) Laminar flow, Reynolds number: presumption (iv) From the application of Eq. 
a) Coherence of the approach The parameters F and L suffice to treat infiltration and drainage with Newton's shear flow approach, Eqs. [1] to [20]. In principle, time series of either θ(Z,t) or q(Z,t) prmits calibration of the two parameters. Both procedures are introduced, using the data presented in Fig. 6. [20], and integrating the resulting expression from tW(Z) to t > tD(Z), yielding where Z refers to the depth of drainage flow at 2.0 (m). The specific contact area is the only factor left for matching Eq. [37] to the data that resulted in L(q) = 3.3 x 10 4 (m 2 m -3 ), comfortably lying within the range of L(w). This demonstrates the coherence of Newton's shear flow approach to infiltration and drainage. Dubois (1991) b) Scale tolerance depths of 100, 150, and 200 (mm). Fluxes in each layer followed from Newton's shear flow approach. The flux differences from layer to layer deviated utmost by 19% from the corresponding water content changes in the volumes between the layers (Germann, 2014). Dubois ' (1991) observations across 1800 (m) of crystalline rocks of the Mont Blanc massif and the water balance calculations of finger flow in the sand box of Hincapié and Germann (2010) at the scale of millimeters hint at the spatio-temporal tolerance of Newton's shear flow that may advance the approach to an attractive tool, for instance, for the study of infiltration into groundwater systems. c) Preferential and retarded tracer breakthrough Preferential flow in soil hydrology is frequently associated with enhanced and accelerated solute and pollutant breakthrough (e.g., Larsbo et al., 2014). However, Bogner and Germann (2019) reported considerable delays of tracer breakthrough compared with the first arrival of the wetting shock fronts at the bottoms of soil columns with heights of 0.4 (m). They referred to the phenomenon as 'pushing out old water' that is well known in catchment hydrology. They statistically explained 81% of the observed delay variations with combinations of L and F when applying Newton's shear flow to the data. Tracer exchange on large L from thin F of the WCW may be even faster than presumed 'preferential' tracer transport. Under consideration of the mechanistic parameters F and L, Newton's shear flow provides for a novel tool for the unambiguous investigation of tracer transport and exchange i.e., accelerated as well as decelerated breakthrough. d) Gravity vs. capillarity Schumacher (1864) suggested a two-process approach to water flow and storage in partially saturated permeable media. While he recognized capillarity as responsible for the water's rise, and probably also its contribution to water redistribution in soil columns, he left open the mechanism behind infiltration. This paper concentrates on infiltration that is completely gravity-driven and viscosity-controlled, yet allowing for water abstraction due to capillarity from the mobile to the immobile part of the permeable system. Concentrating on gravity and viscosity liberates infiltration and drainage from the omnipresence of capillarity in soil hydrology with the benefit of avoiding the difficult definitions of non-equilibrium flow and the separation of macropores from the remaining pores. With respect to capillarity, the relative contribution of gravity to flow varies according to cos(α), where α (°) is the angle of deviation from the vertical. 
Thus, at cos(0°) = 1, as in the cases presented above, gravity's contribution is at its maximum; it reduces to zero at cos(90°) = cos(270°) = 0, while gravity completely opposes capillarity at cos(180°) = -1. e) Shear flow and Darcy's law Darcy's (1856) law mutates to an extension of unsaturated vertical shear flow. From Eq. [5] follows, for θante + w < ε and Δp/Δz = ρ g, Eq. [38], where θante (m³ m⁻³) is the antecedent volumetric water content, ε (m³ m⁻³) is porosity, Δp/Δz (Pa m⁻¹) is the pressure gradient, ρ (= 1000 kg m⁻³) is the density of water, and µ = ρ η (Pa s) is the dynamic viscosity. At saturation, with θante + w = ε and Δp/Δz = ρ g, we get Eq. [40]. Darcy's law states that q ∝ Δp/Δz, i.e., volume flux density is a linear function of the flow-driving gradient with the proportionality factor Ksat. In view of the various dimensionalities of w ∝ (L¹, F¹), v ∝ (L⁰, F²), and q ∝ (L¹, F³), linearity seems only possible if Fsat and Lsat remain constant and independent of p in the transition from gravity-driven to pressure-driven shear flow at saturation, i.e., in the transition from Eq. [38] to Eq. [40]. This elaboration supports the linearity of Darcy's law, but it is not an independent proof of it. As a consequence, w = q/v also remains constant. Further, if θante + w = ε, dLsat/dp = 0, and dFsat/dp = 0, then follows the hypothesis that (Fsat × Lsat) represents (F × L)max, leading to Ksat. However, other combinations of (F × L) in unsaturated media are feasible that may lead to q > qsat = Ksat. This unproven speculation opens an unexpected view on shear flow that is in stark contrast to Richards' (1931) capillary flow, where a priori Ksat > K(θ or ψ). See Germann and Karlen (2016) for further discussion. f) Water abstraction from the WCW Pressure in the WCW is atmospheric, while ψ < 0 typically prevails ahead of it. Therefore, water is abstracted from the WCW onto L. Abstraction is usually completed within short periods, as the θ(Z,t)-series in Fig. 6 demonstrate. The amount of abstraction shows in the difference between θend and θante. Conclusions Newton's shear flow provides a cohesive approach to infiltration and drainage in permeable media, and no a priori decisions on pore properties are required. So far, the approach is in its descriptive mode, capable of quantifying infiltration and drainage with the two parameters film thickness F and specific contact area L. However, the analytical expressions facilitate the development of predictive model applications, such as to groundwater recharge and to the transport of solutes and particles. Advances are expected from research on, among other topics, the relationships of F and L with antecedent soil moisture, intensity of infiltration, and hydraulic conductivity Ksat. Finally, Newton's shear flow seems to have solved the 7th Unsolved Problem in Hydrology (Blöschl et al., 2019), which asks "Why is most flow preferential across multiple scales and how does such behaviour co-evolve with the critical zone?". However, Newton's shear flow as the solution of the 7th UPH did not evolve from the suggested dual-porosity perspective but from a hydro-mechanical point of view that requires neither preferential flow nor co-evolution of flow paths.
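Returning to paragraph e) above, the correspondence between saturated shear flow and Darcy's law can be illustrated with a hedged sketch; it assumes the film flux relation q = g L F³/(3η) from the sketch given earlier and the conditions stated in the text (θante + w = ε, Fsat and Lsat independent of p), and it is not the paper's Eqs. [38] to [40]:

```latex
% Hedged sketch, not the paper's Eqs. [38]-[40]: at saturation the film flux
% takes the role of the saturated hydraulic conductivity.
\begin{equation*}
  q_{\mathrm{sat}} \;=\; \frac{g\,L_{\mathrm{sat}}\,F_{\mathrm{sat}}^{3}}{3\,\eta}
  \;\equiv\; K_{\mathrm{sat}},
  \qquad
  q \;\propto\; \frac{\Delta p}{\Delta z}
  \;\;\text{with proportionality factor } K_{\mathrm{sat}}
  \;\;\text{(Darcy form).}
\end{equation*}
```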
6,272
2019-11-13T00:00:00.000
[ "Environmental Science", "Engineering" ]
A practical guide to de novo genome assembly using long reads Genome assemblies that are accurate, complete, and contiguous are essential for identifying important structural and functional elements of genomes and for identifying genetic variation. Nevertheless, most recent genome assemblies remain incomplete and fragmented. While long molecule sequencing promises to deliver more complete genome assemblies with fewer gaps, concerns about error rates, low yields, stringent DNA requirements, and uncertainty about best practices may discourage many investigators from adopting this technology. Here, in conjunction with the gold standard Drosophila melanogaster reference genome, we analyze recently published long molecule sequencing data to identify what governs completeness and contiguity of genome assemblies. We also present a meta-assembly tool for improving contiguity of final assemblies constructed via different methods. Our results motivate a set of preliminary best practices for assembly, a "missing manual" that guides key decisions in building high quality de novo genome assemblies, from DNA isolation to polishing the assembly. Introduction: De novo genome assembly is the process of stitching DNA fragments together into contiguous segments (contigs) representing an organism's chromosomes (Simpson and Pop 2015). Until recently, genomes tended to be assembled using fragments shorter than 1,000 bp. However, such assemblies tend to be highly fragmented when they are generated using sequencing reads shorter than common repeats (Baker 2012; Bradnam, et al. 2013; Myers 1995; Simpson and Pop 2015). Longer reads can circumvent this problem, even when such reads exhibit error rates as high as 20% (Lam, et al. 2014; Lander and Waterman 1988; Motahari, et al. 2013; Shomorony, et al. 2015). Importantly, error-prone reads can be corrected, provided there is sufficient coverage and the errors are approximately uniformly distributed (Lander and Waterman 1988). Single molecule sequencing, like that offered by Pacific Biosciences (PacBio), meets this criterion with reads that are routinely tens of kilobases in length (Kim, et al. 2014; Koren and Phillippy 2015; Pendleton, et al. 2015). While PacBio sequences have high error rates (~15%), errors are nearly uniformly distributed across sequences. With sufficient coverage, these sequences can be used to correct themselves (Churchill and Waterman 1992). Assemblies using such correction are referred to as PacBio only assembly (Berlin, et al. 2015). Alternatively, researchers can perform a hybrid assembly using a combination of noisy PacBio long molecules and high quality short reads (e.g., Illumina) (Koren, et al. 2012; Pendleton, et al. 2015). Recently, the value of long molecule sequencing has been definitively demonstrated with the release of several high quality reference-grade genomes assembled from PacBio sequencing data (Berlin, et al. 2015; Kim, et al. 2014). Despite these successes, shepherding a genome project through the process of DNA isolation, sequencing, and assembly still poses many uncertainties and challenges, especially for research groups who see genomes as a means to another goal rather than the goal itself.
For example, because high quality genome assembly relies upon long sequencing reads to bridge repetitive genomic regions (Bresler, et al. 2013;Lam, et al. 2014;Lander and Waterman 1988;Myers, et al. 2000) and high coverage to circumvent read errors (Baker 2012;Churchill and Waterman 1992;Motahari, et al. 2013), the stringent DNA isolation requirements (size, quantity, and purity) for PacBio sequencing (Kim, et al. 2014) intended for genome assembly are different than those typically employed. Moreover, at present, the low average read quality produced by PacBio sequencing causes coverage requirements to be at least 50-fold (Berlin, et al. 2015;Koren and Phillippy 2015;Sakai, et al. 2015). This, combined with its comparatively expensive price, makes striking the right balance between price and assembly quality important. Exacerbating the problem is the fact that rediscovering the optimal approach for a genome project is itself expensive and time consuming. As a consequence of these challenges and uncertainties, many groups may opt out of a long molecule approach, or worse, sink scarce resources into an approach ill-suited for their goals because the consequences of many decisions involved in long molecule sequencing projects have not been synthesized. In order to derive an optimal strategy for genome assembly we investigated sample handling (i.e. DNA isolation, quality control, shearing, library loading, etc.), assembly strategies, and properties of the data (i.e. read quality, length, and read filtering). We first evaluate strategies for assembling PacBio reads, and how they perform with differing amounts of sequence coverage. Then, we assess the contribution of read length and read quality to assembly contiguity. We also introduce quickmerge, a simple, fast, and general meta-assembler that merges assemblies to generate a more contiguous assembly. We also describe the protocols, quality-control practices, and size selection strategies that consistently yield high quality DNA reads required for reference grade genome assemblies. Finally, we recommend a strategy flexible enough to yield high quality assemblies from as little as 25X long molecule coverage to as much as >100X. PacBio self correction has been used to assemble the D. melanogaster reference strain (ISO1) genome so contiguously that most chromosome arms were represented by fewer than 10 contigs (Berlin, et al. 2015). This assembly was generated by using the PBcR pipeline (Berlin, et al. 2015) and 121X (15.8 Gb), or 42 SMRTcells' worth, of PacBio long molecule sequences (Kim, et al. 2014). However, currently, such high coverage may be too expensive for many projects, especially when the genome of the target organism is large. Consequently, we set out to determine how much sequence data is required to obtain assemblies of desired contiguity. We first selected reads from 15,20,25,30,and 35 randomly chosen SMRTcells (5.16Gb,6.87Gb,8.12Gb,10.06Gb,12.85Gb) from the 42 SMRTcells of ISO1 PacBio reads (Kim, et al. 2014). Our sampling method was inclusive and additive: to obtain 20 SMRTcells, we took the 15 previously randomly chosen SMRTcells and then added 5 more randomly selected SMRTcells to it. We then assembled these datasets using the PBcR pipeline. As shown in Fig. 1, the contig NG50 (NG50; G =130×10 6 bp) improves until it plateaus at 77X coverage (30 SMRTcells). At extremely high coverage (42 SMRTcells), the NG50 surges again. 
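For readers who want to reproduce the contiguity metric used here (NG50: the contig length such that contigs of that length or longer contain at least half of the 130 Mb genome, i.e., 65 Mb), a minimal Python sketch follows; the function name and example contig sizes are illustrative, not part of any published pipeline:

```python
def ng50(contig_lengths, genome_size=130_000_000):
    """NG50: the largest length L such that contigs of length >= L together
    cover at least half of genome_size.  Minimal sketch of the metric
    described in the text; contig_lengths is an iterable of sizes in bp."""
    half = genome_size / 2
    running_total = 0
    for length in sorted(contig_lengths, reverse=True):
        running_total += length
        if running_total >= half:
            return length
    return 0  # assembly does not reach half of the genome


# Hypothetical contig sizes (bp), for illustration only:
print(ng50([30_000_000, 25_000_000, 20_000_000, 15_000_000, 10_000_000]))
# -> 20000000: the three longest contigs already cover 75 Mb >= 65 Mb
```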
Notably, despite the extreme contiguity of these sequences, we are still discussing complete contigs, not gap containing scaffolds. Hybrid assembly As Fig. 1 makes clear, PB only assembly leads to relatively fragmented genomes at lower coverage ( Fig. 1), we investigated whether another assembly strategy could perform better with similar amounts of long molecule data. We chose DBG2OLC (Ye, et al. 2014) for its speed and its ability to assemble using less than 30X of long molecule coverage (cf. PacBio only methods, which typically require higher coverage ). DBG2OLC is a hybrid method, which uses both long read data and contigs obtained from a De Bruijn graph assembly. We used contigs from a single Illumina assembly generated using 64X of Illumina paired end reads (Langley, et al. 2012). As shown in Fig. 1, the assembly NG50 increases dramatically as PacBio coverage increased, plateauing near 10 SMRT cells (26X). Beyond this point, NG50 remained relatively constant. Alignment of the test assemblies to the ISO1 reference genome showed that the high level of contiguity in the 26X hybrid assembly without downsampling was due to chimeric contigs, and that these errors are fixed as coverage increases (supplementary Fig. 1-2). Chimeras were also absent when only the longest 50% or 75% of reads from the 26X dataset are used. To measure the impact of read length on hybrid assembly contiguity, we downsampled the datasets by discarding the shortest reads such that the resulting datasets contained 50% and 75% of initial total basepairs of data. We then ran the same assembly pipelines using these downsampled datasets and compared to the assemblies constructed from their counterparts that were not downsampled. Our downsampling shows that with high levels of PacBio coverage, modest gains in assembly contiguity can be obtained by simply discarding the shortest reads ( Fig. 1, red lines). Our hybrid assembly results indicate that improvements in contiguity above 30X are modest, though hybrid assemblies remain more contiguous than PacBio-only assemblies up until above 60X coverage. For projects limited by the cost of long molecule sequencing, a hybrid approach using ~30X PacBio sequence coverage is an attractive target that minimizes sequencing in exchange for modest sacrifices in contiguity. Assembly merging With modest PacBio sequence coverage (≤50X), hybrid assemblies are less fragmented than their self corrected counterparts, but more fragmented than self corrected assemblies generated from higher read coverage (Fig. 1). Despite this, for lower coverage, many contigs exhibit complementary contiguity, as observed in alignments (e.g. Supplementary Fig. 3a) between a PB only assembly (20 SMRT cells or 52X reads; NG50 1.98 Mb) and a hybrid assembly (longest 30X from 20 SMRTcells reads; NG50 3.2 Mb). For example, the longest contig (16.8 Mb) in the PB only assembly, which aligns to the chromosome 3R of the reference sequence ( Supplementary Fig. 3c), is spanned by 5 contigs in the hybrid assembly ( Supplementary Fig. 3b). This complementarity suggests that merging might improve the overall assembly. We first attempted to merge the hybrid assembly and the PB only assembly using the existing meta assembler minimus2 (Treangen, et al. 2011), but the program often failed to run to completion when merging a hybrid assembly and a PB only assembly, and when it did finish, the run times were measured in days. We therefore developed a program, quickmerge, that merges assemblies using the MUMmer (Kurtz, et al. 
2004) alignment between the assemblies. Assembly contiguity improved dramatically when we merged the above hybrid and PB-only assemblies (assembly NG50 9.1 Mb; Fig. 1, supplementary Fig. 4). Further, assembly merging closed gaps present in the published ISO1 PacBio genome assembly (supplementary Fig. 5) (Berlin, et al. 2015) . The longest merged contig (27.5Mb), which aligns to the chromosome arm 3R of the reference sequence (supplementary Fig. 5), was longer than PacBio assembly based on 42 SMRTcells (25.4Mb) (Berlin, et al. 2015) (supplementary Fig. 5). This indicates that the contiguity of even high coverage PB-only assemblies can be increased by addition of inexpensive Illumina reads, and gaps in hybrid assembly can be closed by PB-only assembly even when the PB-only assembly quality is suboptimal. Assessment of assembly quality We assessed assembly quality using the Quast software package (Gurevich, et al. 2013). We confined our assessment to assemblies related to application of the quickmerge meta assembler, leaving the assessment of PBcR and DBG2OLC assemblies to their respective publications (Berlin, et al. 2015;Ye, et al. 2014). Quast quantifies assembly contiguity and additionally identifies misassemblies, indels, gaps, and substitutions in an assembly when compared to a known reference. We found that, compared to the D. melanogaster reference, all assemblies had relatively few errors, with the primary difference among the assemblies being genome contiguity (NG50). Hybrid assemblies tended to have fewer assembly errors than PB-only assemblies: the total number of misassemblies and the total number of contigs with misassemblies tended to be higher in PB only assemblies compared to hybrid assemblies. Still, PBonly assemblies tended to have slightly fewer mismatched bases compared to the reference, and slightly fewer small indels. Merged assemblies, being a mix of PB-only and hybrid assemblies, tended to have intermediate Quast statistics, although the merged assemblies improved upon the source assemblies in terms of misassemblies and misassembled contigs. Overall, the rate of mismatches was low at an average (across all assemblies) of 47 errors per 100kb (Supplementary Table 1, Supplementary Fig. 12). Mismatches and indels can be further reduced using existing programs, such as Quiver (Chin, et al. 2013). We used Quiver to polish all non-downsampled hybrid, self, and merged assemblies that used at least 15 SMRTcells of data. After Quiver, the average mismatch rate of the selected assemblies decreased from 24 per 100kb to 15, while the average indel rate decreased from 180 per 100kb to 32 ( Supplementary Fig. 13). Size selection and assembly contiguity Long reads generated by library preparation with aggressive size selection (Kim, et al. 2014) can generate extremely contiguous and accurate de novo assemblies (Berlin, et al. 2015). Genomic DNA libraries prepared with less stringent size selection (see Methods) can generate reads that are substantially shorter than the reads that have been shown to assemble into nearly gapless contigs (Kim, et al. 2014) (Fig. 2a). Longer reads are predicted to generate more contiguous genomes (Lander and Waterman 1988;Motahari, et al. 2013). We measured this by assembling genomes using randomly sampled whole reads (see Materials and Methods) from the ISO1 dataset to simulate a read length distribution comparable to, but slightly longer than is typical when size selection is not aggressive. 
Due to the long read length distribution of the ISO1 dataset relative to the shorter target distribution above, a maximum of 52X of ISO1 data could be sampled. Consistent with the theoretical prediction that, all else being equal, shorter reads produce more fragmented assemblies (Lander and Waterman 1988;Motahari, et al. 2013), reads from the downsampled 20 SMRTcell ISO1 data produced a PB-only assembly with an NG50 of 1.38 Mb, which is shorter than the NG50 (1.98 Mb) of the assembly from the same amount of ISO1 long read data (Fig. 2c). In addition, nearly all long contigs present in the original 20 SMRTcell assembly are fragmented in the assembly from the shorter reads ( Supplementary Fig. 6), although the amount of sequence data (52X) used to build the assemblies is the same. . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; For hybrid assembly, the shorter dataset also produced significantly less contiguous assemblies, consistent with predictions from theory (Motahari, et al. 2013) ( Fig. 2b). The NG50 achieved with 26X coverage of the shorter dataset was 1.62Mb, compared to an NG50 of 3.58Mb with the original ISO1 data. This is consistent with the PB-only result -longer read lengths lead to higher assembly contiguity. Thus, a library preparation procedure that aggressively size selects DNA is crucial in delivering long contigs. The effects of read quality on assembly As with reduction in read length, increased read errors are predicted to worsen assembly quality because noisier reads increase the required read length and coverage to attain a high quality assembly (Churchill and Waterman 1992;Shomorony, et al. 2015). When a PacBio sequencing experiment is pushed for high yield through either high polymerase or template concentration, the data exhibits lower quality scores (Fig. 3). Thus, with equal coverage and read length distribution, reads with higher error rates should result in a more fragmented assembly. To measure this effect, we partitioned the ISO1 PacBio read data into three groups with equal amounts of sequence ( Supplementary Fig. 7). For the first two groups, the data was split in half, with one half comprising the reads from the bottom 50% of phred scores and the other comprising the top 50%. Cutoffs were chosen for individual 100bp length bins, so the resulting datasets maintained the length distribution of the original data. The third dataset was generated by randomly selecting 50% of the reads in the full dataset. We then performed PacBioonly and hybrid assemblies with these data. Low read quality had a particularly dramatic effect on assembly by self correction (Fig. 4): the high quality and randomly sampled reads produced substantially better assemblies (6.23 Mb and 6.15 Mb, respectively) than the assembly made from low quality reads (NG50 146 kb). Hybrid assembly contiguity was far more robust to low quality reads (Fig. 4: NG50 of 3.1Mb for the high quality reads, 2.5Mb for the unfiltered reads, and 2.2Mb for the low quality reads), showing only moderate variation amongst different quality datasets. . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 
16, 2015; DNA isolation for long reads As shown in the previous sections, read length is an important determinant of genome assembly contiguity. The method used for DNA isolation to generate the published PacBio Drosophila assembly involved DNA extraction by CsCl density gradient centrifugation and g-Tube (Covaris, Woburn, MA) based DNA shearing (Kim, et al. 2014). CsCl gradient centrifugation is a time-consuming method that requires expensive equipment that is not routinely found in most labs. Additionally, g-Tubes are expensive, require specific centrifuges, and are extremely sensitive to both the total mass of DNA input and to its length. These problems can be circumvented by using a widely available DNA gravity flow anion exchange column extraction kit in concert with a blunt needle shearing method (Graham and Hill 2001). Because the DNA fragment size distribution is so important, field inversion gel electrophoresis (FIGE) is an essential quality control step to validate the length distribution of the input DNA (Fig. 5) (see Methods for details). Sequences generated from libraries constructed from this isolation method are comparable to or longer than the published Drosophila PacBio reads (Kim, et al. 2014) ( Fig. 2a). The length distribution of the input DNA can potentially be improved further by using needles that generate even longer DNA fragments after shearing (supplementary Fig. 8). Discussion: Genome assembly projects must balance cost against genome contiguity and quality (Baker 2012). Self correction and assembly using only long reads clearly produces complete and contiguous genomes ( Fig.1; supplementary Table 1). However, it is often impractical to collect the quantity of PacBio sequence data (>50X) necessary for high quality self correction either because of price or because of scarcity of appropriate biological material, especially when assembling very large genomes. For example, at least 40 µg of high quality genomic DNA is required to for us to generate 1.5 µg of PacBio library when we use two rounds of size selection in the library preparation protocol. A 1.5 µg library produces, on average, 15-20 Gb of long DNA molecules. This dramatic loss of DNA during library preparation limits the amount of . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . PacBio data that can be obtained for a given quantity of source tissue. When a project is limited by cost or tissue availability, a hybrid approach using a mix of short and long read sequences is an alternative to self corrected long read sequences. Our results show that when 64.3X of 100bp paired end Illumina reads is used in combination with 10X -30X of PacBio sequences, reasonably high quality hybrid assemblies can be obtained, with 30X of PacBio sequences yielding the best assembly. In fact, as our results show, a 30X hybrid assembly is less fragmented and hence of higher quality than even a 50X self-corrected assembly (Fig. 1). However, our results also show that with the same long molecule data, PB only and hybrid assemblies often assemble complementary regions of the genome. Hence merging of a PB only and a hybrid assembly results in a better assembly than either of the two (supplementary table 1), regardless of the total amount of long molecule sequences (≥30X) used. 
Thus, projects for which ≥ 30X of single molecule sequence can be generated will better served by collecting an additional 50-100X of Illumina data. These data can then be used to generate both a self-corrected assembly and a hybrid assembly, which can then be merged to obtain an assembly of comparable contiguity to PB only assemblies using twice the amount of PacBio data (Fig. 1). This merged assembly approach produced the highest NG50 of any assembly at all coverage levels at which it could be tested, with little or no tradeoff in base accuracy or misassemblies ( Supplementary Fig. 12-13). Nonetheless, it is clear that the tools available for genomic assembly have inherent technical limitations: DBG2OLC assembly contiguity asymptotes as PacBio read coverage passes about 30X, and the PBcR pipeline produces the best assembly when the longest reads that make up 40X (of genome size) data are corrected and only the longest 25X from the corrected sequences are assembled (Berlin, et al. 2015). Indeed, when coverage greater than 25X is used for PacBio only assembly, there is a real loss of assembly quality as coverage increases (data not shown). This may be because an increase in coverage leads to the stochastic accumulation of contradictory reads that cannot be easily reconciled, a limitation of the overlap-layout-consensus (OLC) algorithm used in assembling the long reads (Miller, et al. 2010;Myers 1995). . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; Single molecule sequencing technologies, as offered by PacBio and Oxford Nanopore (Goodwin, et al. 2015), promise to improve the quality of de novo genome assemblies substantially. However, as we have shown using PacBio sequences as example, not all LMS data is equally useful when assembling genomes. We provide empirical validation, perhaps for the first time, of length and quality on assembly contiguity. Additionally, our results provide a novel insight: high throughput short reads can still be useful in improving contiguity of assemblies created with LMS, even when LMS coverage is high. In light of our results, we have a compiled a list of best practices for DNA isolation, sequencing, and assembly (Supplementary Fig. 10 and Supplementary Fig. 11). Particularly important for DNA isolation is quality control of read length via pulsed field gel electrophoresis. Regarding assembly, we recommend that researchers obtain between 50x and 100x Illumina sequence. Next is to determine how much long molecule coverage to obtain: between 25x and 35x, or greater than 35x. With coverage below 35X, PB only methods often fail to assemble, and produce low contiguity when they do assemble, and thus, we can only confidently recommend a hybrid assembly. Above 35X, we recommend meta assembly of a hybrid and a PB only assembly. In this case, we recommend downsampling to the 35X longest PacBio reads when generating the hybrid assembly may be helpful because hybrid assembly contiguity decreases above this coverage level, but this has not been extensively tested. For the last several years, the rapid development of short read sequencing has fostered an explosion of genome sequencing. However, as a result of the popularity of short read technologies, the average quality and contiguity of published genomes has plummeted (Alkan, et al. 2011). 
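The "longest 35X" downsampling recommended above can be sketched as follows; the function, its interface, and the in-memory read-length list are illustrative assumptions rather than the authors' actual scripts:

```python
def longest_reads_to_coverage(read_lengths, target_coverage=35, genome_size=130_000_000):
    """Keep the longest reads until their summed length reaches
    target_coverage * genome_size.  Returns the indices of the kept reads.
    Hedged sketch of the 'longest 35X' downsampling; not the authors' script."""
    budget = target_coverage * genome_size
    order = sorted(range(len(read_lengths)),
                   key=lambda i: read_lengths[i], reverse=True)
    kept, total = [], 0
    for i in order:
        kept.append(i)
        total += read_lengths[i]
        if total >= budget:
            break
    return kept
```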
Indeed, short read sequences are poorly suited to the task of assembly, especially when compared with long molecule alternatives. While long molecule sequencing has rekindled the promise of high quality reference genomes for any organism, it is substantially more expensive than short read alternatives. In order to mitigate uncertainties inherent in adopting new technology, we have outlined the most salient features to consider when planning a genome assembly project. We have recommended effective DNA isolation and preparation practices that result in long reads that take advantage of what the PacBio technology has to offer. We have also provided a guide for assembly that leads to extremely contiguous genomes even when . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; circumstances prevent the collection of large quantities of long molecule sequence data recommended by current methods. PB-only Assembly For PacBio sequences, the assembly pipeline is divided into three parts: correction, assembly, and polishing. Correction reduces the error rate in the reads to 0.5-1% (Berlin, et al. 2015), and is necessary because reads with a high (~15%) error rate are extremely difficult to assemble (Myers, et al. 2000). Correction is facilitated by high PacBio coverage, which allows the error corrector to successfully 'vote out' errors in the PacBio reads. For self correction, we used the PBcR pipeline (Berlin, et al. 2015) as implemented in wgs8.3rc1 which, by default, corrects the longest 40X reads. The second step involves assembling the corrected reads into contigs. We used the Celera assembler (Myers, et al. 2000), included in the same wgs package, for assembly. A third optional step involves polishing the contigs using Quiver (Chin, et al. 2013), which brings the error rate down to 0.01% or lower. All of the assemblies described in this paper were generated with the same PBcR command and spec file (commands and settings, Supplementary materials). For PB only assembly of D. melanogaster ISO1 sequences, we used a publicly available PacBio sequence dataset (Kim, et al. 2014). We chose the D. melanogaster dataset for our experiments and simulations because D. melanogaster is widely used in genetics and genomics research and its reference sequence (release 5.57,http://www.fruitfly.org) is one of the best, if not the best, eukaryotic multicellular genome assemblies in terms of assembly contiguity. This is true for both the PacBio generated assembly (21Mb contig N50) 13 and the Sanger assembly (14Mb scaffold N50) of ISO1. A high quality reference assembly serves as a great positive control and a reference. We evaluated assembly qualities using the standard assembly statistics (average contig size, number of contigs, assembled genome size, N50, etc.) using the package Quast (Gurevich, et al. 2013). . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; Hybrid Assembly PB only assembly of high error, long molecule sequences depends upon redundancy between the various low quality reads to 'vote out' errors and identify the true sequence in the sequenced individual. 
An alternative approach to this problem is to use known high quality sequencing reads to correctly call the bases in the sequence, and then to use PacBio reads to identify the connectivity of the genome. In order to achieve the best possible assembly results, we tested several different hybrid assembly pipelines before choosing DBG2OLC and Platanus (Kajitani, et al. 2014). In our early tests, the next highest performing hybrid assembler, a combination of ECTools (Lee, et al. 2014) and Celera, achieved a highest N50 of 616kb in Arabidopsis thaliana using 19 SMRT cells of data (Lee, et al. 2014); in contrast, using 20 SMRT cells of the same data, the DBG2OLC and Platanus pipeline produced an N50 of 4.8Mb. We thus disregarded ECTools and focused on DBG2OLC. We tested the alternative error corrector, LorDEC (Salmela and Rivals 2014), along with the Celera assembler, but found that the Lordeccorrected Celera assembly of our standard D. melanogaster dataset (26X of PacBio data and 64.3X of Illumina data) produced an NG50 of only 109KB; thus, we also discarded Lordec as a viable assembly choice compared to DBG2OLC. Using the standard 64.3X of Illumina data discussed above and 26X of PacBio data, we compared DBG2OLC runs using three different De Bruijn graph assemblers: SOAP (Luo, et al. 2012), ABySS (Simpson, et al. 2009), and Platanus. The NG50s for the three assemblies were, respectively, 2.43Mb, 0.167Mb, and 3.59Mb. Based on this result, we chose to use Platanus for the remainder of the assemblies. We used the pipeline recommended by DBG2OLC (Ye, et al. 2014) to perform hybrid assemblies. In this pipeline, we used Platanus to perform De Bruijn graph assembly on the Illumina reads. We used 8.36 Gb (64.3X) of Illumina sequence data of the ISO1 D. melanogaster inbred line generated by the DPGP project (Langley, et al. 2012) to generate a De Bruijn graph assembly using Platanus. We used DBG2OLC to align our PacBio reads to the De Bruijn graph assembly to produce a 'backbone', then, according to the DBG2OLC standard pipeline, used the backbone generate the consensus using the programs Blasr (Chaisson and Tesler 2012) and PBDagCon . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; (https://github.com/PacificBiosciences/pbdagcon). As with the PB only assemblies above, we evaluated assembly quality using the Quast package. Assembly merging Hybrid assembly and PacBio assembly were merged using a custom C++ program (https://github.com/mahulchak/quickmerge). The program takes two fasta files (containing contigs from a PB only assembly and contigs from a hybrid assembly) as inputs and splices contigs from the two assemblies together to produce an assembly with higher contiguity. First, the program MUMmer (Kurtz, et al. 2004) is used to compute the unique alignments between contigs from the two assemblies. Our program then uses these alignments and finds the high confidence overlaps (HCO) among them (supplementary Fig. s9). The program identifies HCOs by dividing the total alignment length between contigs by the length of unaligned but overlapping regions of the alignment partners (supplementary Fig. s9). The "HCO" parameter controls merging sensitivity at the cost of increased false positives: the higher the HCO parameter value, the more stringent the cutoff for HCO selection. 
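A minimal sketch of the high-confidence-overlap (HCO) criterion just described, the ratio of aligned length to the unaligned but overlapping length between two alignment partners, is given below; the record layout and function names are illustrative and do not reproduce the quickmerge source:

```python
from dataclasses import dataclass


@dataclass
class Overlap:
    """One alignment between two contigs (simplified, hypothetical record)."""
    ref_contig: str
    query_contig: str
    aligned_length: int    # total aligned length between the two contigs
    overhang_length: int   # unaligned but overlapping length of the partners

    def hco_ratio(self) -> float:
        # Ratio used to judge overlap confidence; guard against a zero overhang.
        return self.aligned_length / max(self.overhang_length, 1)


def high_confidence_overlaps(overlaps, cutoff=1.5):
    """Keep overlaps whose ratio exceeds the HCO cutoff (1.5 in the text,
    5.0 for seed contigs)."""
    return [ov for ov in overlaps if ov.hco_ratio() > cutoff]
```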
For our assembly merging, we used 1.5 as the HCO cutoff. The program then searches amongst these HCOs to find alignments that involve long contigs in the linker assembly (here, the PB only assembly). The program then uses these long contigs as seeds to begin a search for contig length expansion. A higher HCO cutoff is used for seed contigs to avoid spurious seeding. We used an HCO value of 5.0 for seed contigs for all merged assemblies. The seed contigs are extended on both sides by looking for alignment partners in the alignment pool that passed the HCO >1.5 cutoff. Next, the ordered contigs are joined by crossing over from one assembly to another (supplementary Fig. s9). For 15, 20, 25, 30 SMRTcells datasets, merged assemblies were generated using the PB only assembly and their corresponding hybrid assemblies. For 35 and 42 (all reads) SMRTcells datasets, the PB only assemblies were merged with the hybrid assembly obtained from the 30 SMRTcells dataset. All hybrid assemblies used for merging were generated without downsampling by read length or quality. Downsampling . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; We used three different downsampling schemes on the D. melanogaster data: first, we randomly downsampled the data by drawing a random set of SMRTcells of data from the entire set of 42 SMRTcells; second, from those datasets, we downsampled the longest 50% and 75% of the reads. Finally, we downsampled the D. melanogaster data to match the read length distributions of PacBio reads from a pilot Drosophila pseudoobscura genome project that was produced using a standard protocol without aggressive size selection and generously made available by Stephen Richards. We used the lowess function in R with a smoother span (f) of 1/5 to generate curves representing the distribution of read lengths in the D. melanogaster and D. pseudoobscura datasets, then assigned a probability to each read length defined as the quotient of the melanogaster distribution and the pseudoobscura distribution at that read length. Reads were then randomly removed from the D. melanogaster dataset according to the assigned probabilities. This method was used for all numbers of SMRTcells up to 20. Thus, we generated a set of reads that relatively closely resembles the read length distribution of the original D. pseudoobscura data, but is made up of D. melanogaster sequence data, allowing for a comparison of assembly quality with regard to read length without differences in the genomes of the two species as a confounding factor. The lowess function resulted in a slightly over-smoothed distribution such that samples drawn from it were slightly longer than in D. pseudoobscura. Consequently, assemblies from these reads should be slightly better than if they exhibited the (shorter) distribution for D. pseudoobscura. As such, this choice is conservative and, if anything, underestimates the importance of size selection. Additionally, we downsampled based on read quality to test the effect of read quality on assembly contiguity. We used a custom script to separate the entire 42 SMRTcell ISO1 dataset into two halves. One half contained the 50% of all reads with the lowest average base quality, while the other half contained the 50% of all reads with the highest average base quality. 
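The quality-based split described above, which halves the reads within individual 100 bp length bins so that both halves retain the original read-length distribution, can be sketched as follows; the tuple format and the use of a per-bin median are assumptions standing in for the authors' custom script:

```python
from collections import defaultdict
from statistics import median


def split_by_quality(reads, bin_width=100):
    """reads: iterable of (read_id, length_bp, mean_quality) tuples (assumed format).

    Within each length bin of bin_width bp, reads at or below the bin's median
    quality go to the low half, the rest to the high half, so both halves keep
    the original read-length distribution."""
    bins = defaultdict(list)
    for read in reads:
        bins[read[1] // bin_width].append(read)

    low_half, high_half = [], []
    for members in bins.values():
        cutoff = median(r[2] for r in members)
        for r in members:
            (low_half if r[2] <= cutoff else high_half).append(r)
    return low_half, high_half
```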
We also generated a dataset containing 50% of the data that consisted of randomly chosen reads (to preserve the quality distribution of the original data). Preparing high quality DNA library for long reads . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; Obtaining high quality, high molecular weight (HMW) genomic DNA We used Qiagen's Blood and Cell culture DNA Midi Kit for DNA extraction. As single molecule technologies (PacBio and Oxford Nanopore) do not require any sequence amplification step, a large amount of tissue is required to ensure enough DNA for library preparations that opt for no amplification (as is standard for genome assembly sequencing). For flies, 200 females or 250 males flies is sufficient for optimal yield (40-60ug ug DNA) from a single anion-exchange column. For other organisms, number of individuals need to be adjusted based on the tissue mass. A good rule of thumb is to keep the total amount of input tissue 100-150mg for optimal yield from each column. To extract genomic DNA, 0-2 days old flies were starved for two hours, flash frozen in liquid nitrogen, and then ground into fine powder using a mortar and pestle pre-chilled with liquid nitrogen. The tissue powder is directly transferred into 9.5 ml of buffer G2 premixed with 38µl of RNaseA (100mg/ml) and then 250 µl (0.75U) of protease (Qiagen) is added to the tissue homogenate. The volume of protease can be increased to 500 µl to reduce the time of proteolysis. The tissue powder is mixed with the buffer by inverting the tube several times, ensuring that there are no large tissue clumps present in the solution. The homogenate is then incubated at 50˚C overnight with gentle shaking (with 500µl protease, this incubation time can be reduced to 2 hours or less). The next day, the sample is taken out of the incubator shaker and centrifuged at 5000xg for 10 minutes at 4˚C to precipitate the tissue debris. The supernatant is decanted into a fresh 15ml tube. The little remaining particulate debris in the tube was removed with a 1 ml pipette. The sample is then vortexed for 5 seconds to increase the flow rate of the sample inside the column and then poured into the anion-exchange column. The column is washed and the DNA is eluted following the manufacturer's protocol. Genomic DNA is precipitated with 0.7 volumes of isopropanol and resuspended in Tris buffer (pH 8.0). For storage of one week or less, we kept the DNA at 4˚C to minimize freeze-thaw cycles; for longer storage, we kept the DNA at -20˚C. 1.5" blunt end needles (Jensen Global, Santa Barbara, CA) were used to shear the DNA. The needle size can be varied to obtain DNA of different length distribution: 24 gauge needles produces a size range of 24-50 kb. To obtain larger fragments, <24 gauge needles need to be used. For the DNA we have sequenced, up to 200ug of high molecular weight raw genomic DNA was sheared using the 24 gauge needle (Fig. 5). Additionally, we have also sheared DNA with 21, 22, and 23 gauge needles to demonstrate the size distribution they generate (supplementary Fig. s8). In brief, the entire DNA solution is drawn into a 1ml Luer syringe and dispensed quickly through the needle. This step is repeated 20 times to obtain the desired distribution of fragment sizes. 
Quality Control using FIGE We verified the size distribution of unsheared and sheared genomic DNA using field inversion gel electrophoresis (FIGE), which allows separation of high molecular weight DNA. The DNA is run on a 1% agarose gel (0.5x TBE) with a pulse field gel ladder (New England Biolabs, Ipswich, MA). The gel is run at 4˚C overnight in 0.5 x TBE. To avoid temperature or pH gradient buildup, a pump is used to circulate the buffer. The FIGE is run using a BioRad Pulsewave 760 and a standard power supply with the following run Library preparation The needle sheared DNA is quantified with Qubit fluorometer (Life Technologies, Grand Island, NY) and NanoDrop (Thermo Scientific,,Wilmington, DE). Following quantification, 20 µg of sheared DNA is optionally run in four lanes of the Blue Pippin size selection instrument (Sage Science, Beverly, MA) using 15-50 kb as the cut-offs for size selection (Fig. 5). This optional size selection step increases final library yield at the cost of requiring more input DNA. This size selected DNA is then used to prepare SMRTbell template library following PacBio's protocol. A second round of size selection is performed on the SMRTbell template using a 15-50 kb cutoff to remove the smaller . CC-BY-NC-ND 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/029306 doi: bioRxiv preprint first posted online Oct. 16, 2015; fragments generated during the SMRTbell library preparation step (Fig. 5). The second step ensures that DNA molecule smaller than 15kb are not sequenced in zero mode waveguides. DNA Sequencing PacBio sequencing was conducted to demonstrate length distributions (D. simulans Fig. 2a) and evaluate the impact of library preparation on quality (Fig. 3), and was performed at the UCI High Throughput Core Facility using DNA isolated using the protocol described above. We sequenced one SMRTcell of Drosophila genomic DNA with the following conditions to obtain sequences with standard quality and length distribution: 10:1 polymerase to template ratio, 250 pM template concentration. To demonstrate the tradeoff between yield and quality, we sequenced one SMRTcell each for polymerase:template ratios of 40:1,80:1,100:1 with template concentration held constant at 200pM, and one SMRTcell each with 300pM and 400pM template concentration with the polymerase:template ratio being held constant at 10:1. NG50 here is the contig size such that at least half of the 130Mb D. melanogaster genome (65Mb) is contained in contigs of that size or larger. "50% longest" and "75% longest", respectively, refer to datasets in which only the longest 50% or 75% of the available reads have been used. The coverage listed on the x-axis in this case refers to the total amount of available data (before downsampling). "ISO1 to Pseudo by removal" refers, to the downsampling scheme in which data was removed from the ISO1 dataset to cause the read lengths of the ISO1 data to resemble read lengths in the publically available D. pseudoobscura by removing reads differentially based upon length. . The distribution of read quality in sequencing runs performed at the UCI genomics core using our DNA preparation technique. "P" here refers to polymerase loading during sequencing (the proportion of polymerase to template, where 10 would indicate a 10:1 ratio of polymerase to template), while "T" refers to template loading concentration during sequencing (in picomolarity). 
Figure 2a, a plot of cumulative length distribution. These curves represent the cumulative length distribution of final assemblies using low, medium, and high quality selected reads using either PB only assembly or hybrid assembly. Figure 5: An example of correctly extracted and sheared DNA visualized using field inversion gel electrophoresis. The ladder is the NEB low range PFG marker (no longer produced). The lanes of the gel are as follows: ladder, unsheared DNA, DNA sheared with a 24 gauge needle, sheared DNA size selected with 15-50kb cut-off, SMRTbell template library after 15-50kb size selection. From the gel, it is evident that there is a minimal 'tail' of DNA below ~15kb, the preferred size selection minimum.
9,370.2
2015-10-16T00:00:00.000
[ "Biology" ]
Observation of localized flat-band states in Kagome photonic lattices We report the first experimental demonstration of localized flat-band states in optically induced Kagome photonic lattices. Such lattices exhibit a unique band structure with the lowest band being completely flat (diffractionless) in the tight-binding approximation. By taking the advantage of linear superposition of the flat-band eigenmodes of the Kagome lattices, we demonstrate a high-fidelity transmission of complex patterns in such two-dimensional pyrochlore-like photonic structures. Our numerical simulations find good agreement with experimental observations, upholding the belief that flat-band lattices can support distortion-free image transmission. © 2016 Optical Society of America OCIS codes: (130.0130) Integrated optics; (130.2790) Guided waves; (230.0230) Optical devices; (230.7370) Waveguides; (230.3120) Integrated optics devices. References and links 1. N. K. Efremidis, S. Sears, D. N. Christodoulides, J. W. Fleischer, and M. Segev, “Discrete solitons in photorefractive optically induced photonic lattices,” Nature (London) 66, 046602 (2002). 2. J. W. Fleischer, M. Segev, N. K. Efremidis, and D. N. Christodoulides, “Observation of two-dimensional discrete solitons in optically induced nonlinear photonic lattices,” Nature (London) 422, 147–150 (2003). 3. F. Lederer, G. I. Stegeman, D. N. Christodoulides, G. Assanto, M. Segev, and Y. Silberberg, “Discrete solitons in optics,” Phys. Rep. 463, 1–126 (2008). 4. Z. Chen, M. Segev, and D.N. Christodoulides, “Optical spatial solitons: historical overview and recent advances,” Rep. Prog. Phys. 75, 086401 (2012). 5. S. Longhi, M. Marangoni, M. Lobino, R. Ramponi, P. Laporta, E. Cianci, and V. Foglietti, “Observation of dynamic localization in periodically curved waveguide arrays,” Phys. Rev. Lett. 96 243901 (2006). 6. T. Schwartz, G. Bartal, S. Fishman, and M. Segev, “Transport and Anderson localization in disordered twodimensional photonic lattices,” Nature (London) 446, 52 (2007). 7. A. Szameit, F. Dreisow, M. Heinrich, T. Pertsch, S. Nolte, A. Tünnermann, E. Suran, F. Louradour, A. Barthélémy, and S. Longhi, “Image reconstruction in segmented femtosecond laser-written waveguide arrays,” Appl. Phys. Lett. 93, 181109 (2008). 8. A. Szameit, Y. V. Kartashov, F. Dreisow, M. Heinrich, T. Pertsch, S. Nolte, A. Tünnermann, V. A. Vysloukh, F. Lederer, and L. Torner, “Inhibition of light tunneling in waveguide arrays,” Phys. Rev. Lett. 102(15), 153901 (2009). 9. P. Zhang, N. K. Efremidis, A. Miller, Y. Hu, and Z. Chen, “Observation of coherent destruction of tunneling and unusual beam dynamics due to negative coupling in three-dimensional photonic lattices,” Opt. Lett. 35(19), 3252–3254 (2010). 10. J. Yang, P. Zhang, M. Yoshihara, Y. Hu, and Z. Chen, “Image transmission using stable solitons of arbitrary shapes in photonic lattices,” Opt. Lett. 36, 772 (2011). #259832 Received 23 Feb 2016; revised 6 Apr 2016; accepted 11 Apr 2016; published 14 Apr 2016 (C) 2016 OSA 18 Apr 2016 | Vol. 24, No. 8 | DOI:10.1364/OE.24.008877 | OPTICS EXPRESS 8877 11. R. Keil, Y. Lahini, Y. Shechtman, M. Heinrich, R. Pugatch, F. Dreisow, A. Tünnermann, S. Nolte, and A. Szameit, “Perfect imaging through a disordered waveguide lattice,” Appl. Phys. Lett. 37, 809 (2012). 12. H. Aoki, M. Ando and H.Matsumura, “Hofstadter butterflies for flat bands,” Phys. Rev. B 54, R17296 (1996). 13. R. A. Vicencio, C. Cantillano, L. Morales-Inostroza, B. Real, C. Mejia-Cortes, S. Weimann, A. Szameit, and M. I. 
Molina, “Observation of localized states in Lieb photonic lattices,” Phys. Rev. Lett. 114, 245503 (2015). 14. S. Mukherjee, A. Spracklen, D. Choudhury, N. Goldman, P. Ohberg, E. Andersson and R. R. Thomson, “Observation of a localized flat-band state in a photonic Lieb lattice,” Phys. Rev. Lett. 114, 245504 (2015). 15. S. Xia, Y. Hu, D. Song, Y. Zong, L. Tang, and Z. Chen, “Demonstration of flat-band image transmission in optically induced Lieb photonic lattices,” Opt. Lett. 41(7), 1435–1438 (2016). 16. J. L. Atwood, “Kagome lattice: a molecular toolkit for magnetism,” Nat. Mater. 1, 91 (2002). 17. D. L. Bergman, C. Wu, and L. Balents, “Band touching from real-space topology in frustrated hopping models,” Phys. Rev. B 78, 125104 (2008). 18. B. Moulton, J. Lu, R. Hajndl, S. Hariharan, and M. J. Zaworotko, “Crystal engineering of a nanoscale kagome lattice,” Angew. Chem. Int. Ed. Engl. 41, 2821–2824 (2002). 19. Y. Nakata, T. Okada, T. Nakanishi, and M. Kitano, “Observation of flat band for terahertz spoof plasmon in metallic kagome lattice,” Phys. Rev. B 85, 205128 (2012). 20. S. Endo, T. Oka, and H. Aoki, “Tight-binding photonic bands in metallophotonic waveguide networks and flat bands in kagome lattices,” Phys. Rev. B 81, 113104 (2010). 21. H. Takeda, T. Takashima, and K. Yoshino, “Flat photonic bands in two-dimensional photonic crystals with kagome lattices,” J. Phys.: Condens. Matter 16, 6317 (2004). 22. M. Boguslawski, P. Rose, and C. Denz, “Nondiffracting kagome lattice,” Appl. Phys. Lett. 98, 061111 (2011). 23. Y. Gao, D. Song, S. Chu, and Z. Chen, “Artificial graphene and related photonic lattices generated with a simple method,” IEEE Photonics J., 6, 2201806 (2014). 24. R. A. Vicencio and C. Meja-Corts, “Diffraction-free image transmission in kagome photonic lattices,” J. Opt. 16, 015706 (2014). 25. O. Peleg, G. Bartal, B. Freedman, O. Manela, M. Segev, and D. N. Christodoulides, “Conical diffraction and gap solitons in honeycomb photonic lattices,” Phys. Rev. Lett. 98, 103901 (2007). 26. D. Song, V. Paltoglou, S. Liu, Y. Zhu, D. Gallardo, L. Tang, J. Xu, M. Ablowitz, N. K. Efremidis, and Z. Chen, “Unveiling pseudospin and angular momentum in photonic graphene,” Nat. Commun. 6 6272 (2015). 27. H. Martin, E. D. Eugenieva, Z. Chen, and D. N. Christodoulides, “Discrete solitons and soliton-induced dislocations in partially coherent photonic lattices,” Phys. Rev. Lett. 92, 123902 (2004). 28. A. Kelberer, M. Boguslawski, P. Rose, and C. Denz, “ Embedding defect sites into hexagonal nondiffracting wave fields,” Opt. Lett. 37, 5009–5011 (2012). 29. R. A. Vicencio and M. Johansson, “Discrete flat-band solitons in the kagome lattice,” Phys. Rev. A 87, 061803(R) (2013). 30. G. Chern and A. Saxena, “PT-symmetric phase in kagome photonic lattices,” Opt. Lett. 40(24), 5806–5809 (2015). 31. S. Mukherjee and R. R. Thomson, “Observation of localized flat-band modes in a quasi-one-dimensional photonic rhombic lattice,” Opt. Lett. 40, 5443–5446 (2015). Introduction Nondestructive transmission of optical information has always been a challenging subject of research. Over the past several years, evanescently coupled waveguide arrays, or better known as photonic lattices, have provided an extremely effective platform for studying many intriguing fundamental phenomena ranging from discrete solitons to dynamical localization and Anderson localization in disordered lattices [1][2][3][4][5][6]. 
They have also been explored for applications such as image transmission in a variety of optical settings [7][8][9][10][11]. In particular, recent developments in so-called flat-band lattices [12] have opened up new avenues for the manipulation of light and for controlled image transmission. For instance, Lieb photonic lattices were realized via the femtosecond laser-writing technique as well as the optical induction technique, allowing for direct observations of diffractionless flat-band states [13][14][15]. In these recent experimental demonstrations, the Lieb photonic lattices remain perfectly periodic without any modulation, yet they are able to support localized states owing to the principle of phase cancellation, which is a common feature of flat-band eigenstates [12]. A Kagome lattice is a depleted triangular lattice that is essentially a two-dimensional (2D) counterpart of the "pyrochlore" structure; it has historically been studied as a model for geometrically frustrated magnetism and as a system exhibiting flat bands [16,17]. For decades in condensed matter physics, Kagome lattices were mainly a subject of theoretical study because of their intriguing properties associated with spin frustration, but advances in technology have turned them from theory into reality, including, for example, the realization of nanoscale Kagome lattices by self-assembly of atoms and molecules [18] and of metallic Kagome lattices by novel design and fabrication of metamaterials [19]. Several schemes have been proposed theoretically to create flat bands in the 2D Kagome lattice, utilizing metallo-photonic waveguide networks [20] or photonic crystal structures [21]. In optics, photonic Kagome lattices have also been established using the technique of optical induction [22,23]. In fact, it has recently been proposed theoretically that the flat-band system in a Kagome photonic lattice could be used for diffraction-free image transmission [24]. However, up to now, undistorted propagation of flat-band states in Kagome photonic lattices has not, to the best of our knowledge, been accomplished experimentally.

In this paper, we report the first experimental demonstration of a localized flat-band state in Kagome photonic lattices. The Kagome lattices are "fabricated" in a bulk self-focusing nonlinear crystal by a simple yet effective optical induction technique. This induction technique is based on optical Fourier transformation through an amplitude mask superimposed with a phase mask, without the need for a spatial light modulator (SLM) to engineer the lattice-inducing beam. We show that optically induced Kagome lattices offer a convenient platform for probing flat-band states. Furthermore, we realize high-fidelity bound-state transmission in such 2D pyrochlore-like photonic structures by judiciously exciting a superposition of flat-band eigenmodes of the Kagome lattices. Compared with our previous work on Lieb lattices [15], which are better suited to square- or L-shaped image transmission, in Kagome lattices the flat band is the lowest ("ground-state") band, and the localized mode structures are more suitable for ring- or necklace-shaped image transmission.

Theoretical model and linear spectrum

The linear propagation of a light beam in photonic lattices is well described by the following Schrödinger-like equation under the paraxial approximation [1][2][3]:

i ∂ψ/∂z + (1/(2k1)) ∇⊥²ψ + k0 Δn(x, y) ψ = 0,   (1)

where (x, y) are the transverse coordinates, z represents the longitudinal propagation direction, ψ is the electric field envelope of the probe beam, and ∇⊥² is the 2D transverse Laplacian operator. n0 is the refractive index of the nonlinear medium, k0 = 2π/λ0 is the wave number in vacuum, and k1 = n0k0. The refractive index change Δn(x, y) = n0³γ33E0/[2(1 + Il)] represents the index modulation corresponding to the 2D Kagome photonic lattice, where Il is the Kagome intensity pattern, n0 = 2.35, the electro-optic coefficient γ33 = 280 pm/V, and the bias field E0 = 1 kV/cm. Assuming that only hopping between nearest-neighbor lattice sites is considered, we can use the tight-binding model to solve Eq. (1). Here we consider the 2D Kagome photonic lattice shown in Fig. 1(a) as the index change (potential) in Eq. (1). Figure 1(b) shows a 3D plot of the band structure in k-space. In this case, the linear spectrum exhibits three bands for the unit cell of the Kagome lattice [dashed triangle of Fig. 1(a)]: two dispersive bands and one nondispersive (flat) band. Similar to honeycomb lattices [25,26], the dispersive bands feature a linear dispersion relation in the vicinity of the Dirac points (six K points), where the Dirac cones from the upper and lower bands meet. The bottom of the lower band touches the third (flat) band at the Γ point of the first Brillouin zone (BZ). As seen from Fig. 1(b), the third band is completely degenerate, i.e., flat. The presence of this flat band, which arises from local interference effects, is the significant feature of this band structure. The localized eigenmodes in the flat band are called "ring" modes, with equal amplitudes but alternating opposite phases [24], marked as black and white circles [see Fig. 1(c)]. Different from modes in normal Bloch bands, the flat-band linear wave functions (flat-band states) are not extended, owing to destructive interference. The purpose of this paper is to experimentally demonstrate the predicted localized flat-band modes in Kagome photonic lattices and to explore the possibility of using superpositions of these modes for distortion-free image transmission through bulk media.
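As a cross-check on the band structure just described, the flatness of the third band can be verified with a few lines of code. The sketch below is a minimal numerical illustration of the standard nearest-neighbor tight-binding model of the Kagome lattice, not a reproduction of the continuum model of Eq. (1); the hopping amplitude t and the k-space grid are arbitrary placeholder choices. In the sign convention used below the flat band appears at the top of the energy spectrum; in the waveguide-array (propagation-constant) convention of the text the sign of the effective coupling is reversed, which is why the same band appears as the lowest, "ground-state" band in Fig. 1(b).

# Minimal sketch (not from the paper): nearest-neighbor tight-binding bands of
# a Kagome lattice. The hopping amplitude t and the k-grid are illustrative
# placeholders; the only point is that one of the three bands comes out
# k-independent (flat) while the other two are dispersive.
import numpy as np

t = 1.0                                    # hopping amplitude (arbitrary units)
a1 = np.array([1.0, 0.0])                  # triangular Bravais vectors (a = 1)
a2 = np.array([0.5, np.sqrt(3) / 2])
d_ab, d_ac, d_bc = a1 / 2, a2 / 2, (a2 - a1) / 2   # sublattice connecting vectors

def bloch_hamiltonian(k):
    """3x3 Bloch Hamiltonian H(k) for nearest-neighbor hopping -t."""
    h_ab = -2 * t * np.cos(k @ d_ab)
    h_ac = -2 * t * np.cos(k @ d_ac)
    h_bc = -2 * t * np.cos(k @ d_bc)
    return np.array([[0.0, h_ab, h_ac],
                     [h_ab, 0.0, h_bc],
                     [h_ac, h_bc, 0.0]])

# Reciprocal lattice vectors and a uniform sampling of the Brillouin zone.
b1 = 2 * np.pi * np.array([1.0, -1.0 / np.sqrt(3)])
b2 = 2 * np.pi * np.array([0.0, 2.0 / np.sqrt(3)])
n = 60
bands = np.array([np.linalg.eigvalsh(bloch_hamiltonian(u * b1 + v * b2))
                  for u in np.linspace(0, 1, n, endpoint=False)
                  for v in np.linspace(0, 1, n, endpoint=False)])

for i in range(3):                         # eigvalsh sorts bands in ascending order
    lo, hi = bands[:, i].min(), bands[:, i].max()
    print(f"band {i}: min = {lo:+.3f}, max = {hi:+.3f}, width = {hi - lo:.2e}")
# The top band sits at E = 2t for every k (numerical width ~ 1e-15, i.e. flat)
# and touches a dispersive band at the Gamma point, while the two dispersive
# bands form Dirac cones at the K points, as described for Fig. 1(b).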
Optical induction of Kagome photonic lattices

First, we fabricate the Kagome photonic lattices using the well-established optical induction technique [1,2]. The experimental setup used for lattice induction and for excitation of the flat-band states is shown in Fig. 2(a). A laser beam operating at 532 nm is divided into three paths (as labeled in the figure): the first path is the lattice-forming beam, the second path is the probe beam, and the third path provides a reference beam for interference measurements as needed. The lattice-forming beam [Fig. 2(a), red arrow 1] is ordinarily polarized and is made partially spatially incoherent by passing through a rotating diffuser. To generate the Kagome lattice intensity pattern, this partially incoherent beam is further modulated by a specially designed amplitude mask and a spectral filter (phase mask 1) positioned at the Fourier plane [23]. Phase mask 1 (PM1) is used to spatially filter the light and to modulate the phase of the spatial frequency spectrum in the front focal plane of the transform lens. PM1 consists of six holes, three of which are covered with tilted glass plates to adjust the relative phase. The diameter of each hole is 0.8 mm and the spacing between adjacent holes is 4.0 mm. When the phase difference between adjacent spatial frequency components is 2π/3, the Kagome lattice is established. This combined modulation produces a Kagome intensity pattern on the front facet of the photorefractive crystal (SBN, 5 mm × 10 mm × 5 mm), which in turn induces a Kagome index lattice when the crystal is biased by an electric field that provides a self-focusing nonlinearity. The technique of using a partially incoherent light beam for optical induction of photonic lattices is well established in the literature, as used for earlier experiments on discrete solitons [27] and for recent demonstrations of photonic graphene lattices and Kagome lattices [23,26]. Thus, in this work, we use this technique to create a Kagome lattice that remains stable and nearly invariant along the propagation direction throughout the nonlinear crystal for testing the flat-band states, but our focus is not on lattice induction. In the second path (red arrow 2), the probe beam is extraordinarily polarized. In order to excite a flat-band state, we use a phase-only spatial light modulator (SLM) to modulate the phase of the probe beam. We simultaneously encode the amplitude and phase information onto the SLM by designing a hologram (phase mask 2) consisting of several phase gratings arranged in a hexagonal structure, as shown in the bottom-left inset of Fig. 2(a). When the hologram is encoded onto the SLM, a broad extraordinarily polarized quasi-plane wave is sent to the SLM, and the first order of the diffracted light is then filtered through an adjustable diaphragm. With this method, we obtain a probe beam with a necklace-like intensity pattern and the desired phase structure needed for the "ring" mode shown in Fig. 1(c). After modulation by the SLM, the probe beam is divided into two parts by a Mach-Zehnder interferometer with a mechanical shutter inserted in one arm. When the shutter is closed, the probe beam excites only one flat-band eigenmode (i.e., a localized "ring" mode). However, when the shutter is open, the two outputs from the interferometer can be superimposed to generate a complex pattern (an elongated necklace) that excites two "ring" modes as a flat-band bound state. The size of each intensity spot of the probe beam at the input is controlled by the imaging lens, and the spacing between adjacent spots is controlled by adjusting the phase gratings. Meanwhile, the phase structure of the probe beam can be controlled by fine-tuning the relative positions of the six gratings. For example, when the stripes of nearest-neighboring gratings are arranged in a staggered fashion, as shown in PM2, we obtain a probe beam with an out-of-phase structure, whereas an unstaggered grating arrangement leads to an in-phase probe beam. In addition, the input/output intensity profiles of the probe beams and the Kagome lattices are monitored with a CCD camera. To examine the phase structure of the different profiles, we use the third-path beam as a tilted reference quasi-plane wave to obtain interferograms. The crystal used in our experiments is a 10-mm-long SBN:60 under a positive bias field of 1.6 kV/cm. The crystal's symmetry c-axis is oriented horizontally, and the applied electric field is along the crystalline c-axis. Typical experimental results for an optically induced Kagome lattice with 28 µm spacing are shown in Figs. 2(b) and 2(c). Figure 2(b) depicts the output intensity distribution of the induced Kagome lattice at the back face of the crystal.
In order to test the Kagome structured waveguide, a broad uniform beam (quasi-plane wave) is used as a probe to the lattice, and its output after propagating through the lattice is shown in Fig. 2(c), exhibiting clearly a Kagome structure due to linear guidance by the lattice-induced waveguide arrays. In our experiments, the ordinarily-polarized lattice-inducing beam would experience only weak nonlinear index change, so the anisotropic photorefractive nonlinearity does not play a significant role. On the other hand, the diffraction along the two orthogonal principal axes is slightly different due to inherent orientation anisotropy in the Kagome lattices. So we make the direction with slightly stronger diffraction along the crystalline c-axis (i.e., the direction with stronger nonlinearity) in our experiments. Using this method, our induced Kagome lattices are pretty uniform in both directions and are not afflicted by the anisotropic nonlinearity, as seen in Fig. 2(c) for the output intensity distribution of the guided wave pattern. Demonstration of localized flat-band states in Kagome photonic lattices Next, we demonstrate experimentally non-diffracting propagation of flat-band modes and their superposition in the Kagome lattice. Typical experimental results are shown in Fig. 3 , in contradistinction to that from disorder-induced localization. As expected, a single Gaussian beam experiences discrete diffraction in the lattice [ Fig. 3(i)]. A six-spot necklace pattern cannot be localized if they are all in phase [ Fig. 3(j)]. To excite the flat-band states, the spots in the necklace are made with equal amplitude but alternating opposite phase [ Fig. 3(c)], thus a localized "ring" mode is excited [ Fig. 3(k)]. Lower insets in Figs. 3(b) and 3(c) show the interferograms of the input probe beams with a tilted quasi-plane wave, where we can clearly see the in-phase and out-of-phase relation between neighboring intensity spots of the probe beam through the interference fringes. More importantly, the interferogram [inset in Fig. 3(k)] taken at the output from the lattice reveals that the out-of-phase structure of the localized "ring" modes is preserved during propagation. Of course, these phase information can be better retrieved from the other technique such as that used in Ref. [28]. Clearly, the difference in performance between Figs. 3(j) and 3(k) emphasizes the relevance of the phase structure for the excitation of flat-band states. Based on the invariability of any linear combination of eigenmodes in the direction of propagation, we constitute a simple bound state [ Fig. 3(d)] by the linear combination of two flat-band modes. Again, the nondestructive output pattern [ Fig. 3(l)] is observed, indicating the feasibility for distortion-free image transmission based on arbitrary superposition of flat-band modes. As compared with fs laser-written lattices [13,14], the optically induced lattices have less propagation length due to the limitation of crystal length of 10 mm. It should be pointed that the propagation length in our experiment is about 1.2 coupling length (i.e., more than one coupling length). Furthermore, as shown in many prior experiments [2,26], optically induced lattices (even only 10mm long) can allow for strong coupling between lattice waveguide channels. One can also see discrete diffraction from Fig. 3(i), indicating the coupling among waveguides. Thus, the localized patterns observed in Fig. 
3 indeed arise from the formation of flat-band modes rather than from simple isolated waveguiding. From numerical simulations, we found that the nearest-neighbor coupling constant is about 0.184 mm−1 and the next-nearest-neighbor coupling constant is about 0.014 mm−1. Thus, the next-nearest-neighbor coupling is an order of magnitude smaller than the nearest-neighbor coupling and is insignificant in our 10-mm-long lattices. To further corroborate these experimental observations, we numerically solved Eq. (1) using the beam propagation method. The parameters are similar to those used in our experiments: λ = 532 nm, the lattice spacing D = 28 µm, n0 = 2.35, the index change associated with the induced Kagome lattices Δn = 1.8 × 10−4, and the propagation length L = 10 mm, corresponding to the experimental conditions. Typical results are shown in Fig. 4, where it can be clearly seen that our simulation results agree well with the experimental results shown in Fig. 3. The first panels show the input probe beams, and the corresponding output patterns without lattices are shown in the middle panels. It can be seen that the input beams exhibit linear diffraction patterns in the absence of the lattices. However, when the lattices are introduced, localization of the necklace-like beams is achieved, as shown in the bottom panels. More importantly, the difference between the distorted [Fig. 4(j)] and undistorted [Fig. 4(k)] outputs is evident. This is in good agreement with previously performed numerical simulations [24]. In addition, we performed a series of simulations for different lattice constants and found that localization was not observed after 10 mm of propagation for lattice spacings D ≤ 22 µm. This is because, when the lattice spacing is too small, strong coupling occurs even between next-nearest neighbors, so the flat band can no longer be preserved. Experimentally, we cannot achieve longer propagation distances due to the limitation of the crystal length. However, we have performed simulations for a longer propagation distance (60 mm), and we found that diverse linear combinations of eigenmodes show little diffraction while propagating through the induced Kagome lattice [Fig. 5(a)]. In our experiments, weak (diagonal) disorder of the Kagome lattice is present due to the non-uniformity of the light spots. We have therefore also performed linear-evolution simulations of flat-band mode transmission in Kagome lattices under random-noise perturbations. Figure 5 shows the numerical results. The upper row shows the output patterns of a single flat-band mode excited in the Kagome lattices under different random-noise perturbations, and the lower row shows the corresponding linear-evolution simulations of long-distance propagation. For a long propagation distance of 60 mm (about six coupling lengths), we found that a flat-band "ring" state remains robust and does not break up in the Kagome lattices for random-noise perturbations d ≤ 4%, as shown in Figs. 5(a) and 5(b). However, with 6% random-noise perturbations, we can see some weak diffraction from the excited sites in Fig. 5(c). When the random-noise perturbations reach 10%, stronger diffraction can be seen and the "ring" mode is severely distorted [Fig. 5(d)].
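The role of the phase structure in these results can also be seen in a stripped-down discrete model. The sketch below is a minimal coupled-mode (tight-binding) illustration rather than the beam propagation simulation of Eq. (1): it uses only the nearest-neighbor coupling constant quoted above (C = 0.184 mm−1), neglects next-nearest-neighbor coupling and disorder, and takes an arbitrary finite patch with the lattice constant set to 1. It propagates the six-site necklace excitation over 10 mm and compares the out-of-phase ("ring" mode) and in-phase inputs.

# Minimal coupled-mode sketch (tight-binding caricature, not the beam
# propagation simulation of Eq. (1)): only the nearest-neighbor coupling quoted
# above is kept, next-nearest-neighbor coupling and disorder are neglected, and
# the finite patch size and unit lattice constant are arbitrary choices.
import numpy as np
from scipy.linalg import expm

C, z = 0.184, 10.0                 # coupling constant (mm^-1), distance (mm)
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])

sites = []                          # finite Kagome patch: 3 sites per unit cell
for n1 in range(-4, 5):
    for n2 in range(-4, 5):
        R = n1 * a1 + n2 * a2
        sites += [R, R + a1 / 2, R + a2 / 2]
sites = np.array(sites)

dist = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
A = (np.abs(dist - 0.5) < 1e-6).astype(float)    # nearest-neighbor adjacency

# The six sites around one hexagonal plaquette, ordered by angle about its center.
center = (a1 + a2) / 2
ring = np.argsort(np.linalg.norm(sites - center, axis=1))[:6]
vecs = sites[ring] - center
ring = ring[np.argsort(np.arctan2(vecs[:, 1], vecs[:, 0]))]

def propagate(ring_amplitudes):
    """Propagate i d(psi)/dz = -C A psi and return the power kept on the ring."""
    psi0 = np.zeros(len(sites), dtype=complex)
    psi0[ring] = ring_amplitudes
    psi = expm(1j * C * z * A) @ psi0
    return np.sum(np.abs(psi[ring]) ** 2) / np.sum(np.abs(psi) ** 2)

staggered = np.array([1, -1, 1, -1, 1, -1], complex)   # flat-band "ring" mode
inphase = np.ones(6, complex)
print(f"out-of-phase input: power left on the six ring sites = {propagate(staggered):.3f}")
print(f"in-phase input    : power left on the six ring sites = {propagate(inphase):.3f}")
# The staggered ring is an exact flat-band eigenmode and keeps all of its power
# on the original six sites, while the in-phase ring couples to the dispersive
# bands and spreads, mirroring the contrast between Figs. 3(j) and 3(k).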
Finally, we want to mention that in our simulations we did not include the effect of the anisotropic nonlinearity inherent in the biased photorefractive crystal. From the output lattice pattern [Fig. 2(c)] and also from the discrete diffraction pattern of a Gaussian probe beam, we can see that, at the bias field we used, the anisotropic nonlinearity does not play an appreciable role in deforming the Kagome lattice or its flat-band structure.

Conclusion

In conclusion, we have "fabricated" 2D Kagome photonic lattices in a bulk nonlinear crystal by optical induction and have demonstrated experimentally the excitation of a flat-band state. Moreover, we have observed diffraction-free propagation of a complex pattern formed by a superposition of flat-band states in such Kagome lattices. These results further support the theoretical prediction of image transmission based on flat-band states in Kagome lattices [24]. Our work may provide inspiration for developing alternative light-trapping and image-transmission schemes in structured photonic materials without engineered disorder or nonlinearity. In addition, one can envisage the experimental demonstration of predicted novel phenomena such as discrete flat-band solitons and a PT-symmetric phase in Kagome photonic lattices [29,30], as well as Aharonov-Bohm photonic caging with the implementation of a synthetic gauge field [31].
5,474
2016-04-18T00:00:00.000
[ "Physics" ]
FORMATION OF THE GENETIC STRUCTURE OF CATTLE POPULATIONS BY SINGLE LOCUS DNA FRAGMENTS DEPENDING ON THEIR PRODUCTIVITY DIRECTION AND ORIGIN , INTRODUCTION The improvement of domestic animals for human needs depends on genetic diversity of the species (Wei-gend S et al, 2002). The criterion for such diversity is the availability of different species which are the main material for breeders and the foundation for the adjustment of domestic animals to our needs (Mackowski M et al, 2015). Each breed has its own set of genes, formed under the impact of different factors of artifi cial and natural selection (Putnová L et al, 2019). Recently the problem of preserving domestic breeds and sustainable use of genetic resources has become one of the most relevant ones for most countries (Oppermann M et al, 2015;Sunnucks P, 2000;Pamilo P, Nei M, 1988). To estimate the genetic structure and to study the dynamics of population genetic processes in the populations of domestic animals, many countries use the advantages of the methods of molecular-genetic analysis. Microsatellites -highly polymorphic genetic markers -are the main instrument for European investigators (Teneva A et al, 2018). The analysis of microsatellite DNA sequences in different breeds is of interest because the study of this issue may facilitate the understanding of the evolution mechanisms, divergence dynamics for both wild and domesticated species, including the processes of breedformation (Li MH et al, 2009). Microsatellite loci have non-randomized character of distribution, but their functional relevance, genetic and evolutionary mechanisms of formation are yet to be determined. Microsatellites are believed to be selectively neutral DNA fragments, not related to productive features. However, recently there is more and more information about the association between some microsatellite loci and specifi c productivity indices in domestic animals (Ciampolini R et al, 2002;Bressel RMC et al, 2003;Andrade PC et al, 2008;Komatsu M et al, 2011). Contrary to our previous studies (Shelyov AV et al, 2017; this publication highlighted the specifi cities in the formation of the genetic structure of populations depending on the productivity direction of animals, which allowed us to combine animals of two Ukrainian dairy breeds into one population according to the productivity direction. The second important aspect of this study was to estimate the impact of the parental form on genetic polymorphism of modern intensive specialized breeds. MATERIALS AND METHODS The material of our study was the herd (n = 318 animals) of Ukrainian cattle, which belongs to dairy cattle (n = 83 animals), represented by Ukrainian Red-and-White dairy breed (n = 42 animals) and Ukrainian Black-and-White dairy breed (n = 41 animals), kept at Voronkiv farm, Boryspil District, Kyiv Region; meat cattle (n = 192 animals), represented by Southern Beef breed, kept at SE DG Askaniyske farm of the Askanian State Agricultural Experimental Station of the Institute of Irrigated Agriculture, the NAAS, in Kakhovka District, Kherson Region; and aboriginal cattle (n = 43 animals), represented by Gray Ukrainian breed, kept at Voronkiv farm, Boryspil District, Kyiv Region. The molecular-genetic analysis was conducted in the Department of Genetics and Biotechnology of the M.V. 
Zubets Institute of Animal Breeding and Genetics, the NAAS of Ukraine, the Mykolayiv National Agrarian University, and the experimental part -at the Ukrainian Laboratory of Quality and Safety of Agricultural Products of the National University of Life and Environmental Sciences of Ukraine. The following methods were used in the study. Veterinary methods. The blood was sampled from the jugular vein using double-ended needles Venoject and vacuum tubes and holders Venosafe (Terumo, Belgium) FORMATION OF THE GENETIC STRUCTURE OF CATTLE POPULATIONS BY SINGLE LOCUS DNA following the standard method in accordance with the manufacturer's recommendations in sterile conditions. Molecular-genetic methods. DNA isolation from blood samples was conducted using DNA-sorb-B kit (Amplisense, Russia) according to the manufacturer's recommendations. The microsatellite analysis was performed using 10 loci, recommended by the International Society for Animal Genetics (ISAG). The polymerase chain reaction (PCR) was conducted using АВ 2720 Thermal Cycler (Applied Biosystems, USA). The reaction mixture for PCR was prepared according to the protocol, recommended by the manufacturer of the test-system (Stock Marcs, 2010). The amplifi ed DNA was separated by the method of capillary gel electrophoresis at ABI Prism 3130 Genetic Analyzer (Applied Biosystems, USA). The registration of the obtained graphic results was done using programs Run 3130 Data Collection v.3.0 (Applied Biosystems, USA) and GeneMapper 3.7 (Applied Biosystems, USA). Methods of mathematic-statistical analysis. The incidence of genotypes and alleles (Na), the effective number of alleles (Ae), observed (Ho) and estimated (He) heterozygosity and the inbreeding index (Fis) for specifi c microsatellite DNA loci were assessed for each sampling using GenAIEx v. 6.5 (Peakall R et al, 2012). In addition, M-ratio was calculated for each breed and locus of microsatellites (Garza JC et al, 2001). The hypothesis on the absence of signifi cant differences between the investigated groups of animals in terms of the incidences of rare and most common alleles was checked using Pearson's chi-square in PAST program (Hammer Ø et al, 2001). PopGen program was used to check the correspondence of the distribution of genotypes of each microsatellite DNA locus in each breed to the state of Hardy-Weinberg genetic equilibrium (HWE) based on the algorithm of G-test of maximal likelihood (Yeh FC et al, 1999). To check the hypothesis on the absence of signifi cant differences in the applied indices of genetic diversity, we conducted Friedman's non-parametric disperse test using PAST program (Hammer Ø et al, 2001) for several breeds. The estimation of Wright's F-statistics (Fis and Fst) for each microsatellite DNA locus and each breed was done using GenALEx v. 6.5 (Peakall R et al, 2012). The signifi cance value of the deviation of the obtained estimates from zero was calculated using the exchange test with 999 exchanges. The assignment test, based on the distribution of the incidences of microsatellite multilocus genotypes (Paetkau D et al, 1995), was conducted for several groups of animals using GenALEx program (Komatsu M et al, 2011). The hypothesis on the presence of the "bottleneck effect" for different breeds in the past based on the use of three models (IAM, SMM and TPM) was checked using BOTTLENECK v.1.2.03 program (Cornuet JM et al, 2001). 
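As a concrete illustration of the per-locus indices just listed, the following minimal Python sketch shows how Na, Ae, Ho, He and Fis are defined. The genotype table is invented toy data, sample-size corrections are omitted, and the study itself obtained these values with GenAlEx and the other packages cited above.

# Toy illustration (invented data, no sample-size corrections) of the per-locus
# diversity indices named above; the study itself obtained these values with
# GenAlEx and the other packages listed.
from collections import Counter

# One microsatellite locus: each individual is an (allele, allele) pair, with
# alleles labelled by hypothetical fragment sizes in base pairs.
genotypes = [(181, 183), (181, 181), (183, 187), (181, 187),
             (183, 183), (187, 189), (181, 183), (189, 189)]

alleles = [a for pair in genotypes for a in pair]
counts = Counter(alleles)
total = sum(counts.values())
freqs = [c / total for c in counts.values()]

na = len(counts)                                  # observed number of alleles (Na)
ae = 1.0 / sum(p * p for p in freqs)              # effective number of alleles (Ae)
ho = sum(a != b for a, b in genotypes) / len(genotypes)   # observed heterozygosity (Ho)
he = 1.0 - sum(p * p for p in freqs)              # expected heterozygosity (He)
fis = 1.0 - ho / he                               # inbreeding coefficient (Fis)

print(f"Na = {na}, Ae = {ae:.2f}, Ho = {ho:.3f}, He = {he:.3f}, Fis = {fis:.3f}")
# A positive Fis (He > Ho) corresponds to the heterozygote deficit reported in
# the Results; Fis near zero is consistent with Hardy-Weinberg expectations.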
The estimation of the mean correlation between alleles (r) and the number of reliable cases of linkage disequilibrium (NLD) between some alleles in 10 microsatellite DNA loci for different breeds, as well as the results of Ewens-Watterson test in terms of their neutrality was conducted using PopGen (Yeh FC et al, 1999). The effective number of the population for some groups of animals (Ne/Neb) was estimated using NeEstimator v. 2.0 (Do C et al, 2014). RESULTS The analysis of animals from different cattle breeds by 10 microsatellite DNA loci demonstrated 138 allelic variants. The rate of allelic diversity of different breeds is presented in Table 1. For different breeds, from one third to almost a half of the number of noted alleles were presented by very rare alleles (with the share of ≤ 0.050). Here the reliable differences (Pearson's chisquare: χ 2 = 8.79; df = 2; P = 0.013) were determined between the breeds only by the share of the most common alleles, the estimates of which fl uctuated from 0.055 (dairy productivity direction) to 0.176 (southern meat productivity) ( Table 1). The indices of genetic diversity and M-ratio estimation for 10 microsatellite DNA loci of cattle of different selection (per one locus, on average) are presented in Table 2. The reliable association between a breed and a microsatellite locus was noted only regarding the estimates of the effective number of alleles (Ae) and the expected heterozygosity (He) (Friedman's test: in both cases P < 0.01). Therefore, the patterns of genetic variability for some loci by these indices were considerably different for animals, belonging to different groups of cattle. The average number of alleles was the highest for the dairy cattle (11.00 alleles per locus), meat cattle (10.20 alleles per locus), and the lowest -for Ukrainian Grey breed (9.40 alleles per locus). The lowest index of the effective number of alleles (4.74 alleles per locus) was noted for meat cattle which was more than twice lower as compared with the average number of alleles. It may be the result of a high number of alleles in this breed with both very low and very high incidence (18 alleles out of 102 registered ones). In general, almost two thirds of alleles within this breed had either very low or very high incidence whereas the share of such alleles for dairy cattle was a little over 1/3 (Table 1). The average estimates of the observed heterozygosity varied from 0.642 (meat cattle) to 0.802 (dairy cattle), whereas the average estimates of the expected heterozygosity varied from 0.773 (meat cattle) to 0.861 (dairy cattle). The prevalence of the expected heterozygosity indices over the observed ones was noted in all the cases which led to obtaining positive and relatively high indices of heterozygosity index -from 0.069 (dairy cattle) to 0.185 (aboriginal cattle) per locus on average. It demonstrated a considerable defi cit of heterozygosity among the animals from the investigated breeds, especially among dairy cattle, which may serve as a manifestation of active breeding work, conducted with these breeds. It is remarkable that on the background of the decreased rate of allelic diversity (noted for meat cattle) there was no narrowing of the spectrum of this diversity, that was the same for the animals of two other groups, which was confi rmed by M-ratio indices, very similar for all three groups of animals (Table 2), that were much higher than the critical value of 0.600. 
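The logic of the M-ratio used above can be conveyed with a small sketch. The allele sizes below are invented values for a dinucleotide repeat, and the formula shown is one common convention (number of observed alleles divided by the number of possible size classes in the observed range); exact definitions differ slightly between software implementations, so this should be read as an illustration of the statistic rather than the precise formula used by the authors' software.

# Toy illustration of the Garza-Williamson M-ratio: the number of observed
# alleles divided by the number of possible allele size classes spanned by the
# observed range. Allele sizes are invented values for a dinucleotide repeat;
# exact definitions vary slightly between software implementations.
def m_ratio(allele_sizes_bp, repeat_bp=2):
    k = len(set(allele_sizes_bp))                          # observed alleles
    span = (max(allele_sizes_bp) - min(allele_sizes_bp)) // repeat_bp
    return k / (span + 1)                                  # k / possible size classes

intact = [140, 142, 144, 146, 148, 150]     # every size class in the range occupied
gapped = [140, 146, 150]                    # same range, but size classes missing
print(f"M with no missing size classes : {m_ratio(intact):.3f}")
print(f"M with size classes lost       : {m_ratio(gapped):.3f}")
# A recent reduction in population size removes alleles faster than it shrinks
# the overall size range, producing 'gappy' distributions and M values below
# the commonly cited critical threshold of roughly 0.6-0.7; this is the logic
# behind flagging locus TGLA122 in the Results.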
In terms of some loci, we detected very low indices of M-ratio (0.500-0.652) only for locus TGLA122, which demonstrated faster loss of allelic diversity by this locus than on average for 10 investigated microsatellite DNA loci. It was especially notable for meat cattle, where a very high number of alleles were absent in the investigated animals (TGLA122 147 , TGLA122 155 , TGLA122 157 , TGLA122 159 , TGLA122 163 , TGLA122 165 and TGLA122 167 ) ( Fig. 1). No signifi cant deviation from Hardy-Weinberg equilibrium was found for three out of 10 microsatellite DNA loci, used in the analysis (TGLA122, INRA023 and ETH10) ( Table 3). In general, the signifi cant deviation from the state of genotype equilibrium was noted for 4-5 loci in different breeds. The estimation of the inbreeding index, Fis (as a total for three groups) varied from -0.040 (locus SPS115) to 0.262 (locus ETH225) and was 0.138 ± 0.030 (per locus on average), which demonstrated some defi cit of heterozygosity among the investigated animals ( Table 4). The estimation of the genetic differentiation index between breeds (Fst) also varied within a considerable range -from 0.032 (locus ETH225) to 0.093 (locus TGLA126), and was 0.060 ± 0.007 on average. Significant differences between breeds were found in terms of the distribution of allele incidences for all microsatellite loci and for the mean value of Fst (Table 4). A high specifi city rate was confi rmed by the results of the assignment test based on the distribution of incidences of multilocus genotypes by 10 microsatellite DNA loci (Table 5). The genetic uniqueness of the in- To analyze possible impact of targeted breeding work on the formation of genetic structure of different breeds (Shelyov AV et al, 2017; by single locus DNA fragments, we conducted the population genetic evaluation of the groups of animals, different in the productivity direction. Although a priori microsatellite DNA loci are neutral molecular-genetic markers, we determined that among 10 loci, used by us, there were some loci, demonstrating that the hypothesis on their neutrality was reliably refuted based on the results of Ewens-Watterson test ( Table 6). As could be expected considering active breeding work with these breeds, the highest number of these loci (7 out of 10) were registered for dairy cattle. The number of such loci was considerably lower for aboriginal and meat cattle (three and two), and half of them had the indices of Obs. F were close to the lower threshold (95 %) of the confi dence interval (Table 6). On the other hand, the reliable manifestation of "bottleneck effect" was noted for these two groups in the past, based on the heterozygosity indices for 10 microsatellite loci (Table 7). Similarly, the negative consequences of population demographic processes within aboriginal and meat cattle were also noted regarding their estimates of mean correlation between alleles (r), used in the analysis of loci, which exceeded 0 reliably, and a considerable number of signifi cant cases of linkage disequilibrium (N LD ) between some alleles of 10 microsatellite DNA loci (Table 8). These processes, related to the limited number of animals, bred in Ukraine, and a low rate of their heterozygosity (Table 2, Fig. 2), resulted in very low estimates of effective population number (Ne) and the effective number of inseminators (Neb) for aboriginal and meat cattle (Table 9). 
In general, the estimated effective population number of the investigated cattle was 61-78 animals (with 95 % confi dence interval from 40 to 127 animals) while the estimate of Ne for dairy cattle was 709 animals (with open higher threshold of 95 % confi dence interval). The estimate for the effective number of inseminators (Neb) for these animals was also 2.0-2.5 times higher as compared with aboriginal and meat breeds (Table 9). DISCUSSION In our work, we determined a high rate of genetic diversity in all investigated populations of Ukrainian cattle. Despite the decreased number of the registered allelic variants, their spectrum did not narrow (M-ratio > 0.85). The animals under investigation were characterized by specifi cities of their genetic structure, confi rmed by the results of the assignment test (the range of the results varied from 90.7 % (aboriginal cattle) to 100 % (meat cattle) using 10 STR. If the number of microsatellites is increased up to 12, the results of testing allow referring an animal to a specifi c breed correctly with the likelihood of >98 % (Opara A et al, 2012), if 19 microsatellite markers are applied for the analysis, the likelihood rises up to 99.5 %, and even for rather close breeds (Fst = 0.041) this likelihood may reach as high as 96.3 % (Ciampolini R et al, 2006). This makes the assignment test, based on multilocus genotypes, a powerful method, which allows referring an animal to a certain breed with rather high likelihood. The urgency of this issue is conditioned by more detailed attention to the matter of confi rming the origin and quality of the products of animal breeding in recent years. The highest number of alleles, found in dairy cattle as compared with meat and local (aboriginal) breeds in our study, was also noted in numerous studies of other authors. For instance, the highest number of detected alleles in dairy cattle (Holstein) (110) as compared with meat cattle (Simmental) (95) and local (Italian Aosta Black Pied (76) and Swiss Evolene (61) breeds by 17 microsatellite loci was noted for Alpine populations (Del Bo L et al, 2001). A higher level of allelic diversity for dairy cattle as compared with meat cattle was registered in the studies of Polish (Holstein-Friesian (76 alleles) and Hereford (61)) scientists (Radko A et al, 2005). Slovakian researchers also noted a prevalence of meat breeds over local ones by this index (Czerneková V et al, 2006). It was remarkable that the estimate of the mean number of alleles for Czech population of Holstein animals was much lower than for local Slovakian Pinzgau (a traditional breed for mountainous districts of Slovakia) -5.8 and 9.0 alleles per locus, respectively. However, it should be noted that in this study the dairy cattle had the lowest rate of variability. Our data on a higher rate of genetic variability, observed in dairy cattle (NAT = 110 (79.7 %), Na = 11.0, Ae = 7.56, Ho = 0.802, He = 0.861 and MLH = 0.802) were confi rmed by the results of highly productive dairy breeds as compared with the local ones in European (Peelman LJ et al, 1998) and Asian populations (Kim KS et al, 2002). In our opinion, it may be related to the breeding work, targeted at increasing the dairy productivity indices, and the factor of admixture of genetic material of different breeds during their creation. 
We determined a signifi cant difference between animals in terms of such indices as the effective number of alleles (Ae) and expected heterozygosity (He) for several loci depending on the productivity direction, which was demonstrated by a reliable association between the direction of breeding work and microsatellite loci (Friedman's test: P < 0.01). This is also confi rmed by the data of German scientists (Brenig B, Schütz E, 2016), who demonstrated the association of nine microsatellites and the yield of protein and milk fat, the bodyweight at birth and weaning, and the index of somatic cells, the percentage of milk fat and the area of long muscles. Moderate intensity of the selection pressure is observed in all the populations under investigation, which is demonstrated by the defi cit of heterozygotes (Но < Не), the presence of moderate inbreedness (Fis = 0.138) and reliable deviation from the state of Hardy-Weinberg genetic equilibrium for only 4-5 microsatellite loci out of 10. It is in agreement with the tendencies, registered for European cattle populations of different selection (Bos taurus) (Gamarra D et al, 2017), and different populations of Indonesian (Bos javanicus) (Agung P et al, 2019) and zeboid (Bos indicus) cattle of different selection (Chaudhari MV et al, 2009;Sodhi M et al, 2005). A wide range of allelic variants was noted for southern meat breed despite their lowest number. Two thirds of alleles had very low or very high incidences (18 alleles out of 102 registered ones), while the share of these alleles in others was considerably lower which may be related to the history of creating this breed and involving zebu on the initial stages. We determined high informative value of the chosen microsatellite loci; rather low indices of M-ratio (0.500-0.652) were found only for locus TGLA122 which demonstrated faster loss of allelic diversity in domestic breeds by this locus. The application of Ewens-Watterson test allows estimating the neutrality of some DNA fragments in terms of the impact of paratypical factors. For instance, in the work (Li MH et al, 2010) 13 loci were determined for cattle from 10 northern European populations (represented by Finnish Ayrshire, Finnish Holstein-Friesian and Finncattle breeds), and the test for their neutrality was reliably refuted. Among these, there were four loci (BM2113, ETH10, ETH225 and TGLA227), for which the hypothesis of neutrality was reliably refuted regarding the animals, investigated by us (Table 6). No neutrality of locus SSM66 was registered in local Scandinavian and globally common cattle breeds of northern European selection (Kantanen J et al, 2000). MC-DNA loci, the neutral character of which was dismissed based on the results of Ewens-Watterson test, were also noted for other representatives of Bovidae family, for instance, for buffalo (Bubalus bubalis) these were loci ILSTS17 (Bhuyan DK et al, 2010), ILSTS089 and ILSTS036 (Kathiravan P et al, 2009). For zebu (Bos indicus), these were loci TGLA122 and TGLA227 (Vohra V et al, 2017), also noted in our research. The neutrality of all the investigated microsatellite loci was also noted in the studies of such local populations as Arunachali in India (Sharma H et al, 2018), and 12 local (aboriginal) breeds in Portugal (Bastos-Silveira C et al, 2009), as well as Creole local breed La Angelica (Giovambattista G et al, 2001). 
Similar to our study, the manifestation of the "bottleneck effect" in the past, found in the analysis of MC-DNA loci was also noted for the dairy breed, Danish Red (Kantanen J et al, 2000), and Red Steppe breed , as well as the meat cattle -Japanese Black breed (Sasazaki S et al, 2004). In addition, this effect was registered in the populations of other representatives of Bovidae family -zebu-like (Bos indicus) cattle Nellore in Brazil (Barbosa ACB et al, 2013) and Bargur in India (Ganapathi P et al, 2012), as well as banteng (Bos javanicus) in Australia (Bradshaw CJ et al, 2007). It is not always possible to prove statistically that the observed rate of genetic diversity in the investigated populations was conditioned by negative consequences of some population genetic processes, which took place in the past, but it was proven for local breeds in Turkey (Anatolian Black, Anatolian Grey, South Anatolian Red, Native Southern Anatolian Yellow, East Anatolian Red, Zavot cattle) (Özšensoy Y, Kurar E, 2014;Semen Z et al, 2019), a local Icelandic breed (Icelandic cattle) (Asbjarnardottir MG et al, 2010), a local Croatian breed (Istrian cattle) (Ivankovic A et al, 2011), local Austrian breeds (Carinthian Blond and Waldviertler Blond) and a Hungarian breed (Hungarian Grey) (Manatrinon S et al, 2008), as well Romanian breeds (Ilie DE et al, 2015). This indefi niteness may be explained by the fact that the obtained results may sometimes be ambiguouswhen using one model (e.g. IAM) the hypothesis about the manifestation of the "bottleneck effect" in the past is refuted, whereas during the application of another model (SMM) this hypothesis cannot be refuted, like what happened in our case (Table 7). A similar situation was described while studying 17 MC-DNA loci of the Creole breed from Uruguay (Armstrong E et al, 2013). As for widely common dairy and meat breeds, in general the estimates of the effective number of the population are on a relatively high level, for instance, for such breeds as Holstein (Ne = 100-150 animals) (Qanbari S et al, 2010), Hayes BJ et al, 2003), Charolais and Limousine in France and Ne = 376 (168-740), respectively (Leroy G et al, 2013), for American Red Angus -Ne = 429 (369-459) (Marquez GC et al, 2010), although critical values were obtained in several studies -(Holstein Ne = 39) (Weigel KA, 2001); Hereford Ne = 85 (Cleveland MA et al, 2005) and 64 (Mc Parland S et al, 2007); Aberdeen Angus Ne = 30 animals (Falleiro VB et al, 2014). Previously we obtained the values for Ukrainian dairy breeds (Ukrainian Black-and-White and Ukrainian Red-and-White) which demonstrated no threat of losing genetic diversity in their populations, as Ne values were 397 and 505 animals, respectively (Shelyov AV et al, 2017). The values of Ne = 709.0 (299.9 -∞) animals, obtained in this study, are very close to these indices (Table 9). On the other hand, there have been threats for aboriginal dairy breeds, when the estimates of effective population number were within the range of 50-100 animals, for instance, for such French local breeds as Montbeliarde (Ne = 57 animals) and Normande (Ne = 64 animals) (Leroy G et al, 2013). The corresponding estimate for Red dairy breed was even lower -Ne = 23 (11-74) animals ). Yet, among cattle breeds probably the most dangerous threat is faced by a dairy breed Wagyu cattle in the USA, the estimate of effective population of which is only 17 animals (with the range from 2 to 43 animals) (Scraggs E et al, 2014). 
Similarly, there are very low estimates of effective population number for the populations of local meat cattle breeds, for instance, a Japanese aboriginal breed, Japanese Black -Ne = = 30 (13-52) (Nomura T et al, 2001) and a Portuguese aboriginal breed, Merto-lenga -Ne = 25 animals (Carolino N, Gama LT, 2008). We determined that the estimate of average correlation between alleles was the highest for Grey Ukrainian breed (r = 0.262) which demonstrated a considerable manifestation of linkage disequilibrium between the investigated MC-DNA loci. Previously, the manifesta-tion of linkage disequilibrium was noted for the animals of this breed between loci INRA037 and CSRM60 (Kiselyova TY et al, 2014). As for the aboriginal breeds of Spanish cattle under the threat of extinction, the share of the pairs of markers, between which a reliable manifestation of linkage disequilibrium was noted, may vary from 6.2 % (for Casta Navarra breed) to 80.9 % (for Betizu) (Martín-Burriel I et al, 2007). It is believed that the share of pairs of loci, between which a reliable manifestation of linkage disequilibrium is noted, is in inverse relation to the recombination frequency and thus the distance between markers (Thevenon S et al, 2007). There are also data, stating that most linkage disequilibrium cases are conditioned by the random genetic drift, especially in the populations with low indices of effective population number (Farnir F et al, 2000). In general, the consequences of population demographic processes in the populations of Gray Ukrainian and southern meat cattle are manifested in their estimates of mean correlation between alleles (r), used in the analysis of loci, which exceeded 0 reliably, and a considerable number of signifi cant cases of linkage disequilibrium (N LD ) between some alleles of 10 loci. The consequences of these processes, associated with a limited number of animals from these breeds and a low rate of their heterozygosity, were low estimates of effective population number (Ne) and effective number of inseminators (Neb). In general, the results obtained refl ect the history of creating the investigated Ukrainian cattle breeds, namely, Grey Ukrainian is a long-standing breed, the product of long-term selection of local Grey Steppe cattle (Podolian type), which inhabited a wide steppe of Mediterranean and Black Sea regions in the nineteenth century and originated from one of the forms of wild buffalo (Bos taurus primigenius). In early 1900s they received the name "Grey Ukrainian". The substitution of Grey Ukrainian cattle with productive breeds started at the end of the 19 th -the beginning of the 20 th century. This breed was the basis for the formation of local and fi rst domestic breeds, which later became maternal breeds for modern meat and dairy breeds. Ukrainian Red-and-White dairy breed was created by the method of complicated reproductive crossing of Ukrainian Simmental breed, which originated from the accumulation cross-breeding of local cattle (Grey Ukrainian) and Swiss Simmental with simultaneous breeding of desired hybrids "in itself", with Red-and-White Holstein and, FORMATION OF THE GENETIC STRUCTURE OF CATTLE POPULATIONS BY SINGLE LOCUS DNA in some areas, additionally with Ayrshire and Montbeliarde breeds; Ukrainian Black-and-White dairy breed was created by reproductive crossing of Holstein and Dutch Black-and-White breeds, and local Black-and-White, Simmental, and Whiteheaded Ukrainian breeds, used for improvement (Yefi menko M.Ya. et al (2013). 
The southern meat breed was created by the method of complicated reproductive crossing and hybridization of Red Steppe breed (created by crossing the local Grey Ukrainian breed and Red Ostfriesen, and later -Angler, Wilstermarsh and some other breeds from the Middle European hollow) with the animals of Charolais, Hereford, Santa Gertrudis, and Cuban zebu (Vdovychenko YuV et al, 2014). The breeds, investigated in this study, preserved relative blood relationship to Grey Ukrainian breed approximately within the range of 1/32-1/64. CONCLUSIONS The specifi cities in the formation of the genetic structure of populations depending on the productivity direction of animals were determined using the results of our studies. The impact of the parental form on genetic polymorphism of modern intensive specialized breeds was noted. Among 10 microsatellite loci, used by us, there were loci in each group of animals, regarding which the hypothesis about their neutrality was reliably refuted according to the results of Ewens-Watterson test: for dairy cattle (INRA023, ETH3, ETH225, BM1824, BM2113, ETH10 and SPS115), for meat cattle (TGLA122 and ETH225), and for aboriginal cattle (TGLA126, INRA023 and TGLA227). A signifi cant difference was observed between animals in terms of such indices as the effective number of alleles (Ae) and expected heterozygosity (He) for several loci depending on the productivity direction, which was demonstrated by a reliable association between the direction of breeding work and some microsatellite loci (Friedman's test: P < 0.01). The highest rate of genetic variability was observed for dairy cattle (NAT -110 (79.7 %)), Nа -11.0, Ае -7.56, Но -0.802, Не -0.861 and MLH -0.802, which may be related to the selection direction. We determined a high diversity rate for the chosen microsatellite loci, except for locus TGLA122, for which rather low indices of M-ratio (0.500-0.652) were found which demonstrated faster loss of allelic diversity in domestic breeds by this locus. Adherence to ethical principles. All procedures performed in the studies involving animal participants were in accordance with the European Convention for the Protection of Vertebrate Animals used for Experimental and Other Scientifi c Purposes, Strasbourg, 1986. Confl ict of interests. The authors declare the absence of any confl icts of interests. Financing. This study was not fi nanced by any specifi c grant from fi nancing institutions in the state, commercial or non-commercial sectors.
6,672.4
2021-12-20T00:00:00.000
[ "Biology" ]
Organic Matter Responses to Radiation under Lunar Conditions Abstract Large bodies, such as the Moon, that have remained relatively unaltered for long periods of time have the potential to preserve a record of organic chemical processes from early in the history of the Solar System. A record of volatiles and impactors may be preserved in buried lunar regolith layers that have been capped by protective lava flows. Of particular interest is the possible preservation of prebiotic organic materials delivered by ejected fragments of other bodies, including those originating from the surface of early Earth. Lava flow layers would shield the underlying regolith and any carbon-bearing materials within them from most of the effects of space weathering, but the encapsulated organic materials would still be subject to irradiation before they were buried by regolith formation and capped with lava. We have performed a study to simulate the effects of solar radiation on a variety of organic materials mixed with lunar and meteorite analog substrates. A fluence of ∼3 × 1013 protons cm−2 at 4–13 MeV, intended to be representative of solar energetic particles, has little detectable effect on low-molecular-weight (≤C30) hydrocarbon structures that can be used to indicate biological activity (biomarkers) or the high-molecular-weight hydrocarbon polymer poly(styrene-co-divinylbenzene), and has little apparent effect on a selection of amino acids (≤C9). Inevitably, more lengthy durations of exposure to solar energetic particles may have more deleterious effects, and rapid burial and encapsulation will always be more favorable to organic preservation. Our data indicate that biomarker compounds that may be used to infer biological activity on their parent planet can be relatively resistant to the effects of radiation and may have a high preservation potential in paleoregolith layers on the Moon. Key Words: Radiation—Moon—Regolith—Amino acids—Biomarkers. Astrobiology 16, 900–912. Introduction A ctive organic chemistry is ubiquitous throughout the Solar System. From the hydrocarbon lakes of Titan to volatile-rich comets and asteroids, organic matter is continually being processed and redistributed. The evolution of organic chemical systems in the early Solar System ultimately led to the origin of life on Earth. Some of the prebiotic organic matter may have been generated on Earth itself by a variety of mechanisms. These include formation from gases in the atmosphere through electrical discharges or ultraviolet light, or processing by impact shocks (Chyba and Sagan, 1992), and possible abiotic synthesis at hydrothermal vents (McDermott et al., 2015). In addition, a substantial proportion of Earth's prebiotic organic material could have been delivered by small carbonaceous bodies such as asteroids and comets. These bodies can contain significant quantities of organic material (Ehrenfreund and Charnley, 2000;Sephton, 2002;Capaccioni et al., 2015). There was large-scale delivery of asteroidal and cometary material to Earth during a period approximately 4 billion years ago known as the Late Heavy Bombardment (LHB) (Gomes et al., 2005;Bottke et al., 2012;Marchi et al., 2014). There was substantial contribution from small particles as well as large impactors. 
Court and Sephton (2014) estimated the flux of micrometeorites to Earth during the peak 50 Myr of the LHB as 8.4 (-4.7) · 10 13 g yr -1 , substantially higher than the present-day estimated infall rate of 4 (-2) · 10 10 g yr -1 (Love and Brownlee, 1993;Court and Sephton, 2014). Estimates for the total amount of material delivered to the Moon during the LHB are on the order of 6 · 10 21 g (Levison et al., 2001;Kring and Cohen, 2002;Gomes et al., 2005). The larger gravitational attraction of Earth would have resulted in a substantially higher number of asteroid impacts on Earth than on the Moon (Bottke et al., 2012). However, the impact record on Earth from this time no longer exists. Plate tectonics and erosion have destroyed rocks older than *3.8 Ga that might have preserved some of this information, denying us direct knowledge of the terrestrial prebiotic environment. Examination of asteroidal samples has given us a great deal of information about some of the organic starting materials, but this cannot provide a chronology of the types of impactor that were delivering material to the surface of Earth or tell us how this material was subsequently processed. There remains a potential opportunity to delve into the impact record of ancient Earth by looking at the Moon, where asteroidal ( Joy et al., 2012) and cometary (Zhang and Paige, 2009;Anand, 2010;Ong et al., 2010) material could be preserved. The potential for such a mode of preservation on the Moon is illustrated by the discovery of the hydrated carbonaceous chondrite meteorite ''Bench Crater,'' which was identified within samples collected during the Apollo 12 mission ( McSween, 1976;Zolensky, 1997). In addition to asteroidal and cometary material, rocks ejected from early Earth by impacts may also be preserved on the Moon (Armstrong et al., 2002;Crawford et al., 2008). Modeling work has shown that large impacts on Earth's surface could have ejected fragments that then landed on the Moon (Armstrong et al., 2002;Armstrong, 2010). It has been shown that shock effects for meteorites during ejection from Earth and impact on the Moon are in the range within which organic material could have survived (Crawford et al., 2008;Parnell et al., 2010;Burchell et al., 2014). The impact survival of organic structures is of particular significance as it raises the possibility that biotic and prebiotic organic molecules contemporaneous with life's development on Earth could be preserved on the Moon, providing access to a record that has been completely lost on Earth. Organic matter that has been delivered to the lunar surface may be destroyed by physical impacts and radiation. These processes constitute ''space weathering'' and can cause substantial alteration to materials directly exposed at the surface. Micrometeorites impact the lunar surface at high velocities, causing pitting and vaporization of the impactor and target materials (Keller and McKay, 1997). Radiation from solar ultraviolet light, the solar wind, energetic solar events (solar energetic particles, SEP), and from galactic cosmic rays (GCR) also modifies material in the surface regolith. The intense ultraviolet radiation has very limited penetration depth. The solar wind has the highest fluence, but the low energy ( *1 keV nucleon -1 ) of the particles, chiefly protons, means that they are stopped in the outer few micrometers of surface materials (Table 1). 
While the solar wind is a constant feature of the lunar radiation environment, SEP occur only in occasional energetic outbursts from the Sun (Tripathi et al., 2006; De Angelis et al., 2007). SEP are dominated by protons, and energies generally range from 1 to 100 MeV per nucleon (Lucey et al., 2006). Although the average fluence of SEP is much lower than that of the solar wind, the higher energies mean that SEP can penetrate the lunar regolith to depths of a few centimeters before being stopped. GCR are the highest-energy particles, consisting mainly of protons (~87%), with smaller components of alpha particles (~12%) and heavy ions (~1%) (Table 1). The sources of GCR are thought mainly to be supernova explosions within the Galaxy (e.g., Ackermann et al., 2013), with some contribution from extragalactic sources, and particle energies range from around 100 MeV per nucleon up to tens of GeV per nucleon. The extreme energies associated with GCR mean that particles striking planetary surfaces can penetrate several meters of rock and generate a cascade of highly damaging secondary particles within the regolith volume (e.g., Dartnell, 2011). For any organic material to survive over geological periods of time and to be discoverable by an exploration mission, it must be protected from space weathering in some way. Ejecta blankets from nearby large impacts would be effective at quickly burying the material to substantial depth; however, it would be difficult to identify particular paleosurfaces within a sequence of very similar regolith layers. Rapid burial of preexisting regolith surfaces by pyroclastic deposits from fire-fountaining volcanic eruptions is another possibility (McKay, 2009), although such occurrences will be spatially and temporally limited. A mechanism of encapsulation by lava flows or impact melt flows provides the necessary combination of rapid burial and readily identifiable horizons within the paleoregolith layers (Fagents et al., 2010; Rumpf et al., 2013). In the lava flow preservation scenario, a fluid basaltic lava flow covers the surface regolith, which contains meteorites bearing the organic material of interest. The lava flow cools, and the solid basalt layer provides effective shielding for the buried regolith (Fig. 1). Space weathering processes then begin to form a regolith layer on the surface of the cooled basalt flow, incorporating any newly fallen meteoritic material. This new regolith layer is in turn covered by a new lava flow, and the process is repeated. The result is a sequence of paleoregolith deposits and basalt lava layers. The individual basalt layers can be radiometrically dated, placing constraints on the duration of space exposure and the closure age of the paleoregolith layers and the material within them. A record of the changes in the types of material being delivered to the lunar surface over time is thereby generated. The lava flow preservation scenario has many advantages; however, the high temperatures of lunar lava flows, with liquidus temperatures for mare basalts of between 1150°C and 1400°C, have the potential to alter and degrade organic materials contained within the underlying regolith layer. We have previously investigated the effect of heat from an overlying lava flow on different types of organic material that may be used to diagnose contributions from biological activity (i.e., biomarkers) mixed with lunar regolith analog material, and found that the presence of a regolith simulant promoted the preservation of the intermixed organic matter (Matthewman et al., 2015).
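To connect the exposure window described above with the experiment reported below, the following back-of-the-envelope sketch compares the laboratory fluence quoted in the abstract with the burial timescale of Fig. 1. The regolith accumulation rate, burial depth and laboratory fluence are taken from the text, while the long-term average SEP proton flux is an assumed illustrative value that is not given in the paper.

# Back-of-the-envelope scaling of the irradiation experiment against the burial
# scenario of Fig. 1. The accumulation rate, burial depth and laboratory fluence
# are taken from the text; the long-term average SEP proton flux is an ASSUMED
# illustrative value and is not a number given in the paper.
SECONDS_PER_YEAR = 3.156e7

lab_fluence = 3e13              # protons cm^-2 delivered in the experiment (4-13 MeV)
assumed_sep_flux = 100.0        # protons cm^-2 s^-1, assumed long-term SEP average
accumulation_m_per_myr = 5e-3   # 5 mm of regolith per Myr (Horz et al., 1991)
depth_m = 0.2                   # burial depth highlighted in the Fig. 1 caption

burial_time_myr = depth_m / accumulation_m_per_myr
equivalent_exposure_yr = lab_fluence / (assumed_sep_flux * SECONDS_PER_YEAR)
fluence_before_burial = assumed_sep_flux * SECONDS_PER_YEAR * burial_time_myr * 1e6

print(f"time to accumulate {depth_m} m of regolith : {burial_time_myr:.0f} Myr")
print(f"surface exposure matched by the lab fluence: ~{equivalent_exposure_yr:.0f} yr")
print(f"SEP fluence accumulated before burial      : ~{fluence_before_burial:.1e} cm^-2")
# Under these assumptions the laboratory dose corresponds to only ~1e4 yr of
# surface exposure, far shorter than the ~40 Myr burial timescale, consistent
# with the abstract's caution that longer exposures are likely to be more
# damaging and that rapid burial always favours preservation.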
It was found that the presence of a regolith simulant promoted the preservation of intermixed organic matter (Matthewman et al., 2015). Once buried beneath meters of cooled lava flows, the encapsulated organic matter would be protected from all but the most energetic GCR and associated particle cascades. However, before the lava flow can cover the regolith containing the carbonaceous particles, there is a period of time during which the organic matter will be subjected to radiation and micrometeorite impacts in the upper few centimeters (Rumpf et al., 2013). Ionizing radiation is known to affect organic material in a number of ways (Swallow, 1960) and is damaging to biological materials. Common effects in organic matter include polymerization and cross-linking, with scission of long chains (Swallow, 1960; Chapiro, 1988). Experimental irradiation of meteoritic material caused progressive disordering and amorphization of the organic component (Brunetto et al., 2014), and similar effects have been observed in the electron irradiation of synthetic and terrestrial analog materials (Le Guillou et al., 2013; Laurent et al., 2014). In this paper, we report an assessment of the effects of radiation on organic matter under conditions similar to those expected in lava-encapsulated records on the Moon. Our findings provide guidance for future lunar missions and their science goals. Methods To understand the effect that solar radiation might have on the preservation of abiotic and biotic organic compounds in the presence of lunar minerals, we subjected a range of organic materials to proton irradiation that was intended to simulate the effects of SEP. Table 2 lists the 12 samples that were irradiated, and Table 3 summarizes the organic materials used in the experiments. [FIG. 1 caption: Schematic of the preservation of meteorites (black circles) within paleoregolith layers on the Moon. In this case, a 1.0 m thick lava flow has covered a thick layer of regolith containing meteorite fragments. The upper part of the regolith layer will be heated by the overlying lava, and the indicated depth of 0.2 m corresponds to a peak temperature of 300°C, which is reached after 20 days (Rumpf et al., 2013). With a regolith accumulation rate of 5 mm Myr^-1 (Hörz et al., 1991), it would take 40 Myr for these meteorite fragments to be buried under 0.2 m of regolith. This assumes that there is no reworking to bring the meteorite to shallower depths in the regolith.] (2) For the polymer, powdered (8 μm particle size) cross-linked poly(styrene-co-divinylbenzene) was used. It comprises long chains of styrene units, cross-linked by divinylbenzene. This polymer is intended to be representative of the aromatic macromolecular organic material in carbonaceous chondrite meteorites. Although structurally different, the polymer has been well characterized, and any structural or compositional modifications can be easily recognized. (3) The amino acids used were glycine (Gly), L-alanine (Ala), L-aspartic acid (Asp), L-glutamic acid (Glu), and L-phenylalanine (Phe). Gly, Ala, Asp, and Glu are abundant in biological materials, whereas Phe, although present in biological materials, was primarily selected to represent amino acids containing an aromatic ring. The organic materials were either used in isolation or were mixed with mineral substrates.
To assess any effect of lunar regolith minerals on the organic materials during irradiation, organic samples were mixed with JSC-1, a lunar mare basaltic regolith analogue (for detailed characterization, see McKay et al., 1994; Willman et al., 1995), or with a powdered lunar feldspathic meteorite. The lunar meteorite used was MacAlpine Hills (MAC) 88105, subsplit 43 (0.203 g). This split was acquired in powder form from the Smithsonian Institution, where 20 g of the bulk parent meteorite stone had previously been processed to give a representative homogeneous powder (see Lindstrom et al., 1991). This material was not modified or processed further before use in the experiments. MAC 88105 is a regolith breccia formed from consolidated feldspathic soil (Koeberl et al., 1991; Lindstrom et al., 1991; Neal et al., 1991), which experienced low levels of space weathering [as indicated by a low I_s/FeO space weathering index (Lindstrom et al., 1995)]. The stone likely originated from the outer regions of the Feldspathic Highlands Terrane (FHT-O) and is, therefore, a good analogue of lunar highland soils. Other minerals of relevance include those delivered in any carbonaceous meteorite, terrestrial meteorite, or cometary material itself. These minerals are likely to retain some degree of contact with organic material on the Moon during irradiation and other space weathering processes. As such, powdered serpentinized mafic rock was used as analog material for the meteorite mineral matrix. This mineral powder is loosely representative of both chondrites (Krot et al., 2006) and aqueously altered terrestrial igneous rock that was likely to have been close to the surface of early Earth during the early Archean (e.g., Kröner and Layer, 1992) and that may have contained organic materials, including some of biotic origin. The serpentinized rock contains a mix of olivine, pyroxene, and hydrated serpentine minerals. The JSC-1 powder was rinsed with dichloromethane to remove soluble organic contamination. The powdered serpentinized mafic rock was processed more extensively to remove soluble organic compounds by sonication in 93:7 dichloromethane/methanol v/v in triplicate. The MAC 88105 powder was used without cleaning with solvent. For the free hydrocarbon compounds, a solution was made up in dichloromethane to a concentration of ~1 mg mL^-1 for each compound. Sample mineral powders were first placed inside quartz pyrolysis tubes and secured with quartz wool at the top and bottom. Ten microliters of the solution was spiked onto the substrate in the quartz sample tube and allowed to dry. For the amino acids, a solution was made up in high-purity 18 MΩ·cm water to a concentration of ~2 mg mL^-1 for each compound. Five microliters of this solution was spiked onto the substrate and allowed to dry. In all cases, the dry mass of mineral substrate was ~20 mg. Where a mineral substrate was not used, solutions were spiked directly onto a loose plug of quartz wool. The cross-linked poly(styrene-co-divinylbenzene) was thoroughly mixed with mineral powders to give a loading of ~5 wt %. Approximately 20 mg of this mixture was used for each sample prepared for irradiation. Where a mineral matrix was not used, ~2.5 to 3.8 mg of the polymer was placed directly into the quartz pyrolysis sample tube and secured with quartz wool. For sample 5 (Table 2), 18 MΩ·cm water was added to a sample tube containing poly(styrene-co-divinylbenzene). The pyrolysis sample tubes were then flame sealed inside evacuated 6 mm (o.d.)
borosilicate glass tubes. The exclusion of air was necessary to avoid reaction of the organic material with atmospheric oxygen during irradiation (Swallow, 1960). The samples containing water, free hydrocarbon compounds, and amino acid compounds were chilled in liquid nitrogen during vacuum flame sealing, to avoid the potential loss of compounds. Irradiation Samples in the sealed glass tubes were irradiated with protons at room temperature using the Scanditronix MC40 cyclotron at the University of Birmingham. Table 2 shows the fluences used for each sample. The energy of the protons generated by the cyclotron was 25 MeV, which is within the energy spectrum of SEP (Table 1); however, irradiation of the samples was complicated by the glass tubes, which gave a variable target profile due to the cylindrical shape. Protons incident at the center of the tube had a shorter distance to travel through the glass than protons incident on the edge of the tube. The samples experienced a range of proton energies from ~4 MeV (outside edge of the tube with maximum effective glass wall thickness) to ~13 MeV (center of the tube with minimum effective glass wall thickness), calculated using SRIM-2013 with borosilicate glass as the model material. All but one sample was exposed to a fluence of 3 · 10^13 protons cm^-2 using a current of 800 nA and taking 13.2 min. For the single poly(styrene-co-divinylbenzene) sample exposed to the higher fluence, the current was 800 nA, and the duration was ~1.5 h, giving a fluence of 2 · 10^14 protons cm^-2. Confirmation that the samples were correctly aligned in the proton beam was provided by the use of radiosensitive film placed behind the sample. Sample recovery and solvent extraction Following irradiation, the outer tubes of the samples containing the hydrocarbon biomarkers were wiped with acetone and cracked open. Clean steel wire was used to remove the sample from the inner pyrolysis tube. The inner tube was rinsed with dichloromethane/methanol 93:7 v/v and the solvent collected. The sample was sonicated in a test tube for 5 min in 0.5 mL dichloromethane/methanol 93:7 v/v, centrifuged, and the extract removed. The extraction was repeated for a total of three times, and the extracts were combined. The samples were filtered through quartz wool to remove residual mineral grains with dichloromethane rinses, then made up to 1 mL in dichloromethane ready for analysis. To ascertain the recoveries of the hydrocarbon compounds from the substrates without the potential modifying effects of radiation, triplicate samples of quartz wool, MAC 88105, and JSC-1 substrates were spiked with the hydrocarbon mixture, vacuum sealed, and extracted, but were not irradiated. The irradiated amino acids were extracted by using hot water. The outer sample tube was cracked open, and the small inner tube containing the sample was transferred to a new 6 mm o.d. borosilicate tube that had been sealed at one end. Two hundred microliters of 18 MΩ·cm water was added to the tube, before being chilled in liquid nitrogen and flame sealed under vacuum. These tubes were then heated at 100°C for 24 h. After heating, the outsides of the tubes were thoroughly cleaned with acetone and cracked open. Twenty-five microliters of the sample solution was transferred to a vial then dried at ~35°C. Once completely dried, samples were derivatized by addition of 20 μL BSTFA and 10 μL pyridine, before being capped and heated at 85°C for 45 min immediately prior to analysis.
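Two quick consistency checks on the numbers given in the Methods can be done with simple arithmetic: the organic loading implied by the spike volumes and concentrations, and the irradiated area implied by the beam current, run time, and fluence. The sketch below is ours and rests on stated assumptions: the five amino acids as listed, roughly seven hydrocarbon compounds in the mixture (an inference, since Table 3 is not reproduced here), and a uniformly spread 800 nA beam whose geometry the text does not describe.

```python
# Hedged consistency checks on the Methods figures (assumptions noted in comments).

E_CHARGE = 1.602e-19  # C per proton

def loading_wt_percent(n_compounds, spike_uL, conc_mg_per_mL, substrate_mg=20.0):
    """Spiked organic mass as wt % of the ~20 mg mineral substrate."""
    spiked_mg = n_compounds * spike_uL * 1e-3 * conc_mg_per_mL  # uL -> mL conversion
    return 100.0 * spiked_mg / substrate_mg

# Amino acids: the five listed compounds, 5 uL of ~2 mg/mL each -> ~0.25 wt %.
# Hydrocarbons: 10 uL of ~1 mg/mL per compound; ~7 compounds (our inference) -> ~0.35 wt %.
# These match the loadings quoted later in the Discussion (2500-3500 ppm, i.e. one to two
# orders of magnitude above typical soluble organic abundances in Murchison).
print(loading_wt_percent(5, 5, 2), loading_wt_percent(7, 10, 1))

def implied_area_cm2(current_nA, duration_s, fluence_per_cm2):
    """Irradiated area consistent with spreading the whole beam current over one area."""
    protons = (current_nA * 1e-9 / E_CHARGE) * duration_s
    return protons / fluence_per_cm2

print(implied_area_cm2(800, 13.2 * 60, 3e13))   # ~130 cm^2 for the lower-fluence runs
print(implied_area_cm2(800, 1.5 * 3600, 2e14))  # ~135 cm^2 for the high-fluence run (consistent)
```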
Derivatization modifies the molecule into a form that is amenable to further analysis. Blank samples were also run to monitor any contamination introduced during the extraction and derivatization process. Analysis Hydrocarbon biomarkers were analyzed by gas chromatography-mass spectrometry (GC-MS). An Agilent 7890 GC and 5975C mass selective detector (MSD) system was used, with a 30 m J&W DB-5MS column with 250 μm i.d. and 0.25 μm film thickness. The GC oven was held at 50°C for 1 min then ramped at 4°C min^-1 to 310°C, where it was held for 20 min. The helium flow rate was 1.1 mL min^-1. The inlet was in splitless mode and held at a temperature of 250°C. Polymer materials were analyzed with pyrolysis-gas chromatography-mass spectrometry (pyrolysis-GC-MS). Samples were analyzed by a Chemical Data Systems 5200 pyrolysis unit with an Agilent 7890 GC and 5975C MSD. The valve oven of the pyrolysis unit was maintained at 350°C and the transfer line at 270°C. The pyrolysis interface was rapidly ramped from 50°C at the start of the run to 350°C, taking less than a minute. Pyrolysis was performed at 600°C for 15 s. The pyrolysis products were separated on a 30 m J&W DB-5MS column with 250 μm i.d. and 0.25 μm film thickness. The oven was held at 40°C for 1 min before being ramped at 5°C min^-1 to 310°C, where it was held for 10 min. The inlet was operated in split mode at a ratio of 50:1 and held at 270°C. The helium flow rate was 1.1 mL min^-1. Derivatized amino acid samples were analyzed with the Agilent 7890 GC and 5975C MSD with a 30 m J&W DB-5MS column. The GC oven was held at 80°C for 1 min, then heated at a rate of 8°C min^-1 to 310°C, where it was held for 12 min. Helium flow rate was 1.1 mL min^-1. The inlet was operated in splitless mode and held at 250°C. The MSD scan range in all cases was from m/z 50 to 550. Compounds were identified by using retention times, comparison with standards, and the NIST08 mass spectrum library. Results Irradiation with protons under the experimental conditions described appeared to have had little effect on any of the compounds tested. No degradation products from any of the original organic materials were detected. The irradiation was sufficient to induce short-lived radioactivity in the sample tubes, as measured by a handheld detector, and caused brown discoloration of the outer glass tube. Hydrocarbon biomarkers The recovered fractions of the original hydrocarbon biomarker compounds following irradiation and the recoveries of hydrocarbon compounds that had not been subjected to irradiation are listed in Table 4 and plotted in Fig. 2. For the irradiated samples, it can be seen that the recoveries were high (>69%) across all three substrates. Notably, the recovery of coprostane was lower than that of the other compounds, by at least 14% compared with the recovery of hexadecane. This could be indicative of destruction by radiation; however, no degradation products were detected that would indicate breakdown of the molecule (Fig. 3). It would also require the irradiation to be selectively destructive. The stable rings of aromatic compounds are more resistant to the effects of ionizing radiation than aliphatic compounds (e.g., LaVerne and Dowling-Medley, 2015, and references therein). However, the lack of molecular fragments from coprostane or other compounds means that something other than the action of radiation must be responsible for the variation in compound recoveries.
The recovery of hydrocarbon compounds from MAC 88105 is systematically lower than from the quartz wool and JSC-1 substrates (Table 4). The absence of compound fragments in the extract from MAC 88105 indicated that these losses were not the result of irradiation. One possible factor affecting recovery of the compounds could be adsorption onto the surfaces of glassware and mineral substrates. One of the primary controls on adsorption of organic materials to minerals is the size of the mineral grains. The MAC 88105 sample used was powdered to pass a 100-mesh sieve (Lindstrom et al., 1991), which corresponds to a maximum grain size of 149 μm. Particles from JSC-1 have been measured up to several hundred micrometers across, with a median particle size of around 100 μm (McKay et al., 1994; Willman et al., 1995), indicating that a significant fraction of JSC-1 has a grain size greater than the maximum grain size of the powdered MAC 88105. The differences in grain sizes would result in higher overall surface area per unit mass for the powdered MAC 88105 than JSC-1, providing more opportunity for the adsorption of organic molecules and hence greater losses during extraction of MAC 88105. However, comparison with the recoveries from the non-irradiated samples showed that there was systematic variation that is more readily explained by errors from the handling of small volumes of volatile solutions, which may also be responsible for the lower recoveries of coprostane in the irradiated samples. [Figure caption fragment, likely Fig. 4: The blank records any contamination introduced during the hot-water extraction process. Peaks labeled with an asterisk were also found in the chromatograms of non-irradiated amino acid standard mixtures and are therefore not products from the irradiation experiment.] Compounds structurally related to stigmasterol were detected in both extracted samples and in the standard solutions, indicating the presence of impurities in the original stigmasterol standard material. The recoveries of the experimentally irradiated compounds lie within the range of the non-irradiated recoveries, indicating that radiation has not had a substantially deleterious effect. Amino acids Analysis of the amino acids was complicated by the requirement for derivatization to allow separation and detection by GC-MS. For some of the amino acids, it appeared that derivatization was incomplete, making quantification difficult. Nevertheless, the five amino acids were recovered from the irradiated samples in high concentrations, well above blank levels, and no products of degradation were detected. Figure 4 shows the chromatograms for the derivatized amino acids extracted from the irradiated samples. The relative responses of compounds between samples were similar, showing that the analog material did not have a substantially different effect to the genuine lunar material. Polymers None of the irradiated polymer samples showed any notable differences from non-irradiated polymer when using pyrolysis-GC-MS. The primary response in each sample was styrene (not shown). Styrene is a product from the breaking of the polymer chains during pyrolysis. Discussion Proton irradiation of a variety of organic compounds at fluences of 3 · 10^13 protons cm^-2 and 2 · 10^14 protons cm^-2 and energies between ~4 and 13 MeV had little to no measurable effect on their recovery or composition. The presence or absence of different mineral powders mixed with the organic materials also had no apparent effect.
The principal objective of the work was to determine if organic compounds were degraded or transformed into other products by proton radiation similar to SEP; no new compounds were detected, and there was no evidence for degradation in the form of molecular fragments of original compounds. In the case of the high-molecular-weight polymer poly(styrene-co-divinylbenzene), no change to the units making up the polymer was detected. There have been many relevant studies of the irradiation of organic materials in space with application to the survival of polymers in spacecraft construction and instrument components (e.g., Grossman and Gouzman, 2003). The main modes of alteration of a polymer in response to irradiation would be expected to be chain scission and cross-linking (Chapiro, 1995), but these effects were not observed in this study. Recent work has demonstrated the presence of amino acids within lunar regolith samples returned from the Apollo missions (Elsila et al., 2016). A portion of these amino acids was shown to be the result of terrestrial contamination; however, the source of other amino acids was less certain, and they could potentially be of meteoritic origin. Organic material associated with lunar pyroclastic deposits has also recently been described (Thomas-Keprta et al., 2014). These results, together with the survival of organic compounds of multiple types in the irradiation experiments reported here, provide a renewed incentive to investigate curated regolith samples for the presence of organic materials by using sensitive modern equipment. Radiation at the lunar surface The radiation type simulated in this experimental work was SEP, but these represent only a fraction of the radiation types incident on the lunar surface. In addition, we were only able to simulate a limited part of the energy spectrum of SEP when using the monoenergetic cyclotron source (see Table 1). Ultraviolet radiation, the solar wind, and GCR including heavy ions were not simulated but may be important factors in the alteration and long-term preservation of organic matter on the surface of the Moon (e.g., Sagan, 1972; Schwadron et al., 2012). Recent measurements from the Lunar Reconnaissance Orbiter have helped to refine our understanding of the lunar radiation environment (Spence et al., 2010). The Cosmic Ray Telescope for the Effects of Radiation instrument indicates that dose deposition from GCR is sufficient to cause substantial space weathering effects (Schwadron et al., 2012). Despite our improving knowledge of the present-day radiation environment, major uncertainties exist as to the flux of solar and cosmic radiation on the Moon over its lifetime. SEP are produced periodically by the Sun and can vary over long and short timescales (Vaniman et al., 1991; Tripathi et al., 2006). The averaged flux of SEP is believed to range between 50 and 100 protons cm^-2 s^-1 over the last 2-10 Myr (Reedy et al., 1983; Rao et al., 1994; Nishiizumi et al., 2009). The equivalent ranges of duration of lunar exposure for the fluences used in our experiments are ~9.5 to 19 kyr for the lower fluence and ~63 to 127 kyr for the highest fluence. Our experimental durations are short in comparison to the length of time it would take to build up a layer of regolith of sufficient thickness to insulate against the heat from the overlying lava flow in the lava flow preservation model (Fagents et al., 2010; Matthewman et al., 2015).
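The equivalent-exposure figures quoted above follow directly from dividing the experimental fluences by the long-term average SEP flux; a minimal sketch of that conversion is shown below, using only numbers already given in the text.

```python
# Hedged sketch: convert the experimental fluences into equivalent lunar SEP exposure
# times using the long-term average SEP flux quoted in the text (50-100 protons cm^-2 s^-1).

SECONDS_PER_YEAR = 3.156e7

def equivalent_exposure_kyr(fluence_per_cm2, flux_per_cm2_s):
    return fluence_per_cm2 / flux_per_cm2_s / SECONDS_PER_YEAR / 1e3

for fluence in (3e13, 2e14):
    short = equivalent_exposure_kyr(fluence, 100)  # higher flux -> shorter equivalent time
    long  = equivalent_exposure_kyr(fluence, 50)
    print(f"{fluence:.0e} protons/cm^2 -> {short:.1f} to {long:.1f} kyr")
# ~9.5-19 kyr for 3e13 protons/cm^2 and ~63-127 kyr for 2e14 protons/cm^2,
# matching the ranges quoted in the text.
```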
At a formation rate of 5 mm Myr^-1, it would take 40 Myr to accumulate a regolith thickness of 0.2 m (Fig. 1). This rate is within the estimated range for the initial regolith formation rate at the Apollo 11 and 12 landing sites and can be compared with modern formation rates of ~1 mm Myr^-1 (Hörz et al., 1991). Our work has not identified a maximum duration of exposure on the lunar surface beyond which organic preservation fails. Yet the lack of degradation in our experiments can be interpreted conservatively to indicate that, in those circumstances where regolith is deposited relatively quickly, there appears to be no barrier to effective preservation. We note that total radiolytic degradation of organic compounds increases by several orders of magnitude over timescales of millions of years (Kminek and Bada, 2006) and that SEP are variable over time (Vaniman et al., 1991; Tripathi et al., 2006), and further complex experiments will be required for an exhaustive assessment. Beneath the maximum depth of penetration of the most energetic cosmic rays, the primary sources of radiation on the Moon would be mineral grains and glass phases containing radioactive elements (Lucey et al., 2006). The influence of radiation from radioactive mineral sources would depend on the dominant rock type. For example, KREEP-rich samples (enriched in potassium, phosphorus, and rare earth elements) like KREEP basalts, KREEP-rich impact melts, and High Alkali Suite rocks, have relatively high concentrations of the incompatible elements uranium and thorium (Vaniman et al., 1991; Korotev, 1998; Yamashita et al., 2010). Radioactive components in these rocks could modify exogenous organic material that had become incorporated. However, the range of influence of alpha radiation from these grains is limited to tens of micrometers (Owen, 1988; Nasdala et al., 2006). Short-lived radioactive nuclides that persisted from the formation of the Solar System (Chaussidon and Gounelle, 2007) may have provided an additional source of radiation that is absent today. Mineral radioactivity induces polymerization of smaller molecules and changes the structure of aromatic organic materials (Court et al., 2006, 2007). The flux of radiation at the lunar surface and within the regolith is likely to have varied over the lifetime of the Moon. The flux of the solar wind during the early history of the Sun is believed to have been hundreds of times higher than today (Wood et al., 2005; Cnossen et al., 2007; Lundin et al., 2007). It would therefore be expected that degradation of organic matter due to solar particle radiation would have been higher at this time. A further consideration is the length of time that a terrestrial meteorite spends in transit from Earth to the Moon after being ejected by impact (i.e., 4π irradiation). Transit times for asteroidal meteorites to Earth commonly vary from a few million years to hundreds of millions of years (Eugster et al., 2006), and the meteorites will be exposed to space radiation for the duration. However, the transfer time for a terrestrial meteorite to the Moon is likely to be less than a few thousand years (Armstrong et al., 2002), which is a short time relative to the potential residence time of the meteorite once it has landed on the lunar surface.
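For comparison with those exposure times, the burial times implied by the quoted regolith accumulation rates can be computed in the same back-of-the-envelope way; steady, uninterrupted accumulation is our simplifying assumption.

```python
# Hedged sketch: burial times implied by the regolith accumulation rates quoted above.
# The target depth (0.2 m, the ~300 degC isotherm of Fig. 1) and the rates come from the
# text; treating accumulation as steady and uninterrupted is our simplification.

def burial_time_myr(depth_m, rate_mm_per_myr):
    return depth_m * 1000.0 / rate_mm_per_myr

print(burial_time_myr(0.2, 5))   # 40 Myr at the early, faster rate (as in Fig. 1)
print(burial_time_myr(0.2, 1))   # 200 Myr at the modern ~1 mm/Myr rate
# Either way, passive burial takes millions of years, versus the ~10-130 kyr of SEP
# exposure simulated experimentally; rapid burial mechanisms shorten the exposure window.
```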
Regolith formation processes and preservation of meteorites The discovery of meteoritic organic compounds with a recognizable source in layered paleoregolith deposits on the Moon will most likely involve centimeter-scale meteorites that have escaped substantial comminution. Meteorites of this size would have sufficient mineralogical information to help constrain whether they originated from an asteroidal, cometary, or planetary body (Joy et al., 2012). The ultraviolet radiation and the high flux of low-energy solar wind particles together with micrometeorite impacts will destroy or heavily degrade organic compounds in the outer layers of a meteorite at the surface of the lunar regolith. However, in larger meteorite fragments, the internal portion of the meteorite will be shielded from these types of radiation, which only penetrate to shallow depths in rock. Only the more energetic GCR and SEP, along with radiation from cascades, will affect the compounds preserved in the center of these meteorites. A preservation scenario that minimizes the duration of exposure of the organic meteoritic material to radiation would reduce the opportunity for degradation. Our experiments simulated exposure to SEP for tens of thousands of years. However, in a "steady state"-type model, the slow and gradual formation of regolith of sufficient thickness to bury and protect meteoritic material would take on the order of tens of millions of years (Fig. 1). Yet the lunar surface early in the history of the Earth-Moon system was a chaotic place, with a higher frequency of impacts than occurs today. As a result, regolith formation rates may have been substantially higher than today. Even small impacts would have generated ejecta blankets thick enough to protect underlying materials (i.e., megaregolith formation), and the seismic shocks caused by impact would have caused slope failures and landslides (Schultz and Gault, 1975; Xiao et al., 2013); these events could have quickly buried meteorites lying in and on surficial regolith. Pyroclastic deposits from fire-fountaining volcanic eruptions would have a similar effect (McKay, 2009). A number of scenarios can, therefore, be envisioned for the creation of meteorite lagerstätten in lunar regolith, reducing the length of exposure of exogenous organic materials to micrometeorite impacts and radiation. However, identifying these horizons and determining their closure ages may be more challenging than for the paleoregolith lava flow preservation scenario. Correlation of known increases in impactor flux through geological time on Earth with dated layers on the Moon would help identify paleoregolith layers that potentially contain high concentrations of meteorites (e.g., Schmitz et al., 2001; Jones, 2014). The quantity of organic material within an asteroidal or terrestrial meteorite that is delivered to the surface of the Moon will also affect how likely it is that detectable quantities of organic compounds are preserved. For a given rate of degradation, meteorites with a high concentration of organic compounds would be more likely to retain some organic chemical information following irradiation compared to an organic-poor meteorite. Carbonaceous asteroidal meteorites contain a range of proportions of organic material, depending on their type and alteration history. The CM2 chondrite Murchison is perhaps the most extensively analyzed example of an organic-rich asteroidal meteorite.
The types and abundances of organic materials within Murchison have been summarized by Sephton (2002). The majority of the organic matter is insoluble macromolecular material (1.45%), which is represented by the ~5 wt % of poly(styrene-co-divinylbenzene) added to the mineral substrates in this study. Soluble hydrocarbons, amino acids, and carboxylic acids in the Murchison meteorite are each in the range of tens to hundreds of parts per million. The meteorite soluble organic compound abundances are lower than those for the spiked substrates used in this study (~0.35 wt % for hydrocarbons and ~0.25 wt % for amino acids). The concentration of organic material in terrestrial meteorites that may have been delivered to the surface of the Moon billions of years ago is uncertain. The rock record from this time has largely been destroyed, and indeed it is perhaps terrestrial meteorites preserved on the Moon that will provide the best constraints on the organic chemical and biological conditions on the earliest Earth (Armstrong et al., 2002; Armstrong, 2010). Hydrothermal vent systems have been suggested as possible sites for early life to thrive (e.g., Martin et al., 2008), and modern carbonate samples from the Atlantic Lost City hydrothermal field show total organic carbon values of up to 0.6% (Kelley et al., 2005). Such modern ecosystems that can act as analogues for unpreserved ancient counterparts provide an indication of the possible abundances of organic matter that could have been transferred to the Moon. The measured abundances of organic material in asteroidal meteorites and analog biotic sites, and the types of organic compound within them, provide starting points for estimating what fraction of material may remain after encapsulation for billions of years in lunar paleoregolith layers. The experimental irradiation of hydrocarbons, amino acids, and polymers has shown that recoveries of the original compounds remain high after proton irradiation and that no secondary breakdown products were detected. This would suggest that the degradation of organic material at the lunar surface owing to radiation may occur at slow rates, and as such a high proportion of the original meteoritic organic material would remain unaffected. In the case where radiation has a significant deleterious effect on organic matter, it would be expected that polycyclic aromatic hydrocarbon-dominated structures would be the most persistent types of organic material (Court et al., 2006). Irradiation of organic materials in the presence of water To investigate the influence of aqueous fluids, one of our polymer samples was irradiated while immersed in water. Our data are relevant to potential organic carbon and water-ice mixtures that may occur in permanently shadowed craters at the poles of the Moon (Colaprete et al., 2010; Crites et al., 2013). Organic molecules can be generated and modified by interaction with ices under irradiation (Bernstein et al., 1995; Lucey, 2000; Hand and Carlson, 2012; Crites et al., 2013). Although the permanently shadowed regions are largely protected from the effects of ultraviolet radiation, they are still subject to GCR (Schwadron et al., 2012) and the solar wind (Zimmerman et al., 2011). Irradiation of water and water ice can result in the formation of hydrogen and hydroxyl radicals, which can also recombine to form hydrogen peroxide (Johnson and Quickenden, 1997; Loeffler et al., 2006; Dartnell, 2011).
These highly reactive species have the potential to degrade organic matter. However, no evidence for degradation of the organic materials by irradiation in the presence of water was found in our data. Conclusion Proton irradiation intended to simulate exposure of representative biotic and abiotic polymers and compounds to SEP at the lunar surface over periods of tens of thousands of years had little to no effect on the organic materials under the experimental conditions used. The presence or absence of lunar meteorite powder and other mineral analogues mixed with the organic materials had no apparent influence. The data demonstrate the relatively robust nature of the organic materials and suggest that they have a high preservation potential in the lunar radiation environment if buried and encapsulated on favorable timescales. However, the proton energy used in the experiments of ~4-13 MeV at the sample is representative of only the lower energy range of SEP. SEP are in turn only one type of radiation that occurs in the lunar surface environment. Lower-energy but higher-fluence radiation types may have an influence on the preservation of organic matter where reworking and micrometeorite impacts in the regolith may expose the organic molecules to the space environment. The relative resistance of a range of organic materials to the influence of radiation promotes optimism for the detection and interpretation of in situ and ex situ organic assemblages on the Moon. In particular, encapsulated organic records of impact-ejected planetary surfaces on the Moon may not be extensively corrupted by the radiation environment. Our results, therefore, suggest that prebiotic and biotic chemistry from early Earth, and possibly from elsewhere in the Solar System, could be preserved in the Moon's near-surface geological record. Identifying such materials will be an important objective for future lunar exploration missions.
9,443
2016-11-01T00:00:00.000
[ "Physics" ]
Prediction of Unified New Physics beyond Quantum Mechanics from the Feynman Path Integral: Elementary Cycles String Theory We prove that the Feynman Path Integral is equivalent to a novel stringy description of elementary particles characterized by a single compact (cyclic) world-line parameter playing the role of the particle internal clock. This clearly reveals an exact unified formulation of quantum and relativistic physics, potentially deterministic, fully falsifiable, having no fine-tunable parameters, also proven in previous papers to be completely consistent with all known physics, from theoretical physics to condensed matter. New physics will be discovered by observing quantum phenomena with experimental time accuracy of the order of 10^-21 sec. Introduction The Feynman Path Integral (FPI) is one of the most important theoretical achievements of modern physics. It represents a mathematical formulation of Quantum Mechanics (QM) equivalent to the axiomatic one, as proven by Feynman [1], whose validity has been confirmed with impressive accuracy in countless experiments. We prove in App.(A) that the FPI, if correctly interrogated in a bottom-up approach, unequivocally reveals an unprecedented nature of relativistic space-time beyond QM. It must be clear that the correctness of the ordinary mathematical formulations of quantum physics is absolutely not questioned in this paper, nor is the essence of relativity. We will pinpoint the physical principle at the origin of QM. The ordinary mathematical formulation of relativistic QM, such as the FPI formulation investigated here or QFT, is confirmed to be doubtlessly correct and indisputably exact in calculating observables, but it turns out to be an effective description emerging from ultrafast cyclic relativistic space-time dynamics, at the base of wave-particle duality, which cannot be directly detected with the present experimental temporal resolution. Cyclic Feynman Paths In the standard interpretation of the FPI the probability amplitude for a quantum particle to travel between two space-time points is given by the interference of all the paths - classical and non-classical - joining them. The probabilistic weight is given by the particle action S evaluated along the paths. Only a classical path is possible between two end-points, as long as the classical action is defined on the ordinary, implicitly non-compact, relativistic space-time. As pointed out by Feynman, the price to pay to reconcile QM with the Lagrangian formulation seems to be that of giving a physical meaning to the paths not allowed by the least action principle of classical mechanics. Nevertheless, Feynman himself, with his checkerboard model [2], tried to go beyond this interpretation by investigating the possibility of writing the FPI as a discrete sum of paths, i.e. paths labeled by an integer number. If this is the case, it would be possible to conceive a classical action whose minimization yields a countable infinity of (degenerate) classical paths. For instance, in a compact (or cyclic) geometry, infinite, degenerate, classical paths are possible between two arbitrary endpoints, due to the Boundary Conditions (BCs): they would be labeled by an integer number (winding number).
In App.(A) - see demonstration - we rigorously prove that the FPI for a free scalar relativistic particle, here denoted by Z, can be written as a discrete sum of cyclic paths according to the fundamental mathematical identity (natural units ℏ = c = 1), where S is the relativistic free particle action (defined in the ordinary non-compact space-time), T_C = 2π/M is the particle Compton time defined in terms of the particle mass M, and τ_i and τ_f are the initial and final world-line points of the particle evolution. The demonstration, given in App.(A) for a free multiparticle state, is based on widely accepted mathematical identities of second quantization and their direct consequences. By construction it can be generalized to interacting particles and fields, see par.(6), as well as to the functional formulation of the FPI [3][4][5][6][7][8][9][10], see also [11][12][13][14][15][16][17][18][19]. From a physical point of view, the FPI clearly reveals in eq.(1) something extremely important about the nature of space-time beyond QM. The integer index n′ labeling the paths is manifestly a winding number. The Dirac deltas describe all the possible classical paths between arbitrary τ_i and τ_f on a compact world-line τ of compactification length T_C and Periodic BCs (PBCs) that we name elementary world-cycle (or proper-cycle). In order to deal with this puzzling physical result the reader must consider that the parametrization of the particle evolution in terms of the world-cycle (rather than a non-compact world-line) is, after all, the real essence of the wave-particle duality and undulatory mechanics, see also par.(5). As stated by de Broglie in his seminal PhD thesis [20] at the origin of QM, "to each elementary particle with proper mass M, one may associate a periodic phenomenon of Compton periodicity", or, in Penrose's words [21], "any stable massive particle behaves as a very precise quantum clock, which ticks away with Compton periodicity", and according to Einstein [22] "a clock is a periodic phenomenon so that what happens in a period is identical to what happens in any other period". There is nothing wrong in describing quantum particles as intrinsic clocks ticking at Compton rates; this is implicitly done every time we use a wave function or a field in QM. Notice that massless particles such as photons or gravitons are "frozen clocks" [21] (infinite world-line compactification length T_C = ∞). In particular this element is useful to figure out how the ordinary causal structure of relativistic physics is preserved and how massive particles can propagate in space-time despite their compact world-cycles. Further elements will be given when we introduce interactions in par.(6). In general we must always consider that: 1) it is a fact that the universe is solely constituted of elementary particles; 2) it is a fact that, according to the wave-particle duality, every elementary particle is a periodic phenomenon [20,22,21]; thus it must be true that physics can be consistently formulated in terms of elementary cycles. Naively, the fact that each elementary free particle is a persistent periodic phenomenon does not mean that the world must be periodic in time, just as Newton's first law does not imply that everything moves in straight lines (persistent space-time periodicity means free particle).
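Since the displayed eq.(1) itself has not survived in this excerpt, the following is only a schematic rendering of the structure the surrounding prose describes - a sum over a winding number n′ of Dirac deltas on a world-cycle of compactification length T_C - with the normalization and phase factors of the original identity omitted.

```latex
% Schematic only: the structure of eq.(1) as described in the text, not the exact identity.
\[
  Z(\tau_f - \tau_i) \;\propto\; \sum_{n' \in \mathbb{Z}} \delta\!\left(\tau_f - \tau_i - n' T_C\right),
  \qquad T_C = \frac{2\pi}{M}, \qquad \hbar = c = 1 .
\]
```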
As for Newton's law, our strategy is to clearly define the behavior of the isolated building-blocks of nature and then generalize to interactions between them, as we will see in par.(6) - a simple system of two periodic phenomena is already ergodic, and when interactions are considered we will find the complexity of ordinary physics. Even though the present paper is self-consistent, a wider view of the following description can be found in [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. The reader should first convince himself about the absolute correctness of the mathematical demonstrations proposed here and in previous papers - the demonstrations are crosschecked and correct above any reasonable doubt, as also certified by many peer-reviewed papers - and rely on them for the non-trivial conceptual effort necessary to figure out such a radical new view of physics. Recovering the least action principle in QM The Feynman paths written as in eq.(1) are actually classical paths. They are the degenerate classical solutions, from τ_i to τ_f, making stationary the action, where the symbol T_C means that the action is defined on the world-cycle - contrarily to S, which is defined on the ordinary non-compact space-time. The classical variational principle can eventually be reconciled with QM. The price to pay is to give up the emphatically non-compact formulation of minkowskian space-time. Novel string description of elementary particles Among the various interpretations of new physics beyond QM allowed by eq.(1), here we will focus on a possible novel stringy description, based on ECT, that we name Elementary Cycles String Theory (ECST). In fact, S_Compact is manifestly a string action. It associates a closed string of novel type to each elementary particle. The idea of a stringy description of elementary particles is not new, of course, but surprisingly such a novel string theory emerging from the FPI is defined on a compact world-line (world-cycle) rather than on the two-dimensional world-sheet of Ordinary String Theory (OST). According to eq.(1), see demonstration in App.(A), these novel relativistic strings, classical in essence, constitute the fundamental quantum oscillators at the base of quantum fields. From a historical point of view we know from Regge et al. that the good mathematical properties of OST originate from the compact parameter of the ordinary two-dimensional world-sheet. Actually, most of the mathematical beauty of OST is inherited by ECT as a consequence of the compact world-line parameter [5]. Our result suggests that the non-compact world-line parameter of OST could be absolutely unnecessary to describe time evolution as soon as we take into account that elementary particles, the basic constituents of our universe, are the elementary clocks of nature according to QM. All of physics can be formulated in terms of elementary cycles. On the other hand, a single compact world-line parameter as in ECST, in contrast to the ordinary two-dimensional world-sheet, avoids problematic aspects of OST such as the proliferation of extra dimensions.
As we are going to see, in ECST the target space-time is in fact the ordinary four-dimensional minkowskian space-time, provided the contravariant compactification at Compton lengths encoding undulatory relativistic mechanics directly into the structure of the four-dimensional minkowskian geometry of relativity. Last but not least, ECST is phenomenologically predictive: we have seen the equivalence with the FPI and with all QM axioms [3][4][5][6][7]; the elementary cycles strings are the fundamental oscillators of ordinary QFT; ECT has been successfully applied to describe quantum phenomena in different fields, from theoretical physics to condensed matter [3][4][5][6][7][8][9][10]. Elementary space-time cycles According to App.(A), see eq.(7), the FPI for the quantum evolution of a free particle in a generic inertial reference frame can be written as a discrete sum of cyclic paths. In fact, by writing eq.(1) in covariant notation we get eq.(3), where ω_μ is the particle four-momentum (persistent, as we are in the free case); Δx^μ = x^μ_f − x^μ_i is the interval between the final and the initial space-time points of the free particle evolution. In complete analogy with the rest frame description eq.(1), here the FPI describes space-time cycles of temporal recurrence T = 2π/ω and spatial recurrences λ_i = 2π/k_i, i = 1, 2, 3 (these are global recurrences as we are in the free case; in the interaction case they are promoted to local quantities, as we will see below). These are, of course, the ordinary recurrences of QM. It is convenient to introduce the contravariant (global) four-vector λ^μ = {T, λ} fixed by the (persistent) particle four-momentum ω_μ = {ω, −k} of the free particle through the Planck constant: ω_μ λ^μ = M T_C = 2π. In fact, these space-time recurrences are obtained by Lorentz transforming the world-line Compton periodicity T_C = 2π/M. The paths in the FPI eq.(3) are the classical degenerate solutions linking the final and initial space-time points x^μ_i and x^μ_f of the world-circle action (ẋ^μ(τ) = dx^μ/dτ). For a free quantum particle of persistent momentum k, the cyclic paths in the FPI eq.(3), i.e. the elementary cycles string vibrations, interfere constructively if the end points x^μ_i and x^μ_f of the particle evolution are along the ordinary classical-relativistic trajectory (the ordinary one in non-compact space-time), whereas the interference becomes more and more destructive as the sum over cyclic paths is evaluated with one of the end points placed away from the classical trajectory. This corresponds to a lower probability to observe the particle away from the classical trajectory, in full agreement - also numerical - with the ordinary probabilistic description of the FPI [3][4][5][6][7][8][9][10]. Of course, eq.(3) in the non-relativistic classical limit reduces to a single Dirac delta over the particle classical path, as can be easily proven by noticing that the spatial separation between the cyclic paths tends to infinity (in the semiclassical limit of massive particles |k| ≪ M, so that the spatial recurrences λ_i → ∞ and T_C → 0).
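To make the recurrences concrete, the following minimal numeric sketch (a slow electron, our own example rather than one taken from the text) evaluates the rest-frame recurrence T_C = 2π/M, the temporal recurrence T = 2π/ω, and the spatial recurrence λ = 2π/|k| with SI constants restored.

```python
# Hedged numeric illustration of the recurrences discussed above, using standard
# relativistic kinematics. The electron and the chosen momentum are example values only.

import math

HBAR = 1.054571817e-34   # J s
C    = 2.99792458e8      # m/s
M_E  = 9.1093837015e-31  # kg (electron)

def recurrences(mass_kg, momentum_kg_m_s):
    energy = math.sqrt((mass_kg * C**2) ** 2 + (momentum_kg_m_s * C) ** 2)  # total energy
    t_compton = 2 * math.pi * HBAR / (mass_kg * C**2)  # rest-frame recurrence T_C = 2*pi/M
    t = 2 * math.pi * HBAR / energy                     # temporal recurrence T = 2*pi/omega
    lam = 2 * math.pi * HBAR / momentum_kg_m_s          # spatial recurrence 2*pi/k (de Broglie)
    return t_compton, t, lam

Tc, T, lam = recurrences(M_E, M_E * 0.01 * C)  # slow electron, v ~ 0.01 c
print(Tc)   # ~8.09e-21 s, the electron Compton time
print(T)    # slightly shorter than Tc (boosted clock, T = T_C / gamma)
print(lam)  # ~2.4e-10 m, a de Broglie wavelength much larger than the Compton length
```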
Our result reveals that the Compton periodicity of elementary particles must be "super-imposed" on the minkowskian space-time geometry, exactly as in ECT. This is not a conjecture; it is an exact implication of the FPI, eq.(1), as well as of a long list of striking equivalences too long to be mentioned here [3][4][5][6][7][8][9][10]. Einstein's intuition in his later years - after Bell's work on hidden variables - was that the unification of relativity and QM would be possible by "super-imposing" (he used this term) some sort of boundary conditions on relativistic dynamics [22,27]. Relativity only defines the differential structure of space-time, without being concerned with "what happens at the boundary of space-time", whereas it was clear from its early days that QM was about BCs. From a mathematical point of view the introduction of a space-time boundary completes the Cauchy problem of relativity (differential structure plus BCs), solving simultaneously relativistic and quantum dynamics in a unified formulation of physics: the minkowskian metric of ECT preserves the ordinary differential structure of relativity, whereas the contravariant BCs of ECT yield ordinary quantization (e.g. here we have seen the equivalence with the FPI) without breaking relativity (as long as the BCs are allowed by the variational principle for the relativistic action, as for the PBCs of the scalar relativistic particle investigated here). Interactions, QED and Gauge/Gravity correspondence In order to introduce interactions we must consider that the four-momentum of an interacting particle is no longer persistent as in the free case, but varies locally according to the interaction scheme, see detailed proofs in [4]. In turn, the locally varying four-momentum of the interacting particle locally fixes its quantum space-time recurrences through the Planck constant. We must thus promote the global λ^μ of the free case to local space-time recurrences λ^μ(x) in order to describe the local four-momentum ω_μ(x) of the interacting particle. In other words, the elementary cycles strings associated with particles have locally modulated phases during interactions - this explains causality, as a "before" and an "after" the interaction can be established: ECT does not imply that the world is periodic! Technically this can be done in terms of space-time geometrodynamics by locally deforming the elementary space-time cycles' compactification lengths, provided that the invariant world-line recurrence is the particle Compton length; that is, by locally deforming the flat minkowskian space-time metric, similarly to general relativity. The local classical action of the interacting particle must be defined in a cyclic minkowskian space-time orbifold encoding the local modulations of space-time periodicity. Two types of geometrodynamics are possible, see [4,6,5] for a complete description and related formalism. The first type, the most obvious one, is characterized by local deformations of the metric tensor corresponding to a curved space-time. Of course this type of elementary cycles local deformations describes gravitational interaction exactly as in ordinary general relativity: e.g. it reproduces the ordinary clock rate modulations and ruler contractions encoded in a Schwarzschild metric.
Remarkably, the second type of deformation reveals a geometrodynamical origin of gauge interactions analogous to that of general relativity, in a unified view [4,6,5], similar to Weyl's original proposal. Due to the compact nature of space-time in ECT it is possible to describe peculiar interaction schemes, i.e. particular local variations of the space-time recurrences λ^μ(x) (local modulations of space-time phases), by locally transforming the metric tensor in such a way that the space-time boundary is locally rotated whereas the metric stays flat. The local "rotations" of the space-time boundary of ECT leaving the metric flat turn out to describe exactly gauge interactions. Notice that in ECT such local rotations of the space-time boundary correspond to particular local variations of the space-time recurrences λ^μ(x) and thus to a peculiar type of interaction that turns out to be identical to gauge interactions. Notice that this type of transformation has no effect on ordinary quantum fields, in which space-time has no boundary (they are in fact associated with Killing vector fields): this explains why in ordinary quantum fields it is not possible to observe the geometrodynamics associated with gauge interactions and, in turn, gauge invariance must necessarily be postulated in ordinary QFT, whereas in ECT gauge interactions can be deduced from space-time geometrodynamics. Particularly simple is the abelian case (for the sake of simplicity here we only mention bosonic QED) [4]. This corresponds to local U(1) rotations of the compact space-time boundary of ECT. It results in local modulations of space-time phases (i.e. local modulations of space-time periodicities) formally identical to the ordinary minimal substitution ω_μ(x) = ω_μ − eA_μ(x), where e is the charge of the particle and A_μ(x) is the gauge field - defined in terms of the peculiar Killing vector field associated with the local U(1) rotation of the space-time boundary. Generalizing the result of eq.(3), the FPI of bosonic QED turns out to describe a sum of cyclic paths with locally modulated phases. That is, the elementary cycles strings interacting under this peculiar interaction scheme have the local phase modulations ω_μ(x) of ordinary electromagnetism. It is possible to prove a corresponding identity for the FPI of the abelian gauge interaction (bosonic QED) [4]. The common geometrodynamical description of gauge and gravitational interactions allowed by ECT sheds new light on the problem of quantum gravity [4,6,5]. Predictions of New Physics beyond QM It may be surprising that the scale of new physics deduced here from the FPI is at Compton scales (whereas it is common to assume that new fundamental physics is at the Planck scale). But are we sure that we have observed everything about physics up to, say, the LHC energy scale? It is definitely true that we have explored physics up to the LHC energy scale, but we were only interested in probabilistic amplitudes, confirming the unquestionable correctness of QM probabilistic predictions of observables, according to Heisenberg, see also [33]. However, our result says something radically new, while keeping the mathematics of QM exactly valid at an effective level at every energy scale. The point is that in all these experiments we have not used ultra-accurate clocks.
The new physics predicted here from the FPI can only be explored by means of ultra-accurate timekeepers able to detect and investigate the ultrafast cyclic dynamics of elementary particles. Among a plethora of unexplored phenomena associated with ECT, we predict that new physics beyond QED will be detected by investigating quantum phenomena with time accuracy better than the electron Compton time, T_C^elect = λ_C^elect/c = 8.09330093 · 10^-21 sec [we restrict this prediction to QED; e.g., the time resolution necessary to explore new physics beyond QM in electroweak dynamics is of the order T_C^EW ∼ 10^-26 sec, corresponding to the periodicity associated with the electroweak energy scale], and the electron will reveal its fundamental cyclic string nature. It is reasonable to hope that this experimental time accuracy will be reached in the near future as the related technologies are already approaching attosecond accuracy [28,29]. If the time resolution of a quantum experiment is poorer than the Compton time scales of the particles involved, the ultra-fast cyclic dynamics can only be observed indirectly in a statistical way and the resulting probabilistic predictions are exactly described by the same mathematical rules as ordinary QM. QM emerges exactly as a probabilistic, low-time-resolution description of the elementary cycles dynamics, just as the outcomes of a rolling die can only be described probabilistically if the die is observed without ultra-fast timekeepers, see also [30][31][32][33][24]. We have rigorously proven in previous papers [3][4][5][6][7][8][9][10] that the low-time-resolution description of ECT ultra-fast dynamics is, at an effective level, mathematically and phenomenologically equivalent to ordinary QM in all its fundamental aspects. For instance, besides the equivalence with the FPI eq.(3) proven in this paper, the exact equivalence between elementary cyclic dynamics and relativistic quantum dynamics has also been rigorously proven and crosschecked for all the fundamental identities of QM, including the canonical - Copenhagen axioms - formulation of QM, and extended to QFT: all the postulates of QM as well as the commutation relations have been mathematically derived directly from the condition of intrinsic periodicity. ECT implies the same mathematical laws as ordinary QM (A ⇒ B), the mathematical laws of QM correctly describe all quantum phenomena (B ⇒ C), thus ECT correctly describes all quantum phenomena (A ⇒ C). Notice that in ECT there are no hidden variables of any sort: Bell's or similar no-go theorems cannot be invoked to rule out the equivalence with QM (the "hidden variable" would be the time parameter, which is of course not a hidden variable, whereas the PBCs are a strong element of non-locality reconciling the non-local aspects of QM and the locality requirement of relativity) [30][31][32][33][24]. The Feynman paths, eq.(7), are on-shell paths: we have in fact a sum of Dirac deltas. This is a clear indication of "onticity" as pointed out by 't Hooft. Actually, the evolution law of ECT can be alternatively written in terms of 't Hooft "ontic" coordinates as |τ_i⟩ → |τ_f⟩ MOD T_C, or similarly for the free case as |x^μ_i⟩ → |x^μ_f⟩ MOD λ^μ, similarly to continuous periodic Cellular Automata ("particles on circles") [30][31][32][33][24].
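The two timescales quoted in this paragraph can be checked directly from standard constants; the sketch below does so, taking 100 GeV as an illustrative value for the electroweak scale (the text only gives the order of magnitude).

```python
# Hedged check of the quoted timescales: the electron Compton time and the rough
# electroweak-scale periodicity mentioned in the footnote. CODATA constants; the
# 100 GeV electroweak scale is our illustrative choice for the order-of-magnitude value.

H   = 6.62607015e-34     # Planck constant, J s
C   = 2.99792458e8       # m/s
M_E = 9.1093837015e-31   # electron mass, kg
EV  = 1.602176634e-19    # J per eV

t_compton_electron = H / (M_E * C**2)   # = lambda_C / c
print(t_compton_electron)               # ~8.093e-21 s, matching the value in the text

t_electroweak = H / (100e9 * EV)        # periodicity of a ~100 GeV energy scale
print(t_electroweak)                    # ~4e-26 s, i.e. of order 10^-26 s
```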
't Hooft's arguments suggest that the new physics beyond QM predicted by ECT is potentially deterministic [3][4][5][6][7][8][9][10]. Also, the space-time of ECT can be imagined as formed of elementary Compton space-time cells; we therefore find a conceptual relationship with Quantum Time Crystals [34]. Remarks The picture of new physics provided by ECT has exceptional beauty in many other respects, besides the string description of elementary particles. It also supports many foundational ideas at the origin of the Kaluza-Klein theory, the Kaluza miracle, Holography, the AdS-CFT correspondence and Loop Quantum Gravity, just to mention a few; but it is extremely simple (harmonic), effective and refutable, in addition to falsifiable. It does not involve extra parameters (a rare case in modern theoretical physics, which is typically characterized by fine-tuning), being based exclusively on the ordinary elements of relativity and the Planck constant to determine the space-time BCs. We report some other essential facts about ECT. In more than ten years of serious research, supported by solid mathematical demonstrations, and in nearly 20 published papers on the topic, we can state without doubt that the cyclic (or, more generally, compact) formulation of space-time at Compton scales of ECT is completely consistent with all the theoretical foundations and phenomenology of both quantum physics and relativity (special and general) [3][4][5][6][7][8][9][10]. We find a deep, previously unnoticed relationship between Feynman's and de Broglie's interpretations of QM; between the "new" and the "old" formulations of QM. Another remarkable result, proven mainly in [5], is that in ECT the "quantum to classical correspondence" at the base of the AdS/CFT correspondence turns out to be an exact mathematical identity rather than a conjecture: consider for instance the equivalence between quantum particles and classical strings presented in this paper. ECT is dual to an extra-dimensional theory in which the compact world-line plays the role of a "virtual extra dimension", with interesting insights into Kaluza's miracle, whereas the common geometrodynamical description of gravitational and gauge interactions given above (see the extended proofs in [4]) is actually a Gauge/Gravity correspondence [5]. Since, as we have seen, interactions, and the consequent local modulations of the space-time recurrences, can be equivalently described as deformations of the compact space-time boundary, the theory is manifestly holographic [4,6,5] (the bulk physics is determined by the shape of the boundary). The success of ECT is not limited to theoretical physics: it has, for instance, also been confirmed in condensed matter, where it has produced exceptional results. ECT has been successfully applied to infer all the fundamental aspects of superconductivity and graphene physics, in a striking mathematical way and for the first time directly from a first physical principle (intrinsic periodicity) rather than from empirical models (e.g. the BCS model) [8,9]. Conclusions The equivalence, proven in this paper with a bottom-up approach, between the FPI evolution and elementary space-time cyclic dynamics is such a remarkable and striking result that it cannot be classified as a mere mathematical coincidence.
On the contrary, it forces a serious reconsideration of the fundamental nature of relativistic space-time, especially if we take into account all the other crosschecked and peer-reviewed mathematical and phenomenological equivalences with ordinary physics already proven in previous papers [3][4][5][6][7][8][9][10], see also [11][12][13][14][15][16][17][18][19], which in turn represent only the tip of the iceberg of possible new physics beyond QM. All this confirms that the intrinsically compact (cyclic) nature of space-time is de facto a conceptually legitimate, mathematically correct, phenomenologically predictive hypothesis for a potentially deterministic [30][31][32][33]24], unified formulation of quantum and relativistic physics [3,6,7,23]. Despite its radical unconventionality, ECT has given more than sufficient scientific evidence to allow us to claim a new viable scenario in fundamental physics. It clearly predicts, without fine-tuning or hidden variables of any sort, that new physics will be observed by investigating quantum phenomena with a time precision better than 10 −21 s. With sufficiently accurate ultra-fast timekeepers, elementary particles will reveal their fundamental string nature, characterized by a one-dimensional world-cycle rather than a two-dimensional world-sheet, a manifestation of the intrinsically cyclic nature of relativistic space-time. "Does God play dice?" Our result resolves the quantum dilemma in this way: "God has no fun playing dice"; having infinite accuracy in time, He would always be able to predict the outcomes. A Counting the Feynman Paths (Proof) We prove eq.(1): the ordinary FPI of QM can be explicitly written as a discrete sum of paths, labelled by an integer number. Let us consider the FPI for a free relativistic scalar particle of momentum k traveling between x i µ and x f µ . Assuming that t i and t f are the initial and final times of the evolution, respectively, the ordinary FPI is defined in terms of the relativistic action S = ∫ L dt, where L = P · ẋ − H is the free-particle classical Lagrangian defined by the Hamiltonian and momentum operators H and P, respectively. We perform the ordinary time slicing. In the limit N → ∞ we obtain the time-sliced form of the FPI, in which the elementary Feynman space-time evolutions, eq.(5), involve the time slice ∆tm = t m+1 − tm associated with the spatial interval ∆ xm = x m+1 − xm , whereas |Φ k ⟩ is the multiparticle state of momentum k associated with the particle, defined in terms of the local energy eigenstates |n k ⟩. In other words, since we are considering a relativistic scalar particle, |Φ k ⟩ can be regarded as the single component of momentum k of the second-quantized Klein-Gordon field state |Φ⟩. It must have both positive and negative frequencies k 0 = ±ω( k)/c as a consequence of the quadratic relativistic dispersion relation. It is well known that the second quantization of each field component yields the normal-ordered energy spectrum ω ñ k ( k) :=: ñ k ω( k), with ñ k ∈ N, for both the positive and negative frequencies. As usual in FPI analysis we will always assume normal ordering (:=:). So, when both positive and negative frequencies of |Φ⟩ are considered, the energy spectrum of the Hamiltonian operator H( k) in eq.(5) can equivalently be written as H( k)|n k ⟩ :=: n k ω( k)|n k ⟩, with n k ∈ Z.
Through the quadratic relativistic dispersion relation the energy spectrum fixes the momentum spectrum of |Φ k ⟩ as well. It follows that P|n k ⟩ :=: n k k|n k ⟩, with n k ∈ Z. So far we have essentially recalled widely accepted identities of second quantization and their direct consequences. We evaluate eq.(5) by using the energy and momentum spectra associated with the Hamiltonian and momentum operators of |Φ k ⟩ described above, and then we apply the Poisson summation $\sum_{n \in \mathbb{Z}} e^{-iny} = 2\pi \sum_{n' \in \mathbb{Z}} \delta(y + 2\pi n')$, where $n, n' \in \mathbb{Z}$. We find that a sum of Dirac deltas is associated with the elementary Feynman evolutions of eq.(5):
$$U_{m+1,m} = \sum_{n_m \in \mathbb{Z}} e^{-i n_m [\omega(\vec k)\,\Delta t_m - \vec k \cdot \Delta \vec x_m]} = 2\pi \sum_{n'_m \in \mathbb{Z}} \delta\big(\omega(\vec k)\,\Delta t_m - \vec k \cdot \Delta \vec x_m + 2\pi n'_m\big).$$
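The step above relies only on the standard Poisson summation formula. As a quick numerical illustration of the mechanism behind it (a hedged Python sketch of the generic identity, independent of ECT itself), the truncated phase sum already concentrates on phases that are integer multiples of 2π, which is what the Dirac deltas express in the limit:

```python
import numpy as np

# Truncated version of sum_{n=-N}^{N} exp(-i*n*y), which equals the Dirichlet
# kernel sin((N + 1/2) y) / sin(y / 2) and approaches the Dirac comb
# 2*pi * sum_{n'} delta(y + 2*pi*n') as N grows.
def truncated_phase_sum(y, N):
    n = np.arange(-N, N + 1)
    return np.exp(-1j * np.outer(y, n)).sum(axis=1)

y = np.array([0.0, 0.5, np.pi, 2 * np.pi, 4 * np.pi, 2 * np.pi + 0.3])
for N in (10, 100, 1000):
    s = truncated_phase_sum(y, N).real  # imaginary parts cancel by n <-> -n symmetry
    print(N, np.round(s, 2))
# The sum grows like 2N+1 at y = 0, 2*pi, 4*pi (the "on-shell" phases) and stays
# O(1) elsewhere, i.e. only the phases selected by the deltas survive.
```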
7,267.4
2021-05-20T00:00:00.000
[ "Physics" ]
SOME MODELS OF RASTER CORRELATORS OF BINARY IMAGES This paper presents the results of mathematical modeling of statistical recognition of binary images. The proposed hypothesis, namely that the pixels forming the boundary between the recognition object and the image background are only secondary attributes for recognition, is confirmed experimentally. As a consequence, recognition reliability can be raised by excluding these pixels from the recognition process. Based on the presented example of training the models on a fixed training sample, and taking into account that the prototype model is a special case of the rejecting model, we note that the recognition reliability of the proposed model cannot be lower than that of the prototype. The obtained results will be used in the hardware implementation of binary comparators. INTRODUCTION The object of our research is the model of the binary statistical correlator used as a pattern classifier [1]. As the prototype, the classical model consisting of a binary raster comparator and a threshold unit has been examined. Its structure is shown in fig. 1. The input image is assigned to the class with the maximal total number of pixel coincidences. The function of the threshold unit has the following form: As a rule, raster correlators are used when the number of classes is small, for example for the recognition of alphabet characters, road signs, national emblems, etc. Input images should meet certain requirements: they should be normalized in scale, oriented within the matrix, and the separate fragments of the input images should be proportionate. Thus, the restrictions on the input images are formulated as in the majority of statistical methods, for example spectral or neuron-like ones. The main problem of these methods is learning; for the raster correlator model under study, it is the formation of the reference images. Obviously, recognition quality is determined by the degree of discrepancy between corresponding input and reference images, and the greatest probability of pixel discrepancies belongs to the object boundary. The working hypothesis is that the pixels located on the object boundary carry practically no information about the image; moreover, counting such pixels reduces the recognition probability. Hence, recognition reliability can be raised by excluding these pixels from recognition. It is therefore necessary to solve the problem of optimal learning. The purpose of this article is the experimental confirmation of this hypothesis. LEARNING OF BINARY CORRELATORS By optimal learning of correlators we understand finding reference images that provide recognition of the maximal number of input images. Learning can be divided into two stages [2]. In the first stage, for each class, the so-called statistical account of the class (SAC) is computed using a finite set of learning images. The SAC is a data matrix of size E; each SAC element value gives the number of learning images in which the corresponding pixel belongs to the object. In the second stage of learning, the reference images are determined from the obtained SACs. For the initial model (1), the task of reference image determination reduces to finding the optimal boundary between object and background [3]. Fig. 2 shows the qualitative probability distributions of the i-th pixel belonging to the image background (solid line) and to the recognition object (dashed line) as functions of the SAC value S ij .
Fig. 2. Probability distributions of pixel membership in the object and in the background. The crossing point of the two curves characterizes the optimal threshold (S OPT ) between background and object. Developing the working hypothesis, we assume that there is a pair of thresholds S OPTmin and S OPTmax whose use allows us to isolate a subset of boundary pixels. Excluding (rejecting) these pixels during recognition should ultimately raise recognition efficiency. Thus, in this approach it is necessary to determine for each class both a reference image and a rejection mask. The reference image and rejection mask pixels are defined according to the following expressions: REJECTION MODEL OF THE RASTER CORRELATOR Fig. 4 shows the structure of the rejection model of the raster correlator. The recognized image is assigned to the class with the maximal relative number of coincidences. The threshold unit realizes the same function (2), but with relative values. EXPERIMENTAL CONFIRMATION OF THE WORKING HYPOTHESIS Experiments were carried out on a model performing recognition of hand-written Arabic numerals. Reference images were formed from 25 binary learning images for each class, each of size 24x32 pixels. In total, 250 input images participated in the experiment (25 images per class). The recognition probability was calculated for the initial correlator at each optimal threshold and for the rejection correlator at various combinations of the min and max thresholds. The results of the experiment are shown in fig. 5 (recognition probability for various thresholds for both models). As can be seen from fig. 5, for the initial model (white graph) the optimal threshold is S OPT = 20. At this threshold the correlator correctly recognized 232 images, giving a recognition probability of 92.8%. For the rejection correlator the optimal pair of thresholds was S OPTmin = 17 and S OPTmax = 22. In this case 236 of the 250 submitted images were recognized, giving a recognition probability of 94.4%. To reveal the reasons for the increase in recognition efficiency, additional characteristics were obtained in the experiment. For each correctly recognized image we calculated: the total number of pixel coincidences between the input and reference images for the class with the maximal total number of coincidences; and the difference between the total numbers of pixel coincidences for the two classes with the most coincidences. Average values of these characteristics are given in tab. 1. The increase in recognition efficiency is connected with the improved "quality" of the reference images. By excluding fuzzy boundary pixels from consideration, recognition is carried out on a set of more informative attributes. The fuzzy boundary pixels otherwise inhibit recognition by reducing the number of pixel coincidences for the correct class and increasing it for the other classes. To reveal how recognition quality degrades as the number of classes grows, the following experiment was carried out. For a class set consisting of 26 hand-written Latin alphabet symbols and 10 Arabic numerals, the reference images for the prototype model and the pairs of reference images and rejection masks for the proposed model were obtained. Both models then performed recognition among 2, 3 and so on up to 36 classes. For each number of classes participating in recognition, the relative recognition probabilities for both models were calculated. The test sequence contained 35 samples for each class, 1260 images in total.
The results of this experiment are shown in fig. 6. As can be seen from fig. 6, the percentage of correctly recognized images decreases as the number of classes increases, but the reduction is slower when the rejecting model is used. For example, with 5 image classes the difference equals 1.4%, while with 36 classes the advantage of the rejecting model is already 3.6%. This experiment once again confirms the advantage of using the proposed rejecting model. CONCLUSION It is generally accepted that an increase in efficiency is related to the expended resources by a nonlinear dependence. In the presented example, the increase in recognition probability required a doubling of the memory size. On the whole, the degree of increase in recognition probability depends on many factors: the quality of learning, the width of the learning set, the number of input images and, finally, the statistical characteristics of the object and of the recognition model. It should be noted that in the worst case, when S OPTmin = S OPTmax = S OPT and all pixels of the rejection masks are true, the rejection model reduces to the classical initial model. Hence, the recognition reliability of the rejection model of the raster correlator, by definition, cannot be lower than that of the initial model. Thus, the working hypothesis that the recognition probability of raster correlators can be increased by accounting for the fuzzy boundary of recognition objects is confirmed. The obtained results can be used for the hardware implementation of binary processing devices.
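To make the two-stage learning and the rejection-model recognition described above concrete, the following is a minimal hedged sketch (not the authors' implementation; the thresholding rules, array names and toy data are assumptions, with training images taken as boolean NumPy arrays):

```python
import numpy as np

def learn_class(train_imgs, s_opt, s_min, s_max):
    """Stage 1: build the SAC; stage 2: derive a reference image and rejection mask."""
    sac = np.sum(train_imgs, axis=0)        # SAC: how many training images mark each pixel as object
    reference = sac > s_opt                 # assumed rule: pixel is "object" above the optimal threshold
    reject = (sac > s_min) & (sac < s_max)  # assumed rule: fuzzy boundary pixels between the two thresholds
    return reference, reject

def classify(image, models):
    """Rejection model: pick the class with the maximal *relative* number of coincidences."""
    best_cls, best_score = None, -1.0
    for cls, (reference, reject) in models.items():
        keep = ~reject                       # pixels actually used for comparison
        matches = np.sum((image == reference) & keep)
        score = matches / max(np.sum(keep), 1)
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls, best_score

# Toy usage with random 24x32 binary images, 25 training images per class.
rng = np.random.default_rng(0)
models = {}
for cls in range(10):
    train = rng.random((25, 32, 24)) > 0.5
    models[cls] = learn_class(train, s_opt=20, s_min=17, s_max=22)
print(classify(rng.random((32, 24)) > 0.5, models))
```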
1,843.8
2014-08-01T00:00:00.000
[ "Computer Science" ]
Next Generation Air Quality Platform: Openness and Interoperability for the Internet of Things The widespread diffusion of sensors, mobile devices, social media and open data are reconfiguring the way data underpinning policy and science are being produced and consumed. This in turn is creating both opportunities and challenges for policy-making and science. There can be major benefits from the deployment of the IoT in smart cities and environmental monitoring, but to realize such benefits, and reduce potential risks, there is an urgent need to address current limitations, including the interoperability of sensors, data quality, security of access and new methods for spatio-temporal analysis. Within this context, the manuscript provides an overview of the AirSensEUR project, which establishes an affordable open software/hardware multi-sensor platform, which is nonetheless able to monitor air pollution at low concentration levels. AirSensEUR is described from the perspective of interoperable data management with emphasis on possible use case scenarios, where reliable and timely air quality data would be essential. Introduction The ways in which we create, manage and make use of data is fundamentally changing under the influence of several interdependent factors. For Earth sciences, this is similar to the revolution caused by the use of remote sensing data during the 1970s [1]. The number of devices, interconnected into the Internet of Things (IoT) is expected to reach 50 billion in 2020 [2]. Volunteers, also referred to as citizen scientists [3], empowered by inexpensive and readily available technology, are increasingly engaged in collecting and processing heterogeneous data, which has traditionally been collected by authoritative sources. In particular, in the field of air quality, many recent citizen science initiatives, such as [4][5][6], aim to complement and/or substitute official measurement networks in their attempt to monitor the quality of ambient air. The approaches that those projects adopt are different, but they all rely on inexpensive hardware and establish a community of volunteers who are engaged in collecting observation data. While those activities achieve very good results in raising visibility and engaging citizens on the importance of air quality, they are still not able to provide sufficient quality for observation data. That is why we consider that observation data collected by citizens, without a means to estimate the quality of observation data and/or compare to existing authoritative sources of information, should not be used as input to modeling and/or for decision making. At the same time, stations belonging to existing authoritative air quality networks are not dense; equipment is expensive; and the majority of available information technology solutions for data collection and exchange is vendor specific. It is thus difficult to combine observation data from different channels. Furthermore, interpolation techniques used in air quality modeling are usually country dependent, and not open enough, thus acting as "black boxes" with results that are very difficult, if at all possible, to evaluate and reproduce. For this reason, being able to provide good quality data with high granularity (e.g., at the street level) and mashing-up air quality data from heterogeneous sources is still challenging, particularly in urban and suburban areas [7]. 
Within this context, the Joint Research Centre of the European Commission (JRC) is working on the AirSensEUR project, which aims at the establishment of an affordable (under 1000 euro) open software/hardware multi-sensor platform, which is nonetheless able to monitor air pollution at low concentration levels. This manuscript describes the AirSensEUR platform from the perspective of spatial data infrastructures (SDI) and interoperable data management. We do not attempt to provide an exhaustive overview of the air quality-specific hardware configuration, as this is already done by Gerboles et al. [8,9]. The second section of the manuscript defines the theoretical foundation for the implementation of the platform, with particular emphasis on the research challenges for the establishment of open and transparent sensor networks interconnected through the means of the IoT. The third section provides an overview of the interoperable components of AirSensEUR, which we have intertwined to provide a single "plug-and-play" bundle capable of producing reliable observation data in different contexts. In Section 4, we describe several application scenarios for AirSensEUR, in particular for: (i) regulatory; and (ii) informative purposes. Finally, we conclude with the lessons learned, remaining challenges and the direction of our future work. Context Pervasive computing and citizen science provide completely new channels for environmental sensing. Official data can be complemented and even substituted through citizen-driven initiatives. This process is however not straightforward. The integration and fusion of data from different sources, which were acquired in different contexts with heterogeneous methods and tools, face serious interoperability issues on the technical, semantic, organization and legal levels [10]. Schade and Craglia [11] outline several challenges bounding the future development of the sensor web ( Figure 1). Whereas the concentric blue circles in the center of the figure illustrate the need to address different data aggregation levels (beginning with raw data in the very middle), an event-based architecture will be required to unite all datasets and streams, independent of their origin, this being sensor measurements, modeling results or people's observations. Three transversal challenges are cross-cutting through the figure: (i) the automation (optimized machine support) of underlying processes; (ii) the projection/re-application of general IT solutions (e.g., to address security and privacy issues); and (iii) data fusion/integration, including the propagation of uncertainties throughout the applied algorithms. Schade and Craglia provide a theoretical framework to address some of the key issues arising in the establishment of sensor networks. The authors apply the notion of the sensor web as an integrating concept addressing the common consideration of measurements and observations, independent of their distinct origin. In this way, outputs from authoritative networks (e.g., those of environmental protection agencies), scientific prediction and forecasting models (e.g., for the dispersion of air pollution, emission of pollutants into natural resources or the effects of climate change) and from citizens (see also the citizen-based systems section, below) can be seamlessly integrated. Although these multivariate sources can be integrated conceptually and examples of this application exist [12,13], a series of practical challenges still remain. A list is provided in Section 2.1. 
We consider that our work on AirSensEUR addresses together the majority of those challenges; thus, the lessons learned, and our proposed approach, if adopted, would lead to more open, transparent and interoperable sensor network infrastructures. Citizen-Based Systems Many citizen science projects, such as [4][5][6], take advantage of the rapidly developing field of mobile low cost sensors. They address data-related issues from different perspectives (e.g., smart cities, Internet of Things, digital single market, citizen science), and at different levels (local, national, international). There is also an emerging movement of projects initiated and developed by individuals or groups that do not have any affiliation with the scientific establishment. This do-it-yourself (DIY) movement has been paving the way for the next steps for citizen science. Anyone who is fascinated or curious about science now finds a lower threshold to enter expert realms, facing DIY options, tools and spaces to build anything from scientific instruments for environmental measurements and for genome sequencing to satellites and other machines or devices. Low cost sensors (for instance, CO 2 , light intensity, sound or humidity), several programming languages, open-source hardware prototyping platforms or microcontrollers (such as Arduino or Raspberry Pi) have become adaptable, modular and easy to use at the starter level. A wider ground for experimentation emerges when these solutions are coupled with access to digital tools (especially 3D printers) and hands-on activities in shared spaces. In addition, connection with on-line communities and access to web-based tutorials and documentation in repositories, such as Instructables or GitHub, facilitate the establishment of networks of support and collaboration with others with common interests and increase science literacy. Notwithstanding these positive developments, the use of low cost sensors by citizens still faces major challenges, which limit the establishment of scientifically-sound results. Those are described in [7] and include: • Difficult discovery of environmental sensor devices and networks, due to the lack of metadata and services that expose them; • Spatial/temporal mismatch of observations and measurements deriving data from unevenly-distributed monitoring stations that do not always form networks causing difficulties in data reuse for initially-unintended purposes; • Lack of interoperability between components (e.g., measurement devices, protocols for data collection and services) of acquisition and dissemination systems; • Information silos, created by the use of standalone vocabularies that are bound to particular environmental domains, such as hydrology and air quality; • Proprietary solutions for logging sensor measurements, which require custom code to be wrapped around the manufacturer's software development kit; • Accuracy of the pollution sensors, which, as described in [14], is the major fault in any environmental network of sensors due to their low sensitivity to ambient levels of air pollutants. International Standards To address these issues, we present in Section 3 the AirSensEUR open source platform. Its development leverages the increased convergence of international standards in the geographic and telecommunication domains (IEEE, Open Geospatial Consortium-OGC, International Telecomunication Union-ITU) [15] and the development of the European Spatial Data infrastructure (INSPIRE). 
The latter [16] is unlocking heterogeneous data, produced by public sector organizations in 28 European countries. Relevant work on sensors in INSPIRE covers both data encoding [17] and network services [18], together providing all necessary means for "plugging" spatio-temporal data into SDIs, thus enabling its use and reuse combined with other relevant resources [7]. AirSensEUR: An Interoperable Plug-and-Play Sensor Node In order to advance this research on citizen-based observation systems, and using the latest standards available, we developed an interoperable plug-and-play sensor node: AirSensEUR. It is designed as an open platform based on several pillars, which ensure that individual sensor nodes are capable of interoperating with heterogeneous sources of data. The high level objective, which determines the bounding conditions of AirSensEUR, is to design and build a platform that: (i) is capable under certain conditions of producing indicative observation data that meet the legal requirements of the EU Air Quality Directive [19]; and (ii) implements a download service, as required by the EU INSPIRE Directive [16]. The platform itself consists of a bundle of software and hardware ( Figure 2), which are configured to work together in a synchronized manner. The hardware (Subsystem A) consists of a sensor shield and host, further described in Section 3.1, while the software components being used are described in Section 3.2, both in terms of backend (Subsystem B) and client applications (Subsystem C). Further information about the platform is available online at [20]. Open Hardware In terms of hardware, the platform ( Figure 2) consists of a multi-sensor shield (A1), which is connected to a Linux-based host (A2). The individual components of AirSensEUR are shown in Figure 3 and described in further detail within Table 1. AirSensEUR documentation, together with computer-aided designs of boxing for 3D printing, is open by design, thus ensuring the ability to reproduce and reuse the results. All resources are made available at [20,21]. Currently, one shield with four amperometric sensors and an ancillary board with temperature, humidity and pressure sensors have been developed for AirSensEUR. The long-term objective is to interest the scientific community in validating and further developing additional shields for other pollutants (e.g., measuring particulate matter (PM)). Shields might be connected through one of several available communication (COM) ports of the platform. The AirSensEUR shield is a high precision four-channel three-electrode sensor board. It also includes a daughter board with temperature/humidity (UR100CD, Technosens-IT) and pressure (BMP180, Bosch-DE) sensors together with I2C level shifters to interface to the ATMega328 microcontroller managing the shield. Each sensor channel is composed of a fully-programmable analog front end (AFE, TI LMP91000, Texas Instruments, U.S.), a 16-bit analogue to digital (A/D) converter (TI ADC16S626) and a 12-bit digital to analogue (D/A) converter (AD5694RB). The D/A converter dynamically sets the range of the A/D converter in order to keep the converter resolution in the sensor output range, making AirSensEUR suitable to measure extreme low voltages (15-µV resolution on a range set to ±0.5 V), as needed with the sensitivity of the selected sensors. The ATMega328 controls the AFE of the sensor channels, A/D and D/A registers, daughter board for ancillary data. 
It then retrieves, filters and averages the responses of the seven sensors and concatenates all into a hexadecimal string. The ATMega328 receives a firmware developed in the Arduino framework and Integrated Development Environment (IDE) through a serial line on the shield. A USB board accommodated on the shield allows real-time data acquisition of AirSensEUR data for laboratory calibration. Additionally, a communication protocol and a Java control panel have been developed in order to easily configure the AFE of each channel (sensor voltage, D/A outputs in order to fix A/D conversion limits, gain of the signal, load resistance of each sensor (RL), bias, Infinite Impulse Response (IIR) filtering [22], data acquisition periodicity and averaging time) and read sensor responses. To the best knowledge of the authors, the AirSensEUR shield is among the boards with the widest control by the user of all sensor parameters, allowing maximum flexibility. The schematic representation of the chemical sensor board is given on the upper left corner of Figure 2. So far, tests have been conducted with four City Technology Sensoric sensors: O 3 3E1F, NO 2 3E50, NO 3E100 and CO 3E300 [23]. However, the shield can accommodate other two and three-electrode amperometric sensor brands and models, including: • the Sensoric model (diameter of 16 mm, mounted with a TO5 connector), • sensors with a 20-mm diameter: the 4 series of City Technology or SGX Sensortech [24], the "miniature" series of Membrapor [25] and the A sensor series of Alphasense [26]), • and sensors with a 32-mm diameter: e.g., the 7 series of City Technology or SGX Sensortech, the Membrapor "Compact" sensor series or the "B" sensor series of Alphasense. The sensor host (A2 in Figure 2) is based on the Arietta G25 (ACMESystem-IT) and consists of a low cost Linux embedded module CPU Atmel (400 MHz ARM9 TM processor) loaded with 256 MB of DDR RAM. It also accommodates other devices: a 32-GB SD card with pre-installed Linux, a GPS, a GPRS and a WiFi access point. Power supply comes from either a battery or through USB/power line. The power budget of AirSensEUR was estimated summing power requirements for each individual subsystem. For the shield with four sensors and the ancillary daughter board, 20 mA@5 V was measured; 70 mA@5 V is required by the ARM module of the Arietta, 20 mA by the GPS and 15 mA by the optional external active antenna. This aggregates to a steady total of 125 mA@5 V (0.625 VA). Introducing possible losses generated by switching power supplies, with efficiencies up to 80%, we expect a consumption of 0.780 VA. A 20-Ah, 3.3-V (64 Wh) single cell Lithium iron phosphate battery (LiFePO4) will be able to power up the system for more than 80 h. Measurements done when sending data through the GPRS dongle would however yield an average consumption of 300 mA@5 V (1.5 VA). With an estimated session time of 30 min and introducing losses caused by the switching power supply, this generates an estimated 1 Wh for each data session. Planning four updates a day requires 4/5 Wh, thus reducing the expected overall running time to about 60 h depending on external conditions, mainly due to (i) temperature and (ii) battery life. Open Source Software We used open source software in order to take advantage of the rapid development cycle and outreach to existing communities. The server side component of the platform is by design based on OSGEO-Live-the free and open source bundle of the Open Source Geospatial Foundation [27]. 
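As a quick sanity check of the converter resolution and power budget just described, the following hedged sketch reproduces the arithmetic (the figures are those quoted above; the efficiency and session-energy values are the simplifying assumptions stated in the text):

```python
# Quick arithmetic check of the shield resolution and power budget quoted above.
adc_lsb_uV = 1.0 / 2**16 * 1e6            # 16-bit A/D over a 1 V (+/-0.5 V) range
print(f"A/D resolution: {adc_lsb_uV:.1f} uV")        # ~15.3 uV, as quoted for the shield

shield_mA, arm_mA, gps_mA, antenna_mA = 20, 70, 20, 15
supply_V = 5.0
steady_VA = (shield_mA + arm_mA + gps_mA + antenna_mA) / 1000 * supply_V
drawn_VA = steady_VA / 0.80               # assumed 80% switching-supply efficiency
print(f"steady draw: {steady_VA:.3f} VA, with losses: {drawn_VA:.3f} VA")   # 0.625 / ~0.78 VA

battery_Wh = 64.0                         # 20 Ah LiFePO4 cell, as quoted
print(f"idle runtime: {battery_Wh / drawn_VA:.0f} h")                       # ~82 h (> 80 h)

daily_Wh = drawn_VA * 24 + 4 * 1.0        # four GPRS sessions/day at ~1 Wh each
print(f"runtime with uploads: {battery_Wh / (daily_Wh / 24):.0f} h")        # ~68 h; the text quotes ~60 h
                                          # once external conditions are factored in
```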
This provides many opportunities, as data can be further used within both web and desktop Geographic Information System (GIS) clients. Furthermore, through using OSGEO-Live as the software environment for handling data from AirSensEUR, we ensure that the open source projects that we use are supported by a healthy community and meet baseline quality criteria [28]. The components that are chained together in AirSensEUR are listed in Table 2; they include, among others, a JavaScript SOS client with functionality to process and analyze air quality data with R [29], and R itself for visualization and post-processing of data (e.g., for calibration or further statistical analysis). The orchestration of individual open source tools is described in the subsections below. For clarity, the overview is split into: (i) sensor host; (ii) server components; and (iii) clients. Sensor Host A set of Java programs retrieves data from the shield and the GPS. Together with the timestamp, these data are added to a local sqlite3 database (A2 in Figure 2), stored on the SD card of the Arietta. Finally, the data of the local database are pushed via GSM/GPRS to an external server through standardized JSON requests, based on a transactional sensor observation service (SOS-T). This functionality for web transactions is provided by the JSON binding of the 52°North SOS implementation, described by [30]. The use of SOS-T as the means for the migration of data from the sensor host to the server provides us with several significant advantages over direct web access to the AirSensEUR database. Those include (i) a high level of security (the sensor host does not provide credentials for access to the database, and InsertObservation requests are limited to a predefined number of IP addresses), as well as (ii) independence from the database schema. Furthermore, the JSON syntax of the request is minimalistic in terms of size and is therefore well suited for the transmission of big volumes of observation data. A sample InsertObservation request is provided in Figure 4. Server Components An SOS exposes observation data in an interoperable manner, so that it can be retrieved and directly re-used by standard clients without any need to adopt an access protocol or data structures on the consumer side. Such functionality is, for example, fundamental in order to integrate citizens' observations with institutional measures on-the-fly. Through SOS, the platform implements by design an INSPIRE download service and "plugs" data into spatial data infrastructures (SDI), established as a result of the implementation of the European INSPIRE Directive [16]. This is possible because the SOS implementation that is used within the platform is already extended as an INSPIRE download service [18]. This provides numerous opportunities for combined use of data, e.g., for analysis of air quality together with the distribution of population or species, thus trying to understand the effect of pollution on human well-being or species. Clients The SOS-based web service allows direct interaction with the observation data through standard (POST, GET) requests. That is why the only precondition for interaction with the AirSensEUR server is a web browser and some basic knowledge of the SOS interface standard. Observations in SOS can also be consumed by an increasing number of desktop (e.g., QuantumGIS) and web (e.g., OpenLayers, 52°North SensorWeb client, ESRI ArcGIS for Server, RStudio server) clients, which makes the retrieval of data even easier.
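As an illustration of this "standard GET request plus basic SOS knowledge" interaction, here is a minimal hedged sketch of pulling observations with an SOS 2.0 KVP GetObservation request (the endpoint URL, offering and property identifiers are placeholders, not the project's real ones; consult the OGC SOS 2.0 KVP binding and the 52°North documentation for the exact parameter names and response formats):

```python
import requests

# Illustrative pull of observations from the SOS via the standard KVP (GET) binding.
SOS_ENDPOINT = "http://example.org/52n-sos/service"   # placeholder endpoint

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "airsenseur-node-1",                   # illustrative offering identifier
    "observedProperty": "NO2",                         # illustrative observed property
    "temporalFilter": "om:phenomenonTime,2016-01-01T00:00:00Z/2016-01-02T00:00:00Z",
}

resp = requests.get(SOS_ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
print(resp.status_code)
print(resp.text[:500])   # the service answers with an O&M-encoded set of observations
```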
The 52°North SensorWeb client (Figure 5) is the main means for communication of observation data from AirSensEUR, as it provides an easy-to-use environment, which is also mobile friendly. Furthermore, data from AirSensEUR can be pulled directly from the console environment of the "R" statistical package (Figure 6) through the sensorweb4R library [31]. This provides numerous opportunities for additional processing (e.g., calibration) and visualization. Use Cases The AirSensEUR platform, as just detailed above, is designed to enable a rich portfolio of possible applications. In this section, we illustrate the potential of our solution by providing application examples where reliable and timely air quality data are absolutely essential. Within this context, we distinguish two types of applications, related to (i) the monitoring of air pollution for regulatory purposes; and (ii) other applications for informative purposes. Monitoring for Regulatory Purposes In Europe, the mandatory monitoring of air pollution is managed by the European Directive for Air Quality [19]. This Directive, which does not consider mobile monitoring, but only fixed measurements, sets different categories of measurement methods according to the data quality objectives (DQOs) they can meet. The DQOs set maximum levels of measurement uncertainty that each method shall meet at the limit values, defined for each pollutant based on health effects. The Directive establishes a framework of methods for air pollution monitoring for regulatory purposes as presented here: • reference methods that can be applied everywhere and for all purposes, with a maximum measurement uncertainty of 15% for O 3 , NO 2 , NO x and CO; • indicative methods that can be applied in areas where a defined level, the upper assessment threshold (UAT), is not exceeded; they also permit a reduction of 50% of the minimum number of reference measurements where the UAT is exceeded, thus allowing one to diminish the cost of monitoring by reducing the mandatory number of reference methods. Indicative methods are associated with a DQO of 25% for NO 2 , NO x and CO, and 30% for O 3 ; • objective estimation that can only be implemented in an area of low levels of air pollution, with a DQO of 75% for O 3 , NO 2 , NO x and CO. Recently, several evaluations of sensor performance were performed, including both laboratory and field experiments [32][33][34][35]. According to these results, low cost sensors are not able to meet the DQOs of hourly reference measurements set in the Air Quality Directive. Conversely, these evaluations suggest that some sensors could reach the DQOs for indicative measurements. We expect that AirSensEUR can meet the DQOs of indicative measurements for some compounds. These DQOs are roughly twice as lenient as those of reference measurements for O 3 , NO 2 , CO and SO 2 . A first protocol for the evaluation of sensors for indicative measurements has been developed [36]. It is currently used by the European Committee for Standardization-CEN (Technical Committee 264 on air quality-Working Group 42 on sensors), which is currently drafting such a protocol. Fixed Measurements Views are currently evolving towards the position that the presented legislative framework is not completely fit for the use of low cost sensors. In particular, a new method category of "informative methods" not linked with DQOs would be beneficial in order to allow for simpler and faster evaluations.
The aim would be to base these evaluations only on field tests by comparing co-located sensors with reference methods. Recently, the South Coast Air Quality Management District of California (USA) released a number of these comparisons [37] using the coefficient of determination (R 2 ) as the main indicator of the quality of sensor values. Spinelle et. al. [38] proposed to use a target diagram [39] to easily compare the performances of sensor measurements. That is why the usefulness of fixed informative measurements with lower DQO, prescribed by the European Directive, remains an open question. Nevertheless, low cost sensors carry a number of advantages compared to reference measurements. Sensors, including AirSensEUR, are less expensive than reference methods, allowing them to be deployed in dense networks and to provide detailed information with larger spatial coverage than the one of traditional monitoring stations. For example, within the RESCATAME (Pervasive Air-quality Sensors Network for an Environmental Friendly Urban Traffic Management) project [40], sensors were installed at 35 points on two busy streets of Salamanca (Spain), and each point was equipped with seven sensors: CO, NOx, O 3 , fine particles (PM), noise, humidity and temperature. The sensors were used to simultaneously assess air pollution and to monitor traffic. Based on this information, prediction models estimated the level that air pollution could reach in the next one and three hours. This allowed the traffic department to foresee high pollution episodes and act accordingly. High air pollution estimates triggered changes in the timing of traffic lights, temporary blocking of a lane or regulations imposed by local police officers. Other projects with fixed low cost air quality sensors aim at increasing the spatial and temporal scale of information in highly granular environments, i.e., in areas that are spatially heterogeneous with variable emission sources. For example, within the SNAQ (Sensor Networks for Air Quality Heathrow) project [41], a network of 50 sensors was installed around Heathrow airport. Emissions inventories and dispersion model results were improved using the sensor data. In addition, source apportionment was studied around the airport through the use of sensor data. Other applications of sensors at fixed sites include monitoring in remote areas where power supply is not readily available because of their limited needs in electricity and the absence of required routine maintenance. The assessment of concentration gradients or alerts and industrial fence line monitoring within industrial areas where high pollution levels are expected has been a typical area of application for low cost sensors for decades. Mobile Measurements, Outdoor/Indoor Environments and Citizen Observatories Exposure to air pollution and the associated health risks are tightly related to the spatial and temporal occurrence of individual activities. There is an increasing body of knowledge that evaluates the human exposure to air pollution [42]. Significant variations are identified in the exposure, even between individuals from the same household [43]. Still, the integration of the spatio-temporal dynamics of pollution together with the spatio-temporal trajectories of individuals into a suitable analytical framework is challenging [44]. 
Within this context, a major advantage of low cost sensors, such as AirSensEUR, is their portability, which together with their limited needs of power supply allows a number of mobile applications generally aimed at monitoring direct population exposure to air pollution. This is a unique feature of sensors that is generally not possible to achieve with reference methods. The EU FP7 project Citi-Sense is developing a sensor-based Citizens' Observatory Community to improve the quality of life in cities [45]. In this project, citizens are proposed to contribute to and participate in environmental governance by using novel technologies as sensors. A number of Citizen's Observatory Projects of this type have been implemented in which mobile monitoring is carried out by citizens, for example Common Sense, the forerunner of this type of project [46,47], and Citi-Sense [48,49]. An exhaustive review of these types of projects can be found in [50]. It is worth mentioning Citi-Sense-Mob [51], which aims at using sensors mounted on buses and bikes combined with models and monitoring stations to produce personalized data as alerts and exposure through web and smartphones applications. The OpenSense project [52] also used mobile monitoring on buses to produce high spatially-resolved maps of pollution distribution. In this type of project, mobile monitoring is restricted to outdoor measurements. This is an important aspect for the data quality of measurements with sensors. It is more difficult to control the data quality of measurements that are carried out moving very fast from outdoor to indoor environments, which is typical when sensors are worn. In fact, sensors are generally strongly affected by the rapid change of air composition, temperature and humidity, which are typical for mobile applications going from outside to indoors. That is why the speed effect, associated with the movement of sensors, must be considered throughout the whole life-cycle (design, deployment and analysis) of a measurement campaign [53]. AirSensEUR has been designed so that it can be used in all of the applications presented above. The interoperability of data and the power supply by both a 220-V socket and long-autonomy battery allow for fixed and mobile measurements for regulatory, informative or citizen observatory projects. Moreover, a specialized algorithm using NMEA (National Marine Electronics Association) traces to determine outdoor and indoor environments is also in development. It could be used for: (i) health-related information on population exposure; and (ii) for the application of new ranges of calibration functions. Strategies to Ensure Data Quality of AirSensEUR The major limitation of the diffusion of low cost sensors in the last few years has been the questionable quality of observation data. Once the design of the AirSensEUR prototype has reached a satisfactory state, we will be working on a procedure for calibration. The list of parameters that affect the electrochemical sensor responses is now well known and includes: cross sensitivities to gaseous interfering compounds, long-term drift, temperature and humidity effects. 
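As a concrete illustration of how such effects can be corrected, the following is a minimal hedged sketch of one calibration route, a multi-linear field calibration against co-located reference measurements with temperature and humidity as covariates (the data and coefficients are synthetic placeholders, not AirSensEUR results, and the model choice is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic co-location campaign: the raw sensor signal depends on the true
# concentration plus temperature and humidity effects (placeholder numbers).
true_no2 = rng.uniform(5, 60, n)     # reference analyser, ppb
temp = rng.uniform(5, 35, n)         # degrees C
rh = rng.uniform(30, 90, n)          # percent relative humidity
raw = 0.02 * true_no2 + 0.004 * temp - 0.001 * rh + 0.5 + rng.normal(0, 0.01, n)

# Multi-linear calibration: regress the reference value on raw signal, T and RH.
X = np.column_stack([np.ones(n), raw, temp, rh])
coef, *_ = np.linalg.lstsq(X, true_no2, rcond=None)
predicted = X @ coef

rmse = np.sqrt(np.mean((predicted - true_no2) ** 2))
print("calibration coefficients:", np.round(coef, 3))
print("RMSE on the training campaign: %.2f ppb" % rmse)
```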
We have already foreseen the different possible routes towards an effective procedure for calibration: • establishing a deterministic model based on laboratory and field experiments based on a strict protocol of the sensor test [32]; • as the AirSensEUR includes seven sensors, cross sensitivities may be solved in a multivariate system of equations; • design of an active sampling system on top of the sensors to easily control the humidity of the air beam and to filter the gaseous interfering compounds; • calibration at the field monitoring station using co-located pair of reference and sensor data. The types of calibration methods can include linear, multi-linear equations, sensor cluster coupled with artificial neural network (ANN), etc. A good comparison of these techniques is given in [38]. ANN was found to be the most effective technique though requiring additional metal oxide (MOx) sensors not yet present on the AirSensEUR shield; • in the case of mobile sampling, a few algorithms have been developed for the re-calibration of mobile sensors versus reference measurements or recalibration of sensors versus freshly-calibrated sensors in a mobile environment [52,54]; • future development of calibration facilities (including zero and span) directly on the sensor platform can be imagined. This solution, likely expensive, may only be adopted in association with the active sampling system a few points above. Both of them would use the same pneumatic system. While designing zero air using selective chemical filters seems possible, for example thriethanolamine (TEA) for NO 2 , 1,2-di(4-pyridil)-ethylene (DPE) or indigo for O 3 , the development of a span gas generator appears quite challenging. Discussion and Conclusions Following initial testing of the AirSensEUR platform of approximately 2.5 months, including the collection from one shield of 4.5 million observations (seven observed properties, collected every 10 s), we consider that AirSensEUR is easy to configure and expect it to be sensitive enough to measure ambient air pollution in the range expected at background and traffic sites placed in rural, urban and suburban areas. The authors would like to point out that the manuscript presents a platform, rather than the performances of sensors. In effect, the list of sensors that can be mounted on AirSensEUR (approximately 230 sensors, as described in [9]) is too long to be tested. An example of an application with the CityTech sensors, O 3 3E1F, NO 2 3E50, NO 3E100 and CO 3E300 is given in [8,9]. The references show by calculation and experiments that the combination of sensors with the AirSensEUR platform allows one to reach an electronic resolution of 15.3 µV for individual measurements [9]. This resolution corresponds to detection limits of respectively 3.2 ppb·min, 16.7 ppb·min, 74.9 ppb·min and 0.056 ppm·min for the cited CityTech sensors. Lower limits of detection would be obtained with sensors that are more sensitive. The sensitivity of CityTech sensors allows monitoring O 3 , NO 2 and CO when averaged over one hour as required by the European Directive for Air Quality [19]. The platform provides a promising technological approach for the monitoring of population exposure in mobile context. 
What makes AirSensEUR different from other similar solutions is: • "Plug-and-play" architecture, which is transparent, allows configuration of each individual component and can be adapted to different mobile and in situ use cases; Technical capability for the implementation of on-the-fly calibration through the possibility to push data directly from each sensor node to the "R" statistical package, where calibration curves and other post-processing can be done. We will focus our future work on implementing the use case scenarios, as described in Section 4. Particular emphasis would be put on learning and documenting the experiences from the implementation of the use cases, which might lead to an improvement of the individual components and their interdependencies. In terms of hardware and software, AirSensEUR has been developed to prioritize modularity and fast development time, thus trading power efficiency and component costs with design change requirements. By focusing on aspects of the system that have been already consolidated, a set of improvements can be implemented for the hardware and software infrastructure. For example, on the software side, especially for applications running on the host, translating parts of the currently-existing Java-based code to plain C or C++ will significantly reduce the computational costs on the main CPU and, consequently, the overall power consumption. The AirSensEUR hardware can benefit from component improvements, like, for example, in the A/D conversion area, thus reducing the number of onboard generated voltage references and power supply, or by introducing powerful microcontrollers, thus increasing the complexity of onboard filtering algorithms for better analysis performances. New sensors will be connected through the available communication peripherals or via modifications of the existing data protocol that would allow for several shields to be chained together. Modularity can also be improved via low cost specialized shields able to accommodate a single sensor. The open nature of AirSensEUR will benefit from new communication technologies and standards, especially targeted to IoT, which could reduce the total system power consumption and operational costs and increase data accessibility. Last, but not least, the release of the code in the open source community and the sharing of experiences in the use of the platform will harness the creativity of the community and lead to collective improvements. Author Contributions: Alexander contributed to all chapters of the paper and worked extensively on the literature review, particularly on issues related to data management and applicable standards. He participated in scoping the architecture of AirSensEUR, implemented it with existing software and provided use case examples. Sven contributed to the overall storyline, positioning of the specific work on the air quality sensors into the wider research context, as well as the consolidation of the conclusions and shaping of future work. Max worked on scoping the architecture, defining the theoretical framework, as well as on the identification of relevant use cases for the implementation of AirSensEUR. 
The original idea and design of AirSensEUR comes from Michel and Laurent, who contributed to the sections on the architecture of AirSensEUR, the European legislative framework of air quality measurements and the state of the art of low cost sensors, the data quality of measurements of the platform and future development of calibration strategies to ensure data quality. Marco has been collaborating since the very beginning of the project with Michel and Laurent to design and develop the AirSensEUR electronics, related firmware and Java applications for data processing and exchange. He is involved in managing the prototype replicas and product engineering. Conflicts of Interest: The authors declare no conflict of interest.
8,066.8
2016-03-01T00:00:00.000
[ "Computer Science", "Environmental Science" ]
Establishing Causal Claims in Medicine ABSTRACT Russo and Williamson [2007. "Interpreting Causality in the Health Sciences." International Studies in the Philosophy of Science 21: 157–170] put forward the following thesis: in order to establish a causal claim in medicine, one normally needs to establish both that the putative cause and putative effect are appropriately correlated and that there is some underlying mechanism that can account for this correlation. I argue that, although the Russo–Williamson thesis conflicts with the tenets of present-day evidence-based medicine (EBM), it offers a better causal epistemology than that provided by present-day EBM because it better explains two key aspects of causal discovery. First, the thesis better explains the role of clinical studies in establishing causal claims. Second, it yields a better account of extrapolation. 1. An Epistemological Thesis Russo and Williamson (2007, §§1-4) put forward an epistemological thesis that can be phrased as follows: In order to establish a causal claim in medicine one normally needs to establish two things: first, that the putative cause and effect are appropriately correlated; second, that there is some mechanism which explains instances of the putative effect in terms of the putative cause and which can account for this correlation. This epistemological thesis, which has become known in the literature as the Russo-Williamson thesis or RWT, has generated some controversy: see, e.g. Weber (2007, 2009), Broadbent (2011), Campaner (2011), Clarke (2011), Darby and Williamson (2011), Gillies (2011), Illari (2011), Howick (2011a, 2011b), Williamson (2011a, 2011b), Campaner and Galavotti (2012), Claveau (2012), Dragulinescu (2012), Clarke et al. (2013, 2014) and Fiorentino and Dammann (2015). The aim of this section is to explain what the thesis says, why it is true, and why it is controversial. In section 2, I argue that an approach to medical methodology based on RWT fares better than present-day evidence-based medicine (EBM) in explaining three basic facts about how clinical studies (CSs) can be used to establish causal claims in medicine. In section 3, I argue that RWT motivates a better account of extrapolation inferences too. What the Thesis Says First, let us clarify what the thesis says. This is important because RWT has occasionally been misinterpreted, particularly with respect to the following point. RWT requires establishing the existence of a correlation and the existence of a mechanism, not the extent of the correlation, nor the details of the mechanism. In some cases, of course, establishing the extent of a correlation is a means to establishing its existence, and establishing the details of a mechanism is a means to establishing its existence, but these means are not the only means. We shall return to this point in section 2. The second general point to make is that RWT is a purely epistemological thesis, concerning the establishing of causal relationships. Russo and Williamson (2007) used the thesis to argue for a particular metaphysical account of causality, namely the epistemic theory of causality, but RWT itself does not say anything directly about the nature of causality. The thesis is intended to be both descriptive and normative: i.e. as capturing typical past cases of establishing causality in medicine (e.g. Clarke 2011; Gillies 2011), as well as characterising the logic of establishing causality. Let us now clarify some of the terms that occur within the statement of the thesis.
Medicine
Here 'medicine' is to be construed broadly to include the health sciences as well as practical medicine. Causal claims of interest to medicine include claims about the effectiveness of drugs, medical devices and public health interventions, and claims about harms induced by such interventions or by pathogens or environmental exposures, for example. Henceforth, we will primarily be interested in generic claims (repeatably instantiatable or 'type-level' claims, such as the claim that taking aspirin relieves headache), but RWT may be taken to apply also to single-case claims ('token-level' claims, such as the claim that Bob's taking aspirin this morning relieved his headache).

Mechanism
In the statement of RWT above, 'mechanism' can be understood broadly as referring to a complex-systems mechanism, a mechanistic process, or some combination of the two. A complex-systems mechanism consists of entities and activities organised in such a way that they are responsible for some phenomenon to be explained (Machamer, Darden, and Craver 2000; Illari and Williamson 2012). An example is the mechanism by which the heart pumps blood. A mechanistic process is a spatio-temporally contiguous process along which a signal is propagated (Reichenbach 1956; Salmon 1998). An example is an artificial pacemaker's electrical signal being transmitted along a lead from the pacemaker itself to the appropriate part of the heart. A mechanism might also be composed of both these sorts of mechanisms: for example, the complex-systems mechanism of the artificial pacemaker, the complex-systems mechanism by which the heart pumps the blood and the mechanistic process linking the two. Note that a mechanism cannot in general be thought of simply as a causal network. A causal network can be represented by a directed graph whose nodes represent events or variables and where there is an arrow from one node to another if the former is a direct cause of the latter. On the other hand, a mechanism is typically represented by a richer diagram, such as is frequently found in textbooks and research articles in medicine. Figure 1, for instance, exemplifies the fact that organisation tends to play a crucial explanatory role in a mechanism. Organisation includes both spatio-temporal structure and the hierarchical structure of the different levels of the mechanism. 1 Note also that high-quality evidence of mechanism can be obtained by a wide variety of means. Table 1 provides some examples.

Establishing
A causal claim is 'established' just when standards are met for treating the claim itself as evidence, to be used to help evaluate further claims. This requires not only high confidence in the truth of the claim itself but also high confidence in its stability, i.e. that further evidence will not call the claim into question.

Table 1. Examples of sources of evidence of mechanisms in medicine (Clarke et al. 2014).
- Direct manipulation: e.g. in vitro experiments
- Direct observation: e.g. biomedical imaging, autopsy
- Clinical studies: e.g. RCTs, cohort studies, case control studies, case series
- Confirmed theory: e.g. established immunological theory
- Analogy: e.g. animal experiments
- Simulation: e.g. agent-based models

That establishing a proposition gives rise to evidence tells us something about establishing, but leaves open the question of what constitutes evidence.
Evidence has variously been analysed as one's knowledge, or one's full beliefs, or those of one's degrees of belief which are set by observation, or one's information, or what one rationally grants (Williamson 2015). We need not settle the question of what constitutes evidence here. It is worth noting, though, that on some of these accounts evidence must be true, while others admit the possibility that some items of evidence are false. This has consequences for whether establishing is factive. For example, if, as argued by Williamson (2015), one's evidence consists of the propositions that one rationally grants, then establishing a claim does not guarantee its truth, because not everything that one rationally grants need be true. That establishing is not factive is suggested by apparently true assertions such as, 'Certain researchers had established that stress is the principal cause of stomach ulcers, but further investigations showed that it is not'. (One cannot substitute 'knew' for 'had established' in this sentence, because knowledge implies truth; one would need 'thought they knew' instead.) Whether or not establishing is factive, it requires meeting a high epistemological standard. In particular, establishing a causal claim should be distinguished from acting in accord with a causal claim as a precautionary measure: in certain cases in which a proposed health action has a relatively low cost, or failing to treat has a high cost, it may be appropriate to initiate the action even when its effectiveness has not been established, so that benefits can be reaped in case it turns out to be effective.

Correlation
The epistemological thesis says that one needs to establish that the putative cause and effect are 'appropriately correlated'. Here 'appropriately correlated' just means probabilistically dependent conditional on potential confounders, where the probability distribution in question is relative to a specified population or reference class of individuals. 2 Thus, if A is the putative cause variable, B the putative effect variable and C is the set of potential confounder variables, one needs to establish that A and B are probabilistically dependent conditional on C, often written A ⊥̸⊥ B | C. A confounder is a variable correlated with both A and B, e.g. a common cause of A and B (Figure 2). The dependence needs to be established conditional on confounders because otherwise an observed correlation between A and B might be attributable to their correlation with C, rather than attributable to A being a cause of B. The set of potential confounders should include any variable that plausibly might be a confounder, given the available evidence of the area in question. Establishing correlation is non-trivial for two reasons. First, because it requires establishing a probabilistic dependence in the data-generating distribution, rather than simply in the distribution of a sample of observed outcomes. The method of sampling and size of sample can conspire to render an observed sample correlation a poor estimate of a correlation in the population at large. Second, establishing correlation requires considering all potential confounders, and there can be very many of these.
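Since dependence conditional on confounders is doing the work here, a minimal simulation may help fix ideas. The sketch below is purely illustrative and not from the paper: the variables, the probabilities and the simple risk-difference measure are all my own assumptions. A confounder C drives both A and B, so A and B are correlated in the sample, yet the dependence vanishes once we condition on C.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder C (say, an age group), putative cause A and putative effect B.
# A and B each depend on C, but not on one another.
C = rng.binomial(1, 0.5, n)
A = rng.binomial(1, np.where(C == 1, 0.8, 0.2))
B = rng.binomial(1, np.where(C == 1, 0.7, 0.1))

def risk_difference(a, b):
    """Crude measure of dependence: P(B=1 | A=1) - P(B=1 | A=0)."""
    return b[a == 1].mean() - b[a == 0].mean()

print("unadjusted:", round(risk_difference(A, B), 3))  # clearly non-zero
for c in (0, 1):
    sel = C == c
    print(f"conditional on C={c}:", round(risk_difference(A[sel], B[sel]), 3))  # approximately zero
```

The unadjusted association between A and B here is an observed correlation that disappears once the confounder is conditioned on, which is exactly the kind of case the 'appropriately correlated' clause is designed to exclude.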
To be clear, we shall use 'observed correlation' to refer to a correlation found in the data, 'genuine correlation' to refer to a correlation in the population from which the data are drawn, and 'established correlation' to refer to a claimed genuine correlation that has met the standards required for being considered established. If establishing is fallible, that a correlation is established does not guarantee that there is a genuine correlation, though it makes it very likely. Moreover, to establish a correlation between A and B, it is not necessary that every relevant dataset yields an observed correlation between A and B, although some observed correlation would typically be required. Qualifications RWT says that one 'normally' needs to establish both correlation and mechanism. This is because there are certain cases in which causality is apparently not accompanied by a correlation and there are also cases in which causality is apparently not accompanied by an underlying mechanism. If this is so, one cannot expect to establish both correlation and mechanism in these cases. In cases of overdetermination, where the cause does not raise the probability of the effect because the effect will happen anyway, there is no actual correlation between the cause and the effect. In many such cases, one can expect a counterfactual correlation: if things had been different in such a way that the effect would not have happened anyway-e.g. had a second, overdetermining cause been eliminated-then the cause and effect would indeed be correlated. One might think, then, that one ought to be able to establish a counterfactual correlation for any causal claim, if not an actual correlation. However, there are cases in which the cause of interest and a second, overdetermining cause are mutually exclusive, so that it is not possible both to eliminate the second cause and allow the first cause to vary so as to establish a correlation. For example, an unstable atom may decay to one of two mutually exclusive intermediary states, B and B ′ , on the way to a ground state C; attaining either one of the intermediary states causes the particle to reach the ground state, even though there may well be no correlation, P(C | B) = P(C | B ′ ) = P(C); here one cannot eliminate B ′ and vary B (see Williamson 2009, §10). Therefore, even the demand for a counterfactual correlation may be too strong. Let us turn next to causality without mechanisms. Where the cause and/or the effect is an absence, it cannot be connected by an actual mechanism. In many such cases, one can expect a counterfactual mechanism. Suppose cause and effect are both absences: e.g. failing to treat causes a lack of a heartbeat. If things had been different in such a way that what was absent in the cause were present (e.g. the treatment is administered), then one would expect a mechanism from this presence to a presence corresponding to the effect (e.g. a heartbeat). One might think, then, that one ought to be able to establish the existence of a counterfactual mechanism for any causal claim, if not an actual mechanism. However, there are cases where one of the cause and effect is an absence and the other is a presence, and this strategy does not work. For example, suppose that failing to treat causes a blood clot. That the cause is an absence precludes a mechanism here, but the effect being an absence precludes a mechanism in the obverse case, namely, administering the treatment causes an absence of a blood clot. 
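For a concrete version of the atomic decay example above, with invented numbers (they are not from the original): suppose the atom decays via B or via B′ with equal probability, and either intermediary state leads to the ground state C with certainty. Then

```latex
P(C) = P(C \mid B)\,P(B) + P(C \mid B')\,P(B')
     = 1 \cdot \tfrac{1}{2} + 1 \cdot \tfrac{1}{2} = 1,
\qquad
P(C \mid B) = P(C \mid B') = P(C) = 1,
```

so attaining B genuinely causes the decay to C yet makes no probabilistic difference to it, and since B and B′ are mutually exclusive there is no way to eliminate B′ and let B vary so as to recover even a counterfactual correlation.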
Now, establishing causality in these cases is not particularly problematic in practice. However, it is more subtle than simply establishing both correlation and mechanism, even where counterfactual correlations or mechanisms are admitted. The question as to how RWT needs to be modified to say something useful in such cases will not be considered here, because it is not central to the following arguments. The use of 'normally' is intended to leave open the possibility that in certain cases of overdetermination or causation between absences one might not need to establish both correlation and mechanism.

Why the Thesis is True
Having clarified the statement of the epistemological thesis RWT, let us turn to its motivation. To see why one ought to establish causality this way, consider that an observed correlation between two variables might be explained in a wide variety of ways, as depicted in Table 2.

Table 2. Possible explanations of an observed correlation between A and B.
- Causation: A is a cause of B
- Reverse causation: B is a cause of A
- Confounding (selection bias): there is some confounder C that has not been adequately controlled for by the study
- Performance bias: those in the A-group are identified and treated differently to those in the ¬A-group
- Detection bias: B is measured differently in the A-group in comparison to the ¬A-group
- Chance: sheer coincidence, attributable to too small a sample
- Fishing: measuring so many outcomes that there is likely to be a chance correlation between A and some such B
- Temporal trends: A and B both increase over time for independent reasons, e.g. prevalence of coeliac disease and spread of HIV
- Semantic relationships: overlapping meaning, e.g. phthisis, consumption, scrofula (all of which refer to tuberculosis)
- Constitutive relationships: one variable is a part or component of the other
- Logical relationships: measurable variables A and B are logically complex and logically overlapping, e.g. A is C ∧ D and B is D ∨ E
- Physical laws: e.g. conservation of total energy can induce a correlation between two energy measurements
- Mathematical relationships: e.g. mean and variance variables from the same distribution will often be correlated

Some of these explanations provide reason to doubt that there is a genuine correlation in the underlying population. For example, one of the potential confounders might not have been adequately controlled for, or the sample may be rather small. On the other hand, some of these explanations provide reason to doubt that A is a cause of B, even where there is a genuine correlation between these variables. For example, there might be some variable that could not possibly be considered a potential confounder, given the evidence available, but nevertheless is a confounder, and has not been adequately controlled for. In such a case A and B can be genuinely correlated yet A may not be a cause of B: the correlation is attributable to a common cause. Or there may be a genuine correlation that is entirely non-causal, explained by a semantic relationship, for instance. Thus there are two forms of error: error when inferring correlation in the data-generating distribution from an observed correlation, and error when inferring that A is a cause of B from an established correlation. Evidence of mechanisms can help to eliminate both forms of error.
For instance, it can help to determine the direction of causation, which variables are potential confounders, whether a treatment regime is likely to lead to performance bias, and whether measured variables are likely to exhibit temporal trends. 4 The existence of the second kind of error-error when inferring that A is a cause of B from an established correlation-shows that it is not enough to simply establish correlation. If it is indeed the case that A is a cause of B, then there is some combination of mechanisms that explains instances of B by invoking instances of A and which can account for the correlation. Hence, in order to establish efficacy one needs to establish mechanism as well as correlation. 5 This is enough to motivate RWT. Let us consider an example. The International Agency for Research on Cancer (IARC) Monographs evaluate the carcinogenicity of various substances and environmental exposures. When evaluating whether mobile phone use is a cause of cancer, IARC found that the largest study (the INTERPHONE study) showed a correlation between the highest levels of call time and certain cancers. This correlation was confirmed by another large study from Sweden. However, evidence of mechanisms was judged to be weak overall, and certainly failed to establish the existence of an underlying mechanism. For this reason, chance or bias was considered to be the most likely explanation of the observed correlations, and while causality was not ruled out, neither was it established (IARC 2013, § §5-6). Further discussion of the descriptive and normative adequacy of RWT can be found in the references provided at the start of this section. We will not revisit these arguments here. Instead, I shall argue here that RWT provides a better account of the epistemology of causality than a rival approach, namely the approach of present-day EBM. Let us now consider this rival approach. Why the Thesis is Controversial One reason why the epistemological thesis RWT is controversial is that it conflicts with the current practice of EBM. EBM is concerned with making the evaluation of evidence explicit: Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. (Sackett et al. 1996) Of course, this goal is hardly controversial. What characterises present-day EBM is not the goal itself but the means by which it attempts to achieve this goal. EBM employs hierarchies of evidence in order to evaluate evidence and these hierarchies of evidence tend to favour clinical studies and statistical analyses of these studies over other forms of evidence. Clinical studies (CSs) measure the putative cause and effect, together with potential confounders. CSs include controlled experiments such as randomised controlled trials (RCTs) as well as observational studies such as cohort studies, case control studies, case series and collections of case reports. Non-CS evidence of mechanisms, i.e. evidence of mechanisms obtained by means other than clinical studies, tends to be either ignored or relegated to the bottom of the hierarchy. For example, Figure 3 depicts an evidence hierarchy of SUNY (2004), used for EBM training. This places animal research and in vitro research, which in the right circumstances can provide high-quality evidence of mechanisms, below 'opinions', and well below evidence obtained from clinical studies and statistical analyses of CSs. 
Figure 4 depicts the current evidence hierarchy of the Oxford Centre for Evidence-Based Medicine, which places 'mechanism-based reasoning' at the lowest level. Other approaches, such as the GRADE system, tend to overlook non-CS evidence of mechanisms entirely (Guyatt et al. 2011, Fig. 2). The main feature of contemporary EBM that is of relevance to this paper, then, is that it views non-CS evidence of mechanisms as either irrelevant to the process of evidence evaluation or as strictly inferior to evidence obtained from clinical studies and analyses of CSs. In the latter case, opinions differ as to whether or not clinical studies trump non-CS evidence of mechanisms, i.e. whether or not one should ignore non-CS evidence of mechanisms when clinical studies are available. Either way, however, clinical studies are viewed as superior to other kinds of investigation that provide high-quality evidence of mechanisms. As a consequence, contemporary EBM stands in conflict with RWT. EBM prioritises clinical studies over evidence of mechanism that arises from other sources. RWT, on the other hand, treats all sources of evidence of mechanism equally. Figure 5 represents the approach motivated by RWT, as suggested by Clarke et al. (2014). Evidence of correlation includes any evidence that is relevant to the claim that there is the appropriate sort of correlation between the putative cause and effect. Individual items of such evidence are likely to vary in quality and in the direction to which they point, so they need to be made explicit and evaluated in order to determine the extent to which the body of evidence as a whole confirms the correlation claim. Similarly, evidence of mechanisms includes any evidence relevant to the claim that the putative cause and effect are linked in the appropriate way by a mechanism. This evidence needs to be made explicit and evaluated to determine the extent to which it confirms the mechanistic claim. Finally, the extent to which evidence confirms the causal claim of interest depends on the extent to which it confirms the correlation and mechanistic claims. In particular, RWT says that if the evidence establishes both the latter claims then it establishes the causal claim. Given the conflict between present-day EBM and RWT, and the fact that EBM is now widely championed, it is no wonder that RWT is controversial. However, we shall see that there are good reasons to prefer the RWT account of establishing causal claims to the EBM-motivated view. Next, in section 2, I shall argue that RWT better explains the role of clinical studies in establishing a causal claim. In section 3 I shall argue that RWT better explains the process of extrapolating a causal claim from a source population to a target population. If these arguments are correct, present-day EBM fails to provide an adequate epistemology of causality. However, this does not imply that the whole enterprise of EBM is doomed. Current EBM provides a reasonable first approximation to the correct epistemology, and has led to numerous advances in patient care. The claim made here is that improvements can be made to contemporary EBM, and that the picture of Figure 5 provides a better approximation. This picture can thus be viewed as a way to develop 'EBM+', i.e. as a proposal to advance the methodology of EBM by taking better account of evidence of mechanisms (cf. ebmplus.org (http://ebmplus.org)). 
The main ideas behind EBM+ are (i) that it can be useful to explicitly scrutinise and evaluate all kinds of evidence of mechanisms, not just evidence arising from CSs (Table 1), and (ii) that this evidence needs to be considered alongside evidence of correlation-rather than as inferior to it-in order to establish effectiveness in medicine, as per Figure 5. 6 No claim is made that Figure 5 is the end of the story; further improvements can be made, no doubt. The RWT-motivated EBM+ approach is thus in line with the goal of EBM, as stated by Sackett above, but not the practice of present-day EBM. While present-day EBM advances an essentially monistic account of causal evaluation, in terms of CSs, the RWT-motivated EBM+ approach is dualistic, treating evidence of mechanisms and evidence of correlation separately, but on a par. In this sense, RWT and EBM+ have a close affinity to the approach of Austin Bradford Hill, in which causal claims are established by means of a number of indicators, some of which provide good evidence of mechanisms and some of which provide good evidence of correlation (Hill 1965;Russo and Williamson 2007, §2;Clarke et al. 2014, §2.2). This sort of dualist approach can perhaps be traced back another century to Claude Bernard, who viewed it as essential to medicine in general: Scientific, experimental medicine goes as far as possible in the study of vital phenomena; it cannot limit itself to observing diseases or content itself with expectancy or stop at remedies empirically given, but in addition it must study experimentally the mechanism of diseases and the action of remedies. (Bernard 1865, 207) Explananda Concerning Clinical Studies In this section, I shall argue that RWT can successfully explain three fundamental facts about the role of CSs in establishing a causal claim, and that the view motivated by present-day EBM cannot account for all of these facts (although it can account for the first fact). The three facts are these: (i) in some cases, CSs suffice to establish a causal claim; (ii) in some cases, randomised studies are not required to establish a causal claim; (iii) in some cases, randomised studies are trumped by other evidence of mechanisms. We shall examine each of these facts in turn. In Some Cases, Clinical Studies Suffice to Establish a Causal Claim Howick (2011a) suggests that in a number of cases, medical interventions have been accepted on the basis of comparative clinical studies alone. He cites the following cases: the use of aspirin as an analgesic; the use of general anaesthesia; and the use of deep brain stimulation in treating patients with advanced Parkinson's disease or Tourette's syndrome. He argues that these cases are a problem for the epistemological thesis RWT, because the mechanisms of action were not-in some cases, still are not-known. Howick points out that these cases are quite compatible with contemporary EBM, which focuses overwhelmingly on clinical studies. In response to this objection, one might question whether, in these examples, the causal claims really were established on the basis of comparative clinical studies alone. Cases such as aspirin and general anaesthesia pre-date EBM and their effectiveness was arguably established before they were tested in a systematic comparative clinical study. In all cases, background knowledge was important and it is far from obvious that the causal claims were established on the basis of comparative clinical studies alone. 
However, I do not want to dwell on the particular examples here, because I want to accept the general principle that it is possible that clinical studies alone can be used to establish a causal claim in medicine. The point I want to make is that this general principle is quite compatible with RWT. Consider the RWT-motivated picture of Figure 5. Some of the total available evidence can be considered to provide evidence of correlation, in the sense that these items of evidence contribute to support or undermine the claim that the putative cause and effect are appropriately correlated. (An item of evidence contributes to support a claim if, when taken together with other items, it supports the claim, and the other items do not on their own support the claim to the same degree.) Some of the total available evidence can be considered to provide evidence of mechanisms, in the sense that these items of evidence contribute to support or undermine a claim that there is some mechanism which explains instances of the putative effect in terms of the putative cause and which can account for the extent of the correlation. There is no suggestion that an item of evidence cannot provide both evidence of correlation and evidence of mechanisms. In particular, clinical studies not only provide evidence of correlation, they can also, in the right circumstances, provide high-quality evidence of mechanisms (Table 1). The inference here can be represented as follows:
- There are sufficiently many independent clinical studies
- They are of sufficient quality
- Sufficiently many studies point in the same direction
- They observe a large enough correlation
- Fishing, temporal trends and non-causal relationships are ruled out
- No other evidence suggests a lack of a suitable mechanism
Therefore: there must be some underlying mechanism that explains the correlation.

This inference can be understood as follows. Suppose that there are sufficiently many independent clinical studies that sample the study population in question, they are of sufficient quality (e.g. they are sufficiently large, well-conducted RCTs), sufficiently many studies point in the same direction, and they observe a large enough correlation (aka 'effect size'). Here 'sufficiently' is to be construed in such a way that the threshold is reached for establishing a genuine correlation, and that bias and confounding are ruled out as explanations of this correlation. Suppose further that available evidence rules out fishing, temporal trends and non-causal relationships such as semantic, constitutive, logical, physical and mathematical relationships (cf. Table 2). Suppose, moreover, that there is no other evidence against the existence of an underlying mechanism of action: e.g. such a mechanism does not conflict with confirmed theory. Then, by a process of elimination, causation or reverse causation are the two remaining explanations (Table 2). Either way, there must be some underlying mechanism linking the putative cause and effect that explains this correlation. (Note that this inference scheme is non-deductive; there is no suggestion that the premisses guarantee the truth of the conclusions.) In cases that satisfy the premisses of this inference, clinical studies can provide evidence of the existence of a mechanism even though they may fail to shed light on the details of the mechanism. If, in addition, temporal considerations rule out reverse causation, then one can reach the conclusion that the putative cause is indeed the cause of the putative effect.
Figure 6 depicts this kind of inference, from the perspective of RWT. In this diagram, a thick arrow from node X to node Y signifies that X on its own would suffice to establish Y; a thin arrow is used if X is insufficient on its own to establish Y, but nevertheless contributes to support Y.
Figure 6. Clinical studies can, in the right circumstances, establish a causal claim.
In sum, then, while Howick cites as counterexamples to RWT cases in which clinical studies have sufficed to establish causality, any such cases are in fact quite compatible with RWT. There are two separate distinctions at play here. The first is the distinction between evidence of correlation and evidence of mechanisms, which is invoked by RWT. The second is the distinction between clinical studies and evidence obtained by other means, which is central to present-day EBM. These distinctions do not coincide, and it is only by erroneously conflating the two distinctions that one might think that instances of the above inference scheme refute RWT: by erroneously assuming that clinical studies provide only evidence of correlation and so inferring that RWT requires evidence obtained by other means. RWT requires evidence of two different kinds of connection: correlation and mechanism. It does not require two different kinds of evidence in the sense of requiring two independent sources of evidence, namely clinical studies and non-CS evidence of mechanisms. 7 While the above inference scheme is compatible with RWT, it is important to observe that the conditions of the inference are very rarely met in practice. For example, instances of this form of inference are very hard to find in IARC evaluations: establishing the carcinogenicity of mists from strong inorganic acids may offer one rare example (IARC 2012a, 487-495). Thus, although non-CS evidence of mechanisms is not always essential to establishing causality, it is typically an important part of an inference to cause. Confusingly, Howick also cites as evidence against RWT a range of cases in which evidence of mechanisms alone led to erroneous causal inferences; see also Howick (2011b, chapter 10). These cases clearly confirm, rather than disconfirm, RWT, which says that causal claims cannot be established just by establishing mechanism since one needs to establish correlation as well. Moreover, these cases also support EBM+, which holds that evidence of mechanisms needs to be made explicit and its quality scrutinised. This is because in many of these cases the evidence of mechanisms was rather weak.

In Some Cases, Randomised Studies are Not Required to Establish a Causal Claim
The second key fact that needs to be explained by an account of establishing causal claims in medicine is the fact that in some cases there is no need for RCTs when establishing causality. To see that this is so, consider three examples. First, consider the tongue-in-cheek conclusions of Smith and Pell (2003), who study 'parachute use to prevent death and major trauma related to gravitational challenge':

As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
(Smith and Pell 2003, 1459)

From the point of view of contemporary EBM, the evidence for the effectiveness of parachutes is very weak: no systematic studies, let alone RCTs, and some mechanistic evidence which sits at the bottom of the evidence hierarchy, if it features at all. It is hard to see how causality could be established on the basis of this evidence, if present-day EBM is right. From the point of view of EBM+, however, the evidence is strong: excellent evidence of mechanisms, and, although unsystematic, plenty of observational evidence relating to instances where parachutes were and were not used, and a very large observed effect size. From the point of view of EBM+, the evidence of mechanisms on its own suffices to establish the existence of a suitable mechanism, and, when combined with the unsystematic observations, the total evidence suffices to establish correlation too. Hence causality is established. This inference is depicted in Figure 7. (Again, the thick arrow signifies that other evidence of mechanisms is sufficient to establish the existence of a mechanism.) Having clarified the structure of this inference, let us consider a second example (see Worrall 2007). The question here is how to establish the effectiveness of extracorporeal membrane oxygenation (ECMO) for treating persistent pulmonary hypertension (PPHS). With PPHS, immaturity of the lungs in certain newborn babies leads to poor oxygenation of the blood. ECMO oxygenates the blood outside the body (Figure 8).
Figure 8. The ECMO mechanism, as depicted by Bartlett et al. (1976).
Observational studies suggested that ECMO increases survival rate from about 20% to about 80% (Bartlett et al. 1982). However, under standard EBM procedures for evaluating evidence, the available evidence was viewed as insufficient to establish causality, and it was felt necessary to conduct an RCT (Bartlett et al. 1985). At least five subsequent RCTs were carried out, leading to loss of life in the control groups. Conducting RCTs in such a case is considered standard EBM procedure. That non-RCT evidence is viewed as insufficient by contemporary EBM was confirmed by a recent Cochrane Review of ECMO, which explicitly disregarded any evidence that did not take the form of an RCT (Mugford, Elbourne, and Field 2010). On the other hand, Worrall (2007) suggests that RCTs were unnecessary in the ECMO case. This conclusion is supported by the RWT-motivated EBM+ approach. This case is analogous to the parachute case: before the first RCT there was strong observational evidence which indicated a large effect size, as well as excellent evidence of mechanisms. Indeed, as in the parachute case, the details of the mechanism of action were very well established. Thus Figure 7 captures the evidential situation in the ECMO case before the first RCT. There is little doubt that conducting RCTs led to yet greater surety; however, despite being mandated by EBM, RCTs were arguably unnecessary to establish causality. As a third example, consider the case of establishing the carcinogenicity of aristolochic acid. When IARC originally investigated aristolochic acid in 2002, it found that, while there was observational evidence that Chinese herbs which contain aristolochic acid cause cancer, there was 'limited' evidence in humans concerning the carcinogenicity of aristolochic acid itself as an active ingredient, so carcinogenicity could not be established (IARC 2002, 69-128). IARC re-examined the question some years later and found that there was little in the way of further observational evidence in humans, so the study evidence involving humans was still 'limited'.
However, there was much more evidence of the underlying mechanisms available, to the extent that the mechanistic evidence could now be described as 'strong' and causality could be considered established (IARC 2012b, 347-361). The key point here is that the change in evidence that warranted establishing causality was a change in evidence of the underlying mechanisms. These three cases instantiate the following form of inference:
- The mechanisms involved are established
- Observational studies suggest a sufficiently large effect size
- Sufficiently many studies point in the same direction
- The mechanisms involved can clearly account for the effect size
- Fishing, temporal trends and non-causal relationships are ruled out
- No other evidence suggests a lack of a correlation
Therefore: there is a genuine correlation.

In these cases, evidence of mechanisms obtained by means other than clinical studies provides evidence of correlation. When taken in conjunction with the observational studies, this can be sufficient to establish a genuine correlation. This correlation, when taken in conjunction with the established mechanism of action, can thereby establish causation (Figure 7). Note that the observational studies do not need to be very systematic: this is so in the parachute example; it may also be true when establishing some adverse drug reactions (Aronson and Hauben 2006; Hauben and Aronson 2007), and it is also true of many interventions that pre-date EBM, such as the use of ileostomy surgery. While this mode of inference clearly fits the EBM+ approach, motivated by RWT, it is harder for contemporary EBM to explain, because, as we saw in the ECMO case, much of the practice of present-day EBM demands randomised studies in order to establish causality. To be sure, some deny that randomised trials are required. For example, Glasziou et al. (2007) argue that in cases where there is a large effect size, RCTs may be unnecessary. However, they struggle to explain from within the EBM paradigm how evidence of mechanisms can be treated on a par with observational studies to help establish causality. Instead they evoke Hill's indicators of causality, and Hill's approach is much more in line with RWT and EBM+ than with contemporary EBM (see section 1.3).

In Some Cases, Randomised Studies are Trumped by Other Evidence of Mechanisms
So far, we have seen that while present-day EBM can account for situations in which RCTs are sufficient to establish causality, it is doubtful whether EBM adequately handles cases in which RCTs are unnecessary. As we shall now see, it is clear that EBM cannot capture cases in which randomised studies are trumped by other evidence of mechanisms. This is because evidence of mechanisms obtained by means other than randomised studies is viewed, when it is considered at all, as strictly inferior to evidence arising from randomised studies (section 1). There are two kinds of example here. One sort of example involves positive evidence of causality from randomised studies; this evidence is trumped by evidence that there is no mechanism by which causality can operate. To start with another tongue-in-cheek example, Leibovici (2001) presented an RCT which observed a correlation between remote, retroactive intercessory prayer and length of stay of patients in hospital.
The patients in question had bloodstream infections in Israel during the period 1990-1996; the intervention involved saying 'a short prayer for the well being and full recovery of the group as a whole' in the year 2000 in the USA, long after recovery or otherwise actually took place. The study also found a correlation between the intervention and duration of fever. The author concludes:

No mechanism known today can account for the effects of remote, retroactive intercessory prayer said for a group of patients with a bloodstream infection. However, the significant results and the flawless design prove that an effect was achieved. (Leibovici 2001, 1451)

Present-day EBM clearly accords with this inference to an effect, because it views considerations to do with mechanisms as strictly inferior to evidence produced by clinical studies. However, the implicit conclusion is that this line of reasoning is ridiculous: no effect should be inferred. This contrary conclusion goes against EBM. It is not possible for present-day EBM to account for the possibility that a large, well-conducted RCT can be trumped by the fact that current science has no place for a mechanism between remote, retroactive intercessory prayer and length of stay in hospital. On the other hand, this is quite compatible with EBM+. Figure 9 depicts the inference here, from the perspective of RWT.
Figure 9. Evidence of a lack of mechanism can trump RCTs.
Undermining evidence is represented by dashed arrows. The thick dashed arrow depicts an inferential connection that is enough on its own to rule out a mechanism. As before, the thick solid arrow depicts a connection that would normally be enough on its own to establish the conclusion (correlation): a significant result from a large well-conducted RCT. However, there is evidence which undermines this conclusion: well-confirmed scientific theory. The presence of this undermining evidence blocks any inference to either correlation or mechanism, and thereby blocks an inference to causation. Other inferences follow the same pattern. Some comparative studies for precognition have observed a significant correlation (see, e.g. Bem 2011), as have others in the case of homeopathy (e.g. Cucherat et al. 2000; Faculty of Homeopathy 2016). What are the options for resisting an inference to causality in such cases? EBM will point to the fact that the evidence base shows mixed results and is thus inconclusive. However, while this may be so for precognition and homeopathy in general, it is not the case for certain specific interventions which are instances of precognition or homeopathy; as the above references show, there are specific interventions for which only positive studies are available. A second possible way to resist an inference to causality in such cases is to invoke the machinery of Bayesianism: to argue that the prior probability of effectiveness is so low that the posterior probability remains low, despite confirmatory trials (a worked sketch of this point follows this paragraph). This strategy is open to the charge of subjectivity. Clearly, the proponent of a subjective Bayesian analysis will have to admit that the choice of prior is subjective here. But even objective Bayesian analyses typically require a high prior probability of deception or experimental error (Jaynes 2003, §§5.1-2), and detractors can take issue with this presumption. A third alternative is to apply the RWT-motivated EBM+ approach.
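To make the Bayesian strategy mentioned above concrete, here is a minimal sketch with invented numbers: the prior of 1 in 10,000 and a Bayes factor of 10 per positive trial are assumptions chosen purely for illustration, not figures from the paper. With a sufficiently low prior, the posterior probability of effectiveness stays low even after several positive trials.

```python
def posterior(prior: float, bayes_factor: float, n_trials: int) -> float:
    """Update prior odds by the same Bayes factor for each positive trial."""
    odds = prior / (1 - prior)
    odds *= bayes_factor ** n_trials
    return odds / (1 + odds)

prior = 1e-4   # assumed prior probability that the implausible intervention works
bf = 10.0      # assumed evidential strength of one positive, well-conducted trial

for k in range(1, 4):
    print(f"after {k} positive trial(s): posterior = {posterior(prior, bf, k):.3f}")
# -> roughly 0.001, 0.010, 0.091: still far from established
```

As the text notes, the difficulty for this strategy is not the arithmetic but justifying the choice of prior.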
According to RWT, the inference in these cases follows the pattern of Figure 9, and it is clear that causality has not been established, even in specific cases where trials would be sufficient in the absence of other evidence to establish correlation. Arguably, then, the RWT-motivated approach is the most promising of these three strategies. In the kind of example considered above, positive evidence from randomised studies is trumped by evidence of absence of mechanism. But there is another sort of example, in which there is observational evidence, evidence from RCTs and other positive evidence of mechanisms, and in which the other evidence of mechanisms plays more of a role in establishing causality than do the RCTs. The ECMO case takes this form at the point after the first randomised trial. The first randomised trial provided weak evidence, because after the first baby was randomly assigned to the control arm of the trial and subsequently died, no more individuals were assigned to this arm. Thus the size of the trial was not sufficient to draw any strong conclusions. Arguably, at that point in time the evidence of mechanisms was stronger than the evidence arising from RCTs and it played more of a role in establishing causality. Indeed, if the analysis of section 2.2 is correct then the RCT evidence was redundant. The evidence of mechanisms trumps the RCT evidence in such a case. Summary To conclude, the causal epistemology motivated by RWT can validate all three facts about the role of clinical studies in establishing a causal claim. The EBM approach certainly captures the first fact (in some cases, clinical studies suffice to establish a causal claim). However, the practice of EBM goes against the second fact (in some cases, randomised studies are not required to establish a causal claim) and EBM certainly fails to explain the third fact (in some cases, randomised studies are trumped by other evidence of mechanisms). The proponent of present-day EBM might object that one should not infer a normative thesis about appropriate methodology from a description of actual practice-i.e. from the three facts about the role of clinical studies in actual instances of causal discovery. It is no doubt true that some actual instances of causal discovery were methodologically flawed, and that in some cases researchers thought that they had established a causal claim when in fact they had failed to establish it. Thus one must be cautious when generalising from actual instances to normative claims. However, it is also beyond doubt that-in recent times-medicine has successfully discovered a great number of causal claims. Methods employed in actual medical examples work, by-and-large, and so they tell us something about appropriate methodology. Given this, the three facts do indeed admit a normative interpretation. It is thus incumbent upon the proponent of EBM who denies the normative interpretation of one (or more) of these facts to explain away all the apparent instances of causal discovery which seem to support it. Each of the three facts considered above, under a normative reading, says only that in some cases certain methods are appropriate, so in order to deny one of these facts the onus is on the proponent of present-day EBM to show that in all cases the corresponding methods are inappropriate. Three Approaches to Extrapolation We now turn to the question of how a causal claim can be extrapolated from a source population to a target population of interest. 
This mode of inference is ubiquitous, because the population within which a typical clinical study establishes a correlation (e.g. hospital patients in a particular region who are not too young, not too old, not too ill and not pregnant) is almost never the same as the population within which the treatment is intended to be used. It is also very common, and particularly challenging, to extrapolate causal claims from animals to humans. Any adequate causal epistemology needs to explain how extrapolation is possible and needs to clarify the logic of extrapolation. Here is a first approximation to the logic of extrapolation:
- The causal relationship holds in the source population
- The source and target populations are similar in causally relevant respects
Therefore: the causal relationship holds in the target population.

As Steel (2008) points out, this explication faces two immediate problems. The first, which Steel calls the extrapolator's circle, is that 'it needs to be explained how we could know that the model and the target are similar in causally relevant respects without already knowing the causal relationship in the target' (78). The worry is that extrapolation seems redundant since the conclusion of the above rule of inference is apparently needed to establish the second premiss. The second problem, which we shall call the extrapolator's block, is that 'any adequate account of extrapolation in heterogeneous populations must explain how extrapolation can be possible even when [causally relevant differences between the model and the target] are present' (78-79). That is, the source and target population are rarely entirely similar in all causally relevant respects, particularly when extrapolating from animals to humans, and it needs to be made clear what sort of differences are permissible in order to prevent the second premiss of the above argument from failing and the inference thereby being blocked. Thanks to these two problems, this first attempt at a logic of extrapolation fails, and we must look further afield. Note that a source population is chosen for investigation precisely because one can conduct more conclusive clinical studies on this population than on the target population. Thus the clinical studies that one can perform on the source population (typically experimental studies) tend to be of a higher standard than those (typically observational studies) which are directly obtained on the target population. Indeed, there would be no point extrapolating from source to target if the studies in the source population were less conclusive than those conducted on the target population. In the light of this point, one can sketch an approach to extrapolation motivated by contemporary EBM as follows:
- High quality CSs establish a causal relationship in the source population
- Lower quality CSs in the target population are consistent with this relationship
Therefore: the causal relationship holds in the target population.

This approach to extrapolation circumvents the aforementioned two problems very nicely. There is no extrapolator's circle because one does not need to know that the causal relationship holds in the target population to obtain observational studies in the target population. There is no extrapolator's block because this theory of extrapolation makes extrapolation possible even when there are substantial differences between the source and target populations.
That there may be substantial differences between the source and target populations points to two new problems that face the EBM-motivated approach. First, we have what we might call the extrapolator's fallacy: it needs to be explained how extrapolation is a reliable form of inference, rather than simply fallacious. The worry is that the EBM-motivated account will lead to lots of mistaken conclusions, because lower quality CSs in the target population, such as observational studies, typically provide weak evidence that the target population is similar to the source population in causally relevant respects. This problem may explain some recent scepticism about extrapolation amongst those interested in medical methodology (see, e.g. Ioannidis 2012). However, since almost every causal claim of interest has to be extrapolated from some source population, fallacious extrapolation is hardly a viable option. The second, related problem is that the extrapolator's standards are slipping. In the EBM-motivated approach, there is a high standard for internal validity but a low standard for external validity: evidence deemed to be of high quality by EBM (such as that obtained from RCTs) is used to establish causality in a source population, while lower quality evidence (such as that obtained from observational studies) is used to establish causality in the target population. In general, an account of extrapolation should not have double standards: the burden of proof for causality should be similar in the source and target populations. As Steel (2008, chapter 5) suggests, in order to extrapolate a causal claim from a source population to a target population, one needs evidence that similar mechanisms operate in the two populations. 8 This is particularly important in contexts where mechanisms are likely to differ, such as with extrapolations from animals to humans, or interventions involving long causal pathways. It turns out that this feature of extrapolation can be captured by the following RWT-motivated account. Figure 10 depicts an account of the logic of extrapolation that is motivated by RWT. In the source population, one can carry out clinical studies that normally cannot be carried out in the target population; these studies are often enough on their own to establish correlation. By also establishing mechanism, one can then establish causality in the source population. Let us turn to the target population. Clinical studies conducted on the target population, even when augmented by other evidence of the mechanisms of the target population, are insufficient to establish both correlation and mechanism; otherwise there would be no need for extrapolation. Extrapolation is possible when evidence of mechanisms in the target population is strong enough not only to establish the existence of a suitable mechanism M′ in the target population, but also to establish that this mechanism is similar in key respects to the mechanism M inferred in the source population. The expression M′ ≈ M in Figure 10 denotes this similarity claim. By means of this similarity of mechanisms, one can use the claim that A is a cause of B established in the source population to further support the correlation claim in the target population.
In sum, where clinical studies and other mechanistic investigations in the target population are not jointly sufficient to establish correlation in the target, if the corresponding causal claim is established in the source population and it is also established that the mechanisms in the target population are sufficiently similar to those which underpin causation in the source population then this combination of evidence may be enough to establish correlation in the target population. If so, since mechanism in the target is also established, causality can be inferred. As an extreme case, there may be no clinical studies in the target population; this in itself does not preclude extrapolation under the RWT-motivated account. For example, when IARC evaluated the carcinogenicity of benzo[a]pyrene, they found no human studies measuring exposure to benzo[a]pyrene together with relevant cancer outcomes. However, there were excellent animal studies and enough evidence of mechanisms in animals to establish carcinogenicity in the relevant animal models and to determine the details of the mechanism of action there. Furthermore, there was excellent evidence that the human mechanisms were similar to the mechanisms found in animals. This was considered enough to establish carcinogenicity in humans (IARC 2012a, 111-144). Note that this inference is not validated by the EBM-motivated account of extrapolation provided above, because there were no relevant clinical studies in humans. Thus the example favours the RWT-motivated account of extrapolation. To take another case where there were no clinical studies in the target population, consider the IARC evaluation of d-Limonene as a cause of cancer. In this case too, there were no studies available in humans. Carcinogenicity of d-Limonene was established in male rats, so this seemed to be a candidate for extrapolation. However, there were crucial dissimilarities between the mechanism of action in rats and the corresponding human mechanisms: in particular, a protein responsible for nephrotoxicity in male rats is specific to male rats. Thus no extrapolation was possible and carcinogenicity was not established (IARC 1999b, 317-327). This example, which is also in accord with the RWT-motivated account, shows how crucial it is to establish similarity of mechanisms. Determining similarity of mechanisms can be rather tortuous. With regard to the question of the carcinogenicity of Di(2-ethylhexyl)phthalate (DEHP), causality was established in animals by 1982. In 2000, however, IARC downgraded its carcinogenicity rating in humans-to some controversy (Huff 2003)-because new evidence suggested that 'DEHP caused liver tumours in rats and mice by a non-DNA-reactive mechanism involving peroxisome proliferation, which was considered not relevant to humans' (Grosse et al. 2011, 329). In 2011, a third IARC working group had substantially more mechanistic evidence available, and this evidence suggested that there are other pathways in the cancer mechanism, some of which are relevant to humans. This led to the carcinogenicity rating to be upgraded again (Grosse et al. 2011). That the evaluation of carcinogenicity tracks evidence of mechanistic similarity simply cannot be explained by present-day EBM. In some cases, new clinical studies in the target population can lead to a re-evaluation of a mechanistic similarity claim. 
IARC first examined acrylonitrile in 1979 (IARC 1979, 73-86), and in 1987 decided that carcinogenicity in rats was established and carcinogenicity in humans was likely (IARC 1987, 79-80). Carcinogenicity was not considered to be established in humans because studies in humans provided limited evidence of correlation and other evidence of similarity of mechanisms between rats and humans was also limited. Nevertheless, similarity of mechanisms was credible enough for carcinogenicity in humans to be considered likely. By 1999, further studies in humans had suggested that earlier observed correlations were probably due to confounding by smoking (IARC 1999a, 43-108). These studies cast doubt both on correlation and on similarity of mechanisms and led to a downgrading of the likelihood of carcinogenicity. It is important to note that demonstrating mechanistic similarity requires showing that the whole structure of relevant mechanisms is sufficiently similar, not just that the mechanism M by which causality operates in the source population has an analogue in the target population. Thus, one needs to establish that any new counteracting mechanism in the target population is not so significant that it can cancel out ('mask') the action of the analogue of M. This masking problem was a stumbling block for Anitschkow when he tried to establish that dietary cholesterol causes atherosclerosis by appealing to animal experiments (Anitschkow 1933). He provided compelling evidence that the causal relationship holds in rabbits and that the mechanism responsible for this relationship also occurs in humans. However, various non-herbivorous animals, including rats, did not exhibit the correlation between dietary cholesterol and atherosclerosis that was found in rabbits. This lack of robustness suggests the presence of a counteracting mechanism in certain non-herbivorous species which masks the action of the positive mechanism of action that was found in rabbits. The presence of such a masking mechanism in humans would count as an important difference between the relevant mechanistic structures in rabbits and humans. Thus, similarity of mechanisms was not established, and causation in humans was rightly not considered established by Anitschkow's work (see Parkkinen 2016). The Four Problems for Extrapolation We shall now see that this RWT-motivated account of extrapolation survives the four problems for extrapolation identified above. First, let us consider the extrapolator's circle. That there is no circle should be apparent from the fact that Figure 10 is acyclic: one does not need to have already established causality in the target population in order to meet any of the requirements for establishing causality. Of course, once these requirements are all met, causality in the target is thereby established, but there is no inferential circle here. See Steel (2008, §5.4.2) for further discussion of how mechanism-based approaches can avoid the extrapolator's circle. Turning next to the extrapolator's block, one might worry that we are lacking an account of how extrapolation is possible when mechanisms in the source and target populations are not identical. Similarity of mechanisms is a matter of degree, and the more similar the mechanisms, the more that causation in the source population confirms correlation in the target population. 
Steel (2008, §5.3.2) discusses this question and presents comparative process tracing as a method for establishing similarity: 'First, learn the mechanism in the model organism, by means of process tracing or other experimental means. For example, a description of a carcinogenic mechanism would indicate such things as the product of the phase I metabolism and the enzymes involved; whether the metabolite is a mutagen, an indication of how it alters DNA; and so on. Second, compare stages of the mechanism in the model organism with that of the target organism in which the two are most likely to differ significantly. For example, one would want to know whether the chemical is metabolized by the same enzymes in the two species, and whether the same metabolite results, and so forth. In general, the greater the similarity of configuration and behavior of entities involved in the mechanism at these key stages, the stronger the basis for the extrapolation' (Steel 2008, 89). In fact, comparative process tracing is but one of several methods for establishing similarity of mechanisms. One can also establish similarity of mechanisms without determining the details of the mechanisms M and M′, by employing phylogenetic reasoning, robustness analysis or even enumerative induction (Parkkinen and Williamson 2017, §4). Thus there is a portfolio of methods for overcoming the extrapolator's block. Let us consider the extrapolator's fallacy next. Unlike the EBM-motivated approach, the RWT-motivated analysis of extrapolation requires evidence that ensures that the source and target populations are similar in causally relevant respects. Mechanistic evidence plays a key role here, in ensuring that the target mechanism M′ is relevantly similar to the source mechanism M. By being more demanding than the EBM-motivated approach in terms of the evidence required in the target population, extrapolation promises to be more reliable under the RWT account than under the EBM account. Finally, we can ask whether the extrapolator's standards are slipping. That this is not the case is apparent from Figure 10: the inferential requirements (establishing correlation and mechanism) are the same in both the source and target populations. If anything, one might worry that the standards of evidence are higher in the target population than in the study population, since Figure 10 includes the extra requirement of establishing similarity of mechanism there. However, this is just an artefact of the diagram. Similarity of mechanisms concerns the relation between the source and target populations, not just the target population. Therefore, there is a genuine symmetry between what is required of the source and target populations. That the RWT account of extrapolation overcomes the latter two problems, while the EBM approach does not, speaks in favour of the RWT approach and against the EBM approach. Criticisms of Mechanistic Accounts of Extrapolation Having developed the RWT-motivated theory of extrapolation, we shall now consider some criticisms of mechanistic accounts of extrapolation in the light of this theory. Guala (2010, §6) suggests that there are cases of extrapolation that do not proceed via comparative process tracing. Guala develops an example involving outer continental shelf auctions, which are used to sell oil leases in the Gulf of Mexico, to show that it is not always necessary to determine the details of the relevant mechanisms, as would be required by comparative process tracing.
As noted above, however, the RWT-motivated account sees comparative process tracing as but one of several strategies for establishing similarity of mechanisms, and Guala's case is perfectly in accord with this. What is important to the RWT account is the inferential step connecting M′ and M: strategies for extrapolation seek to demonstrate similarity of mechanisms. As Guala notes, 'This clearly falls short of a proper articulation of the mechanism … And yet, it is perfectly adequate for extrapolation purposes. Large parts of the mechanism can be "black boxed" as long as there are good reasons to believe that they are analogously instantiated in the laboratory and target system' (Guala 2010, 1080). One of the advantages of the RWT-motivated approach, then, is that by situating extrapolation in the inference scheme depicted in Figure 10 it covers a much broader range of scenarios than comparative process tracing does. Howick et al. (2013a, 2013b) are broadly sceptical of mechanism-based extrapolation. They identify several problems for basing extrapolations on mechanistic evidence. First, our understanding of mechanisms is often incomplete. In response one can note that this is of course true, but insufficient knowledge of the details of M and M′ for comparative process tracing does not always preclude establishing that M′ is relevantly similar to M: one can often employ the other strategies mentioned above. Second, knowledge of mechanisms is not always applicable outside the tightly controlled laboratory conditions in which it is gained. This is also true, but it is symptomatic of science in general: whatever approach one takes, one must make sure that one's conclusions are robust enough to extend to the application of interest. In particular, an EBM-motivated approach has to ensure that conclusions based on trials with strict exclusion criteria are transportable to the population to be treated. The third problem that they identify is that mechanisms can behave 'paradoxically', e.g. a drug can have opposite effects in different contexts. In response, observe that it is only by understanding the underlying mechanisms that one can explain these paradoxical effects and improve treatment. Moreover, clinical studies are crucial for identifying the presence of such effects. All this confirms the RWT-motivated account of extrapolation, which takes both clinical studies and non-CS evidence of mechanisms seriously. The fourth problem that Howick et al. pick out is the extrapolator's circle. Their worry is that the evidence of the target population required to establish that M′ is similar to M makes the evidence on the source population redundant. As Figure 10 makes clear, this need not be the case: one can establish that M′ is similar to M in the absence of evidence from clinical studies in the target population that would on their own be sufficient to establish causality. Howick et al. might respond by noting that under the EBM-motivated account of extrapolation, only weak evidence concerning the target population is required to establish causality in the target population, and this evidence would be sufficient to establish causality there. However, as discussed above, this is a problem for the EBM-motivated account: it makes extrapolation too easy to be entirely credible; it is subject to the extrapolator's fallacy. That the RWT-motivated theory of extrapolation is more demanding in terms of the evidence required for extrapolation is an advantage over the EBM-motivated account.
Conclusion We have seen that the epistemological thesis RWT motivates a view of medical methodology that stands in conflict with contemporary EBM. Although there is a tension between RWT and EBM, I have argued that RWT can better explain three key features of the use of CSs to establish causality, and that it yields a better account of extrapolation. Thus, I conclude that RWT and EBM+ offer a promising way forward in the controversy as to how best to improve EBM. The EBM approach to causal inference has in recent years extended well beyond medicine, to public policy making and various areas of the social sciences, for example. While this paper has focussed on medicine, RWT can be interpreted as having a broader range of application, and similar conclusions to those drawn in this paper may apply beyond medicine. The broader scope of these conclusions is left as a question for further research. Notes 1. To take an extreme example of the importance of organisation, a chimney mechanism is responsible for the extraction of smoke purely in virtue of its spatial organisation. No activities constitute the chimney mechanism itself-although smoke actively passes through the mechanism-and the only relevant properties of the entities that constitute the mechanism (e.g. bricks and mortar) are structural properties to do with their impermeability and their ability to support the load of the chimney. Kaiser (2016) provides further evidence for the claim that a mechanism cannot always be identified with a causal network. 2. 'Correlated' is often used in weaker senses, e.g. meaning unconditionally probabilistically dependent, or unconditionally linearly dependent. Certain arguments of this paper also go through under these weaker interpretations of 'correlated': if, under a strong reading of 'correlation', it is not enough simply to establish correlation in order to establish causation, then that is also true under a weak reading. 3. Cases of disconnection (Schaffer 2000) or double-prevention (Hall 2004) may also be thought of as cases that involve absences. 4. Evidence of mechanisms can help in other respects too. For example, evidence of mechanisms is often essential in order to properly design a CS or interpret its results (Clarke et al. 2014). 5. These assertions hold 'normally', i.e. modulo the qualifications about underdetermination and causation between absences discussed above. 6. One might think that it would be very difficult to systematically consider evidence of mechanisms alongside evidence of correlation. However, as Parkkinen et al. (2018) show, this is not the case. They put forward procedures for evaluating non-CS evidence of mechanisms and for combining this evaluation with a standard evaluation of CSs in order to provide an overall assessment of a causal claim. 7. This point was emphasised by Illari (2011, §2). One might think that, by not requiring two different sources of evidence, RWT somehow becomes trivially true, or that it becomes compatible in general with present-day EBM. Subsequent sections of this paper show that this is not so, by highlighting points of disagreement with present-day EBM and arguing that these points of disagreement favour RWT. 8. Cartwright (2011) is another proponent of the view that successful extrapolation requires evidence that goes beyond statistical studies.
15,828.4
2019-01-02T00:00:00.000
[ "Philosophy" ]
Bond Strength of Orthodontic Bracket Cement Using a Bleaching Light for Curing Aim: To investigate the bond strengths achieved by using a Bleaching Curing Light (BCL) to polymerize orthodontic bonding cement. Material and Methods: 160 anterior bovine teeth were used to form 20 average-sized human dental arches, and distributed into 2 groups according to which light curing method was used: Group 1: BCL for 40 seconds, or Group 2: LED for 10 seconds. After storage in a controlled environment, Shear Bond Strength (SBS) and Adhesive Remnant Index (ARI) were determined. Results: Group 1 showed significantly lower SBS in the most posterior (first molar) position of the dental arch (Group 1: 0.7 ± 1.0 MPa, Group 2: 2.9 ± 1.7 MPa, p < 0.01), but significantly higher SBS in the most anterior position (Group 1: 5.1 ± 1.8 MPa, Group 2: 3.8 ± 1.2 MPa, p < 0.02). A high correlation was found between the position of the bracket and debonding values (p < 0.02). Bonding failures in the most posterior arch positions occurred more frequently within Group 1, which also showed lower ARI than Group 2 over corresponding arch locations. Conclusion: Simultaneous full-arch curing of orthodontic bracket cement using a BCL is clinically acceptable in all but the most posterior locations along the dental arch. Introduction In the clinical practice of orthodontics, bonding of fixed appliances is one of the most time-consuming tasks. Hence, ergonomic measures such as combined agents, pre-coated brackets, reduced curing time, and indirect bonding procedures have been proposed [1]. In addition, the use of an enlarged light-exiting tip has been reported to develop shear bond strength equal to or lower than that of a conventional tip [2]. Whereas in vivo investigations of bond strength present difficulties in investigating independent variables, in vitro human and bovine models pervade the literature, although they vary in experimental design [3] [4] [5]. Concomitantly, the use of light curing methods [6] [7], varied adhesive materials or bonding methods [8] [9], and debonding procedures have also been reported [10]. The advent of light-catalyzed vital dental bleaching [11] has provided a cross-over tool which may improve the ergonomics of orthodontic appliance bonding by facilitating simultaneous curing over an entire arch. The purpose of this study was to investigate the efficiency of a Bleaching Curing Light (BCL) in simulated one-arch orthodontic bracket bonding. The null hypothesis was that this would produce results similar to those of current methods. Study Design 160 intact anterior bovine teeth were harvested from beef carcasses and preserved in Thymol [12]. The inclusion criteria were that the teeth were permanent and that the buccal surfaces were caries-free, so that primary bovine teeth or teeth with decayed buccal surfaces were excluded. These were arranged into 20 arches with their roots in wax bases (Dentaurum, Ispringen, Germany) to correspond to the largest average human dental arch [13]. Samples were distributed so that half of the 8 teeth originating from each source were included in the experimental and half in the control groups. As shown in Figure 1 and in Figure 2, sample position was denoted using the ISO 3950:2016 (FDI) system of notation [14], and each arch was oriented within a dental manikin (Columbia Dentoform, New York, USA).
These were divided into two equal groups: Group 1, 40-second static exposure to an LED Bleaching Curing Light (BCL) (Figure 2); Group 2, 10-second separate exposure of each bracket to an LED Regular Tip curing light (RT) (both from Foshan Coxo Medical Instrument CO, LTD, Guangdong, China). The exposure time in both groups was based on the manufacturer's instructions for use. Prior to light curing, all teeth were cleaned with Zircate Prophy Paste (Dentsply, Milford, USA) for 15 seconds, debrided by washing for 5 seconds with water spray, then dried with oil-free compressed air. The labial surfaces where adhesive bonding was to be applied were then prepared using 37% ortho-phosphoric acid for 30 seconds (Vista TM, Racine, Wisconsin, USA), according to Saleh [15], and debrided as above. The prepared enamel surfaces were bonded according to manufacturer instructions using XT Primer and Transbond XT Composite (3M, Unitek, Monrovia, California, USA) [16]. The latter was applied to the mesh pads of premolar brackets (Hangzhou ORJ, China), which were oriented so that the most posterior bracket on each side approximated the position of a first molar tube, based on mean tooth widths [13]. The 3 more anterior brackets were positioned so that full contact was achieved between the bracket base and the prepared enamel surface. The bracket bonding material was polymerized as described above with either the BCL or the conventional curing light, after which all casts were removed from the manikins and stored at 85% humidity and 37˚C for 24 hours. Samples were removed from their wax base and each was positioned in a holding device that oriented its buccal surface parallel to the direction of a loading force applied via a 5-strand braided 0.0195" stainless steel wire (Ortho Organizers, Carlsbad, USA) around the wings of each bracket, held by an Instron, Model 4502 (Buckinghamshire, England) fitted with a 10 kN load-cell, applied at 10 mm/min cross-head speed. SBS was calculated by dividing the debonding force (the measured force causing debonding) by the area of the bracket base. Statistical Analysis ANOVA with repeated measures and paired t-tests were carried out to compare differences between the two groups according to position along the arch. The Kruskal-Wallis test was used to determine any significant differences in scaled ARI values in the various bonding positions within each group. The Wilcoxon Signed Ranks Test was used to determine the differences of ARI in the various bonding positions between the two groups. Pearson correlation was applied to verify correlations between SBS and ARI. Statistical significance for all tests was considered as p < 0.05. This research project was approved by the Ethics Committee of the institution where it was held. Results The mean SBS values of the two groups according to position along the arch are presented in Table 1. Bonding failures (debonding force = 0 N) were found in teeth 14 and 24 in Group 1. The mean SBS of this position in Group 1 (0.7 ± 1.0 MPa) was significantly lower (p < 0.001) than those of other positions along the simulated dental arch.
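As a minimal illustration of the shear bond strength computation and the group comparisons described in the Statistical Analysis above, the following Python sketch converts debonding forces to SBS and runs the paired and signed-rank tests. The force values and the bracket base area are hypothetical placeholders, not the study's measurements.

```python
# Minimal sketch (assumed values, not the study's data): convert debonding
# force to shear bond strength and compare two groups for one arch position.
import numpy as np
from scipy import stats

BRACKET_BASE_AREA_MM2 = 9.6  # hypothetical premolar bracket base area (mm^2)

def shear_bond_strength(debond_force_newton, base_area_mm2=BRACKET_BASE_AREA_MM2):
    """SBS (MPa) = debonding force (N) / bracket base area (mm^2)."""
    return np.asarray(debond_force_newton) / base_area_mm2

# Hypothetical debonding forces (N) for one arch position in each group.
group1_force = np.array([5.0, 7.2, 0.0, 9.5, 6.1])       # BCL, 40 s
group2_force = np.array([26.0, 31.5, 24.8, 29.0, 33.2])  # conventional LED, 10 s

sbs1 = shear_bond_strength(group1_force)
sbs2 = shear_bond_strength(group2_force)

# Paired t-test on SBS (arches are paired across groups by position),
# and a Wilcoxon signed-rank test of the kind used for the ordinal ARI scores.
t_stat, p_val = stats.ttest_rel(sbs1, sbs2)
w_stat, p_wilcoxon = stats.wilcoxon(sbs1, sbs2)

print(f"mean SBS group 1: {sbs1.mean():.2f} MPa, group 2: {sbs2.mean():.2f} MPa")
print(f"paired t-test p = {p_val:.4f}, Wilcoxon p = {p_wilcoxon:.4f}")
```

With the placeholder forces above, the SBS values fall in the same order of magnitude as the means reported in Table 1 (roughly 0.5-1.0 MPa versus 2.6-3.5 MPa), which is the regime in which the position effect becomes clinically relevant.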
Data relating position in the arch and type of curing to the debonding force are shown in Figure 3. A high correlation was found between the position of the bracket on the dental arch and the debonding values (p < 0.002). SBS in teeth 11, 21 of Group 1 was significantly higher than in Group 2 (p < 0.02), whereas in teeth 14, 24 Group 2 showed 3.8-fold higher SBS (p < 0.001). In positions 2 and 3 the differences between the two groups were not significant (p > 0.05) (Figure 3). The ARI within Group 1 was found to vary statistically according to position (p = 0.017) (Table 2). The Wilcoxon Signed Ranks Test showed significant differences in positions 3 (p = 0.032) and 4 (p = 0.030) between Groups 1 and 2 (Table 3). However, no significant correlations were found between the SBS and ARI in any group. Discussion The present study found that the use of the BCL resulted in much lower SBS values in the most posterior position, rejecting the null hypothesis. However, the multiple failures of bonding in the most posterior position in Group 1, together with the much smaller mean value of debonding force, suggest clinically that bonding to the first molars using the BCL will be least successful. Since the bonding protocol was the same for all the other steps of the bonding procedure, this finding must be due to insufficient (light) curing of the bracket adhesive material in the posterior region of the arch. Although effects of the BCL during dental bleaching have been previously reported [11] [18], there are no studies reporting comparisons between the anterior and posterior aspects of the dental arch. Differences in SBS related to position along the arch might be explained by the shape and intended use of the BCL making the distribution of light less efficient posteriorly. In Group 1, SBS ranged from 0.2 to 11.5 MPa (excluding bond failures), values that are lower than those previously reported [9] [19]. This is likely due to differences in study designs, materials tested, methods used for the measurements, or specimen differences. Furthermore, methodological variations such as consistency of the lever-arm point of force application, or thickness of the adhesive layer, are innate to such investigations. As a result, there is an additional acting torque that is ignored during debonding tests [3]. Clinically acceptable bond strengths range between 5.9 and 7.8 MPa [20]. Decreasing SBS posteriorly in Group 1 (Figure 2), but significantly higher debonding strength in teeth 11, 21 compared to Group 2, may be due to the longer curing time (40 seconds versus 10 seconds, respectively). It has been previously reported that increasing exposure (5, 10 or 15 seconds) with the same LED did not cause significant differences in SBS, but average values were found to be higher with longer curing time [21]. However, it was not the purpose of this study to base clinical conclusions on an in vitro shear bond strength experiment, due to the well-known methodological problems associated with the design of such tests [3] [22]. In positions 2 and 3 the differences in SBS between the two groups were not statistically significant. This suggests that successful bonding can also be achieved when using the BCL in the premolar area. However, bond failures found in Group 1 decreased the mean SBS at position 4 (3.34 MPa). Excluding these, the SBS values found in the anterior and premolar areas may be considered within the required range.
The use of bovine teeth as an appropriate in vitro dental model has been previously reported [23] [24] [25]. It has been shown that both shear and tensile strengths are not significantly different between human and bovine dentin [26] [27], or enamel [27]. In addition, studies reporting dental bond strengths in human and bovine teeth conclude that the latter can substitute for human teeth in in vitro studies establishing the initial performance of new products [9] [28] [29]. The latency period of 24 hours after bonding has been reported to increase the setting time of light-cured adhesives [30] [31]. This has been associated with the increase in shear strength reported when allowing setting for 24 hours [32] to 7 days [33]. However, these do not correspond to clinical reality, where brackets are loaded immediately after bonding; therefore, for purposes of comparison, a 24-hour latency period was adopted for the present study [32] [34]. No significant correlation was found between ARI and bond strength within each group, in agreement with Linn et al. [35]. However, significant differences were detected between the two groups in positions 3 (p = 0.032) and 4 (p = 0.030), in which the study group showed lower ARI values, and it was found that adhesive bond fracture occurred between the adhesive and the bracket base more frequently in the study group. This implies a tendency for greater amounts of residual composite in the posterior areas at bracket removal when the BCL was used. However, this requires further investigation since here the bracket/adhesive interface was determined only by visual inspection. Conclusions 1) Light curing with the BCL leads to similar polymerization of orthodontic adhesive in the anterior and premolar regions. 2) The SBS values suggest that curing with the BCL is an appropriate but location-sensitive activation method; thus it would not be effective for one-arch orthodontic bonding. Table 1. Mean SBS and debonding force values and standard deviations for the two groups according to bonding position along the arch. Table 2. Kruskal-Wallis test examining differences in scaled ARI values between different bonding positions within each group. Table 3. Wilcoxon Signed Ranks Test examining the differences of ARI in the various bonding positions between the two groups.
2,716
2018-03-13T00:00:00.000
[ "Medicine", "Materials Science" ]
A Novel Mechanism of Lactobacilli Bacteria Action on Development of Hepatocytic Tolerance to Staphylococcus aureus The fundamental characteristics of mammalian PGLYRPs and NOD2 in the reduction of pathogen-induced hepatocytic inflammation, as well as their interaction with probiotics, have not yet been well studied. The aim of this research was to explore whether or not probiotics exert hepatoprotective effects by means of attenuation of NOD2-NF-κB signal transduction induced by Staphylococcus aureus through regulation of PGLYRP2/3. By ELISA analysis of pro-inflammatory cytokine secretion and RT-qPCR assay of NOD2 and PGLYRP2/3 gene expression upon stimulation with bacterial lysates, we found that PGLYRP2 and NOD2 play important roles in the transduction of inflammatory responses induced by S. aureus lysates, while PGLYRP3 induced by Lactobacillus plantarum MYL26 lysates serves as an anti-inflammation mediator, counteracting the effect of PGLYRP2 and NOD2. We propose that one new mechanism by which probiotics exert a hepatoprotective effect is through induction of PGLYRP3, which antagonizes PGLYRP2 and NOD2, thus leading to attenuation of NOD2-NF-κB signal transduction. Introduction Gastrointestinal tract microbiota and bacterial translocation play imperative roles in the pathogenesis of systemic inflammation, especially liver diseases (Norman et al., 2008). Overproliferation of intestinal bacteria, elevated levels of bacterial lipopolysaccharides and, even worse, increased bacterial translocation from the GI tract render humans more susceptible to severe inflammatory syndromes and thus to a series of systemic disorders (Seo & Shah, 2012). Impaired GI tract epithelial integrity has been demonstrated to be a major cause of leaky gut, which allows translocation of bacteria into the blood circulation. Bacteria such as Staphylococcus aureus produce numerous toxins triggering inflammatory responses that cause not only foodborne illness but also enhanced gut permeability, which predisposes to bacterial infections as well as bacterial translocation into the bloodstream (Wang et al., 2001). As a result, in addition to improvement of leaky gut, how to develop cellular tolerance to invading pathogens or elevated inflammatory stimuli has become an important issue. Probiotics have been defined as live microorganisms that, when administered in adequate amounts, confer health advantages on the host beyond their intrinsically elementary nutrition (Arora et al., 2013). The most prevalent probiotics used in fermented foods and dietary supplements are from two lactic acid bacteria genera: Bifidobacterium and Lactobacillus (Krznaric et al., 2012).
It is well acknowledged that inherited and acquired factors lead to alterations in the GI tract microbiota, which might leave hosts susceptible to a wide range of disorders. Recent studies have suggested that infectious diarrhea (Quigley, 2012), various allergies (Noval Rivas et al., 2013), inflammatory bowel diseases (IBDs) and endotoxemia (Caradonna et al., 2000) are related to variations in the GI tract microflora. Variations in the GI tract microflora influence susceptibilities not only to intestinal disorders, but also, at least in part, to systemic immune diseases, such as obesity (Aggarwal et al., 2013), cardiovascular diseases (Manco et al., 2010) and diabetes mellitus (Larsen et al., 2010). These diseases are also associated with short-chain fatty acids produced by GI tract microorganisms, owing to their regulatory effects on appetite hormones (Conterno et al., 2011). Therefore, any intrinsic or extrinsic factors affecting microbial homeostasis could have a substantial impact on human health. The innate immune system detects pathogenic bacteria by means of a collection of pattern recognition receptors (PRRs) that are highly conserved from insects to humans and show specificity for microbial cellular components not found in eukaryotes (Brenner et al., 2013). Pathogen-associated molecular patterns (PAMPs), which are molecules or structures that exist on a class of pathogens, are recognized by PRRs such as toll-like receptors, nucleotide-binding oligomerization domain-containing proteins, and peptidoglycan recognition proteins (Inoue & Shinohara, 2013). The most characteristic bacterial PAMPs consist of LPS derived from Gram-negative bacteria and peptidoglycans (PGNs) originating from either Gram-positive or Gram-negative bacteria. In addition to cell wall components, both viral/bacterial unmethylated CpG motifs and flagellin are also powerful PAMPs (Fujita & Taguchi, 2012). With the exception of the genus Mycoplasma, which lacks cell wall structures, PGN is a ubiquitous component of bacterial cell walls, and it is an extraordinarily distinguishable antigen for characterizing pathogenic bacteria. Over the past few decades, several lines of evidence have indicated that PGNs are recognized by at least three PRRs, including NOD2, PGLYRPs, and TLR2. NOD2, also referred to as caspase recruitment domain-containing protein 15, is a member of the NOD1/Apaf-1 family (Balamayooran et al., 2010), which recognizes the bacterial cell wall PGN that contains the specific structure muramyl dipeptide. It has been shown that NOD2 is highly relevant to bacteria-induced inflammation (Franchi et al., 2008). Mammalian PGLYRPs can be categorized into four types: PGLYRP1, PGLYRP2, PGLYRP3 and PGLYRP4 (Sorbara & Philpott, 2011). PGLYRP1, PGLYRP3, and PGLYRP4 have been proposed to have bactericidal properties and are primarily expressed in polymorphonuclear leukocytes. PGLYRP2, an N-acetylmuramoyl-L-alanine amidase, is mainly expressed in the liver and subsequently released into the systemic circulation. Mammalian PGLYRPs were originally considered PRRs. However, they not only function as PRRs, but also participate in pro- and anti-inflammatory responses (Boneca, 2009).
NOD2 tolerance is most likely a consequence of the host's defence system which confines pro-inflammatory actions when stimulation of innate immunity by PGN originates from both Gram-positive and Gram-negative bacteria (Macho et al., 2011).A growing body of research has indicated that contact of lymphocytes or intestinal epithelial cells with either Gram-negative or Gram-positive bacteria cell wall constituents results in refractoriness to subsequent challenge (Faria et al., 2012).This is important for the development of a potential treatment for inflammatory disorders.In this context, a number of reports have demonstrated that NOD2 tolerance does not only occur due to PGN, but also to many other bacteria cellular constituents, such as glycolipids, LPS, lipoteichoic acid, flagellin, unmethylated CpG DNA motif, as well as several heat shock proteins (Kim et al., 2011;Muller-Anstett et al., 2010). To our knowledge, there are only a limited number of comprehensive studies that have addressed the subject of probiotic-induced NOD2 tolerance in liver cells.As a result, we intensified our efforts to investigate the mechanistic actions by which probiotics exert hepatoprotective effects via development of NOD2 tolerance. In this study, seven strains of probiotics (L.plantarum MYL26, L. plantarum MYL31, L. acidophilus MYL201, L. acidophilus MYL202, L. bulgaricus MYL101, L. bulgaricus MYL102 and L. casei MYL01) and pathogenic bacteria S. aureus BCRC 10451 were processed independently into heat-killed bacteria lysates, which were used as stimuli to induce pro-and anti-inflammatory processes.Enzyme-linked immunosorbent assay (ELISA) was performed to evaluate and quantify the inflammation-associated protein expression of tumour necrosis factor alpha (TNFα), interleukin 6 (IL-6), interleukin 8 (IL8), interleukin 12 (IL-12), NF-κB nuclear p65 and cytoplasmic IκBα in HepG2 cells.We also investigated the relationship between NOD2-NF-κB pathway and the expressions of PGLYRP2/3 upon stimulation of probiotics and S. aureus lysates.Moreover, siRNA technique was conducted on NOD2 and PGLYRP2/3 to observe whether NOD2 and PGLYRP2/3 play essential roles in the induction of inflammation tolerance which might lead to a promising therapy for liver injury induced by S. aureus. Bacterial Strains Isolation and identification of lactic acid bacteria from newborn infant faeces and breast milk were performed in the Microbiology Laboratory of the Department of Food Science and Biotechnology, National Chung Hsing University, Taichung, Taiwan.Staphylococcus aureus BCRC 10451 was acquired from the Bioresource Collection and Research Centre (BCRC, Hsinchu, Taiwan).Lactic acid bacteria were grown anaerobically at 37°C in MRS (Difco Laboratories, Detroit, MI, USA).S. aureus was grown aerobically at 37°C in terrific broth (1.2% peptone, 2.4% yeast extract, 72 mM K 2 HPO 4 , 17 mM KH 2 PO 4 , and 0.4% glycerol).All strains were serially subcultured three times prior to use. 
Preparation of Heat-Killed Bacteria Lysates Bacteria were stored at -80°C until use. All bacterial strains were cultured at 37°C for 16 h and collected by centrifugation at 2500 rpm for 5 min. For preparation of bacterial lysates, cells were adjusted to 1×10^8 cfu/mL and then washed twice with deionized water before being suspended in phosphate-buffered saline (PBS). A repeated freezing and thawing method was adopted for preliminary cell disruption. Then, cells were heat-killed at 75°C for 3 min, followed by homogenization with a Tissue-Tearor (Biospec, USA). A 10 μL aliquot of each bacterial lysate was taken to assess whether any live bacteria remained. Stimulation of HepG2 Cells with Probiotics Followed by S. aureus Challenge HepG2 cells (1×10^6 cells/mL) were treated with probiotic lysates at 37°C for 20 hours. After stimulation, HepG2 cells were challenged with S. aureus lysates for 20 hours. The supernatants were harvested and assayed for IL-6, IL-8, IL-12 and TNF-α. The adherent cells were lysed for detection of nuclear NF-κB p65 and cytoplasmic IκBα. Nuclear/cytoplasmic proteins were collected using a Nuclear/Cytosol Fractionation Kit (Biovision, USA), and samples were suspended in buffer containing a protease inhibitor cocktail. All the protein expressions were evaluated and quantified by ELISA according to the manufacturer's instructions (eBioscience ELISA system). The housekeeping protein human actin was adopted as an internal control. MTT Assay The 3-[4,5-dimethyl-2-thiazolyl]-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay is based on the cleavage of the tetrazolium salt by mitochondrial dehydrogenases in viable cells. In order to determine the most appropriate co-incubation times, approximately 1×10^5 cells were plated onto each well of 96-well plates for 24 h, followed by treatment with S. aureus and different probiotic lysates for 5, 10, 15, 20, 25 and 30 hours. After incubation, 200 μL of MTT solution (0.5 mg/mL) were added to each well for 4 h. The supernatant was removed and 200 μL of dimethyl sulphoxide (DMSO) were added to each well to dissolve the dark blue formazan crystals. The absorbance was measured at 570 nm. Statistical Analysis All values are expressed as mean ± standard deviation (SD). Each value is the mean of three separate experiments in each group (n=3). Statistical comparisons were carried out by Student's t-test. p < 0.05 (denoted by *) was considered significantly different. Figure 5 showed that NOD2 knockdown had an enormous impact on the pro-inflammatory properties of S. aureus. NOD2 inactivation not only led to abolishment of pro-inflammatory cytokine secretion induced by S. aureus, but also gave rise to abolishment of PGLYRP2/3 expression, as Figure 6 showed. The expressions of PGLYRP2/3 induced by S. aureus, L. acidophilus MYL201 and L. plantarum MYL26 lysates were observed to decrease dramatically when NOD2 was silenced, while PGLYRP2 knockdown (Figure 7) did not affect the expression of NOD2 and PGLYRP3. Figure 5 also revealed that the effect of PGLYRP2 knockdown, even though significant, was not as potent as that of NOD2 knockdown on the inactivation of the S. aureus-induced inflammatory process, as evidenced by pro-inflammatory cytokine secretion. The observation that PGLYRP2 knockdown made S.
aureus partially lose its pro-inflammatory properties suggested that PGLYRP2 might mediate inflammation. As for PGLYRP3 knockdown, it was shown that PGLYRP3 knockdown significantly affected both the anti-inflammatory functions of lactobacilli bacteria and the inflammatory properties exerted by S. aureus. PGLYRP3-silenced HepG2 cells challenged by S. aureus lysates secreted more pro-inflammatory cytokines than wild-type cells. Moreover, L. plantarum MYL26 did not exert its anti-inflammatory capacity when PGLYRP3 was silenced. Although PGLYRP2 knockdown did not affect NOD2 and PGLYRP3 expression, the result in Figure 8 revealed that PGLYRP3 knockdown considerably increased the expression of NOD2 and PGLYRP2, both of which were shown above to be relevant to the pro-inflammatory process. Taken together, the cytokine secretion consequences of NOD2 and PGLYRP2/3 knockdown indicated that NOD2 plays an imperative role in the onset of S. aureus-induced inflammation and acts as a key mediator of PGLYRP2/3 activation. PGLYRP2 also plays a role in inflammation mediation, but the effect is not as profound as that of NOD2. PGLYRP3 is important in limiting NOD2 and PGLYRP2 expression, a function that might contribute to anti-inflammation. Discussion In view of the fact that NOD2 is associated with enhanced pro-inflammatory cytokine secretion and nitric oxide generation (Cartwright et al., 2007; Scott et al., 2010; Stoffels et al., 2004), we hypothesized that the hepatoprotective effects of probiotics are in part mediated by inhibition of NOD2-NF-κB signal transduction. The aim of this research was to explore whether or not probiotics exert hepatoprotective effects by means of attenuation of NOD2-NF-κB signal transduction through regulation of PGLYRP2/3. Within the in vitro study of HepG2 cells, we investigated the induction of inflammation tolerance by stimulation with different strains of lactobacilli bacteria, in terms of suppressed pro-inflammatory cytokine secretion in response to challenge with S. aureus. In a system of S. aureus-induced inflammatory liver damage, we first examined to what extent inflammation tolerance is induced by lactobacilli bacteria. In order to delineate the mechanism more elaborately and address the molecular differences between potent and impotent strains, the most effective and ineffective lactobacilli strains were selected for exploration of inflammation signal transduction events. The expression of NOD2 after treatment with S. aureus and lactobacilli lysates was assessed, and it was found that all bacterial lysates were able to activate NOD2 expression, with S. aureus lysates showing the greatest effect. This outcome was in line with the literature on innate immunity regulated by lactobacilli bacteria (Foligne et al., 2007; Matuchansky, 2012; Shida et al., 2009). Regarding the marginal effect on NF-κB p65 nuclear translocation and the mild pro-inflammatory cytokine production, we verified that NOD2 expression activated by L. plantarum MYL26 did not lead to serious inflammation. There are a number of points worth noting. First, the potent strain L. plantarum MYL26 enhanced NOD2 expression, even though it did not give rise to severe inflammation. Second, NOD2 and PGLYRP2/3 activation was strain-dependent. Each probiotic strain contributed to varying degrees of gene activation. The potent strain L. plantarum MYL26 activated PGLYRP3 much more significantly than did the impotent strain L.
acidophilus MYL201, while NOD2 and PGLYRP2 expression was much higher in the group treated with the impotent strain L. acidophilus MYL201. Third, the patterns of NOD2 and PGLYRP2 expression induced by the impotent strain L. acidophilus MYL201 lysates were similar to those of the S. aureus lysate challenge group. These observations implied that NOD2 and PGLYRP2 are relevant to inflammation signal transduction. However, PGLYRP3 seems to play a key role in the development of inflammation tolerance. To further demonstrate the postulation that PGLYRP2 is relevant to the inflammation process and PGLYRP3 contributes to the development of inflammation tolerance, complementary studies are required to detail the relationship between NOD2, PGLYRP2 and PGLYRP3. We conducted siRNA knockdown targeting PGLYRP2, PGLYRP3 and NOD2, followed by assessment of pro-inflammatory cytokine production. NOD2 knockdown led to significantly decreased pro-inflammatory cytokine production, accompanied by reduced expression of PGLYRP2 and PGLYRP3. One explanation for this is that PGLYRP2 and PGLYRP3 are the downstream signal transducers of NOD2, which is thought to be of great importance in the increased secretion of pro-inflammatory cytokines induced by S. aureus as well as by lactobacilli bacteria. The result that NOD2 inactivation conferred resistance to S. aureus corresponded to numerous reports indicating that S. aureus mediates inflammation, in addition to exotoxins, mainly through cell wall PGN recognized by NOD2 (Juarez-Verdayes et al., 2013; Mitchell et al., 2007; Volz et al., 2010). PGLYRP2 knockdown did not influence the expression of NOD2, a result which provides further evidence that NOD2 serves as an upstream mediator of PGLYRP2. The decrease in pro-inflammatory cytokine production caused by PGLYRP2 knockdown, although significant, was not as potent as that caused by NOD2 knockdown. Regarding the phenomenon of increased pro-inflammatory cytokine production and enhanced NOD2 and PGLYRP2 expression, L. plantarum MYL26 was, at least in part, deprived of its anti-inflammatory efficacy when PGLYRP3 was knocked down. Moreover, the observation that the potent strain L. plantarum MYL26 was effective in the activation of PGLYRP3 while the impotent strain L. acidophilus MYL201 was not corresponds with the result that PGLYRP3 knockdown abolishes the anti-inflammatory efficacy of L. plantarum MYL26. The results of PGLYRP3 knockdown might be explained in relation to interrupted development of inflammation tolerance. In other words, induction of PGLYRP3 by L. plantarum MYL26 might give rise to an anti-inflammatory effect before S. aureus challenge. Thus, PGLYRP3 knockdown leads to weak development of inflammation tolerance. However, we also found that PGLYRP3 knockdown led to enhanced expression of NOD2 and PGLYRP2. Combined with the result of poor PGLYRP3 expression caused by NOD2 inactivation, this implies that PGLYRP3 acts as a negative regulator responsible for NOD2 and PGLYRP2 feedback control. It is reasonable to conclude that PGLYRP2 and PGLYRP3 act as pro- and anti-inflammatory proteins, respectively, because knockdown of PGLYRP2/3 is strongly associated with attenuated or increased pro-inflammatory cytokine production, respectively. This may account for why L. acidophilus MYL201 is impotent in the development of inflammation tolerance while L. plantarum MYL26 is potent: L. plantarum MYL26 induces PGLYRP3, whereas L. acidophilus MYL201 activates NOD2 and PGLYRP2.
Taken together, HepG2 cells up-regulate the expression of PGLYRP3, which is supposed to counteract PGLYRP2 and NOD2, as a self-defence mechanism against the S. aureus-induced inflammatory process. A rational interpretation for why PGLYRP3 knockdown led to a diminished anti-inflammatory effect, while PGLYRP2 knockdown resulted in a diminished inflammatory effect, may lie in the fact that PGLYRP3 plays an anti-inflammatory role, whereas PGLYRP2 acts as a pro-inflammatory mediator. Conclusions In conclusion, we discovered that the effects of probiotics in the attenuation of S. aureus-induced liver damage are exerted by means of PGLYRP3 activation. The phenomenon of better defence against S. aureus challenge in HepG2 cells was attributed to prior induction of PGLYRP3 expression, which led to feedback control of NOD2 and PGLYRP2. However, one of the difficulties researchers confront is that the ability to activate the expression of PGLYRP3 varies from strain to strain. Furthermore, induction of the inflammation process is multi-factorial, and thus there is no single solution to such a multifaceted problem. This research might give new insight into establishing a screening method, based on a novel molecular mechanism, for selecting potent probiotic strains with the purpose of conferring hepatoprotective benefit.
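The group comparisons reported above rest on the Statistical Analysis described in the Methods: triplicate measurements (n = 3) summarized as mean ± SD and compared with Student's t-test. A minimal Python sketch of that comparison follows; the cytokine concentrations are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of the comparison described in the Statistical Analysis section:
# mean ± SD over triplicates and a two-sample Student's t-test.
# The IL-6 concentrations below are hypothetical placeholders.
import numpy as np
from scipy import stats

il6_saureus_only = np.array([820.0, 790.0, 845.0])      # pg/mL, n = 3
il6_myl26_pretreated = np.array([310.0, 355.0, 298.0])  # pg/mL, n = 3

for label, values in [("S. aureus only", il6_saureus_only),
                      ("MYL26 pre-treated", il6_myl26_pretreated)]:
    print(f"{label}: {values.mean():.1f} +/- {values.std(ddof=1):.1f} pg/mL")

t_stat, p_value = stats.ttest_ind(il6_saureus_only, il6_myl26_pretreated)
print(f"Student's t-test: t = {t_stat:.2f}, p = {p_value:.4f}",
      "(*)" if p_value < 0.05 else "")
```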
3,978.8
2013-08-15T00:00:00.000
[ "Biology" ]
REMARK ON STABILITY OF FRACTIONAL ORDER PARTIAL DIFFERENTIAL EQUATION Abstract. In this paper, using a fractional order partial derivative with a non-singular kernel, we investigate the stability, and its generalization on semi-closed and semi-open intervals, of the solution of a fractional order partial differential equation with the help of an inequality. In this paper, we will consider the following fractional order partial differential equation. INTRODUCTION In recent years, fractional calculus has played a significant role in numerous fields, such as pure and applied mathematics, science and engineering. In [11], the existence of mild solutions of evolution equations with the Hilfer fractional derivative, which generalizes the well-known Riemann-Liouville fractional derivative, was studied by the noncompact measure method, and some sufficient conditions ensuring the existence of mild solutions were obtained. In [12], an initial value problem for a class of non-linear fractional differential equations involving the Hilfer fractional derivative was considered; the existence and uniqueness of global solutions in the space of weighted continuous functions were proved, and the stability of the solution of a weighted Cauchy-type problem was analyzed. In [13], the existence and uniqueness of a positive solution in the space of weighted continuous functions, together with the boundary behaviour of such a solution, were established. In [14], sufficient conditions were given for the existence of solutions of a class of initial value problems for impulsive fractional differential equations involving the Caputo fractional derivative. In [15], the existence and uniqueness of a solution of a class of initial boundary value problems for implicit fractional differential equations with a fractional derivative were obtained; the results are based upon the technique of measures of noncompactness and the fixed point theorems of Darbo and Mönch. In [16], existence and uniqueness results for implicit differential equations of Hilfer-type fractional order were proved via Schauder's fixed point theorem and the Banach contraction principle. In [17], Ulam stability and data dependence for fractional differential equations with the Caputo fractional derivative of order α were studied, and four types of Ulam stability results were presented. In [18], the Hyers-Ulam-Rassias stability and the Hyers-Ulam stability of a fractional Volterra integro-differential equation involving the ψ-Hilfer fractional derivative were established by means of the fixed point method. We organize this paper as follows: in the second section, we give some basic definitions and notation; in the third section, we investigate the stability, and its generalization on semi-closed and semi-open intervals, of the solution of a fractional order partial differential equation with the help of an inequality. TECHNICAL BACKGROUND In this section, we use some definitions and notation which are given in detail in [1], and we present the technical preparation needed for further discussion. In particular, take N = 3, . . .; a_1, . . . , a_N and θ_1, θ_2, . . . , θ_N are positive constants. Also let u, ψ ∈ C^n(Ĩ, ℝ) be two functions such that ψ is increasing and ψ′(x) ≠ 0 for all x ∈ Ĩ. We write E_α(·) for the one-parameter Mittag-Leffler function, E_α(z) = Σ_{k≥0} z^k / Γ(αk + 1); it appears in the Gronwall-type bound used below, which holds for any t ≥ a whenever the function involved is non-negative and non-decreasing. The left-sided ψ-fractional integral of order α > 0 of a function u with respect to ψ is I^{α;ψ}_{a+} u(t) = (1/Γ(α)) ∫_a^t ψ′(s) (ψ(t) − ψ(s))^{α−1} u(s) ds; the right-sided fractional integral is defined in an analogous form. Let f, ψ ∈ C^n(I, ℝ) be two functions such that ψ is increasing and ψ′(x) ≠ 0 for all x ∈ I.
The left-sided ψ-Hilfer fractional derivative ᴴD^{α,β;ψ}_{a+}(·) of a function f of order α (with n − 1 < α < n) and type β, 0 ≤ β ≤ 1, is defined by ᴴD^{α,β;ψ}_{a+} f(x) = I^{β(n−α);ψ}_{a+} ( (1/ψ′(x)) d/dx )^n I^{(1−β)(n−α);ψ}_{a+} f(x). The right-sided ψ-Hilfer fractional derivative is defined in an analogous form. Definition (Ulam-Hyers stability [1]). If, for each function y satisfying the corresponding inequality with bound θ ≥ 0, there is a solution y_0 of the fractional integro-differential equation and a constant C > 0, independent of y and y_0, such that |y(x) − y_0(x)| ≤ Cθ for all x ∈ [a, b], then we say that the integro-differential equation has Ulam-Hyers stability. Definition (semi-Ulam-Hyers-Rassias stability [1]). If, for each function y satisfying |⋯ − ∫_a^x K(x, τ, y(τ), y(δ(τ))) dτ| ≤ θ, x ∈ [a, b], where θ ≥ 0, there is a solution y_0 of the fractional integro-differential equation and a constant C > 0, independent of y and y_0, such that |y(x) − y_0(x)| ≤ C σ(x) for all x ∈ [a, b], for some non-negative function σ defined on [a, b], then we say that the fractional integro-differential equation has the so-called semi-Ulam-Hyers-Rassias stability. Theorem (Banach). Let (X, d) be a generalized complete metric space and T : X → X a strictly contractive operator with Lipschitz constant L < 1. If there exists a non-negative integer k such that d(T^{k+1}x, T^k x) < ∞ for some x ∈ X, then the following three propositions hold true: (i) the sequence (T^n x)_{n∈ℕ} converges to a fixed point x* of T; (ii) x* is the unique fixed point of T in X* = {y ∈ X : d(T^k x, y) < ∞}; (iii) if y ∈ X*, then d(y, x*) ≤ (1/(1 − L)) d(Ty, y). MAIN RESULT In this paper, our aim is to investigate the stability, and its generalization on semi-closed and semi-open intervals, of the solution of a fractional order partial differential equation with the help of an inequality. For convenience in the calculations, we consider the following set of assumptions, and u satisfies (0.1): there exist real numbers C^1_{f,φ}, C^2_{f,φ}, C^3_{f,φ} and C^n_{f,φ} > 0 such that, for any ε > 0 and for any solution v to the inequality (3.2), the corresponding estimates hold. In a similar manner, we have the inequalities. Remark 2: A function v is a solution to the inequality (3.1) if and only if there exists a function such that v is a solution of the following system of integral inequalities. Then, we have: a) for h ∈ C([0, a), B), g ∈ C([0, b), B) and k ∈ C([0, c), B), the equation (3.2) has a unique solution, which is a solution to the system.
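As a simple worked illustration of the Ulam-Hyers notion recalled above (a standard textbook example, not taken from this paper), consider the elementary equation y′(x) = 0 on [a, b]; it is Ulam-Hyers stable with constant C = b − a.

```latex
% Standard illustration of Ulam--Hyers stability (not from the paper):
% the equation y'(x) = 0 on [a,b] is Ulam--Hyers stable with C = b - a.
\[
  \text{Suppose } |y'(x)| \le \varepsilon \text{ for all } x \in [a,b],
  \text{ and put } y_0(x) \equiv y(a), \text{ a solution of } y_0'(x) = 0 .
\]
\[
  \text{Then } |y(x) - y_0(x)| = \Big| \int_a^x y'(s)\, ds \Big|
  \le \varepsilon\,(x - a) \le (b - a)\,\varepsilon
  \quad \text{for all } x \in [a,b],
\]
\[
  \text{so the equation is Ulam--Hyers stable with constant } C = b - a,
  \text{ independent of } y \text{ and } y_0 .
\]
```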
1,345.4
2020-03-03T00:00:00.000
[ "Mathematics" ]
The Preparation, Determination of a Flexible Complex Liposome Co-Loaded with Cabazitaxel and β-Elemene, and Animal Pharmacodynamics on Paclitaxel-Resistant Lung Adenocarcinoma Paclitaxel is highly effective at killing many malignant tumors; however, the development of drug resistance is common in clinical applications. The issue of overcoming paclitaxel resistance is a difficult challenge at present. In this study, we developed nano drugs to treat paclitaxel-resistant lung adenocarcinoma. We selected cabazitaxel and β-elemene, which have fewer issues with drug resistance, and successfully prepared cabazitaxel liposome, β-elemene liposome and cabazitaxel-β-elemene complex liposome with good flexibility. The encapsulation efficiencies of cabazitaxel and β-elemene in these liposomes were detected by precipitation microfiltration and microfiltration centrifugation methods, respectively. Their encapsulation efficiencies were all above 95%. The release rates were detected by a dialysis method. The release profiles of cabazitaxel and β-elemene in these liposomes conformed to the Weibull equation. The release of cabazitaxel and β-elemene in the complex liposome were almost synchronous. The pharmacodynamics study showed that cabazitaxel flexible liposome and β-elemene flexible liposome were relatively good at overcoming paclitaxel resistance on paclitaxel-resistant lung adenocarcinoma. As the flexible complex liposome, the dosage of cabazitaxel could be reduced to 25% that of the cabazitaxel injection while retaining a similar therapeutic effect. It showed that β-elemene can replace some of the cabazitaxel, allowing the dosage of cabazitaxel to be reduced, thereby reducing the drug toxicity. Introduction Chemotherapy is a common method of tumor treatment. Taxanes, such as paclitaxel and docetaxel, have become the first-line drugs in chemotherapy treatment for conditions such as lung, ovarian and breast cancer [1,2]. However, with the widespread use of paclitaxel, paclitaxel resistance is becoming increasingly prominent and this is one of the main reasons for treatment failure [3][4][5]. Finding a way to overcome the paclitaxel resistance of tumors is imperative. The mechanisms of paclitaxel The Effect of Paclitaxel, Cabazitaxel and β-Elemene on Paclitaxel-Resistant Lung Adenocarcinoma Cells The effect of paclitaxel, cabazitaxel and β-elemene on lung adenocarcinoma cells (A549) are shown in Table 1. The results showed that paclitaxel was highly resistant to the paclitaxel-resistant lung adenocarcinoma cells (A549/T), and the resistance index was 44.6. Compared with paclitaxel, cabazitaxel showed a significant decrease in the resistance index to the paclitaxel-resistant cells, with a small resistance. Paclitaxel resistance in lung adenocarcinoma cells is currently attributed to repeated drug stimulation and P-glycoprotein pump efflux. During the long-term development of drug resistance, paclitaxel-resistant lung adenocarcinoma cells had changed to adapt to harsh living conditions, such as the emergence of multidrug resistance. Cabazitaxel had a low affinity with Pglycoprotein, so it only demonstrated a small resistance. β-elemene displayed a slight resistance to paclitaxel-resistant lung adenocarcinoma cells. This may be due to the fact that β-elemene had strong permeability. β-elemene also could inhibit the expression of P-glycoprotein [13]. 
Combined Effect of Overcoming the Resistance of Cabazitaxel and β-Elemene Compositions The combined effect of overcoming the resistance of cabazitaxel and β-elemene compositions is shown in Table 2. The results showed that the inhibition effects of different ratio compositions of cabazitaxel and β-elemene ranged from 1 to 5.5 times compared with cabazitaxel alone on paclitaxel-resistant lung adenocarcinoma cells (IC50 of cabazitaxel alone/IC50 of cabazitaxel in compositions). The effects of different ratio compositions of cabazitaxel and β-elemene ranged from 6.4 to 35.3 times compared with that of paclitaxel. The effects of these compositions were significantly higher than that of paclitaxel. When the ratio of composition was greater than 1379.99 μM/174.66 nM (the ratio of β-elemene to cabazitaxel), the effects were increased. When the ratio was 1379.99 μM/43.66 nM the effect was significantly increased to 35.3 times that of paclitaxel, indicating that the composition had a significant effect on overcoming paclitaxel resistance. When the ratios of β-elemene to cabazitaxel were greater, the effect on overcoming paclitaxel resistance was greater.
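Both the resistance index quoted earlier and the fold-enhancement used above are simple IC50 ratios. The sketch below assumes the usual definition of the resistance index (IC50 in the resistant line divided by IC50 in the parental line) and of fold-enhancement (IC50 of the drug alone divided by IC50 of the same drug within a composition); the IC50 inputs are placeholders, not the study's measurements.

```python
# Minimal sketch of the two ratio measures used in this section.
# Definitions are the conventional ones (assumed); IC50 values are placeholders.
def resistance_index(ic50_resistant, ic50_parental):
    """IC50 in resistant cells / IC50 in parental cells."""
    return ic50_resistant / ic50_parental

def fold_enhancement(ic50_alone, ic50_in_composition):
    """IC50 of drug alone / IC50 of the same drug in a composition."""
    return ic50_alone / ic50_in_composition

# Placeholder values (nM); not the study's measurements.
ic50_paclitaxel_a549t = 446.0       # resistant line (A549/T)
ic50_paclitaxel_a549 = 10.0         # parental line (A549)
ic50_cabazitaxel_alone = 55.0
ic50_cabazitaxel_combined = 10.0    # with a fixed dose of beta-elemene

print("resistance index:", resistance_index(ic50_paclitaxel_a549t, ic50_paclitaxel_a549))
print("fold-enhancement:", fold_enhancement(ic50_cabazitaxel_alone, ic50_cabazitaxel_combined))
```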
Particle Size and Zeta Potential of the Liposomes According to the results of the combined-effect study and the clinical situation of cabazitaxel injection and elemene injection, the cabazitaxel liposome, β-elemene liposome and the complex liposome with good flexibility were prepared successfully. For passively targeting liposomes, the particle size, zeta potential and permeability were key factors. In these liposomes, d-α-tocopherol polyethylene glycol 1000 succinate (TPGS) was used to enhance the permeability of drugs and reduce the size of liposomes, and the amount of cholesterol was reduced to increase its flexibility further and enable them to cross the barriers of blood vessels and tumors more easily. According to the general composition of liposomes, phospholipids are usually 0.5%-3%, cholesterol is 0.1%-1%, TPGS is 0.1%-1%, and ethanol is 0.1%-20% (the dosage of different processes was also different). In the preliminary experiments, we used hydrogenated soybean phospholipids and egg yolk phospholipid. However, their prepared liposomes were not sufficiently transparent compared to that of soybean phospholipid; therefore, soybean phospholipid was chosen as the membrane material of the liposomes. The ratio of cholesterol to soybean phospholipid was reduced to 1:25 in order to enhance the permeability of the complex liposome. TPGS was a good emulsifier, which can enhance the drug permeability and inhibit P-glycoprotein. Therefore, TPGS was chosen as the emulsifying stabilizer. Trehalose was used as an isotonic agent in the prescription. The liposomes prepared by the above prescription had a relatively small particle size, uniform distribution and suitable zeta potential. They were almost transparent. The average particle size of these liposomes was caculated by the volume model of Nicomp software v.3.0.6 in particle sizing systems. The average particle size, polydispersity index (PI) value, zeta potential and the mean values of the complex liposome, β-elemene liposome and cabazitaxel liposome are shown in Table 3. According to the features of these liposomes, they may be small unilamellar vesicles. The components between the complex liposome and cabazitaxel liposome were slightly different. The cabazitaxel liposome was freeze-dried, and the size and zeta potential were detected after resolving. Therefore, some differences in their size and zeta potential were observed. The particle size and distribution figures of these liposomes are shown in Figures S1-S9 in the Supplementary Materials. The Detection of Encapsulation Efficiency of Cabazitaxel and β-Elemene in the Liposomes The encapsulation efficiency of cabazitaxel in liposomes was determined by the precipitation microfiltration method. The average filter interception recovery of cabazitaxel in the corresponding aqueous solution (n = 6) was 99.72% ± 0.57% and the relative standard deviation (RSD) was 0.58%. Its average total recovery was 100.02% ± 1.05% and the RSD was 1.05%. The average encapsulation efficiency of the cabazitaxel liposome (n = 6) was 95.63% ± 0.67% and the RSD was 0.71%. The average recovery of the cabazitaxel liposome was 102.51% ± 0.91% and the RSD was 0.90%. The average encapsulation efficiency of cabazitaxel in the complex liposome (n = 6) was 96.98% ± 0.85% and the RSD was 0.88%; the average recovery of cabazitaxel in the complex liposome was 98.19% ± 0.27% and the RSD was 0.28%. The encapsulation efficiency of β-elemene in liposomes was determined by the microfiltration centrifugation method. 
The average filter interception recovery of β-elemene in the corresponding aqueous solution (n = 6) was 98.08% ± 0.32% and the RSD was 0.33%. The average encapsulation efficiency of the β-elemene liposome (n = 6) was 96.32% ± 0.59% and the RSD was 0.62%. The average recovery of the β-elemene liposome was 99.60% ± 1.45% and the RSD was 1.46%. The average encapsulation efficiency of β-elemene in the complex liposome (n = 6) was 99.29% ± 0.42%, with an RSD of 0.43%. The average recovery of β-elemene in the complex liposome was 99.92% ± 0.37% and the RSD was 0.38%. These results show that the method meets the requirements, and the encapsulation efficiencies were all above 95%.
As cabazitaxel does not dissolve easily in water but dissolves easily in ethanol, the liposome preparation used approximately 1% ethanol (w/w) as the cosolvent. The free cabazitaxel in the preparation usually dissolved in the ethanol or existed in the form of drug particles, and precipitation would occur upon long-term standing. Drugs encapsulated in liposomes do not precipitate easily. In order to speed up the precipitation of free cabazitaxel, sodium chloride was added to the solution to destroy the physical stability of the free drug and accelerate its precipitation. The precipitate could then be removed by a microfiltration membrane. In the precipitation process, the key factors were the standing time and the dosage of sodium chloride, which acted as the precipitation promoter. Through optimization, the standing time was set to 4 h and the dosage of sodium chloride to 0.75 g for 5 mL of liposome solution.
As β-elemene is insoluble in water and is less dense than water, it requires more than 50% ethanol solution to dissolve completely. The liposome preparation used approximately 1% ethanol (w/w) as a cosolvent, which was insufficient to form a stable suspension of β-elemene in aqueous solution. Free β-elemene in the preparation was usually dissolved in trace amounts of ethanol to form tiny oil droplets, or floated on the liquid surface. Free β-elemene can be intercepted and removed by 0.45 µm microfiltration centrifugation, whereas the liposomes are usually small and easily pass through the 0.45 µm microfiltration membrane. Therefore, the encapsulation efficiency of β-elemene in the liposomes was determined using a 0.45 µm microfiltration centrifugation method. In the pre-experiment, we used the dextran gel column method; however, it was difficult to completely separate the free drugs from the liposomes. The centrifugation method was also tried, but the prepared liposomes hardly precipitated, even upon ultra-high-speed centrifugation. Therefore, the precipitation microfiltration method and the microfiltration centrifugation method were finally chosen.
The Release Rate Detection of Cabazitaxel and β-Elemene in the Liposomes
An important parameter of a liposome is its release rate. For the complex liposome it is desirable that the two drugs are released synchronously, so that they can act on the cancer cells at the same time. The release rate detection of cabazitaxel and β-elemene in these liposomes is shown in Figures S10-S13 in the Supplementary Materials. The release rate of cabazitaxel in the complex liposome (n = 3) was 16.39% ± 1.13% at 0.5 h, and 85.44% ± 1.18% at 10 h.
The release rate of β-elemene in the complex liposome (n = 3) was 5.77% ± 0.63% at 0.5 h, and 99.07% at 10 h.
The Animal Pharmacodynamics of the Liposomes
Due to the relatively high toxicity of cabazitaxel and the fact that its marketed preparation contains a large amount of polysorbate 80, in this study we encapsulated it into a liposome. The β-elemene liposome and the complex liposome were also prepared, and their therapeutic effects were compared. Considering that the dosage of phospholipids and other excipients in the combination of cabazitaxel liposome and β-elemene liposome was twice as much as that of the complex liposome, that both are fat-soluble drugs, and that the prescription process was basically the same, the complex liposome had relative advantages. Thus, this study mainly examined the pharmacodynamics of the cabazitaxel liposome, β-elemene liposome and the complex liposome. The relative tumor volume profiles of these liposomes are shown in Figure 2. The relative tumor proliferation rates of these liposomes are shown in Figure 3. The tumor tissues with paclitaxel-resistant lung adenocarcinoma are shown in Figure 4. The tumor inhibition rates of these liposomes are shown in Figure 5.
Figure 2. Relative tumor volume profiles of the studied liposomes. *Compared with the 5% glucose group, there was a statistically significant difference (p < 0.01). Compared with the 5% glucose group, the statistics parameter of the cabazitaxel injection was t = 7.682, p < 0.01; that of the β-elemene liposome group was t = 7.221, p < 0.01; that of the cabazitaxel liposome was t = 8.012, p < 0.01; and that of the cabazitaxel-β-elemene complex liposome was t = 8.612, p < 0.01. **Compared with the taxol injection group, there was a statistically significant difference (p < 0.01). Compared with the taxol injection group, the statistics parameter of the cabazitaxel injection was t = 7.373, p < 0.01; that of the β-elemene liposome group was t = 6.369, p < 0.01; that of the cabazitaxel liposome was t = 7.469, p < 0.01; and that of the cabazitaxel-β-elemene complex liposome was t = 8.116, p < 0.01. Compared with the cabazitaxel injection group, the statistics parameter of the β-elemene liposome group was t = −1.674, p > 0.05; that of the cabazitaxel liposome was t = 1.067, p > 0.05; and that of the cabazitaxel-β-elemene complex liposome was t = −1.051, p > 0.05 (no statistically significant difference). Compared with the β-elemene liposome group, the statistics parameter of the cabazitaxel-β-elemene complex liposome group was t = 0.971, p > 0.05 (no statistically significant difference).
Figure 3. The relative tumor proliferation rates of these liposomes. *Compared with the 5% glucose group, there was a statistically significant difference (p < 0.01). **Compared with the taxol injection group, there was a statistically significant difference (p < 0.01). Compared with the cabazitaxel injection group, there was no statistically significant difference for the cabazitaxel liposome, β-elemene liposome or complex liposome group. The statistical comparisons for the relative tumor proliferation rate led to the same conclusions as those for the relative tumor volume. Compared with the 5% glucose group, the statistics parameter of the cabazitaxel injection group was t = −11.870, p < 0.01; that of the β-elemene liposome group was t = −10.095, p < 0.01; that of the cabazitaxel liposome was t = −15.615, p < 0.01; and that of the cabazitaxel-β-elemene complex liposome was t = −11.824, p < 0.01. Compared with the taxol injection group, the statistics parameter of the cabazitaxel injection was t = −8.294, p < 0.01; that of the β-elemene liposome group was t = −5.648, p < 0.01; that of the cabazitaxel liposome was t = −10.461, p < 0.01; and that of the cabazitaxel-β-elemene complex liposome was t = −7.091, p < 0.01. Compared with the cabazitaxel injection group, the statistics parameter of the cabazitaxel-β-elemene complex liposome group was t = 1.842, p > 0.05 (no statistically significant difference). Compared with the β-elemene liposome group, the statistics parameter of the cabazitaxel-β-elemene complex liposome group was t = −1.669, p > 0.05 (no statistically significant difference).
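The tumor metrics discussed in these figures and below (relative tumor volume, relative tumor proliferation rate T/C, and tumor inhibition rate) follow the formulas given later in the Methods. A minimal sketch of those calculations is shown here; the caliper measurements and group values are hypothetical, not the study data.

```python
def tumor_volume(a_mm: float, b_mm: float) -> float:
    """Tumor volume = 0.5 * a * b^2 (mm^3), a = long diameter, b = short diameter."""
    return 0.5 * a_mm * b_mm ** 2

def relative_tumor_volume(v_t: float, v_0: float) -> float:
    """RTV = Vt / V0."""
    return v_t / v_0

def t_over_c(rtv_treatment: float, rtv_control: float) -> float:
    """Relative tumor proliferation rate T/C (%)."""
    return rtv_treatment / rtv_control * 100.0

def tumor_inhibition_rate(w_treatment: float, w_control: float) -> float:
    """Tumor inhibition rate (%) = (1 - treated tumor weight / control tumor weight) * 100."""
    return (1.0 - w_treatment / w_control) * 100.0

# Hypothetical example: one treated animal measured at day 0 and day 30.
v0 = tumor_volume(6.0, 5.5)      # roughly 90 mm^3 at the first dosing
v30 = tumor_volume(9.0, 8.0)
rtv_treated = relative_tumor_volume(v30, v0)
rtv_control = 9.0                # hypothetical mean RTV of the 5% glucose group
print(f"RTV (treated): {rtv_treated:.2f}")
print(f"T/C: {t_over_c(rtv_treated, rtv_control):.1f}%")
print(f"Inhibition rate: {tumor_inhibition_rate(0.9, 2.1):.1f}%")
```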
The tumor inhibition rates of groups B, C, D, E, F and G were 13.53% ± 9.81%, 24.33% ± 10.67%, 58.40% ± 5.81%, 47.62% ± 6.25%, 63.46% ± 3.27%, and 52.71% ± 7.18%, respectively. From these results, it can be seen that the blank liposome had only a slight antitumor effect in the nude mouse model of human paclitaxel-resistant lung adenocarcinoma (drug resistance index 44.6). The common paclitaxel injection (10 mg/kg) had a certain antitumor effect. The antitumor effects of cabazitaxel injection (2.5 mg/kg), β-elemene liposome (25 mg/kg), cabazitaxel liposome (2.5 mg/kg), and the complex liposome (0.625 mg/kg after the first 2.5 mg/kg) were similar. Compared with the 5% glucose group, these four groups (cabazitaxel injection, β-elemene liposome, cabazitaxel liposome, and the complex liposome) showed statistically significant differences. Compared with the taxol injection, these four groups also showed statistically significant differences. Compared with the cabazitaxel injection group, there was no statistically significant difference for the cabazitaxel liposome, β-elemene liposome or complex liposome group.
Cabazitaxel, which has a low affinity for P-glycoprotein, is a paclitaxel derivative developed to overcome paclitaxel resistance. From the above results, it can be seen that both the cabazitaxel injection and the cabazitaxel liposome had a relatively good effect on paclitaxel-resistant lung adenocarcinoma. Surprisingly, the β-elemene liposome also had a relatively good effect in nude mice with paclitaxel-resistant lung adenocarcinoma. The effect of the β-elemene liposome (25 mg/kg) was similar to that of the cabazitaxel injection (2.5 mg/kg), although β-elemene had a lower inhibitory effect than cabazitaxel in the earlier cell experiments. This suggests that β-elemene possesses mechanisms other than directly inhibiting tumor cells. The complex liposome, with 0.625 mg/kg cabazitaxel after the first 2.5 mg/kg dose administered every other day, had an effect similar to that of 2.5 mg/kg cabazitaxel injection every other day or 25 mg/kg β-elemene every day. This implies that, with the flexible complex liposome, the dosage of cabazitaxel could be reduced to 25% of that of the cabazitaxel injection while retaining a similar therapeutic effect. No mice died in the complex liposome group, in contrast to the cabazitaxel injection and β-elemene liposome groups, indicating that the toxicity of the complex liposome was lower when the cabazitaxel dosage was reduced. This shows that β-elemene can replace part of the cabazitaxel, reducing its dosage and thereby its toxicity, and suggests a practical way to reduce the dosage of cabazitaxel. We speculated that β-elemene, with its strong permeability and inhibition of P-glycoprotein, may help overcome the blood vessel barrier, the mesenchymal hyperosmotic barrier, and the cell membrane barrier of tumors. At the same time, β-elemene has a certain immunoregulatory effect, which may signal related immune molecules to attack tumors and change the tumor microenvironment, making it easier for cabazitaxel to enter tumor tissue and cells and thus enhancing its efficacy.
Yin et al. [37] developed a PEG-modified liposome encapsulating cabazitaxel (containing egg phospholipid, cholesterol, and PEG2000-DSPE; chloroform was used in the preparation process).
Their results showed that the cabazitaxel liposome enhanced the inhibitory effect on CT-26 (mouse colon cancer) and T41 (mouse breast cancer) tumors compared to the cabazitaxel solution. This result and our results indicate that cabazitaxel has good efficacy against various tumors, but it is more valuable for drug-resistant tumors. Wang et al. [44] found that a β-elemene liposome (containing 6% soybean phospholipid, 1% cholesterol and 1% polyvinylpyrrolidone-K30) had a good inhibitory effect on hepatocellular carcinoma (H22) in nude mice. Wang et al. [45] prepared β-elemene ordinary liposomes, long-circulating liposomes and thermosensitive long-circulating liposomes (containing 5% soybean phospholipid, 1.67% cholesterol, 0.33% PEG2000-DSPE); the results showed that these β-elemene liposomes had a good effect in nude mice with hepatocellular carcinoma (H22). Dong et al. [46] prepared β-elemene-curcumin complex liposomes as an atomization inhalation preparation (containing 6.667% phospholipid and 1.333% cholesterol); their results showed a good inhibitory effect on Lewis lung cancer cells in vitro. The above results show that β-elemene liposomes obtained by various prescriptions and preparation techniques have an effect on some tumors. However, the effect of β-elemene liposome, cabazitaxel liposome and cabazitaxel-β-elemene complex liposome on paclitaxel-resistant lung adenocarcinoma had not been reported previously. In this study, the β-elemene flexible liposome, cabazitaxel flexible liposome and the flexible complex liposome containing TPGS had a relatively good effect in nude mice with paclitaxel-resistant lung adenocarcinoma, which shows that they are worthy of further study.
Paclitaxel resistance is very common in the clinic, and how to overcome it is currently a difficult challenge. In this study, we developed three liposomes to treat paclitaxel-resistant lung adenocarcinoma. From the results, it can be seen that both the cabazitaxel liposome and the β-elemene liposome have relatively good antitumor effects when used alone, and that the dosage of cabazitaxel in the complex liposome can be reduced while retaining a similar therapeutic effect. These results suggest that the above preparations have relatively good clinical potential.
Combined Effect of Overcoming Paclitaxel Resistance of Cabazitaxel and β-Elemene Compositions
The cryopreserved A549 and A549/T cells were recovered and transferred to cell culture flasks containing the culture medium, and the cells were distributed uniformly by gentle shaking. The culture medium was RPMI 1640 with 10% fetal bovine serum. The cell culture flasks were kept in a CO₂ incubator. After cell passage, the cells were observed to have no abnormalities and were prepared for inoculation. A549 and A549/T cells were digested and counted; the cell density was about 6 × 10⁵/mL. The cells were added to medium containing serum, and 100 µL of the cell suspension was added into each well of a 96-well cell culture plate to obtain 3000 cells per well. The plate was then incubated for 24 h at 37 °C in a 5% CO₂ incubator. Drug solutions with different ratios of cabazitaxel/β-elemene (… µM for the A549/T cells) were prepared using dimethyl sulfoxide (DMSO) as the solvent. The drug solutions were diluted to the desired concentration with culture medium, and 100 µL of the diluted drug medium was added into each well. A negative control group was also established. The 96-well cell culture plate was incubated for 72 h at 37 °C in 5% CO₂.
Ninety-six-well cell culture plates were stained with MTT and their optical density (OD) values were detected according to the following steps: 20 µL of MTT solution (5 mg/mL) was added to each well and incubated for 4 h in the incubator; the supernatant was discarded; 150 µL of DMSO was added to each well and gently mixed for 10 min on the shaking table; and the OD value of each well was detected by the plate reader at λ = 490 nm. The inhibition rate was then calculated according to the following formula:
Inhibition rate (%) = (OD value of negative control group − OD value of experimental group)/OD value of negative control group × 100% (1)
The IC50 (half maximal inhibitory concentration) values of the drugs were calculated with SPSS 18.0 software (IBM SPSS Statistics, Chicago, IL, USA).
The Preparation of Cabazitaxel Liposome, β-Elemene Liposome and Their Complex Liposome
According to the results of the combined-effect study, and considering at the same time the clinical use of cabazitaxel injection and elemene injection, the formulation and process of the cabazitaxel-β-elemene complex liposome were as follows: 160 mg cabazitaxel was weighed and dissolved in 9.5 g ethanol; then 4 g β-elemene and the appropriate excipients, namely 0.8 g cholesterol, 20 g soybean phospholipid and 4 g TPGS, were added and dissolved by heating at 80 °C as the organic phase. A total of 80 g trehalose was dissolved in 682 g water and kept at 60 °C as the aqueous phase. The organic phase was added into the aqueous phase; the suspension was then sheared at 15,000 r/min for 1 h, and high-pressure homogenization was performed three times at 15,000 psi to obtain the complex liposome. The preparation method and prescription of the β-elemene liposome were the same except that no cabazitaxel was used. The formulation and process of the cabazitaxel liposome were as follows: 400 mg cabazitaxel, 0.48 g cholesterol, 12 g soybean phospholipid, and 4 g TPGS were dissolved in 100 mL ethanol, which was then removed by rotary evaporation under vacuum at 60 °C. A total of 160 g trehalose was dissolved in 623 g water, and this solution was used to disperse the above materials. The mixture was then sheared at 15,000 r/min for 1 h, and high-pressure homogenization was performed three times at 15,000 psi. Finally, the cabazitaxel liposome was freeze-dried. As the cabazitaxel liposome contains no β-elemene, the dosage of cabazitaxel could be increased and the dosage of phospholipid reduced. The complex liposome and β-elemene liposome were not freeze-dried because of the volatility of β-elemene. The particle size was detected with the Nicomp software v.3.0.6 of Particle Sizing Systems. The common market dosage of cabazitaxel is 25 mg/m² according to the human body surface area. The common dosage of elemene injection on the market is 400-600 mg per day (about 80-100 mL per day of an emulsion injection containing soybean phospholipid). Considering the clinical use, we prepared the complex liposome with 0.2 mg/mL cabazitaxel and 5 mg/mL β-elemene.
The Content Detection of Cabazitaxel and β-Elemene in the Liposomes
The cabazitaxel and β-elemene contents were detected by HPLC under the following conditions: the detection wavelengths were 230/210 nm and the flow rate was 0.9 mL/min. For the preparation of the phosphoric acid water, 1000 mL of water was adjusted to pH 4.0 by adding 1% phosphoric acid aqueous solution.
Mobile phase A was methanol/acetonitrile/phosphoric acid water (25:30:45), mobile phase B was methanol/acetonitrile/phosphoric acid water (25:55:20), and mobile phase C was methanol/acetonitrile/phosphoric acid water (5:90:5). The gradient elution conditions of the HPLC were as follows: at 0 min, the mobile phase A/B ratio was 80/20; at 45 min, the mobile phase A/B ratio was 20/80; at 50 min, C was 100%. The detection time was 60 min, the injection volume was 20 µL, and the column temperature was 30 °C. The detection of cabazitaxel in the liposomes was as follows: 1 mL of the liposome solution was placed in a 25 mL volumetric flask, methanol was added up to the mark, and the flask was sonicated for 30 min and shaken well after cooling. Cabazitaxel in the liposome was then detected under the liquid chromatography conditions described above at 230 nm. The detection of β-elemene in the liposomes was as follows: 1 mL of the liposome was placed in a 25 mL volumetric flask, methanol was added up to the mark, and the flask was sonicated for 30 min and shaken well after cooling. Then, 1 mL of this solution was taken out, diluted to 10 mL with methanol and shaken well. β-elemene in the liposome was then detected under the liquid chromatography conditions described above at 210 nm. It was found that the excipients of the liposomes did not interfere with the detection of cabazitaxel and β-elemene. The retention time of cabazitaxel was about 14 min; the theoretical plate number of cabazitaxel was more than 4000, and its chromatographic resolution was more than 1.5. The cabazitaxel reference solution was linear in the range of 0.30-100.00 µg/mL, with the regression equation y = 25.52798x, r = 0.99999. The retention time of β-elemene was about 31 min; the theoretical plate number of β-elemene was more than 4000, and its chromatographic resolution was more than 1.5. The β-elemene reference solution was linear in the range of 0.80-120.00 µg/mL, with the regression equation y = 18.03583x, r = 0.99999.
The Detection of Encapsulation Efficiency of Cabazitaxel and β-Elemene in the Liposomes
The detection method for the encapsulation efficiency of cabazitaxel in the liposome was as follows: 5 mL of liposome solution was placed into a centrifuge tube, 0.75 g sodium chloride was added and dissolved by vortex oscillation for 3 min, and the tube was then kept at 25 °C for 4 h. The solution was then transferred to a 10 mL syringe and filtered through a 0.45 µm hydrophilic syringe filter. The filtrate was added to 50 mL of methanol and dissolved by sonication; its content was determined by HPLC and denoted A. The 10 mL syringe was washed with 20 mL methanol, and the syringe filter was immersed in the methanol washing fluid, sonicated for 30 min, cooled and made up with methanol to 25 mL; its content was determined by HPLC and denoted B. The cabazitaxel content in the liposome was denoted Z. The encapsulation efficiency of cabazitaxel in the liposome was equal to A/(A + B) × 100%, and the total recovery was equal to (A + B)/Z × 100%. The corresponding aqueous solution of cabazitaxel was prepared according to the liposome prescription; 5 mL of this solution was taken and processed according to the above method. The content of the filtrate was denoted C, and the cabazitaxel content in the syringe and filter was denoted D.
The filter interception recovery of cabazitaxel in the corresponding aqueous solution was equal to D/(C + D) × 100%. The detection method for the encapsulation efficiency of β-elemene in the liposome was as follows: 0.2 mL of the liposome was put into a centrifuge tube with a 0.45 µm PVDF microfiltration membrane and centrifuged for 10 min at 20 °C and 12,000× g. The lower liquid in the centrifuge tube was taken out and added to 50 mL of methanol; its content was denoted A. The upper microfilter tube was taken out, added to 20 mL of methanol, sonicated for 30 min, and then made up to 25 mL with methanol; its content was denoted B. The β-elemene content in the liposome was denoted Z. The encapsulation efficiency of β-elemene was equal to A/(A + B) × 100%, and the total recovery was equal to (A + B)/Z × 100%. The corresponding aqueous solution of β-elemene was prepared according to the liposome prescription; 0.2 mL of this solution was taken and processed according to the above method. The content of the lower layer of the tube was denoted C and the content of the upper layer was denoted D. The filter interception recovery of β-elemene in the corresponding aqueous solution was equal to D/(C + D) × 100%.
The Release Rate Detection of Cabazitaxel and β-Elemene in the Liposomes
As β-elemene is insoluble in water, it needs more than 50% ethanol solution to dissolve completely. Both cabazitaxel and β-elemene are easily soluble in 75% ethanol; therefore, using 75% ethanol as the dialysis medium can maintain sink conditions. The detection method for the release rate was as follows: 10 mL of liposome solution was put into a dialysis bag (MW 300,000) and placed in the dialysate (100 mL of 75% ethanol), stirred at 300 r/min and 37 °C. A total of 2 mL of dialysate was taken at 0, 0.5, 1, 2, 4, 6, 8, and 10 h (2 mL of fresh dialysate was added after each sampling). The contents of cabazitaxel and β-elemene in the sampled dialysate were detected, and the cumulative release rates of cabazitaxel and β-elemene in the liposomes were calculated. The release of each liposome was tested three times.
The Animal Pharmacodynamics of the Liposomes
The animal experiments were approved by the Scientific Research Ethics Committee of Hangzhou Normal University (number: 2017-030), in accordance with the guiding opinions on the treatment of laboratory animals in China (2006-398). Paclitaxel-resistant A549/T cells (drug resistance index 44.6), at a cell concentration of about 1 × 10⁷/mL, were injected subcutaneously into the right axilla of each nude mouse. Drug administration began when the tumor volume was about 100 mm³. A total of 77 nude mice were randomly divided into seven groups of 11 mice each. The drugs were administered by slow intravenous injection via a caudal vein. The taxol injection group received 10 mg/kg, administered every other day. The cabazitaxel injection group received 2.5 mg/kg, administered every other day. The β-elemene liposome group received 25 mg/kg, administered every day. The cabazitaxel liposome group received 2.5 mg/kg, administered every other day. The cabazitaxel-β-elemene complex liposome group received 2.5 mg/kg for the first dose and 0.625 mg/kg for the later doses, administered every other day. The solvent group received a 5% glucose injection, administered every other day. The blank liposome group received blank liposome, administered every other day.
The tumor volume was measured once every two days and the relative tumor proliferation rate was calculated. After 30 days, the tumor tissues were removed and weighed to calculate the tumor inhibition rate. The tumor volume was equal to 0.5 × a × b², where a is the long diameter and b the short diameter of the tumor. The relative tumor volume (RTV) was equal to Vt/V0, where V0 is the tumor volume at day zero and Vt is the tumor volume measured at each time point. The relative tumor proliferation rate (T/C) was calculated as (the relative tumor volume of the treatment group/the relative tumor volume of the control group) × 100%. The tumor inhibition rate was calculated as (1 − the tumor weight of the treatment group/the tumor weight of the control group) × 100%. The statistical analysis of the results was performed using a t-test in SPSS 18.0.
Conclusions
The cabazitaxel liposome, β-elemene liposome and the complex liposome were prepared successfully. The encapsulation efficiencies of the drugs in the liposomes were detected using new precipitation microfiltration and microfiltration centrifugation methods, and were all above 95%. The release rates were detected using a dialysis method; the release profiles of cabazitaxel and β-elemene in these liposomes conformed to the Weibull equation, and the release of cabazitaxel and β-elemene from the complex liposome was almost synchronous. A pharmacodynamics study showed that the cabazitaxel liposome and β-elemene liposome had relatively good effects on overcoming paclitaxel resistance in paclitaxel-resistant lung adenocarcinoma. With the flexible complex liposome, the dosage of cabazitaxel could be reduced to 25% of that of the cabazitaxel injection while retaining a similar therapeutic effect. The results showed that β-elemene can replace part of the cabazitaxel, reducing its dosage and thereby the drug toxicity.
8,631
2019-04-30T00:00:00.000
[ "Chemistry", "Biology" ]
Modifying the false discovery rate procedure based on the information theory under arbitrary correlation structure and its performance in high-dimensional genomic data
Background Controlling the False Discovery Rate (FDR) in Multiple Comparison Procedures (MCPs) has widespread applications in many scientific fields. Previous studies show that the correlation structure between test statistics increases the variance and bias of the FDR. The objective of this study is to modify the effect of correlation in MCPs based on information theory. We proposed three modified procedures (M1, M2, and M3) under strong, moderate, and mild assumptions, based on the conditional Fisher information of consecutive sorted test statistics, for controlling the false discovery rate under an arbitrary correlation structure. The performance of the proposed procedures was compared with the Benjamini–Hochberg (BH) and Benjamini–Yekutieli (BY) procedures in a simulation study and in real high-dimensional data of colorectal cancer gene expressions. In the simulation study, we generated 1000 differential multivariate Gaussian features with different levels of the correlation structure and screened the significant features by the FDR-controlling procedures, with strong control of the family-wise error rate. Results When there was no correlation between the 1000 simulated features, the performance of the BH procedure was similar to the three proposed procedures. In low to medium correlation structures the BY procedure is too conservative and the BH procedure is too liberal, and the mean number of screened features was constant across the different levels of correlation between features. The mean number of features screened by the proposed procedures was between those of the BY and BH procedures and decreased as the correlations increased. Where the features were highly correlated, the number of features screened by the proposed procedures approached that of the Bonferroni (BF) procedure, as expected. In the real data analysis the BY, BH, M1, M2, and M3 procedures were applied to screen gene expressions of colorectal cancer. To fit a predictive model based on the screened features, the Efficient Bayesian Logistic Regression (EBLR) model was used. The EBLR models fitted on the features screened by the M1 and M2 procedures have minimum entropies and are more efficient than those based on the BY and BH procedures. Conclusion The modified proposed procedures based on information theory are much more flexible than the BH and BY procedures with respect to the amount of correlation between test statistics. The modified procedures avoid screening non-informative features, and so the number of screened features decreases as the level of correlation increases. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-024-05678-w.
Introduction
Controlling the family-wise error rate (FWER) at the nominal level α in large-scale multiple testing is an important issue in statistical inference. The simplest method for controlling the FWER is the Bonferroni (BF) correction, which can be defined as a modification of the rejection threshold for individual P-values: the BF procedure compares all the p-values of K simultaneous hypotheses with α/K. This procedure is very conservative; it provides strong control of the FWER but leads to an increase in the type II error rate. In most studies, researchers accept the hazard of some false discoveries in order to find any possible significant difference [1]. For this reason, False Discovery Rate (FDR) procedures were proposed and developed. The Benjamini-Hochberg (BH) procedure, which compares the P-values with a threshold that increases by a fixed step, is used in most recent scientific research [2].
The BH procedure is one of the most important methodological advances in testing multiple hypotheses and has been widely used for screening large genomic data sets to identify a favorable number of important features. This procedure has an essential assumption of independence between test statistics. However, when dealing with high-dimensional data such as microarray data, genes are usually correlated for biological or technical reasons [1,2]. Initial research on testing multiple hypotheses and controlling the FDR largely ignored the structure of dependence among the hypotheses, which is often considered a nuisance parameter and is heavily overwhelmed by the assumption of independence [3,4]. Correlation may lead to more liberal or conservative test methods; therefore, it should be considered in deciding which hypotheses should be reported as alternative hypotheses [5]. Also, correlation may greatly increase (inflate) the variance of the number of false discoveries and of estimators of the false discovery rate [6,7]. Ignoring the dependence between hypotheses may lead to loss of efficiency and bias in decision-making. On the other hand, errors in the non-null distribution can lead to false positive and false negative errors [3]. Consequently, correlation can significantly worsen the performance of many FDR methods [8], and the FDR can be highly variable if there is a strong correlation [5,9].
Controlling the FDR under dependency is a major problem that requires a lot of research. The key issue is how to incorporate the dependency structure correctly in the inference. Currently, researchers have focused on the development of multiple comparison methods for dependent hypotheses. Benjamini and Yekutieli were the first to address the effect of dependence between test statistics on the FDR, and showed that with a modified threshold the FDR is controlled at level α under arbitrary dependence between the P-values in the BH procedure; this method is very conservative in practice. They also introduced the concept of positive regression dependence on subsets (PRDS) and proved that the BH procedure controls the FDR for P-values with this property [10].
Qiu and Yakovlev demonstrated the strong effect of correlation on the FDR through simulation [7]. Storey et al., Wu, and Clarke and Hall showed that, asymptotically, the BH procedure remains valid under weak dependence, linear processes, and Markov dependence [11-13]. Owen and Finner et al. showed that the expected value and variance of the number of false positives may behave differently under dependence, but their results did not provide an FDR-controlling procedure, indicating that the BH procedure is vulnerable under severe dependence and variation [6,14].
Efron, and Schwartzman and Lin, showed that strong correlations reduce the accuracy of estimation and testing [2,5]. Specifically, positive or negative correlations affect the empirical null distribution of the Z-values, which has a significant effect on the subsequent analysis. The studies carried out by Sun and Tony Cai, Sun and Wei, and Benjamini and Heller showed that combining functional, spatial, and temporal correlations in the inference can improve the power and interpretability of existing methods; however, these methods do not apply to general dependency structures [4,15,16]. Also, Leek and Storey and Friguet et al. studied multiple testing under factor models [4,17,18]. For a general class of dependence models, Leek and Storey, Friguet et al., Fan et al., and Fan and Han showed that the overall dependence can be greatly weakened by removing the common factors, and the modified P-values can be used to build more powerful FDR methods [17-20]. The studies by Hall and Jin and Li and Zhong showed that the covariance structure can be exploited in multiple testing through transformation of the test statistics, and the results indicated the beneficial effects of dependence [21-23].
However, the above methods rely heavily on the accuracy of the estimated models and on asymptotic assumptions about the test statistics. Under small-sample conditions, poor estimates of the model parameters or violation of the independence assumptions may lead to less powerful or invalid FDR methods. Risser developed a Bayesian decision-theoretic approach for multiple dependent tests and a nonparametric hierarchical statistical model, which controls the FDR and is robust for identifying false discoveries. Du et al. created a class of distribution-free multiple testing procedures for controlling the FDR under general dependence by considering a sequence of symmetric ranking statistics [20,21].
In many cases, especially in high-dimensional data, consecutive test statistics have a moderate or strong correlation [24-26]. Although, for high-dimensional data and fused high- and low-order biological information, techniques such as machine learning or graph representation learning have been developed to handle the complex structures between features, feature selection by MCPs before using these techniques could improve their results [27-29].
Benjamini and Tille and Clark and Hall argued that, asymptotically, multiple testing under dependence behaves like the independent case [10,11]. But general dependency structures in multiple testing are still a very challenging and important problem. Efron noted that correlation should be considered in deciding whether null hypotheses are important, because the accuracy of FDR techniques is compromised in high-correlation situations [9]. However, even if procedures are valid under specific dependency structures, they will continue to suffer from reduced performance if they disregard the real dependency information.
Due to the widespread use of the BH procedure, considering the effect of correlation in practical analysis is important. Previous studies evaluated two types of correlation structure: correlation among features and correlated samples. The studies by Storey et al., Hall and Jin, Sun and Cai, and Li and Zhong focused on correlated samples [1,10,19]. In the present study, we consider the correlation between features that leads to dependent test statistics, so to modify the BH procedure we accommodate the correlation between features sorted by the absolute values of the corresponding test statistics. For correct inference, this study modified the FDR procedure according to an arbitrary correlation structure and proposed three modified procedures, based on the conditional Fisher information of consecutive sorted test statistics, for controlling the false discovery rate.
In the present study, we proposed three modifications of the FDR procedure which can counteract the correlation between sorted features, based on the conditional Fisher information between consecutive sorted test statistics, and applied them to high-dimensional hypothesis testing. Our proposed methods are suggested for simultaneous hypothesis testing in two major settings. (1) Simultaneous comparison of P features between two groups: for example, in genomic data of a specific disease we have thousands of features for two groups (case/control), so P hypothesis tests must be carried out to find the feature(s) with a significant difference between groups. (2) Pairwise comparison of a single feature among k independent groups: for example, in post hoc tests after ANOVA, k(k−1)/2 hypothesis tests must be carried out to find the group(s) with significant differences. The correlation structure between test statistics exists in both categories and obviously cannot be ignored. We applied our modified procedures to the first category of simultaneous high-dimensional hypothesis testing, but they can be applied just as easily to the second category.
Results of the simulation study
Table 1 compares the mean and the standard deviation (SD) of the number of screened features without adjustment of the p-values, and with adjustment by the Bonferroni (BF), Benjamini-Hochberg (BH), Benjamini-Yekutieli (BY), and three proposed modified procedures under mild (M3), moderate (M2), and strong (M1) assumptions, according to the level of the correlation coefficient (ρ = 0, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 0.95, 0.99) between consecutive sorted test statistics by their p-values.
When the correlation coefficient between all features is zero, the number of features with a p-value less than 0.05 has a mean and standard deviation of 353.35 and 11.80, respectively. Also, the mean number of features screened by the BH procedure is approximately equal to that of all three modified procedures M1, M2, and M3. However, the mean number of features screened by the BY procedure is considerably less than for all other procedures except BF. The mean number of features screened by the BY procedure reaches that of the M1, M2, and M3 procedures when the correlations are 0.95, 0.9, and 0.8, respectively. This means that under high levels of correlation the BY procedure performs approximately the same as the modified procedures. As shown in Table 1, with an increase in the correlation coefficient the mean number of features screened without adjustment of their P-values is approximately constant, but the standard deviation increases with ρ. This pattern also holds for the BF, BY, and BH adjustment procedures. For the M1, M2, and M3 procedures, however, both the means and the standard deviations change according to the correlation between features: the mean number of screened features decreases as the level of correlation increases in the three proposed methods, while the standard deviations increase.
As expected, the number of features screened by the strong modification M1 is less than the number screened by the moderate modification M2, and the number screened by M2 is less than the number screened by the mild modification M3. The standard deviations of the number of screened features increase with the level of correlation in all proposed procedures. As shown in Fig. 1, when ρ = 0 the distribution of the number of screened features is symmetric, but the kurtosis of the BF and BY procedures is higher than that of a normal density. The distributions of features screened by the BH, M1, M2, and M3 procedures are approximately identical and normal. With increasing ρ, the distribution of screened features for all procedures becomes right-skewed and the skewness increases with ρ. The box plots of the number of screened features, comparing the median, interquartile range (IQR), and outliers, are presented in Fig. 2. From these plots, we can observe that despite the increase in outliers, as ρ increases the IQR of the modified procedures is smaller than that of the BH and BY procedures. The distances of the first and third quartiles from the median are approximately equal and symmetric for the modified procedures, in comparison to the BY and BH procedures. More descriptive statistics of the screened features in the simulation study are presented in Additional file 1: S1. Also, the results of the other simulation study, in which the sample size in each group is 30, are presented in Additional file 2: S2.
Results of the real study
Based on the p-values of the t-tests on P = 22,277 gene expressions, at the level α = 0.05, 8465 gene expressions were significant, but most of these genes are not involved in cancer. Since a suitable type I error rate α was not specified in advance, we first determined the power (1 − β) for different values of the effect size and type I error rate. Due to the high dimension of the data, the main concern when performing these hypothesis tests is to keep the trade-off between controlling the type I error (i.e., keeping the family-wise type I error rate at its nominal level α, as in the BF procedure) and the power of the study to screen significant features, by using the FDR procedures to screen the gene expressions most relevant to colorectal cancer. We compared the performance of the BH, BY and three proposed modified procedures M1, M2, and M3 in Table 2.
Firstly, we show the distribution of the 22,277 t-values and the bivariate correlations between features sorted by their p-values in Fig. 3. As shown in these histograms, the distributions of the t-values and of the correlations are symmetric around zero. For more exploration, we also examined the distribution of the first 200 t-values and the bivariate correlations between the sorted features. As expected, the first 200 t-values have a bimodal distribution on the two tails of the histogram of t-values for the 22,277 features. The histogram of correlations is even more striking: the correlations between the first 200 features are high and also bimodally distributed on the tails of the histogram of bivariate correlations between the 22,277 consecutive sorted gene expressions. So, the independence assumption of the BH procedure is violated and it is necessary to consider the correlation structure in the FDR procedures.
Table 3 shows the number of features screened by the six adjustment procedures at two levels of α; the entropy and AUC of the corresponding EBLR models are also reported in this table. As shown in Table 3, the numbers of features screened by the BF and BY procedures are equal, and the numbers screened by the M1 and M2 procedures are equal. The numbers of features screened at α = 5 × 10⁻¹² by all six procedures are small and the entropies are approximately equal. So, we prefer to use α = 5 × 10⁻¹⁰, which gives more power, and compare the performance of the FDR procedures at this level of the type I error rate. At this level of α, the number of screened features increases considerably for all adjustment procedures, and there is a considerable difference between the numbers of features screened by the different procedures. By fitting the EBLR model on the features screened by the six adjustment procedures, the entropies and AUCs were calculated. As seen in this table, all the AUCs are 1 (perfect fit) except for the BF procedure. The entropies are close together, but the entropy of the EBLR model on the 94 features screened by the BH procedure is 0.82 and the entropy of the EBLR model on the 61 features screened by the M1 procedure is 1.19; this means that, while saving 94 − 61 = 33 degrees of freedom, the increase in entropy is only 1.19 − 0.82 = 0.37, so the M1 procedure is more efficient than the BH procedure. Also, the difference between the entropies of the EBLR models fitted on the features screened by the M2 and M1 procedures is negligible compared with the difference in degrees of freedom.
The box plots in Fig. 4 show that the predicted probabilities of the EBLR model completely separate the cancerous and healthy tissues for the M1, M2, M3, and BY procedures, but for the BF and BH procedures there is no complete separation. Although the box plot of the BY procedure also shows a perfect fit, the entropies of the EBLR models fitted on the 61 features from the M1 procedure, the 71 features from the M2 procedure, and the 81 features from the M3 procedure are less than the entropy of the EBLR model fitted on the 59 features screened by the BY procedure. So, the M1, M2 and M3 procedures are more efficient than the BY procedure. Finally, the M1 procedure with 61 screened features is the most efficient procedure for feature screening in the colorectal cancer data, having lower entropy with little loss in degrees of freedom.
Discussion
The BH procedure for feature screening based on controlling the false discovery rate has a substantial assumption of independent test statistics. In large-scale multiple testing, the assumption of independence between test statistics is unrealistic. Many studies have reported that the dependency structure between test statistics causes over-dispersion in the distribution of the FDR [4-8]. In the present study, we observe that the over-dispersion and right skewness in the distribution of the number of features screened by the BF, BH, BY and proposed procedures increase with the level of correlation. However, as shown in Fig. 1, the skewness in the density of the proposed procedures is less than that of the BH procedure, and the interquartile ranges of the box plots in Fig. 2 are narrower than those of the BH and BY procedures.
When there was no correlation between the 1000 simulated features, the performance of the BH procedure was similar to the three proposed procedures, but the BY procedure was very conservative, as reported [4]. In low to medium correlation structures the BY procedure is too conservative and the BH procedure is too liberal. The mean numbers of features screened by the BH and BY procedures were constant across the different levels of correlation between features. The mean numbers of features screened by our proposed procedures were between those of the BY and BH procedures and decreased as the level of correlation increased. Where the correlations between features were high (ρ > 0.8), the number of features screened by the proposed procedures approached that of the BF procedure, as expected. We reduced the rate at which the number of false discoveries grows by modifying the BH procedure according to the amount of extra information carried by each new feature, resulting in a more precise procedure for screening the important features in the presence of a correlation structure between the features. Then, we compared the performance of the three proposed procedures with the BF, BH and BY procedures for screening a high-dimensional genomic data set, with 22,277 gene expression comparisons between the healthy and cancerous tissue groups. In this regard, allowing two different levels for the nominal type I error rate α, the significant genes were screened by the six procedures. The Efficient Bayesian Logistic Regression (EBLR) model was used to fit a predictive model based on the screened features. The EBLR models based on the features screened by the M1 and M2 procedures have the minimum entropies and were more efficient than those based on the BY and BH procedures. In a study on this data set, twenty machine learning approaches were used to fit predictive models.
Leek and Storey developed an approach to address strong, arbitrary dependence in multiple testing at the level of the original data in a large-scale (high-dimensional) study, before calculating the test statistics or P-values. To address the dependency problem of multiple testing based on kernel dependency estimation, they presented a small set of vectors that entirely defines the dependency structure in any high-dimensional data set. They showed that the hypothesis tests can be made independent by conditioning on a dependence kernel. This generalizes the results on error rate control under independence to the general dependence setting. It can also estimate dependence at the data level, which is more useful than estimating dependence at the level of the P-values or test statistics [23]. Compared with the proposed procedures, this method is blind and based on random correlation structures, whereas our modifications are based on the ordered information of the whole data set.
Although some efficient methods for low- to highly-correlated features have been proposed and used, our proposed procedures are the first to modify the thresholds of the FDR procedure based on information theory. Accordingly, in the results of the simulation study and the real data study, the number of screened features is optimized.
Conclusion
The modified proposed procedures based on information theory are much more flexible than the BH and BY procedures with respect to the amount of correlation between test statistics. Our modified procedures avoid screening non-informative features, and so the number of screened features decreases as the level of correlation increases. The three proposed modified procedures for feature screening are readily applicable to arbitrary positive or negative, and low or high, correlation structures between sorted test statistics. These modifications are based on information theory and lead to a small set of significant features with sufficient information, according to the correlation between the sorted features, so that the remaining features do not carry extra information.
Methods
First, we describe the Benjamini-Hochberg (BH) procedure and the Benjamini-Yekutieli (BY) procedure, then introduce our proposed modified procedures.
Benjamini-Hochberg procedure (BH procedure)
When the test statistics under the null hypothesis are independent, the BH procedure controls the FDR at the level α. The BH procedure is as follows:
1. Sort the observed p-values in ascending order, p(1) ≤ ... ≤ p(P).
2. Calculate k = max{1 ≤ i ≤ P : p(i) ≤ (l_i/P) α}, where l_i = i for i = 1, 2, ..., P.
3. If such a k exists, reject all the null hypotheses corresponding to p(1), ..., p(k).
Benjamini-Yekutieli procedure (BY procedure)
Benjamini and Yekutieli proposed a procedure for controlling the false discovery rate under arbitrary dependency (test statistics with positive or negative correlations). They modified the step-up thresholds using the constant C(P) = Σ_{i=1}^{P} 1/i and find k = max{1 ≤ i ≤ P : p(i) ≤ (i/(P·C(P))) α}. In situations where the test statistics are independent or positively correlated, they suggested C(P) = 1, as in the ordinary BH procedure.
Proposed modified procedures
Consider the P simultaneous hypotheses
H0i: δ_i = 0 versus H1i: δ_i > 0, i = 1, ..., P, (1)
where δ_i = |µ_1i − µ_2i| is the absolute mean difference between the two groups for the ith feature; µ_1i is the mean of the ith feature in the first (case) group, and µ_2i is the mean of the ith feature in the second (control) group. We assume that all features are independent and follow a multivariate Gaussian distribution with mean δ = (δ_1, δ_2, ..., δ_P) and diagonal covariance matrix Σ.
We scale each δ_i by dividing by its standard error,
τ_i = δ_i / √(σ²_1i/n_1 + σ²_2i/n_2),
where σ²_1i is the variance of the ith feature in the first (case) group, σ²_2i is the variance of the ith feature in the second (control) group, and n_1 and n_2 are the sample sizes of the first and second groups, respectively. So we rewrite the hypotheses (1) as
H0i: τ_i = 0 versus H1i: τ_i > 0, i = 1, ..., P. (2)
The t-test statistic for (2) is
t_i = (X̄_1i − X̄_2i) / (S_pi √(1/n_1 + 1/n_2)),
where X̄_1i is the sample mean of the ith feature in the first (case) group, X̄_2i is the sample mean of the ith feature in the second (control) group, S²_1i is the sample variance of the ith feature in the first (case) group, S²_2i is the sample variance of the ith feature in the second (control) group, S²_pi is the pooled variance of the ith feature over both groups, and n_1 and n_2 are the sample sizes of the first and second groups, respectively. If n_1 and n_2 are large enough, (n_1 + n_2 − 2) ≥ 30, the vector of t_i approximately follows a Gaussian distribution with mean τ = (τ_1, τ_2, ..., τ_P) and covariance matrix I, so we use Z_i in place of t_i.
According to information theory, when the differences X_1i − X_2i are independent multivariate Gaussian random variables, the Fisher information of δ_i conditional on δ_{i−1} is 1/(σ²_1i/n_1 + σ²_2i/n_2). Likewise, the Z_i are independent multivariate standard Gaussian random variables, so the Fisher information of τ_i conditional on τ_{i−1} equals 1 for i = 2, ..., P, and the Fisher information of the P independent Gaussian features is I(τ_1, τ_2, ..., τ_P) = P. Hence, in the BH procedure, under the independence assumption, the step-up conditional thresholds increase by 1/P.
When the features are correlated, however, with Corr(X_(i), X_(i−1)) = ρ_i, the Fisher information of δ_i conditional on δ_{i−1} changes accordingly, and the Fisher information of τ_i conditional on τ_{i−1} becomes 1 − ρ_i². So, under the mild condition we propose that the conditional thresholds increase by (3). As 1 − ρ_i² ≤ 1, the information of τ_i conditional on τ_{i−1} decreases when the two variables are correlated. This is intuitive: when two variables are correlated, part of the information of the second variable is already contained in the first. As Corr(Z_(i), Z_(i−1)) = ρ_i, we can define two independent consecutive sorted standardized Gaussian test statistics.
In genomic high-dimensional data sets, the features are measured on a single source (patient), so we can make the strong assumption that all effects (absolute mean differences) are identically Gaussian distributed while the correlations between features differ. Under this assumption the conditional Fisher information takes a different form, and under the strong assumption we propose that the conditional thresholds increase by (4). Also, since (1 + |ρ_i|) ≥ 1, we propose a moderate modification between the strong and mild modifications, and under the moderate condition we propose that the conditional thresholds increase by (5).
The step-down procedure works after sorting the absolute values of Z_i in descending order. Suppose that Z_(i) is the ith sorted test statistic and Corr(Z_(i−1), Z_(i)) = ρ_i for i = 2, 3, ..., P. When ρ_i ≠ 0, the FDR procedure should be modified based on this correlation coefficient. The Pearson correlation coefficient r_i, as an estimator of ρ_i, between consecutive features sorted according to their p-values, was used for the modifications of the FDR procedure.
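The step-up rules described in this and the previous subsection can be written compactly. The sketch below implements the BH and BY thresholds exactly as stated, together with a generic correlation-adjusted step-up rule whose per-step increments shrink with the estimated correlation r_i between consecutive sorted features. Only the mild increment 1 − r_i² is reproduced from the text above; the strong and moderate increments of Eqs. (4) and (5) are not reproduced in this excerpt and would be supplied in the same way, so this is a sketch under that assumption rather than a complete implementation of all three modifications.

```python
import numpy as np

def step_up_discoveries(p_sorted, thresholds):
    """Generic step-up rule: reject H(1), ..., H(k) where
    k = max{i : p_(i) <= t_i}; returns k (0 if there is no such i)."""
    hits = np.nonzero(p_sorted <= thresholds)[0]
    return 0 if hits.size == 0 else int(hits[-1] + 1)

def bh_thresholds(P, alpha):
    """BH thresholds: t_i = (i / P) * alpha."""
    i = np.arange(1, P + 1)
    return i / P * alpha

def by_thresholds(P, alpha):
    """BY thresholds: t_i = i / (P * C(P)) * alpha, with C(P) = sum_{j=1}^{P} 1/j."""
    i = np.arange(1, P + 1)
    return i / (P * np.sum(1.0 / i)) * alpha

def mild_increment(r_i):
    """Per-step increment under the mild condition: the conditional Fisher
    information 1 - r_i^2 of tau_i given tau_(i-1) (Eq. (3) of the text)."""
    return 1.0 - r_i ** 2

def modified_thresholds(r, alpha, increment=mild_increment):
    """Correlation-adjusted thresholds t_i = (l_i / P) * alpha with l_1 = 1 and
    l_i = l_(i-1) + increment(r_i) for i >= 2, where r[i] is the estimated
    correlation between the (i-1)th and ith sorted features (r[0] is unused).
    The strong and moderate increments of Eqs. (4)-(5) can be passed the same way."""
    P = len(r)
    l = np.ones(P)
    for i in range(1, P):
        l[i] = l[i - 1] + increment(r[i])
    return l / P * alpha
```

With r_i = 0 for every i these adjusted thresholds reduce to the BH rule, and with |r_i| = 1 they stay fixed at α/P, matching the limiting behaviour described in the next subsection.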
If all the sorted features are perfectly linearly correlated, then all the sorted test statistics carry the same information within the class of linear estimators of τ_i, and so the thresholds of our proposed procedures do not increase across consecutive tests. In that case the performance of the modified FDR procedures is close to that of the Bonferroni (BF) procedure. If the test statistics are independent, the pairwise correlation coefficients between all features are zero; in that case, when all sorted test statistics are independent, the performance of the three proposed procedures is close to that of the BH procedure.

In Table 4 we compare, by the rank of the sorted p-values, the adjusted thresholds and adjusted p-values of the BF, BH, and BY procedures and of the three proposed procedures: strong (M1), moderate (M2), and mild (M3). Except for the BY procedure, the first p-value is compared with (1/P)α in all procedures. The thresholds of the BF procedure are fixed and do not increase with the rank of the sorted p-values. Both the BH and BY thresholds increase at a constant rate with the rank of the sorted p-values, as (k/P)α and (k/(P·C(P)))α, respectively. The thresholds of M1, M2, and M3 increase with the rank of the sorted p-values, but in proportion to the level of correlation between the sorted test statistics. The rate of increase of the modified procedures is lower than that of the BH procedure, so the number of features screened by the modified procedures is expected to be smaller than for the BH procedure. Since C(P) > 1 for P > 1, the first threshold of the BY procedure is smaller than that of the BF procedure, so the BY procedure can be more conservative than the BF procedure because of its first threshold value.

Illustration example
To demonstrate how we estimate the thresholds and adjusted p-values, we construct an artificial example. Suppose that we performed eight individual hypothesis tests to assess the significance of the differences between two groups for eight features, and sorted their p-values in ascending order. We also computed the Pearson correlation coefficient between each pair of consecutive features sorted by their p-values. The purpose of this example is the simultaneous comparison of the eight features between the two groups, so an adjustment procedure must be used to control the FDR. We compare the performance of the six adjustment procedures at the significance level α = 0.1 using two approaches: first, we calculate the adjusted p-values and compare them with α; second, we calculate the adjusted thresholds and compare the sorted p-values with them (Table 5). As shown in Table 5, both approaches lead to the same result.

Simulation study
We set the dimension to P = 1000 features in two independent groups of equal size n_1 = n_2 = 100 and generated the observations for these features sequentially, according to the following scheme, with 1000 replications. At each replication we conducted P independent two-sample t-tests, sorted their p-values, and then calculated the adjusted p-values according to the BH procedure, our proposed procedures, and the BF procedure. We then set α = 0.05 and counted the number of rejected null hypotheses (screened features) without adjustment (p-value < α) and with adjustment (adjusted p-value < α) for each procedure. The mean and standard deviation of the number of discoveries over all (r = 1000) replications were calculated separately for each value of ρ.
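A simulation of this kind can be prototyped along the following lines, reusing the helper functions from the previous sketch. This is a hedged sketch rather than the authors' code: the AR(1)-style way of inducing correlation ρ between neighbouring features, the effect size, the number of truly differential features, and the use of pooled-variance t-tests are our own assumptions, since the exact generating scheme is not reproduced in the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
P, n1, n2, alpha, rho = 1000, 100, 100, 0.05, 0.5

def simulate_once():
    # Correlated Gaussian features generated sequentially (AR(1)-style); this
    # generating scheme is an assumption used only to mimic "correlation rho
    # between consecutive features".
    def correlated_matrix(n):
        x = rng.standard_normal((n, P))
        for j in range(1, P):
            x[:, j] = rho * x[:, j - 1] + np.sqrt(1 - rho ** 2) * x[:, j]
        return x

    group1, group2 = correlated_matrix(n1), correlated_matrix(n2)
    group1[:, :50] += 0.5          # hypothetical: only the first 50 features differ

    # Per-feature pooled two-sample t-tests and ascending sort of the p-values.
    t, p = stats.ttest_ind(group1, group2, axis=0)
    order = np.argsort(p)
    p_sorted = p[order]

    # Pearson correlation between consecutive features sorted by their p-values.
    r = np.array([0.0] + [
        np.corrcoef(np.r_[group1[:, order[i - 1]], group2[:, order[i - 1]]],
                    np.r_[group1[:, order[i]], group2[:, order[i]]])[0, 1]
        for i in range(1, P)])

    return {"BF": int(np.sum(p_sorted <= alpha / P)),
            "BH": bh_rejections(p_sorted, alpha),
            "mild": modified_rejections(p_sorted, r, alpha)}

# The paper uses 1000 replications; 100 are used here to keep the sketch light.
counts = [simulate_once() for _ in range(100)]
```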
Real data application: gene expression data from colon cancer patient tissues
In this section, we evaluate the performance of the proposed procedures in the analysis of a real data set. From the GSE44861 data set of colorectal cancer, we used 111 microarray samples with 22,277 gene expression levels and a binary status feature, comprising 56 samples of cancer tissue (Y = 1) and 55 samples of healthy tissue (Y = 0). The data were generated on the Affymetrix GeneChip platform and have been preprocessed, and the gene expression levels are presented as fragments per kilobase million (FPKM). The normalization was done using the "edgeR" package in R. The data set is freely available for researchers to investigate gene expression patterns in colon tumors and to identify potential biomarkers of colorectal cancer. These data were registered in the GEO database in 2013 and updated in 2017. If the difference in expression of a specific gene between cancerous and non-cancerous cells is significant, it can be concluded that the gene is associated with colorectal cancer. We used a t-test to find genes associated with colon cancer and to select the significant gene expressions. The hypotheses of this test are H_{0i}: µ_{1i} = µ_{2i} versus H_{1i}: µ_{1i} ≠ µ_{2i}, where µ_{1i} is the mean of the ith gene expression in group 1 (cancerous tissue), µ_{2i} is the mean of the ith gene expression in group 2 (healthy tissue), and P = 22,277. In this way, the p-values of the t-tests for all features are determined. We then sort the p-values in ascending order and estimate the bivariate correlation between two consecutive sorted test statistics by calculating the bivariate correlation between the corresponding sorted features. The adjusted p-values based on the BF, BH, and BY procedures and the three proposed procedures M1, M2, and M3 were then calculated and compared with α.

To assess the efficiency of the features screened by the different procedures, a multiple logistic regression model was used. Due to quasi-complete separation and the small sample size, the ordinary maximum likelihood approach did not converge. Therefore, the Efficient Bayesian Logistic Regression (EBLR) model, which was developed around highly efficient Ultimate Pólya-Gamma Markov chain Monte Carlo (MCMC) algorithms, was used. The "UPG" package under R 4.3.1 was used to fit the EBLR model on the screened features. To compare the results of the EBLR model on the features screened by the different procedures (BF, BY, BH, M1, M2, and M3), we use three approaches:

Figure 3 shows the histogram of t-values and the bivariate correlations between sorted features by their p-values. As expected, the first 200 t-values have a bimodal distribution concentrated in the two tails of the histogram of t-values for the 22,277 features. The histogram of correlations is even more striking: the correlations between the first 200 features are high and also show a bimodal distribution in the tails of the histogram of bivariate correlations between the 22,277 consecutive sorted gene expressions. The independence assumption of the BH procedure is therefore violated, and it is necessary to account for the correlation structure in the FDR procedures.

Fig. 1 Density plot of the number of discoveries by MCP procedures at the different levels of correlation in the simulation study
Fig. 2 Box plots of the number of discoveries by MCP procedures at different levels of correlation in the simulation study
Fig. 3 Histogram of t-values and bivariate correlations between sorted features of the colorectal gene expression data
Fig. 4 Box plots of the predicted probability of Y = 1 versus the observed value of Y, from the EBLR model fitted on the genes screened by the different MCP procedures for the colorectal cancer data
Table 1 The mean and the standard deviation (SD) of the number of features screened by the BF, BH, BY, M1, M2, and M3 procedures in the simulation study
Table 2 Cross-tabulation of the p-values and Pearson correlation coefficients between sorted features in the colorectal cancer study
Table 3 The entropy and the area under the ROC curve (AUC) of the EBLR models fitted on the genes screened by the BF, BH, BY, M1, M2, and M3 procedures in the colorectal cancer study
Table 5 Adjusted thresholds and adjusted p-values by the BF, BH, BY, M1, M2, and M3 procedures in the illustration example. *This adjusted p-value is less than α = 0.1. **p_(i) is less than the adjusted threshold
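As a rough illustration of the downstream evaluation step, the sketch below fits a penalized logistic regression on a set of screened genes and reports the in-sample AUC and cross-entropy of its predicted probabilities. scikit-learn's LogisticRegression is used purely as a non-Bayesian stand-in for the Pólya-Gamma EBLR model fitted with the R "UPG" package in the paper, and log-loss is used as a stand-in for the entropy measure of Table 3; the variable names and the in-sample evaluation are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, log_loss

def evaluate_screened(expr, y, screened_idx):
    """expr: (samples x genes) expression matrix, y: 0/1 tissue status,
    screened_idx: indices of the genes selected by one of the FDR procedures."""
    X = np.asarray(expr)[:, screened_idx]
    # Ridge-penalized logistic regression keeps the fit stable under
    # quasi-complete separation (the reason EBLR was used in the paper).
    model = LogisticRegression(penalty="l2", C=1.0, max_iter=5000).fit(X, y)
    prob = model.predict_proba(X)[:, 1]
    return {"auc": roc_auc_score(y, prob), "cross_entropy": log_loss(y, prob)}
```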
7,936.6
2024-02-05T00:00:00.000
[ "Biology", "Mathematics" ]
EISCAT Svalbard radar observations of SPEAR-induced E- and F-region spectral enhancements in the polar cap ionosphere. The Space Plasma Exploration by Active Radar (SPEAR) facility has successfully operated in the high-power heater and low-power radar modes and has returned its first results. The high-power results include observations of SPEAR-induced ion and plasma line spectral enhancements recorded by the EISCAT Svalbard UHF incoherent scatter radar system (ESR), which is collocated with SPEAR. These SPEAR-enhanced spectra possess features that are consistent with excitation of both the purely growing mode and the parametric decay instability. In this paper, we present observations of upper and lower E-region SPEAR-induced ion and plasma line enhancements, together with F-region spectral enhancements, which indicate excitation of both instabilities and which are consistent with previous theoretical treatments of instability excitation in sporadic E-layers. In agreement with previous observations, spectra from the lower E-region have the single-peaked form characteristic of collisional plasma. Our observations of the SPEAR-enhanced E-region spectra suggest the presence of variable drifting regions of patchy overdense plasma, which is a finding also consistent with previous results.

Introduction
Among the most important phenomena associated with overdense RF heating are the stimulation of non-propagating plasma density irregularities at the upper-hybrid height and the excitation of Langmuir and ion-acoustic waves at the O-mode reflection height (e.g. Robinson, 1989; Rietveld et al., 1993; Kohl et al., 1993; Mishin et al., 2004). These ion-acoustic and Langmuir waves can be detected in the interaction region by radars pointing in a wide range of directions, including those close to the geomagnetic field direction (e.g. Stubbe et al., 1992; Kohl et al., 1993; Djuth et al., 1994; Isham et al., 1999; Honary et al., 1999; Rietveld et al., 2000; Dhillon and Robinson, 2005). These wave modes give rise to enhancements in both the E- and F-region ion and plasma line incoherent scatter spectra. These spectral enhancements are thought to be caused by excitation of instabilities (Perkins and Kaw, 1971) that include the purely growing mode (Fejer and Leer, 1972), also called the oscillating two-stream instability or modulational instability (Rietveld et al., 2002), and the parametric decay instability (Fejer, 1979). These two instabilities were first observed using the HF heating facility at Arecibo (Carlson et al., 1972; Gordon and Carlson, 1974) as summarized in the review by Carlson and Duncan (1977). Among other results, many observations of these phenomena have been obtained using the EISCAT incoherent scatter radar systems (e.g. Stubbe et al., 1992; Kohl et al., 1993; Stubbe, 1996) during RF heating experiments at Tromsø. The SPEAR system (a recent addition to the global array of HF high-power facilities) began experimental operations in April 2004, when the first results using the high-power beam were obtained (Robinson et al., 2006). Since then, further successful experimental campaigns have been undertaken, during which RF-induced enhancements in both field-perpendicular and field-parallel scatter have been detected by coherent and incoherent scatter radars, respectively.
Field-parallel SPEAR-induced spectral modifications whose characteristics are consistent with excitation of the purely growing mode and the parametric decay instability have been observed in the F-region by the EISCAT Svalbard UHF incoherent scatter radar system (Robinson et al., 2006). In this paper, we add to these results by concentrating mainly upon enhancements in sporadic E-layers and restrict our attention to E- and F-region SPEAR-enhanced spectral data accumulated by the EISCAT Svalbard Radar (ESR) during experimental SPEAR/ESR/CUTLASS campaigns conducted in December 2004 and November/December 2005. Heating of sporadic E-layers was first reported by Gordon and Carlson (1976) using the Arecibo 430 MHz incoherent scatter radar. Further observations were obtained by Djuth (1984), during typical daytime conditions as well as during intervals where sporadic E-layers were present, and Djuth and Gonzales (1988), who studied the temporal evolution of enhanced plasma lines during heating of sporadic E-layers and found enhancements upshifted and downshifted from the transmitter frequency by the heater frequency. Also, Schlegel et al. (1987) noted significant increases in the E-region ion line spectral power recorded by the EISCAT 933 MHz mainland UHF radar. More recently, Rietveld et al. (2002) presented RF-induced E- and F-region spectral enhancements that were detected by both the UHF (operating at 931 MHz after renovation) and 224 MHz VHF EISCAT mainland radars. They concluded that there was evidence consistent with E-region excitation of the parametric decay instability and purely growing mode. In Sect. 2 of this paper, the experimental configuration and certain technical aspects of the SPEAR system are described, together with the ESR. Section 3 will provide an overview of the experiments conducted in the SPEAR campaigns and a discussion of the results will be presented in Sect. 4. Throughout this paper, repeated references will be made to the paper by Robinson et al. (2006), where initial observations of simultaneous SPEAR-enhanced coherent and incoherent scatter data are presented, and henceforth this publication will be referred to as R2006.

Instrumentation
SPEAR is a new combined HF high-power heater and coherent scatter radar system (R2006; Wright et al., 2000) located in the vicinity of Longyearbyen (Spitzbergen in the Svalbard archipelago, with a magnetic dip angle of approximately 8.5 degrees) and is designed to carry out a range of space plasma investigations of the polar ionosphere and magnetosphere. The first results obtained using SPEAR have been presented in R2006. Full details of the SPEAR system, including technical specifications and operating constraints, can be found in R2006. During the intervals covered here, SPEAR operated using the full 6×4 array, transmitting O-mode-polarized radio waves in the field-aligned direction with a frequency of 4.45 MHz and an effective radiated power (ERP) of approximately 15 MW. The half-power beam width is approximately 21 degrees. The CUTLASS coherent scatter radars (Milan et al., 1997) were also in operation during the experimental intervals presented below. Both the Iceland and Finland radars ran an experimental mode that used 15 km range gates on channel A and 45 km range gates on channel B, with beam 6 of Iceland and beam 9 of Finland overlooking the SPEAR site.
Because of a combination of the frequency sweeps (11-13 MHz), in-tegration time (1 s) and beam sweeps (beams 3-5) that were used, the temporal resolution of the data from any specific range-beam cell varied from 9-15 s. Further details of the CUTLASS operating modes utilized during the experiments are given in R2006. The incoherent scatter radar data presented below were recorded using the ESR (Wannberg et al., 1997), which is collocated with SPEAR. The ESR, which operates at frequencies close to 500 MHz, was used to detect the Langmuir and ion-acoustic waves generated during the excitation of instabilities near the O-mode reflection height. During the December 2004 campaign, the ESR ran an experimental mode that used the fixed (field-aligned) 42 m dish to collect ion line data and the steerable 32 m dish (also pointed field-aligned) to collect plasma line data. The ion line spectra were obtained on a channel with a transmitter frequency of 499.9 MHz, The height discriminated ion line spectral data were obtained in two altitude ranges of 105-210 km and 129-757 km with range resolutions of 4.750 km and 14.250 km, respectively. The frequency resolutions of data from the two altitude ranges were 0.992 kHz and 0.880 kHz, respectively, with the frequency ranges of both being ±31 kHz. The temporal resolution of the ion line data was 6.4 s. Although plasma line data were also recorded on a separate transmitter channel at a frequency of 500.3 MHz, no E-region plasma line spectral enhancements were detected because the excited E-region lay below the lowest altitude from which plasma line data were collected (150 km). During the December 2005 campaign, the ESR ran an experimental mode that used the steerable 32 m dish (pointed field-aligned) to collect both ion and plasma line data using long pulses. The ion line spectra were obtained on two channels with transmitter frequencies of 499.9 and 500.3 MHz. The height-discriminated ion line spectral data for both channels were obtained for an altitude range of 86-481 km with a range resolution of 28.226 km. The frequency resolutions of data from the two channels were 2.0 kHz with the frequency ranges of both being ±50 kHz. Only data from the 499.9 MHz channel have been presented in this paper. The plasma line data were recorded on a separate channel with a transmitter frequency of 500.1 MHz. Plasma line spectra were obtained in two overlapping frequency bands of 3.2-4.8 and 4.5-6.1 MHz both upshifted and downshifted from the transmitter frequency. The plasma line data were obtained for altitude ranges of 90-315 and 240-465 km, although in this paper data are only shown from the 90-315 km altitude range, which includes the E-region. Because of the long pulses used, together with a lack of data regarding the echo strength versus range across the long pulse, the plasma line data are not discriminated in altitude. The temporal resolution of both the ion and the plasma line data was 5.0 s. These data illustrate succinctly the effects on the SPEAR-enhanced spectra of an ionosphere with a well developed E-layer (interval 1), a relatively structured ionosphere (interval 2) and an ionosphere with an irregular E-layer (interval 3). These ionospheric conditions were deduced using ionograms, recorded using the Svalbard ionosonde (R2006), which were taken during each of these data intervals. Each of the ionograms, which are shown in Fig. 1 . 
This ionogram is suggestive of an irregular E-layer that interacts with radio waves that have frequencies in the range of about 2-7 MHz. The sporadic Elayer echoes exhibit an altitude extension of about 20 km at frequencies between 3 and 6 MHz. Although the altitude resolution in the ionograms is 6 km, the extended altitude range may be caused by the oblique nature of some of the echoes detected by the ionosonde receiver/processing system. The CUTLASS Finland and Iceland radars were operating during the three data intervals and they collected data, which are shown in Fig. 2, using the scanning mode that was described in Sect. 2. The three labelled rows of panels correspond to intervals 1, 2 and 3, respectively. The columns of panels, from left to right, correspond to data from Finland Channel A, Finland Channel B, Iceland Channel A and Iceland Channel B. The Finland data were taken using beam 9 and the Iceland data with beam 6 (see Sect. 2). The horizontal black bars in each of the panels indicate when SPEAR was transmitting, with the vertical dashed lines denoting the SPEAR switch-on and switch-off times. Finland channel B data for interval 1 are missing and therefore no data are shown in this panel. Clearly, definite SPEARenhanced CUTLASS backscatter was only observed during 12:48-12:50 UT on 3 December 2005 (within interval 2) by a b c Fig. 1. Ionograms, recorded during SPEAR-off using the Svalbard ionosonde (R2006) from the three data intervals that have been discussed in this paper. Panel (a) shows an ionogram taken at 13:26 UT on 10 December 2004 (interval 1) in an ionosphere with a well developed blanketing sporadic E-layer that affects O-mode waves with frequencies from 2-10 MHz. There is also evidence of second-hop scatter from approximately 4-9 MHz. Panel (b) shows an ionogram recorded at 12:50 UT on 3 December 2005 (interval 2) in a relatively structured ionosphere that has features consistent with an F-region layer. Panel ( the Finland radar. We shall return to these results later in the discussion given in Sect. 4. We turn now to ESR incoherent scatter radar data, shown in Figs. 3, 4 and 5, which correspond to intervals 1, 2 and 3, respectively. Figures 3, 4 and 5 all contain panels that show time series of ion and plasma line amplitudes. In all such panels horizontal black bars denote the period or periods during which SPEAR was transmitting. Also, the dashed horizontal lines give the mean ion or plasma line amplitude during the particular SPEAR-on or SPEAR-off interval in which they occur. Figure 3 shows ion line data from interval 1 (10 December 2004, 13:10-13:42 UT), during which SPEAR was switched on at 13:12, 13:20, 13:28 and 13:36 UT. Panels (a) and (b) show E-region data. Panel (a) shows averaged spectra from an altitude range of 110-120 km. The solid and dashed lines correspond to SPEAR-on (13:28-13:32 UT) and SPEAR-off (13:32-13:36 UT) intervals, respectively. The single central peak in the spectrum is indicative of collisional plasma and there is a clear increase in its amplitude during the SPEAR-on period. This is consistent with previous observations of HF-enhanced E-region ion line spectra (e.g. Rietveld et al., 2002). Panel (b) shows the amplitude of the central (0 kHz) ion line (110-120 km) for 13:10-13:42 UT. The SPEAR-induced spectral enhancement is generally present throughout the SPEAR-on periods, with its amplitude, although variable, typically being an order of magnitude larger during SPEAR-on than SPEAR-off. 
Ionograms taken at around 13:40 UT (not shown), in which the signature of the E-layer is not as pronounced, indicate that the sporadic E-layer has weakened and this is consistent with the lower (SPEAR-induced and natural) ion line amplitudes seen at this time. Panels (c-f) of Fig. 3 show F-region (160-220 km) ion line enhancements corresponding to the E-region enhancements shown in panels (a) and (b). SPEAR-induced spectral enhancements were found to occur within the altitude range of 160-220 km. Panel (c) shows averaged ion line spectra for SPEAR-on and SPEAR-off. For the case of SPEAR-off, and downshifted (f) ion line amplitudes. The E-region spectral enhancement, characterized by a clear increase in the ion line amplitude for SPEAR-on compared to SPEAR-off, is consistent with excitation of collisional plasma. The spectral enhancements in the F-region are not as distinct as those for the E-region and are consistent with a high proportion of the pump wave energy being deposited in the sporadic E-layer. denoted by the dashed line, the spectrum has the usual form of an F-region ion line spectrum, with two ion-acoustic peaks whose separation is related to the plasma temperature. The spectrum for SPEAR-on (solid line) has enhancements in the central peak, corresponding to the purely growing mode, and the ion-acoustic peaks, corresponding to the parametric decay instability. The asymmetry of the spectrum contrasts with the high degree of symmetry seen in previous SPEARenhanced F-region ion line spectra (R2006), which were obtained when no significant E-region was present. Unsurprisingly, this asymmetry does not have the form of previous asymmetries that have been attributed to the parametric decay instability (Stubbe et al., 1992). We shall return to and further discuss these F-region ion line spectral asymmetries later in this section. Panels (d-f) show the amplitudes of the upshifted ion line peak (UIL) (d), the central part of the spectrum (CIL) (e), and the downshifted ion line peak (DIL) (f). The ion line data have been presented in this format because the variability in the amplitudes, and hence the action of the purely growing mode or the parametric decay instability, can be monitored. Clearly, changes in the CIL amplitude generally occur simultaneously with changes in the DIL and UIL amplitudes. The separation of the ion-acoustic peaks during the SPEARon periods remains unchanged, which implies no change in the electron temperature during SPEAR-on. Clearly, the Eregion CIL amplitude varied appreciably during the SPEARon periods, as shown in panel (b), and this is accompanied by considerable variability in the F-region ion line amplitude. There is not a clear demarcation in amplitude between the times when SPEAR was transmitting and those times when it was not, with the possible exception of 13:28-13:32 UT where the amplitude of all three spectral components is increased by approximately a factor of two compared to the amplitude for SPEAR-off. Figure 4 shows ion and plasma line data from interval 2 (3 December 2005, 12:46-12:52 UT). The data originate from an altitude range of 140-170 km, which corresponds to the upper E-and lower F-regions. Panel (a) shows averaged ion line spectra for SPEAR-on (12:48-12:50 UT, given by the solid line) and SPEAR-off (12:50-12:52 UT, given by the dashed line). As above, the SPEAR-off spectrum has the usual form of an F-region ion line spectrum with two ionacoustic peaks. 
The enhancements in the central part and ion-acoustic peaks of the SPEAR-on spectrum are consistent with SPEAR-induced instabilities and are similar to the SPEAR-induced F-region enhancements reported in R2006. Panels (b)-(d) show the amplitudes of the UIL (b), the CIL (c), and the DIL (d) for 12:46-12:52 UT, with SPEAR transmitting from 12:48-12:50 UT. As stated above, the ion line data allow us to monitor the action of the purely growing mode and the parametric decay instability. Changes in the CIL amplitude are generally well correlated with changes in the UIL and DIL amplitudes. Also, the amplitudes of all three spectral components towards the beginning and the end of the SPEAR-on interval are at least an order of magnitude higher than during the rest of the SPEAR-on interval. The amplitude increases at the beginning of the SPEAR-on interval are characteristic of the commonly seen ion line overshoot, which has generally been absent from previous observations of SPEAR-enhanced data (R2006). Increases similar to those at the end of the SPEAR-on interval have been seen previously and will be discussed further in Sect. 4. The ion line amplitudes during the rest of the SPEAR-on interval are comparable to those during the SPEAR-off periods. Panels (e) and (f) show the amplitudes of the upshifted (UPL) (e) and downshifted (DPL) (f) plasma lines (at ±4.45 MHz) for 12:46-12:52 UT. The SPEAR-induced UPL and DPL enhancements are clearly well correlated, both with each other and with the ion line enhancements. Also, the UPL and DPL amplitude increases at the beginning of the SPEAR-on interval are consistent with the plasma line overshoot. As for ion line data, increases at the end of the SPEAR-on interval have been seen previously (see Sect. 4). Incidentally, the plasma line enhancements just after 12:48 UT are among the strongest yet seen during all periods that SPEAR has been operating. There are also intermittent plasma line enhancements at around 12:49:00 UT and 12:49:30 UT. . As for SPEAR-off data from interval 1, there is a single central peak in the ion line spectrum, which identifies collisional plasma. As mentioned above, increases in its amplitude have been observed previously (e.g. Rietveld et al., 2002). Panel (b) shows the CIL amplitude for 15:14-15:20 UT. Clearly, the amplitude is highly variable during the SPEAR-on times, with the spectral enhancement being greatest from about 15:16-15:17 UT. Panels (c) and (d) show the UPL (c) and DPL (d) amplitudes (at ±4.45 MHz) for 15:14-15:20 UT. The SPEAR-induced UPL and DPL enhancements, which appear at just after switch-on (at 15:16 UT) and last for about 30 s, are clearly well correlated with each other. Unlike previously seen RF-induced signatures of the overshoot, where the spectral enhancement disappears after switch-on of the high-power beam (usually after one data dump), the SPEAR-induced enhancements here (and also incidentally in the apparent overshoot data in interval 2) persist for longer. Also, insufficient plasma line resolution made it impossible to differentiate between the actions of the parametric decay instability and the purely growing mode during SPEAR-on. There are no identifiable SPEAR-induced ion or plasma line enhancements during the rest of the SPEAR-on interval. We return now to the E-and F-region spectra from interval 1 (13:10-13:42 UT on 10 December 2004). In order to highlight the variability in these spectra, we now concentrate on data taken during the 4-min-on period from 13:28-13:32 UT. 
E-region data are shown in Fig. 6 both SPEAR-on and SPEAR-off. Each panel in the figure shows an ion line spectrum with the x-, y-and z-axes showing frequency, altitude and amplitude, respectively. The panels in Fig. 6 are presented in chronological order with the time above each panel corresponding to the end of the particular 6.4 s integration period (with the fractional second being discarded) during which the ion line data were recorded. The black circles above a number of the panels denote complete integration periods during which SPEAR was transmitting. Although the amplitudes of these E-region spectra vary during the 4-min-on period, with a notable reduction present around 13:29:30 UT, the form of the spectral enhancements, characterized by an increase in the central peak amplitude, is a consistent feature of all the SPEAR-on spectra. Figure 7 shows F-region spectra corresponding to the E-region spectra displayed in Fig. 6. Each panel in Fig. 7, as for Fig. 6, shows the frequency, altitude and amplitude plotted using the x-, yand z-axes. The times of the F-region spectra, shown above the panels, correspond to the times of the E-region spectra. Clearly, these F-region spectra have a considerable degree of variability, with the forms of the SPEAR-enhanced spectra (which are present at altitudes between 160 and 220 km) changing throughout the SPEAR-on interval. Furthermore, the spectra are all generally asymmetric, with the number and positions of identifiable peaks changing over time. Some of these peaks occur in the central part of the ion line spectrum and at the ion-acoustic frequencies and such SPEAR-induced spectral enhancements have been identified with the actions of the purely growing mode and the parametric decay instability, respectively (R2006). Also, an interesting feature is that the most significant F-region spectral enhancements occur concurrently with the highest increases in the E-region spectral amplitude. Discussion In this study, we have provided ESR observations of the temporal evolution of E-and F-region ion and plasma line spectral enhancements caused by the interaction of the SPEAR high-power pump wave with sporadic E-layers. We have presented evidence of corresponding E-and F-region SPEARinduced ESR ion line spectral enhancements during the presence of a non-blanketing porous sporadic E-layer. These results constitute the first observations of SPEAR-enhanced E-region ion line spectra and were obtained during the December 2004 experimental campaign. However, no accompanying E-region plasma line enhancements were observed, although the absence of such observations can be explained by recalling that the enhancements occurred at altitudes too low for plasma line data to be collected. However, the lack of F-region plasma line data is more difficult to explain as previous F-region SPEAR-induced ion line enhancements have been accompanied by corresponding plasma line enhancements. Generally simultaneous SPEAR-induced E-region ion and plasma line spectral enhancements were first seen in ion and plasma line data collected during the December 2005 campaign. Simultaneous E-region ion and plasma line enhancements were seen in data from two intervals, the first of which contained upper E-region enhancements, with the other containing lower E-region enhancements while a sporadic E-layer was present. Upper F-region spectral enhancements were not seen during these two intervals from December 2005. 
The lower E-region data were obtained when a sporadic E-layer was present and have the same characteristics as those of the E-region data from the December 2004 campaign. Ion line data from interval 1 (10 December 2004) include E-region amplitude enhancements in the central (0 kHz) spectral component and F-region amplitude enhancements in both the ion-acoustic peaks and the central part of the ion line spectrum. Unlike previous SPEAR-enhanced F-region ion line data where plasma line enhancements were also observed (R2006) no F-region plasma line enhancements were seen in data from interval 1. Also, any enhancements in the E-region plasma line spectrum were not recorded as they would have occurred below the lowest altitude from which plasma line data were collected (150 km). For the well developed porous sporadic E-layers in this interval, the strong E-region enhancements indicate that a high proportion of the transmitted pump wave energy excited instabilities in the Eregion. Also, the porous nature of the sporadic E-layer probably led to intermittent excitation of the F-region, thereby causing highly variable F-region ion line amplitudes. For example, the presence of the F-region spectrum for 13:28-13:32 UT (shown in panel (d) of Fig. 2), which in this case happens to be asymmetric (because of variable spectral forms in the 4-min SPEAR-on interval), is consistent with patchy E-region plasma that is in frequent motion during SPEARon. The variability of the E-and F-region spectra collected during this 4-min interval may be seen clearly in the data presented in Figs. 6 and 7. As mentioned above, the F-region spectra, shown in Fig. 7, are generally irregular and asymmetric and their form varies from data dump to data dump over the period that SPEAR is transmitting. This is indicative of highly variable density-dependent absorption within and penetration through the sporadic E-layer during the SPEARon period. As mentioned in R2006, the variability in the plasma density leads to strong local fluctuations in the electric field strength of the high-power wave. Therefore, the threshold electric fields necessary for instability excitation will only be exceeded intermittently and for short periods during the SPEAR-on interval. It is clear from Figs. 6 and 7 that the most significant Eand F-region spectral enhancements tend to occur simultaneously. This concurrence of high E-and F-region SPEARenhanced spectra is surprising as there is not thought to be a correlation between the presence of sporadic E-layers, for which the density enhancements are localized in altitude, and high densities in the F-region. If the plasma density of the Fig. 6. Successive E-region ion line spectra from approximately 13:27-13:33 UT on 10 December 2004 (within interval 1). Each panel shows an ion line spectrum with the x-, y-and z-axes showing frequency, altitude and amplitude, respectively. The panels are presented in chronological order and cover two time periods during which SPEAR was switched off, separated by the 4-min interval during which SPEAR was switched on (13:28-13:32 UT). The time above each panel corresponds to the end of the particular 6.4-s integration period during which the ion line data were recorded. These times have been shown with the fractional second being discarded. The black circles above a number of the panels denote complete integration periods during which SPEAR was transmitting. 
Clearly, the amplitudes of these E-region spectra vary during the 4-min-on period, with a notable reduction present around 13:29:30 UT. However, the form of the spectral enhancements is characterized by an increase in the central peak amplitude during SPEAR-on. This is a consistent feature of all the SPEAR-on spectra and may be used to differentiate these spectra from those for SPEAR-off. Fig. 7. F-region spectra corresponding to the E-region spectra displayed in Fig. 6, i.e. for approximately 13:27-13:33 UT on 10 December 2004. As for Fig. 6, each panel shows the frequency, altitude and amplitude plotted using the x-, y-and z-axes. The times shown above the panels of F-region spectra correspond exactly to the times of the E-region spectra. Once again, the black circles above a number of the panels label complete integration periods during which SPEAR was switched on. Clearly, these F-region spectra have a considerable degree of variability, with the forms of the SPEAR-enhanced spectra (which are present at altitudes between 160 and 220 km) changing throughout the SPEAR-on interval. Furthermore, the spectra are all generally asymmetric, with the number and positions of identifiable peaks changing over time. Some of these peaks occur in the central part of the ion line spectrum and at the ion-acoustic frequencies. Also, the most significant F-region spectral enhancements occur concurrently with the highest increases in the E-region spectral amplitude. sporadic E-layer is high, then this implies that the majority of the high-power wave energy ought to be absorbed or reflected in the E-region, with little to progress to the F-region. However, any RF energy that does manage to penetrate the patchy sporadic E-layer is free to propagate to the F-region, where, for sufficiently high F-region plasma density, it can excite the instabilities that enhance the F-region spectra. This hypothesis is consistent with our data as the presence of F-region spectral enhancements indicates that a significant proportion of the RF energy penetrated the sporadic E-region and propagated through to the F-region, in which, presumably, the plasma density was sufficiently high throughout the interval. The data from interval 2 (3 December 2005) contain upper E-/lower F-region enhancements in the amplitudes of both the central part of the ion line spectrum and of the ion-acoustic peaks, the separation of which is related to the plasma temperature. In F-region collisionless plasma located well away from the collisional E-region, the ionacoustic peaks in ESR ion line spectra, which are recorded using transmission frequencies of 500 MHz, usually appear at about ±5 kHz (R2006). However, for these data (upper E-/lower F-region) the peaks are at approximately ±2 kHz. This reduced peak separation is a consequence of the increasing ion-neutral collision frequency, for decreasing altitude, from the collisionless conditions that obtain in the upper F-region, to the lower E-region collisional regime. The ion line enhancements, which were accompanied by simultaneous enhancements in both the upshifted and downshifted plasma lines, imply that both the parametric decay instability and purely growing mode were operating. This is consistent with previous SPEAR-enhanced F-region data where simultaneous ion and plasma line enhancements were also observed (R2006). There is evidence of overshoot phenomena in the ion and plasma line data from interval 2 at SPEAR switchon. 
Features that could be characterized as overshoots have only been observed occasionally in SPEAR-enhanced data, and this lack of observation has been attributed to the lower power (15 MW) transmitted by SPEAR (R2006). On the other hand, the overshoot is seen frequently in heaterenhanced Tromsø incoherent scatter data, where much higher powers (several hundred MW) are available with the Tromsø heater . In addition to clear amplitude increases at SPEAR switch-on, there are also increases towards the end of the SPEAR-on interval. Although such ion and plasma line amplitude increases have been observed previously in SPEAR-enhanced ESR data (e.g. Fig. 17 in R2006), there have usually been additional amplitude enhancements during the rest of the SPEAR-on interval. However, our observations do not contain significant ion line enhancements during the rest of this SPEAR-on interval, although plasma line enhancements are present. Substantial ion line enhancements only occur at around the time when SPEAR is switched off. While initial inspection of the data suggests that these signatures may provide evidence of en-hancements associated with SPEAR switch-off, closer examination reveals that these increases appear to be purely coincidental, as they actually occur before and die away at SPEAR-off. As for the E-region data from interval 1, the upper D-/lower E-region ion line spectra from interval 3 (4 December 2005) have a single central (0 kHz) peak, indicating collisional plasma, rather than a pair of ion-acoustic peaks. The effect of SPEAR was identified by an increase in the amplitude of this central peak and this is consistent with the action of the purely growing mode and/or the parametric decay instability (Rietveld et al., 2002). However, in contrast to the interval 1 data, these ion line enhancements were accompanied by plasma line enhancements (upshifted and downshifted by the SPEAR transmission frequency of 4.45 MHz), which lasted for about 30 s after SPEAR switch-on, and these also indicate excitation of either or both instabilities. Previous SPEAR-induced ion and plasma line enhancements have tended to occur simultaneously (R2006), and this may also be the case here, although there is a degree of variability in the ion line amplitude during the SPEAR-on interval which makes it difficult to associate unambiguously any ion line amplitude increases with the action of SPEAR. The data from all three intervals are consistent with the SPEAR-affected region of ionosphere being traversed by patches of overdense plasma, giving rise to varying ion and plasma line amplitudes in both the E-and F-regions (R2006). This patchiness results in intermittent blanketing of the Fregion, during which the transmitted pump wave interacts with the sporadic E-layer. When the pump wave does not encounter E-region plasma, it is free to propagate to and excite instabilities within the F-region, thereby further increasing the variability of the F-region ion and plasma line signatures. This variability is clear in data from interval 1, which indicate patches present during most SPEAR-on intervals. For interval 2 data, ion and plasma line enhancements indicate that patches were present at the beginning and near the end of the SPEAR-on interval. Such enhancements indicating propagating patches were also present at the beginning of the SPEAR-on period in interval 3, but generally absent during the rest of period. 
Our observations, which as stated above were obtained while SPEAR operated with an ERP of approximately 15 MW, are consistent with a negligible SPEAR-induced change in the plasma temperature. As suggested in R2006, this relatively low SPEAR pump power may result in limited heating of the plasma when compared to the temperature enhancements that are routinely seen using the Tromsø heater, which has an ERP of several hundred MW. This variability in the temperature enhancements provides motivation for a study, involving variable heater powers, into the dependence of temperature increases and associated RF-induced spectral enhancements on the transmitted pump wave power. Similar motivation exists to study further the infrequent occurrence of the overshoot in our observations, again using an experiment with variable heater powers to investigate how the presence and frequency of the overshoot are affected by the heater power. Turning now to the CUTLASS coherent scatter data shown in Fig. 2, only data from interval 2 (3 December 2005) displayed detectable SPEAR-induced backscatter enhancements observed by CUTLASS, and then only in data recorded by the Finland radar. The SPEAR-enhanced backscatter was present for the entire 2-min SPEAR-on interval in both channels. Combining these observations with the well-accepted result that irregularities are generated at altitudes below those at which the O-mode pump wave is reflected, the ESR observations indicate that the artificial irregularities detected by CUTLASS were generated in the upper E-/lower F-region. These altitudes are lower than those from where SPEAR-enhanced CUTLASS backscatter has been seen previously. These observations are unusual as modelled propagation paths obtained using ray tracing indicate that interaction altitudes in excess of 200 km are usually required in order for SPEAR-induced artificial backscatter to be seen in CUTLASS data (Yeoman, private communication, 2007). Since the irregularities are elongated and extend along the field lines, it is instructive to see whether, although the irregularities may be generated in the upper E-/lower F-region, the backscatter from them may originate from higher altitudes. Senior et al. (2004), who followed on from studies undertaken previously (Jones et al., 1984; Robinson et al., 1989), investigated the altitudinal extent of RF-induced artificial field-aligned irregularities and determined that they had e-folding scale lengths of approximately 20 km. Previous studies by Jones et al. (1984) and Robinson (1989) found e-folding scale lengths of 20 and 52 km, respectively. Although the results of Robinson et al. (1989) do not necessarily support our findings, the figure of 20 km, obtained by both Senior et al. (2004) and Jones et al. (1984), provides evidence that the upper E-/lower F-region field-aligned density irregularities do not extend high enough to reach the region from where CUTLASS backscatter is usually observed. This supports the hypothesis that the SPEAR-enhanced backscatter does not originate from those F-region altitudes consistent with ray-tracing studies and from where previous SPEAR-enhanced CUTLASS backscatter has been observed. This result may indicate interesting propagation characteristics and results from future campaigns will be examined in order to study this further.
There were no SPEAR-induced CUT-LASS backscatter enhancements for the data from intervals 1 (10 December 2004) and 3 (4 December 2005). This is consistent with the ESR enhancements originating within the lower E-region, as CUTLASS backscatter is expected to be absent from such low altitudes because of unfavourable propagation. The theory of RF-induced enhancements in sporadic Elayers has evolved over time. Both Gordon and Carlson (1976) and Djuth (1984) observed sporadic E-layer plasma line enhancements exactly at the radar frequency ± the pump frequency. Djuth (1984) concluded that the modulational instability was probably below threshold during these observations, and therefore direct conversion may play a role in explaining the sporadic E-layer data. Djuth and Gonzales (1988) subsequently examined the temporal development of the RF-enhanced sporadic E-layer plasma line in great detail and concluded that, although direct conversion of the pump wave into Langmuir waves by in situ small-scale irregularities can explain rapid (less than 20 µs) RF-enhanced plasma line growth, slower (greater than 100 µs) observed growth times are difficult to explain with this process. Instead, it was proposed that mode conversion along sporadic E-layer vertical gradients near the critical layer provides a better overall description of the observations, and that the formation of density cavities (cavitons) near the reflection height may play an essential role in the production of Langmuir waves. These cavitons (e.g. Morales and Lee, 1977) could give rise to the sporadic E-layer ion line enhancements reported in this paper. These cavitons and the spatial extent of associated density inhomogeneities are consistent with the study by Robinson (2002) who applied a multiple scatter theory to the propagation of electromagnetic test waves during RF-induced heating and found a broadening of the interaction region. It is noteworthy that the observed plasma line growth times in the range <4 µs to 20 µs are much shorter than those anticipated for a convective parametric decay instability. Newman et al. (1998) suggested that an absolute (i.e., non-convective) parametric decay instability could develop in the sporadic E-layer plasma. This was used to explain the large sporadic E-layer airglow enhancements observed by Djuth et al. (1999) at altitudes of approximately 120 km. indicated that linear mode conversion may also be important in explaining the sporadic E-layer airglow. Also, Gondarenko et al. (2003) performed simulations that indicate that linear mode conversion in sporadic E-layer plasma above Arecibo gives rise to localized regions containing amplified electric fields. These enhanced electric fields could accelerate electrons and lead to the production of the intense airglow. Rietveld et al. (2002) presented plasma line/ion line observations in an auroral E-layer having a scale height of 5-10 km. This is significantly different from sporadic E-layer plasma, where the scale height is 0.2-1 km or less (Djuth, 1984). The underlying plasma physics are greatly affected by the vertical scale length. One expects the (propagational or convective) parametric decay instability and the oscillating two-stream instability (modulational instability) to be excited in plasma with a scale height of 5-10 km, but not in sporadic E-layer plasma where the electron density gradients are much steeper. 
The reason for this is that in a steep sporadic E-layer electron density gradient, the Langmuir wave vector quickly rotates away from the direction of the pump field, thereby decoupling from the pump. In addition, the wave rapidly propagates outside of the narrowly confined altitude region of instability, preventing significant Langmuir wave amplification from occurring (e.g. Perkins and Flick, 1971; Fejer and Leer, 1972; Muldrew, 1978). Thus, the thresholds of the oscillating two-stream instability and the standard propagational/convective parametric decay instability become extremely large and therefore do not represent viable processes with which to explain the observations. In summary, although the results of Rietveld et al. (2002) certainly complement previous studies, they do not apply to the production of Langmuir waves in sporadic E-layers. Modelling conducted by Goldman et al. (1995, 1997) was consistent with the findings of Djuth (1984) and Djuth and Gonzales (1988) regarding the inhibition of the parametric decay instability in sporadic E-layers. In agreement with the findings of Djuth (1984), we have also observed steep vertical gradients in the plasma density of sporadic E-layers, together with plasma density peaks at multiple altitudes. However, our results from the lower F-/upper E-region may be explained by invoking both the purely growing mode and the parametric decay instability. This is consistent with the study by Djuth (1984), who found that, unlike in sporadic E-layers, F-region conditions supported excitation of the parametric decay instability. Our data therefore provide supporting evidence for previous results obtained by Djuth (1984), Djuth and Gonzales (1988) and Rietveld et al. (2002), and are not inconsistent with previous modelling and observations.

Conclusions
In this paper, we have presented SPEAR-induced spectral enhancements in ESR incoherent scatter data from the E- and F-regions of the polar ionosphere. We have seen sporadic E-layer enhancements consistent with the formation of cavitons (e.g. Morales and Lee, 1977) and upper E- and F-region enhancements consistent with the actions of both the purely growing mode and the parametric decay instability. The conditions for excitation of these instabilities in various ionospheric layers, together with other mechanisms for RF-enhanced ion and plasma lines, have been given previously (Djuth, 1984; Djuth and Gonzales, 1988; Rietveld et al., 2002) and our observations agree with earlier findings. Former observations of RF-enhanced spectra in sporadic E-layers have been made at low, mid and high latitudes, using high-power facilities such as those located at Arecibo and Tromsø. Our observations of SPEAR-enhanced spectra in sporadic E-layers were made in the polar ionosphere and constitute the first such results that have been obtained. In addition to the importance of these results in their own right, our measurements satisfactorily complement low-, mid- and high-latitude studies that have been performed previously.
9,541
2007-08-29T00:00:00.000
[ "Environmental Science", "Physics" ]
Where is your field going? A machine learning approach to study the relative motion of the domains of physics We propose an original approach to describe the scientific progress in a quantitative way. Using innovative Machine Learning techniques we create a vector representation for the PACS codes and we use them to represent the relative movements of the various domains of Physics in a multi-dimensional space. This methodology unveils about 25 years of scientific trends, enables us to predict innovative couplings of fields, and illustrates how Nobel Prize papers and APS milestones drive the future convergence of previously unrelated fields. Introduction We aim at building a quantitative framework to describe the time evolution of scientific fields and to make predictions about their relative dynamics. Scientific progress [1] has been already investigated from multiple points of view [2], that range from the study of scientific careers and the evolution of single scientific fields to the mutual impacts between science and society. This latter issue is greatly influenced by the availability of prediction models. For instance, Martinez et al. investigate the impact on education and labour of technological and scientific progress and on the feedbacks which in turn are given from education and labour market to science and technology [3]. Börner et al., instead, discuss the importance of having reliable predictive models in science, technology and economics paired with an easily readable data visualization procedure to help policy makers in their activity [4]. As we will show in the following, our methodology allows for concrete predictions about the time evolution of scientific fields. Another successfull field of research investigates the scientific careers. Shneiderman discusses in details the best strategies for producing highly successful scientific researches balancing between the exploration of new ideas and the exploitation of established works [5]. Ma et al. [6] and Sinatra et al. [7] both focus on the individual impact of scientists, the former by analyzing the collaboration network of scientific-prize-winners, the latter focusing on the evaluation of the activity of scientists. Jia et al. [8] have introduced a random walk based model to investigate the interest change in scientific careers and how they evolve together with the scientific progress. All these studies could benefit from a comprehensive representation of the space in which such careers take place. Indeed, others scientists have contributed to shed light on some of the fundamental mechanisms and underlying rules of the scientific progress: which are the successful strategies to conduct a scientific project, how much the scientific progress is shaped by citations and collaborations networks, see [9][10][11][12][13][14]. In this respect, a key element is to be able to efficiently project the dynamics of science in a suitable space, to obtain both a visualization and, if possible, a prediction of what will happen in the future. Many authors have tackled this issue employing the instruments provided by network theory. Gerlach et al. [15] for example developed an innovative topic model that exploits community detection techniques, Herrera et al. [16] focused on building a network of PACS that they use to study the established communities of fields and their evolution, Sun et al. [17] adopted a network based approach which exploits co-occurrences of authors, Pugliese et al. 
grounded their analysis on the cooccurrences of sectors in countries [18]. In a recent paper, Chinazzi et al. [19], intruduce a knowledge map of PACS produced by a general-purpose embedding algorithm, StarSpace [20]. They rely on the publication patterns of authors to define a metric of similarity between PACS, and use it to analyze the spatial distribution over different cities of the scientific activity and how it relates with the standard socioeconomic indicators provided by World Bank. Here we propose a framework which is, instead, well suited to highlight the dynamic of scientific progress. In our analysis, in a way similar to [19], we move from traditional topological spaces, such as networks of PACS or authors, to a continuous space where it is possible to introduce quantitative measures of proximity between scientific topics and most importantly, track their evolution through time. In particular, we represent PACS as multidimensional vectors, leveraging on the methodology discussed in [21] and on Natural Language Processing techniques [22,23]. The key idea is to draw a parallel between PACS and words, i.e. PACS are the words of what we call scientific language. A direct consequence of this novel way to look at PACS is to consider scientific articles as sentences, i.e. contexts which subsume the underlying rules of the scientific language as much as a sentence subsumes the underlying syntactic rules of the natural language in which it is formulated. This assumption allows to create a similarity metric between scientific fields, that we call context similarity. While in [21] this approach was introduced and used to forecast new combinations of the technological codes to make prediction on the future patenting activity, here we aim to quantitatively measure scientific trends in the Physics literature by looking at the dynamics PACS codes. This enables us not only to predict new combinations of fields but also to assess the impact of extra-ordinary contributions such as Nobel Prize papers and APS milestones. The rest of this paper is organized as follows. We first show how the mere representation of PACS dynamics in a low dimensional space gives a series of insights about how research in Physics clusterize and how scientific fields move one with respect to the others. Then we use these relative movements to forecast the appearance of innovative couplings. We also show that the publication of recognized papers is followed by an approach of the relative PACS. In the last section, we discuss in more detail the database and the methodology we used to build our representation of PACS from the data. Low dimensional representation The vector representations of PACS, which we call embeddings, live in a high dimensional space, and this prevents a direct inspection of the resulting structures. In the Methods Section we provide more details on the algorithm that constructs them staring from the raw data. For the purpose of understanding the results presented here, it suffices to know that the position of these high-dimensional vectors in the space of PACS is optimized so that each of them has as neighbours the most similar ones given the global scientific activity (the concept of similarity is quantified through the scalar product between vectors, see Methods for more details). A simple visualization of these representations and their time evolution is required to shed light on the dynamics underlying the scientific activity in Physics. 
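Before turning to that visualization, here is a minimal sketch of how such embeddings can be built from the "PACS as words, papers as sentences" analogy. gensim's Word2Vec is used purely as a stand-in for the embedding method of [21] and for StarSpace-style models; the toy corpus, the hyperparameters, and the use of cosine similarity to approximate context similarity (which the paper defines through the scalar product of the optimized vectors) are our own assumptions, not the authors' settings.

```python
from gensim.models import Word2Vec

# Toy stand-in corpus: each "sentence" is the list of PACS codes attached to
# one article of a training window (the real input would be the APS metadata,
# e.g. for 1985-1989).  The codes below are only illustrative.
papers_pacs = [
    ["03.75.Fi", "05.30.Jp", "32.80.Pj"],
    ["03.75.Fi", "67.40.-w"],
    ["05.30.Jp", "67.40.-w", "32.80.Pj"],
] * 200   # repeated so that min_count is satisfied in this toy example

model = Word2Vec(
    sentences=papers_pacs,
    vector_size=100,   # embedding dimension (assumed)
    window=20,         # wide window: all codes of a paper act as mutual context
    min_count=5,       # drop very rare codes
    sg=1,              # skip-gram
    epochs=20,
)

# Proximity of two fields, approximated here by the cosine similarity of
# their embedding vectors.
print(model.wv.similarity("03.75.Fi", "05.30.Jp"))
```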
For this reason we rely on a standard dimensionality reduction technique that allows us to generate a two dimensional representations of our embeddings. We use the t-SNE algorithm (t-distributed Stochastic Neighbor Embedding) [24] and its modification that takes into account time-ordered input data, Dynamic t-SNE [25]. Dynamic t-SNE requires the different instances of the high dimensional space to contain the same embeddings, because it keeps track of them to preserve temporal coherence between consecutive projections. In this way, the 2-dimensional projection at time t + 1, not only depends on the high dimensional configuration of the embeddings at the same time, but also on the 2-dimensional projection at time t. In particular, the projections at time t are used as the initial conditions for projections at time t + 1 in order to reconstruct coherent trajectories. For this reason, we have restricted the number of PACS by selecting only those present into the whole time range under investigation, i.e. 1985-2009, for a total of about 300 PACS. The result of the dimensional reduction is shown in Fig 1 where we have added the ellipses to stress the cluster structure. As expected, most of the PACS are clusterized respecting the hierarchy of the classification (see the Data and methods section), that is represented by the different colors of the PACS trajectories. The relative position of the clusters is in very good agreement with intuition: Nuclear Physics is close to Elementary Particles and fields, the two Condensed Matter clusters are also close, while the General and interdisciplinary sectors are not clearly localized. In some interesting cases some PACS are not localized into their original cluster coming from the PACS classification. We name some of these noteworthy exceptions: All the previous examples show PACS whose use and dynamics does not reflect their classification. We believe that this representation can have a number of practical applications. For instance, it could be used to update and redesign the classification of research domains and to improve the synergies among researches of (supposedly) different areas. Prediction of new PACS pairs Context similarity is a metric introduced in [21] which we have specifically adapted to measure the proximity of two PACS given the current scientific production: it mirrors and summarizes the relationship between their respective scientific areas in a given time window. It is therefore natural to use it to estimate the likelihood that a pair of PACS, which has never appeared in a paper before, will occur in the same paper in the future. In our opinion this kind of events can be regarded as an innovation in the field of Physics: following the seminal ideas of B.W. Arthur, an innovation is defined as a previously unseen combination of existing elements [26]. In this section we make systematic predictions for the appearance of new PACS pairs and we confirm the goodness of our approach using both the Receiver Operating Characteristic curve (ROC) and its integral (AUC), and the best F1-score, both of them standard tools in statistical analysis [28][29][30]. As discussed in the Methods sections, scientific articles are grouped in 5-years-long training sets. In order to test the predictive power of context similarity we repeat our analysis on 10-years-long time windows formed by joining together two consequent nonoverlapping time intervals, e.g. 1985-1989 for training and 1990-1994 for testing. 
The idea is to test whether the context similarity of PACS couples is connected to the likelihood that a previously unseen couple will appear in the testing set. In each 10-year window, we proceed as follows:

1. We use the training set to calculate the embeddings for the 500 most frequent PACS and identify all couples that have never been published together up to the last year of the training set.
2. We check whether the couples of PACS selected in the training set are published in at least one paper of the testing set or not. We classify the testing-set couples of PACS in two separate classes accordingly: class 0 if they appear together in at least one paper, class 1 otherwise.
3. We evaluate the effectiveness of context similarity in forecasting unseen PACS couples using standard performance metrics such as the ROC-AUC and the best F1-score.

We test our metric against a null model that takes into account the relative growth of each field with respect to the others. To realize this null model we make use of the curveball algorithm introduced in [27] to create randomized bipartite networks that preserve the number of connections of each node. In other words, each article gets a random set of PACS in such a way that the following conditions are true: • All articles in the randomized articles-PACS network have the same number of PACS as they have in the original bipartite network. • All PACS in the randomized articles-PACS network appear in the same number of articles as they do in the original bipartite network. We then calculate the embeddings and the context similarity over the randomized database. In this way we can test the prediction power of context similarity against a null model that, on one side, keeps track of the growth of each scientific field through the frequency with which PACS are employed, while on the other, randomizes all information about the semantic relations between PACS. Due to the long computation time required to calculate the context similarity between PACS of the randomized articles-PACS network in every decade (which we refer to as the null model for simplicity), we have computed it only once for the whole dataset. Results are shown in Fig 2, where we display the ROC AUC and the best F1 score. Regarding the AUC metric, context similarity outperforms the proposed null model and scores well above a random classifier. Indeed, it can be proven that a random classifier would be characterized by an AUC score of 0.5 regardless of the class imbalance ratio of the system under observation [28][29][30].

Fig 2. Context similarity (continuous lines) outperforms the null model (dashed lines) in predicting innovations in Physics, i.e. new pairs of PACS used for the first time in a paper. The two metrics are evaluated by the ROC AUC (blue lines) and the Best F1 score (red lines), and the plot shows that context similarity scores higher in both cases. The database is organized in 10-year-long sliding windows: the first five years of each window form the training set where we calculate the two metrics, while the second five years form the testing set where we evaluate them. On the x axis we report the first and the last year of each testing set. The null model proposed in this plot assigns to each paper a random set of PACS in such a way that each article has the same number of PACS and each PACS appears in the same number of articles. The dashed grey line represents the ROC AUC of a random guess. https://doi.org/10.1371/journal.pone.0233997.g002 
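As an illustration of how the two scores in step 3 above could be computed from the context similarities of candidate pairs, here is a minimal sketch using scikit-learn. The input arrays are hypothetical placeholders of our own, not the authors' data or code, and for the scikit-learn convention we label a pair with 1 when it does appear in the testing window (the opposite of the class numbering used in step 2).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve

# Hypothetical inputs: one entry per previously unseen PACS pair found in the training window.
similarities = np.array([0.42, 0.05, 0.31, 0.77, 0.12])  # context similarity CS_ij
labels = np.array([1, 0, 0, 1, 0])  # 1 if the pair appears in the testing window

# ROC AUC: probability that a realized pair is ranked above an unrealized one.
auc = roc_auc_score(labels, similarities)

# Best F1: sweep all thresholds on the similarity score and keep the maximum
# harmonic mean of precision and recall.
precision, recall, _ = precision_recall_curve(labels, similarities)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best_f1 = f1.max()

print(f"ROC AUC = {auc:.3f}, best F1 = {best_f1:.3f}")
```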
Regarding the best F1 score, it cannot be directly compared with a random guess, being a harmonic mean of the Precision and Recall measures [30], but we can compare it against the null model that we have built, and it is evident that context similarity still performs better. In summary, context similarity captures more information than what can be understood by looking only at the frequency of PACS; therefore, not only does it successfully grasp the relation between PACS induced by the global scientific activity, but it is also able to predict innovations in the field of Physics over the years with consistently good performance. Quantifying the impact of milestones and Nobel prize winners In Fig 3, we have focused our attention on one illustrative example of PACS dynamics influencing and being influenced by scientific papers, to show the effectiveness of the proposed framework for studying the evolution of the relation between different fields of research. Highlighted in blue are the trajectories of two PACS, Matter Waves and Quantum Statistical Mechanics: these are the PACS of the Nobel prize article on Bose-Einstein condensation [31], published in 1995. The publication of this fundamental paper is associated, in the plot, with its PACS converging towards a closer position. In the following we study more examples of the effect of both APS milestones (see the description of the data for the definition of milestones) and Nobel prize papers on the space of PACS. Each PACS is added to papers by the authors at the time of submission. Under the reasonable assumption that authors follow the order of relevance of the topics, we consider only the first two PACS, i.e. the two main topics of a paper. In total there are 36 such special articles; however, only 20 of them have the first two PACS different at our level of aggregation (4 digits), and we calculate the value of context similarity for each of them. The aim is to compare the relative variations in context similarity of these pairs with the average variations of all PACS through the years, to spot a possible peculiar behavior of Nobels and milestones. In particular, we calculate such variations using as a starting point the value computed in the five-year interval having the publication year as the fifth, and last, year. The final value is computed at three different stages: the first one is set one year after the date of publication, the second one five years into the future, and the third one ten years into the future. In the medium term (bottom-left panel) the situation is more mixed: some articles experience a variation of context similarity which is outside the region delimited by the 10th and 90th percentiles, while others stall. In the long term (bottom-right panel), we see that for almost all the Nobel prize winners and milestones, the variation tends to be at the tails of the distribution of context similarity variations. The conclusion we draw from Fig 4 is that the publication of such articles, Milestones or Nobel papers, has a mixed impact on their PACS in the short and medium term; however, in the long term, with only one exception, they all experience a large increase. The fact that some pairs of PACS show negative trends for the variation of context similarity can be explained by their starting with high values at the time of publication, which leaves little room to reach higher values. 
The interpretation we give to this situation is that such papers combine already strongly related PACS, while the others are pioneers in creating bridges between previously unrelated scientific areas. In Fig 5, we show the average variation of context similarity of all these fundamental articles as a function of the number of years after their publication (red points). The error bars for each point represent the standard error relative to the average. Each point can be compared with the median and 10th and 90th percentiles of the variation of all the other PACS pairs of every article in the same time interval. The plot also shows the number of papers on which the average was carried out at each time (in green), which is decreasing with time due to the decrease of available papers on longer time spans. The plot shows that the context similarity undergoes a positive variation over the years after publication, which indicates that the main topics in these articles, identified by the first two PACS, are getting closer. This can be interpreted as an increase of interest in some fields of Physics related to the publication of those articles which have greatly influenced modern research. The negative trend in the last points can be explained by noting that the value of the context similarity is now high enough not to undergo any further substantial changes. Moreover, in these points there is greater uncertainty of calculation due to the fact that the numbers of articles available is significantly reduced with respect to previous years. Let us now focus on some specific examples. In Fig 6 we show the time evolution of context similarity of the first two PACS of four fundamental papers: • C. Jarzynski (1997): Nonequilibrium Equality for Free Energy Differences. [34] The vertical line represents the publication year. We notice different behaviors: in three out of four examples, context similarity experiences a steady long-term growth after the publication. In the top right panel such growth is also anticipating the publication, a behavior similar to the one discussed in the previous section about the possibility to predict innovative combinations of scientific fields. In the bottom left panel, instead, context similarity decreases. As already discussed, we interpret these two different cases as the paper being either a pioneer in the field, thus paving the way to further research, or at the peak of research, from which it is only possible to climb down. Conclusions Describing and predicting the scientific progress is a challenging task. In this paper we use the APS database of physics articles to build a multi-dimensional space to investigate the relative motion of scientific fields, as defined by the PACS codes. Our machine learning methodology is based on Natural Language Processing techniques, which are able to extract the context similarity between words and, in our case, between scientific topics, starting from their presence in the APS articles. This vector representation permits to visualize in a clear way the trajectories in time of Physics topics and to predict innovations in Physics, as defined by the appearance of new combinations of PACS codes in APS articles. Finally, we observe that APS Milestones and Nobel winner papers have an effect in bringing together previously unrelated topics. 
This work is a proof of concept that it is possible to go beyond standard network methodologies and build a space which is not only well suited to represent the dynamic of science, by it also allows to introduce metrics to make quantitative analysis and predictions. We believe that this research opens up a number of further developments, for example, this framework can be applied to study more extensive database, including not only Physics but also other scientific sectors and to investigate their mutual influence. Furthermore, it is an instrument that can be used to introduce more precise definition of scientific success such as one that links citations to the ability to affect the space of PACS: in future investigations for example, we plan to draw a comparison between sector's trajectories and the time evolution of citations. Description of the data The APS data-set (website: journals.aps.org/datasets) is a citation network data-set that is composed by papers in the field of physics organized by the American Physical Society. It contains 449935 papers in physics and related fields from 1977 to 2009. Among them, the high-impact papers used as evaluation benchmarks are derived from 78 milestone papers that experts from the American Physical Society have selected as outstanding contributions to the development of physics over the past 50 years. The PACS are alphanumerical strings hierarchically organized that are ascribed to scientific papers by authors at the time of publication and represent the domain of Physics the specific paper belongs to, for example the PACS 02.10.Yn indicates Matrix theory. The classification can be found in the supplementary information as a downloadable file. Creation of pacs embeddings PACS embeddings are created adapting the well-known algorithm of Word2Vec (in its Skip-Gram version) to our case of study [23,36]. The code producing the results discussed is implemented in tensorflow [35], an open access deep learning library published by Google, that we adapted to process scientific papers and PACS. We refer to the literature for a detailed descriptions of the procedure behind Word2Vec [23,36]. The key assumption is that there is a strong parallel with Natural Language Processing: articles can be viewed as sentences, i.e. contexts, and PACS as words. Each PACS is initialized with a random vector (embedding) and the positions of such vectors are adjusted during the training in order to maximize the similarity between PACS belonging to the same context. More into the details, each PACS is represented through a one-hot-encoded vector. This representation depends on the number of PACS to embed (the vocabulary size, in the language of [23,36]): at 4 digits precision 500 PACS per training set. The one-hot-encoded vector corresponding to a PACS is a binary vector which has all zeros except for a single one in the position that the PACS under analysis occupies in the list of all PACS: the first code is represented by [1, 0, 0, . . .], the second code by [0, 1, 0, . . .] and so on. In this regard, a scientific paper is nothing else but a collection of PACS, i.e. a collection one-hot-encoded vectors. To understand how Word2Vec works, we need to introduce two elements: an embedding matrix E of size V × N, where V is the number of PACS to embed and N the dimension of the embedding representation, a decoding matrix D of size N × V. 
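To make these shapes concrete before describing the training loop, here is a minimal sketch of the one-hot encoding and the two matrices; the vocabulary size, embedding dimension, and variable names are illustrative assumptions on our part, not the authors' TensorFlow code.

```python
import numpy as np

V, N = 500, 8          # vocabulary size (PACS per training window) and embedding dimension
rng = np.random.default_rng(0)

E = rng.normal(size=(V, N))   # embedding matrix, one row per PACS
D = rng.normal(size=(N, V))   # decoding matrix

def one_hot(index, size=V):
    """One-hot-encoded vector for the PACS occupying position `index` in the PACS list."""
    v = np.zeros(size)
    v[index] = 1.0
    return v

# The embedding of a PACS is the row of E selected by its one-hot vector,
# and the j-th column of D is obtained by applying D^T to the one-hot vector of PACS j.
h_i = one_hot(42) @ E          # shape (N,)
D_j = D.T @ one_hot(7)         # shape (N,)
score = D_j @ h_i              # scalar u_{ji}, as in the Skip-Gram description below
```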
Word2Vec is an iterative algorithm: at each step a random batch of scientific papers is extracted from the training set and, from each scientific paper in the batch, a random PACS is singled out as input while the remaining ones form the target context. Let $h_i$ be the embedding vector of a given input PACS $p_i$, and let $P$ be the set of all the PACS $p_j$ forming the target context. The decoding matrix allows us to calculate the score between the input PACS $p_i$ and all the words $p_j$ in the target context $P$. Calling $u_{ji}$ the score for the $j$th PACS in the target context $P$, $u_{ji}$ is defined by

$$u_{ji} = D_j \cdot h_i,$$

where $D_j$ is the $j$th column of the decoding matrix, obtained by applying the transposed matrix $D^{T}$ to the one-hot-encoded representation of the PACS $p_j$. Each score passes through a softmax function to become the posterior probability for the context PACS $p_j$ given the input PACS $p_i$:

$$p(p_j \mid p_i) = \frac{\exp(u_{ji})}{\sum_{k=1}^{V} \exp(u_{ki})}.$$

The posterior probability of predicting the whole context given the input PACS is the product of the posterior probabilities of each PACS in the context. The Skip-Gram model aims to maximize this probability at each step of the training for each input-context couple. However, it is computationally more efficient to transform such a maximization problem into the minimization of the following loss function:

$$L = -\sum_{j \in P} u_{ji} + |P| \, \log \sum_{k=1}^{V} \exp(u_{ki}).$$

At each step, Skip-Gram is trained over a random batch of input-context couples, so the total loss over the batch is the average of all the single losses $L$. The training set is sampled in random batches at each training step; this allows us to efficiently process large quantities of data because parameter updates for Word2Vec are calculated only on subsets, i.e. only on those vectors present in the sample. At the end of the training, the position of each code mimics what the algorithm has learnt about the scientific language and allows us to quantify the similarity between PACS given the global scientific production. Word2Vec is trained through a variation of Stochastic Gradient Descent, therefore the embedding vectors will be different every time we run the algorithm [37][38][39]. In particular, they can differ for two reasons: they can occupy different positions in the space of PACS, and they can be randomly rotated with respect to the origin of the space in which they are defined. However, rotation-invariant quantities, like the scalar product, can still be calculated and are not affected by rotations of the embeddings. We adopt the definition proposed by [21] of the context similarity $CS_{ij}$ between PACS $i$ and $j$ as the average over 30 runs of the scalar product among the embeddings:

$$CS_{ij} = \frac{1}{30} \sum_{k=1}^{30} S^{k}_{ij},$$

where $S^{k}_{ij}$ is the scalar product between the embeddings of PACS $i$ and $j$ in the $k$th training instance. Taking the average over different runs offers two important advantages: on the one hand, it allows us to check whether the algorithm is learning to represent PACS correctly, by looking at the distribution of their scalar products; on the other hand, it is a better proxy of the true context similarity between PACS. The database at our disposal covers 25 years of scientific papers, from 1985 to 2009; we group them in 5-year-long overlapping intervals, from 1985-1989 up to 2005-2009, for a total of 21 time windows. We have empirically found that before 1985 there are not enough articles to make a statistically valid analysis. There are no a-priori instructions to identify the minimal size of the data-set required for a good performance of the algorithm, because this value depends on the database, the vocabulary size, and the aim of the training. 
As a general guideline, however, the documents used for training should make sufficient use of all the words in the vocabulary. We have checked that, when trained with too few papers, the embeddings of some of the PACS were placed essentially at random in the space of PACS across multiple instances of the training. In other words, in order to create a reliable vector representation of PACS, the algorithm requires a sufficiently large training set, and this criterion is not met before 1985. This is due to the fact that before 1985 there are fewer than 2500 articles per year, while after 1985 this number jumps to 7500 and keeps growing to more than 15000 in 2009. Consequently, papers before 1985 are discarded and papers after 1985 are grouped in 5-year-long windows, so that each sliding window contains enough data to successfully train the model and produce reliable results. This choice is also theoretically motivated by the assumption that the time scale of the dynamics that shape scientific research is longer than 5 years. In each 5-year time interval we create a vector representation for the 500 most frequent 4-digit PACS. It has been empirically observed in [21,22] that the algorithm is not able to create reliable vector representations for words that are too rare. We have verified that this number is a good compromise between having a wide spectrum of topics covered and the level of accuracy of the embeddings in terms of prediction power. This choice leaves out of our analysis around 40 PACS (with multiplicity less than 2) in the first sliding windows and around 150 PACS (with multiplicity less than 10) in the last sliding windows. The increase in the number of PACS left out and in their multiplicity is due to the positive trend in the number of published papers per year. The embedding dimension chosen for this analysis is 8, i.e. PACS embeddings live in an 8-dimensional Euclidean space. The optimal dimensionality, which depends on the complexity of the problem under examination, and in particular on the size of the dataset and the vocabulary, is usually determined by a trial-and-error procedure [23,36], and our tests suggest that 8 is a good compromise between efficiency and accuracy. The reader can find the code used to produce the embeddings at the following link: https://github.com/Andrea-Napoletano/WyFiG.
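For readers who want a self-contained illustration of the context-similarity definition above, the following sketch averages the scalar products of stored embedding runs. The data layout and variable names are our own assumptions; this is not the code from the repository linked above.

```python
import numpy as np

def context_similarity(runs: list, i: int, j: int) -> float:
    """Average scalar product between the embeddings of PACS i and j over several runs.

    `runs` is assumed to be a list of (V, N) arrays, one embedding matrix per
    independent Word2Vec training (e.g. 30 of them), with a shared PACS ordering.
    """
    return float(np.mean([run[i] @ run[j] for run in runs]))

# Example with toy data: 30 runs, 500 PACS, 8-dimensional embeddings.
rng = np.random.default_rng(1)
runs = [rng.normal(size=(500, 8)) for _ in range(30)]
print(context_similarity(runs, 12, 345))
```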
6,899.2
2019-11-07T00:00:00.000
[ "Physics" ]
Dilated LSTM with attention for Classification of Suicide Notes In this paper we present a dilated LSTM with attention mechanism for document-level classification of suicide notes, last statements and depressed notes. We achieve an accuracy of 87.34% compared to competitive baselines of 80.35% (Logistic Model Tree) and 82.27% (Bi-directional LSTM with Attention). Furthermore, we provide an analysis of both the grammatical and thematic content of suicide notes, last statements and depressed notes. We find that the use of personal pronouns, cognitive processes and references to loved ones are most important. Finally, we show through visualisations of attention weights that the Dilated LSTM with attention is able to identify the same distinguishing features across documents as the linguistic analysis. Introduction Over recent years the use of social media platforms, such as blogging websites has become part of everyday life and there is increasing evidence emerging that social media can influence both suicide-related behaviour (Luxton et al., 2012) and other mental health conditions (Lin et al., 2016). Whilst there are efforts to tackle suicide and other mental health conditions online by social media platforms such as Facebook (Facebook, 2019), there are still concerns that there is not enough support and protection, especially for younger users (BBC, 2019). This has led to a notable increase in research of suicidal and depressed language usage (Coppersmith et al., 2015;Pestian et al., 2012) and subsequently triggered the development of new healthcare applications and methodologies that aid detection of concerning posts on social media platforms (Calvo et al., 2017). More recently, there has also been an increased use of deep learning techniques for such tasks (Schoene and Dethlefs, 2018), however there is little evidence which features are most relevant for the accurate classification. Therefore we firstly analyse the most important linguistic features in suicide notes, depressed notes and last statements. Last Statements have been of interest to researchers in both the legal and mental health community, because an inmates last statement is written, similarly to a suicide note, closely before their death (Texas Department of Criminal Justices, 2019). However, the main difference remains that unlike in cases of suicide, inmates on death row have no choice left in regards to when, how and where they will die. Furthermore there has been extensive analysis conducted on the mental health of death row inmates where depression was one of the most common mental illnesses. Work in suicide note identification has also compared the different states of mind of depressed and suicidal people, because depression is often related to suicide (Mind, 2013). Secondly, we introduce a recurrent neural network architecture that enables us to (1) model long sequences at document level and (2) visualise the most important words to accurate classification. Finally, we evaluate the results of the linguistic analysis against the results of the neural network visualisations and demonstrate how these features align. We believe that by exploring and comparing suicide notes with last statements and depressed notes, both qualitatively and quantitatively it could help us to find further differentiating factors and aid in identifying suicidal ideation. Related Work The analysis and classification of suicide notes, depression notes and last statements has traditionally been conducted separately. 
Work on suicide notes has often focused on identifying suicidal ideation online (O'dea et al., 2017) or distinguishing genuine from forged suicide notes (Coulthard et al., 2016), whilst the main purpose of analysing last statements has been to identify psychological factors or key themes (Schuck and Ward, 2008). Suicide Notes Recent years have seen an increase in the analysis of suicidal ideation on social media platforms, such as Twitter. Shahreen et al. (2018) searched the Twitter API for specific keywords and analysed the data using both traditional machine learning techniques and neural networks, achieving an accuracy of 97.6% using neural networks. Research conducted by Burnap et al. (2017) has developed a classifier to distinguish suicide-related themes such as reports of suicides and casual references to suicide. Work by Just et al. (2017) used a dataset annotated for suicide risks by experts and a linguistic analysis tool (LIWC) to determine linguistic profiles of suicide-related Twitter posts. Other work by Pestian et al. (2010) has looked into the analysis and automatic classification of sentiment in notes, where traditional machine learning algorithms were used. Another important area of suicide note research is the identification of forged suicide notes from genuine ones. Jones and Bennell (2007) used a supervised classification model and a set of linguistic features to distinguish genuine from forged suicide notes, achieving an accuracy of 82%. Depression notes Work on identifying depression and other mental health conditions has become more prevalent over recent years, where a shared task was dedicated to distinguishing depression and PTSD (Post Traumatic Stress Disorder) on Twitter using machine learning (Coppersmith et al., 2015). Morales et al. (2017) have argued that changes in cognition of people with depression can lead to different language usage, which manifests itself in the use of specific linguistic features. Research conducted by Resnik et al. (2015) also used linguistic signals to detect depression with different topic modelling techniques. Work by Rude et al. (2004) used LIWC to analyse written documents by students who had experienced depression, currently depressed students, as well as students who had never experienced depression, where it was found that individuals who have experienced depression used more first-person singular pronouns and negative emotion words. Nguyen et al. (2014) used LIWC to detect differences in language in online depression communities, where it was found that negative emotion words are good predictors of depressed text compared to control groups using a Lasso Model (Tibshirani, 1996). Research conducted by Morales and Levitan (2016) showed that using LIWC to identify sadness and fatigue helped to accurately classify depression. Last statements Most work in the analysis of last statements of death row inmates has been conducted using data from The Texas Department of Criminal Justice, made available on their website (Texas Department of Criminal Justices, 2019). Recent work conducted by Foley and Kelly (2018) has primarily focused on the analysis of psychological factors, where it was found that the themes of 'love' and 'spirituality' in particular were constant whilst requests for forgiveness declined over time. Kelly and Foley (2017) have identified that mental health conditions often occur in death row inmates, with one of the most common conditions being depression. 
Research conducted by Heflick (2005) studied Texas last statements using qualitative methods and have found that often afterlife belief and claims on innocence are common themes in these notes. Eaton and Theuer (2009) studied qualitatively the level of apology and remorse in last statements, whilst also using logistic regression to predict the presence of apologies achieving an accuracy of 92.7%. Lester and Gunn III (2013) used the LIWC program to analyse last statements, where they have found nine main themes, including the affective and emotional processes. Also, Foley and Kelly (2018) found in a qualitative analysis that the most common themes in last statements were love (78%), spirituality (58%), regret (35%) and apology (35%). Data For our analysis and experiments we use three different datasets, which have been collected from different sources. For the experiments we use standard data preprocessing techniques and remove all identifying personal information. 1 Last Statements Death Row This dataset has been made available by the Texas Department of Criminal Justices (2019) and contains 545 records of prisoners who have received the death penalty between 1982 and 2017 in Texas, U.S.A. A total of 431 prisoners wrote notes prior to their death. Due to the information available on this data we have done a basic analysis on the data available, hereafter referred to as LS. Suicide Note The data for this corpus has mainly been taken from Schoene and Dethlefs (2016), but has been further extended by using notes introduced by The Kernel (2013) and Tumbler (2013). There are total of 161 suicide notes in this corpus, hereafter referred to as GSN. Depression Notes We used the data collected by Schoene and Dethlefs (2016) of 142 notes written by people identifying themselves as depressed and lonely, hereafter referred to as DL. Linguistic Analysis To gain more insight into the content of the datasets, we performed a linguistic analysis to show differences in structure and contents of notes. For the purpose of this study we used the Linguistic Inquiry and Word Count software (LIWC) (Tausczik and Pennebaker, 2010), which has been developed to analyse textual data for psychological meaning in words. We report the average of all results across each dataset. Dimension Analysis Firstly, we looked at the word count and different dimensions of each dataset (see Table 1). It has previously been argued by Tausczik and Pennebaker (2010) that the words people use can give insight into the emotions, thoughts and motivations of a person, where LIWC dimensions correlate with emotions as well as social relationships. The number of words per sentences are highest in DL writers and lowest in last statement writers. Research by Osgood and Walker (1959) has suggested that people in stressful situations break their communication down into shorter units. This may indicate alleviated stress levels in individuals writing notes prior to receiving the death sentence. Clout stands for the social status or confidence expressed in a person's use of language (Pennebaker et al., 2014). This dimension is highest for people writing their last statements, whereas depressed people rank lowest on this. Cohan et al. (2018) have noted that this might be due to the fact that depressed individuals often have a lower socio-economic status. The Tone of a note refers to the emotional tone, including both positive and negative emotions, where numbers below 50 indicate a more negative emotional tone (Cohn et al., 2004). 
The Tone score is highest overall for LS and lowest for DL, indicating an overall more negative tone in DL and a more positive tone in LS. Function Words and Content Words Next, we looked at selected function words and grammatical differences, which can be split into two categories: Function Words (see Table 2), reflecting how humans communicate, and Content Words (see Table 2), demonstrating what humans say (Tausczik and Pennebaker, 2010). Previous studies have found that whilst function words make up a comparatively small part of a person's vocabulary, they account for more than 50% of the words a person uses when communicating. Furthermore, it was found that there is a difference in how human brains process function and content words (Miller, 1991). Previous research has found that function words are connected with indicators of people's social and psychological worlds (Tausczik and Pennebaker, 2010), where it has been argued that the use of function words requires basic skills. The highest amount of function words was used in DL notes, whilst both GSN and LS have a similar amount of function words. Rude et al. (2004) found that high usage, specifically of first-person singular pronouns ("I"), could indicate higher emotional and/or physical pain, as the focus of the writer's attention is on themselves. Just et al. (2017) also identified a larger amount of personal pronouns in suicide-related social media content. Previous work by Hancock et al. (2007) found that people use a higher amount of negations when expressing negative emotions and use fewer words overall, compared to more positive emotions. This also seems to hold here, where the amount of Negations was highest in the DL corpus and lowest in the LS corpus, whilst the overall word count was lowest for DL and negative emotions were highest. Furthermore, it was found that Verbs, Adverbs and Adjectives are often used to communicate content; however, previous studies have found (Jones and Bennell, 2007; Gregory, 1999) that individuals who commit suicide are under a higher drive and therefore reference a higher amount of objects (through nouns) rather than using descriptive language such as adjectives and adverbs. Affect Analysis The analysis of emotions in suicide notes and last statements has often been addressed in research (Schoene and Dethlefs, 2018; Lester and Gunn III, 2013). The number of Affect words is highest in LS notes, whilst it is lowest in DL notes; this could be related to the emotional Tone of a note (see Table 1). This also applies to the amount of Negative emotions, which are highest in DL notes, and Positive emotions, which are highest in LS notes. Previous research has analysed the amount of Anger and Sadness in GSN and DL notes and has shown that it is more prevalent in DL note writers, as these are typical feelings expressed when people suffer from depression (Schoene and Dethlefs, 2016). The term Cognitive processes encompasses a number of different aspects, where we have found that the highest amount of cognitive processes was in DL notes and the lowest in LS notes. Boals and Klein (2005) found that people use more cognitive mechanisms to cope with traumatic events such as break-ups, using more causal words to organise and explain events and thoughts for themselves. 
Arguably this explains why there is a lower amount in LS notes, as LS writers often have a long time to organise their thoughts, events and feelings whilst waiting for their sentence (Death Penalty Information Centre, 2019). Insight encompasses words such as think or consider, whilst Cause encompasses words that express reasoning or causation of events, e.g. because or hence. These terms have previously been coined as cognitive process words by Gregory (1999), who argued that these words are used less in GSN notes as the writer has already finished the decision-making process, whilst other types of discourse would still try to justify and reason over events and choices. This can also be found in the analysis of our own data, where both GSN and LS notes show similar, but lower, frequencies of terms in those two categories compared to DL writers. Tentativeness refers to language use that indicates a person is uncertain about a topic and uses a number of filler words. A person who uses more tentative words may not have expressed an event to another person and therefore has not yet processed the event and formed it into a story (Tausczik and Pennebaker, 2010). The amount of tentative words used in DL notes is highest, whilst it is lowest in LS notes. This might be due to the fact that LS writers have already had to reiterate certain events multiple times as they go through the process of prosecution. Personal Concerns Personal Concerns refers to the topics most commonly brought up in the different notes, where we note that both Money and Work are most often referred to in GSN notes and least often in LS notes. This might be due to the fact that (Mind, 2013) lists these two topics as common concerns. Table 7 shows that the focus of LS letters is primarily in the past whilst GSN and DL letters focus on the present. The high focus on the past in DL notes as well as GSN notes could be because these notes might draw on past experiences to express the issues of the writer's current situation or problems. The most frequent use of future tense is in LS letters, which could be due to LS writers' common focus on the afterlife (Heflick, 2005). Overall, it was noted that for most analyses GSN falls between the two extremes of LS and DL. Learning Model The primary model is the long short-term memory (LSTM), given its suitability for language and time-series data (Hochreiter and Schmidhuber, 1997). We feed into the LSTM an input sequence $x = (x_1, \ldots, x_N)$ of words in a document alongside a label $y \in Y$ denoting the class from any of the three datasets. The LSTM learns to map inputs $x$ to outputs $y$ via a hidden representation $h_t$, which can be found recursively from an activation function $\phi$ as

$$h_t = \phi(x_t, h_{t-1}),$$

where $t$ denotes a time-step. During training, we minimise a loss function, in our case the categorical cross-entropy

$$L = -\sum_{c \in Y} y_c \log \hat{y}_c.$$

LSTMs manage their weight updates through a number of gates that determine the amount of information that should be retained and forgotten at each time step. In particular, we distinguish an 'input gate' i that decides how much new information to add at each time-step, a 'forget gate' f that decides what information not to retain, and an 'output gate' o determining the output. 
More formally, and following the definition by Graves (2013), this leads us to update our hidden state h as follows (where σ refers to the logistic sigmoid function and c is the 'cell state'):

$$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)$$
$$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)$$
$$c_t = f_t c_{t-1} + i_t \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$$
$$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o)$$
$$h_t = o_t \tanh(c_t)$$

A standard LSTM definition solves some of the problems vanilla RNNs have (Hochreiter and Schmidhuber, 1997), but it still has some shortcomings when learning long dependencies. One of them is due to the cell state of an LSTM; the cell state is changed by adding some function of the inputs. When we backpropagate and take the derivative of $c_t$ with respect to $c_{t-1}$, the added term disappears and less information travels through the layers of a learning model. For our implementation of a Dilated LSTM, we follow the implementation of recurrent skip connections with exponentially increasing dilations in a multi-layered learning model by Chang et al. (2017). This allows LSTMs to better learn input sequences and their dependencies, so that temporal and complex data dependencies are learned on different layers. Whilst the dilated LSTM alleviates the problem of learning long sequences, it does not contribute to identifying words in a sequence that are more important than others. Therefore we extend this network by (1) an embedding layer and (2) an attention mechanism to further improve the network's ability. A graph illustration of our learning model can be seen in Figure 2. Dilated LSTM with Attention Each document D contains i sentences $S_i$, where $w_i$ represents the words in each sentence. Firstly, we embed the words to vectors through an embedding matrix $W_e$, which is then used as input into the dilated LSTM. The most important part of the dilated LSTM is the dilated recurrent skip connection

$$c^{(l)}_t = f\left(x^{(l)}_t,\; c^{(l)}_{t - s^{(l)}}\right),$$

where $c^{(l)}_t$ is the cell in layer l at time t; $s^{(l)}$ is the skip length, or dilation, of layer l; $x^{(l)}_t$ is the input to layer l at time t; f(·) denotes an LSTM cell; and M and L denote the dilations at different layers. The dilated LSTM alleviates the problem of learning long sequences; however, not every word in a sequence has the same meaning or importance. Attention layer The attention mechanism was first introduced by Bahdanau et al. (2015), but has since been used in a number of different tasks including machine translation (Luong et al., 2015), sentence pair detection (Yin et al., 2016), neural image captioning (Xu et al., 2015) and action recognition (Sharma et al., 2015). Our implementation of the attention mechanism is inspired by Yang et al. (2016), using attention to find words that are most important to the meaning of a sentence at document level. We use the output of the dilated LSTM as direct input into the attention layer, where O denotes the output of the final layer L of the Dilated LSTM at time t+1. The attention for each word w in a sentence s is computed as follows, where $u_{it}$ is the hidden representation of the dilated LSTM output, $\alpha_{it}$ represents the normalised alpha weights measuring the importance of each word, and $S_i$ is the sentence vector:

$$u_{it} = \tanh(W_w O_{it} + b_w), \qquad \alpha_{it} = \frac{\exp(u_{it}^{\top} u_w)}{\sum_{t} \exp(u_{it}^{\top} u_w)}, \qquad S_i = \sum_{t} \alpha_{it} O_{it}.$$

Experiments and Results For our experiments we use all three datasets; Table 8 shows the results for the experiment series. We establish three performance baselines on the datasets by using three different algorithms previously used on similar datasets. Firstly, we use ZeroR and LMT (Logistic Model Tree), previously used by Schoene and Dethlefs (2016). Additionally, we chose to benchmark our algorithm against the Bi-directional LSTM with attention originally proposed by Yang et al. 
(2016), which was also used on similar existing datasets before (Schoene and Dethlefs, 2018). Evaluation In order to evaluate the DLSTM with attention we look in more detail at the predicted labels and visualise examples of each note to show which features are assigned the highest attention weights. Label Evaluation In Figure 2 we show the confusion matrix over the DLSTM with attention. It can be seen that LS notes are most often correctly predicted and DL notes are least likely to be accurately predicted. The same applies to results of the main competing model (Bi-directional LSTM with Attention), Figure 3 shows that this model still misclassifies LS notes with DL notes. The most important words highlighted in a last statement note (see Figure 4) are personal pronouns as well as an apology and expression of love towards friends and family members. This corresponds with the higher amount of personal pronouns, positive emotions and references to Family in LS notes compared to GSN and DL notes. Furthermore it can be seen that there is a low amount of cognitive process words and more action verbs such as killing or hurt, which could confirm that inmates have had more time to process events and thoughts and don't need cognitive words as a coping mechanism anymore (Boals and Klein, 2005). Figure 5 shows a GSN note, where the most important words are also pronouns, references to family, requests for forgiveness and endearments. Previous research has shown that forgiveness is an important feature as well as the giving instructions such as help or phrases like do not follow are key to accurately classify suicide notes (Pestian et al., 2010). Terms of endearment for loved ones at the start or towards the end of a note (Gregory, 1999). The DL note in Figure 6 shows that there is a greater amount of cognitive process verbs present, such as feeling or know as well as negations, which confirms previous analysis using LIWC. Figure 7 shows a visualisation of a LS note. In this instances the word God was replaced with up, when looking into the usage of the word up in other LS notes, it was found that it was commonly used in reference to religious topics such as God, heaven or up there. Whilst there is still consistency in highlighting personal pronouns (e.g.: you), it can be seen that the end of the note is missing and more action verbs such as hurt or take are more important. The visualisation in Figure 9 demonstrates how the personal pronoun I has been removed from several DL notes, where DL notes are least likely to be predicted accurately as shown in Figure 2. Conclusion In this paper we have presented a new learning model for classifying long sequences. We have shown that the model outperforms the baseline by 6.99 % and by 5.07 % a competitor model. Furthermore we have provided an analysis of the linguistic features on three datasets, which we have later compared in a qualitative evaluation by visualising the attention weights on examples of each dataset. We have shown that the neural network pays attention to similar linguistic features as provided by LIWC and found in human evaluated related research.
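To make the attention layer described above more tangible, here is a minimal sketch of word-level attention pooling over recurrent outputs in Keras. The layer sizes, variable names, and the use of a plain LSTM instead of the dilated variant are our own simplifying assumptions, not the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

class WordAttention(layers.Layer):
    """Attention pooling over time steps, in the spirit of Yang et al. (2016)."""

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, d), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(d,), initializer="zeros")
        self.u = self.add_weight(name="u", shape=(d, 1), initializer="glorot_uniform")

    def call(self, h):                                     # h: (batch, time, d) recurrent outputs
        v = tf.tanh(tf.tensordot(h, self.W, axes=1) + self.b)
        scores = tf.tensordot(v, self.u, axes=1)           # (batch, time, 1)
        alpha = tf.nn.softmax(scores, axis=1)              # normalised word weights
        return tf.reduce_sum(alpha * h, axis=1)            # weighted document vector

# A toy classifier: embedding -> LSTM -> attention -> 3-way softmax (GSN / DL / LS).
model = tf.keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=128),
    layers.LSTM(64, return_sequences=True),
    WordAttention(),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```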
5,429.2
2019-11-01T00:00:00.000
[ "Computer Science" ]
Design of 1.33 μm and 1.55 μm Wavelengths Quantum Cascade Photodetector In this paper, a quantum cascade photodetector based on intersubband transitions in quantum wells, with the ability to detect the 1.33 μm and 1.55 μm wavelengths in two individual current paths, is introduced. Multi-quantum-well structures based on III-Nitride materials are used because of their large band gaps. In order to calculate the photodetector parameters, wave functions and energy levels are obtained by solving the 1-D Schrödinger–Poisson equation self-consistently at 80 K. Responsivity values are about 22 mA/W and 18.75 mA/W for detection of the 1.33 μm and 1.55 μm wavelengths, respectively. Detectivity values are calculated as 1.17 × 10^7 (Jones) and 2.41 × 10^7 (Jones) at wavelengths of 1.33 μm and 1.55 μm, respectively. Introduction Quantum well infrared photodetectors (QWIPs), used as thermal imagers in focal plane arrays (FPAs), have been studied extensively [1] [2] [3]. For electronic transport, QWIPs operate in photoconductive and photovoltaic modes. Applications of QWIPs in photoconductive mode are limited to low temperature because of the presence of an external applied bias. QWIPs operating in photovoltaic mode are promising devices for high-temperature and longer-wavelength applications, because they require no external voltage bias and have no dark current [4]. Quantum cascade detectors (QCDs) operating in photovoltaic mode are promising devices for small-pixel, large-area FPAs, multicolor detection and demultiplexers [5]. A typical QCD consists of an active region constructed of multiple periods, each containing a thick, highly doped active QW and a nominally undoped extraction cascade composed of thinner QWs. The electrons in the ground state of the doped wells in each period are excited to the upper level, and are then extracted from this well by the emission of an optical phonon. The speed of detectors based on intersubband transitions (ISBTs) is limited by the electron ISBT scattering time, which is about 1 ps [3]. Conduction-band ISBT QCDs operate based on photon-electron interactions between quantized subbands in the conduction band of the wells. In a QCD structure, multi-quantum wells (MQWs) are designed depending on the detected wavelengths [6] [7] [8] [9] [10]. For short wavelengths, quantum well structures with small conduction band offset, such as GaAs/AlGaAs and InGaAs/InAlAs, are used [5] [6] [7]. MQWs based on III-Nitride materials, due to their large conduction band offset and large LO-phonon energy, are the best candidates for the design of UV and NIR photodetectors [11] [12] [13] [14] [15]. On the other hand, the 1.33 μm and 1.55 μm wavelengths are of interest because of their importance in optical fiber communications. The attenuation of glass fiber at the 1.55 μm wavelength is minimal, so this wavelength is useful for long-distance communication. Also, the distortion of an optical signal centered at the 1.33 μm wavelength is minimal. 
In this paper, a QCD for detecting the 1.33 μm and 1.55 μm wavelengths in individual current paths, based on intersubband transitions in AlGaN/AlN MQWs, is designed. The paths are separated by 100 Å of AlN. In order to calculate the photodetector parameters, wave functions and energy levels are obtained by solving the 1-D Schrödinger–Poisson equation self-consistently at 80 K. The incident light excites the electrons populating the first energy level of the n+ doped QWs; after that, they are extracted from the first wells by the emission of optical phonons with energy close to the GaN LO-phonon energy (92 meV). The responsivity of the paths is about 22 mA/W and 18.75 mA/W for detection of the 1.33 μm and 1.55 μm wavelengths, respectively. Detectivity values are calculated as 1.17 × 10^7 (Jones) and 2.41 × 10^7 (Jones) at wavelengths of 1.33 μm and 1.55 μm, respectively. Theoretical Background and Simulation Results A 3D view of the designed QCD, with the ability to detect the 1.33 μm and 1.55 μm wavelengths in two separate paths, is shown in Figure 1. Each path is separated by 100 Å of AlN. Paths 1 and 2 are designed for detection of the 1.33 μm and 1.55 μm wavelengths, respectively. They possess 20 periods of Al_xGa_{1-x}N/AlN MQWs; the thicknesses of the barriers and wells are listed in Table 1 and Table 2. The first QWs in each path are n+ doped with a concentration of 5 × 10^11 cm^-2. The conduction band edge and wave functions for each path are shown in Figure 2. The wave functions are calculated by solving the 1-D Schrödinger–Poisson equation self-consistently at 80 K [16]. For the calculation of the conduction band edges, pyroelectric and piezoelectric polarization effects in III-Nitride materials are considered [17]. The internal electric field due to polarization effects in the jth layer of a k-layer quantum structure is obtained as Equation (1) [18]:

$$E_j = \frac{\sum_k (P_k - P_j)\, L_k/\varepsilon_k}{\varepsilon_j \sum_k L_k/\varepsilon_k}, \qquad (1)$$

where j and k are layer indices, and L_k, P_k, P_j, ε_j and ε_k are the lengths of the layers, the total polarizations, and the permittivities of the jth and kth layers, respectively. The absorption coefficient is obtained from Equation (2) [10], where E_i and E_f are the quantized energy levels for the initial and final states, respectively, and M_fi, μ, c, L_eff, n_r and τ_in are the dipole matrix element between the initial and final states, the permeability, the speed of light in free space, the effective spatial extent of electrons in the subbands, the refractive index and the intersubband relaxation time, respectively. The absorption coefficient at 80 K for each path is shown in Figure 3. 
The absorption coefficient is linked to the dipole matrix element between the initial and final states through Equation (2). As illustrated, the path for detection of 1.55 μm has a small absorption coefficient due to the small overlap of the wave functions between the initial and first levels. The responsivity R for each path is obtained from Equation (3) [3], where λ, c, q, h, η, P_e, P_c and N_QW are the incident wavelength, the speed of light in free space, the elementary charge, Planck's constant, the quantum efficiency, the escape probability of an excited electron in the active QW, the capture probability into the active QW's ground state for an electron traveling down the QCD's cascade, and the number of active QW periods of the QCD. The absorption efficiency is expressed as Equation (4) [3], where α and d are the absorption coefficient and the thickness of the active well in each period, respectively. The responsivity for each path at 80 K is indicated in Figure 4. In QCDs, the resistance of one period of the structure at zero bias times the area of the device, defined as R_0A, is an important parameter that characterizes the dark current (the current in the absence of incident light) [4]. In order to calculate R_0A, only the interaction between electrons and LO phonons is considered, and the interaction between electrons and acoustical phonons is neglected due to the sufficiently high differences between the energy levels in the studied structures [4]. R_0A is obtained from Equation (5) [4]. Here, G_ij is the global transition rate between subband i and subband j, and is the sum of the two transition rates for absorption of LO phonons (G_aij) and emission of LO phonons (G_eij). The dominant transition rates of path 1 and path 2 are listed in Table 3 and Table 4, respectively. As observed in Table 3 and Table 4, since the global transition rates of path 1 are higher than the values for path 2, the resistance at null bias of path 1 is lower. Higher global transition rates between two levels can be related to a higher overlap of the corresponding wave functions [3]. For path 1 this overlap is high (due to the smaller width of the wells), which leads to smaller resistance values. The electron transition capability between two levels increases with increasing temperature; therefore, as shown in Table 3 and Table 4, the global rates increase as the sample temperature is increased. The detectivity of the designed detector is limited by Johnson noise and is obtained as Equation (6) [3]:

$$D^{*} = R(\lambda)\,\sqrt{\frac{R_{0}A}{4 k_{B} T}}, \qquad (6)$$

where R(λ) and R_0A are the responsivity spectrum and the zero-bias resistance-area product of the device, respectively, k_B is the Boltzmann constant and T is the temperature. The detectivity versus incident wavelength for the paths at 80 K is shown in Figure 7. The paths have different detectivity values because they have different responsivity and zero-bias resistance values, shown in Figure 4 and Figure 5, respectively. Conclusion In this research, a QCD for detecting the 1.33 μm and 1.55 μm wavelengths in individual current paths based on intersubband transitions in AlGaN/AlN MQWs was designed. In order to calculate the photodetector parameters, the 1-D Schrödinger–Poisson equation was solved self-consistently at 80 K to obtain the wave functions and energy levels. Responsivity values are about 22 mA/W and 18.75 mA/W for detection of the 1.33 μm and 1.55 μm wavelengths, respectively. Detectivity values are calculated as 1.17 × 10^7 (Jones) and 2.41 × 10^7 (Jones) at wavelengths of 1.33 μm and 1.55 μm, respectively. As shown in Figure 2, under incoming radiation the photoexcited electrons are transported to the lower energy levels by emitting optical phonons with energy close to the GaN LO-phonon energy (92 meV), until they reach the first levels in the next period of the path. Figure 1. A 3D view of the designed QCD with the ability to detect the 1.33 μm and 1.55 μm wavelengths in two separate paths. Figure 2. Conduction band edge and wave functions for each path for detection of (a) 1.33 μm and (b) 1.55 μm. Figure 3. Absorption coefficient for the paths versus incident wavelength at 80 K. Figure 4. Responsivity for each path versus incident wavelength at 80 K. Table 3. Dominant transition rates of path 1 at three different temperatures. Table 4. Dominant transition rates of path 2 at three different temperatures.
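As a back-of-the-envelope check of the detectivity figures quoted above, the following sketch evaluates the Johnson-noise-limited expression D* = R(λ)·sqrt(R_0A / 4k_BT). The R_0A values used here are placeholders of our own choosing (the paper's table values are not reproduced), so the printed numbers are purely illustrative.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def detectivity(responsivity_a_per_w: float, r0a_ohm_cm2: float, temperature_k: float = 80.0) -> float:
    """Johnson-noise-limited detectivity in Jones (cm*Hz^0.5/W)."""
    return responsivity_a_per_w * math.sqrt(r0a_ohm_cm2 / (4.0 * K_B * temperature_k))

# Illustrative placeholder R_0A values (ohm*cm^2); not taken from the paper.
print(f"path 1: {detectivity(0.022, 1e-3):.2e} Jones")
print(f"path 2: {detectivity(0.01875, 5e-3):.2e} Jones")
```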
2,158.8
2017-08-17T00:00:00.000
[ "Physics", "Engineering" ]
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks As the key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are well connected, malicious attackers may deliberately propagate fake position information to confuse the ordinary users and lower the network survivability in belt-type situation. However, most existing positioning solutions only focus on the algorithm accuracy and do not consider any security aspects. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve the energy-efficient, reliability and accuracy. To handle the unbalanced resource consumption, a node deployment mechanism is presented to satisfy the energy balancing strategy in resource-constrained scenarios. According to cooperation localization theory and network connection property, the parameter estimation model is established. To achieve reliable estimations and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. In order to further improve the algorithms, the node positioning accuracy is enhanced by using the steepest descent method. The experimental simulations illustrate the performance of new scheme can meet the previous targets. The results also demonstrate that it improves the belt-type sensor networks’ survivability, in terms of anti-interference, network energy saving, etc. Introduction Security has become a major challenge for Internet of Things (IoT) research. It cannot be ignored in massive IoT applications as well. With the rapid expansion of the IoT, a variety of different wireless communication technologies and network structures are constantly integrating, including Wireless Sensor Networks (WSNs), RFID, mobile vehicular networks, mobile networks, 4G communication networks, WiMAX and cable broadband, etc. Meanwhile, the network communication environment is becoming more and more complicated. Compared with existing wireless networks, security issues such as reliable relationships between entities and positioning will be more difficult in the IoT [1]. These problems cannot be solved simply through existing network security solutions. In various objects and all kinds of network communication scenarios, how to ensure the reliability of information resources, the stability of information transmission and the security of information space have become important and urgent problems. As an important component of the IoT, belt-type sensor networks are widely used in ribbon-like monitoring areas such as rivers, highways, tracks, bridges, pipelines and the industry fields. It is indeed a special IoT branch that can provide information exchange between items in a ribbon area. Due to the terrain and surrounding environments, the topology of the belt-type sensor network presents a narrow, elongated strip as a whole. In this type of network, a large number of nodes with limited resources are often deployed in one-dimensional or approximate one-dimensional linear spaces. Information transmission can only be carried out along a few paths, with a certain direction. The communication range of each node in the sensor networks cannot be covered by the gateway node. Therefore, the node which is far away from the target needs to transmit data in a multi-hop way. More seriously, belt-type sensor networks have an open nature which is particularly vulnerable to all kinds of attacks. 
When it comes to widespread applications, security issues cannot be ignored. Location is a crucial parameter in the belt-type sensor network. Normally, the data itself is meaningless if it contains no corresponding location information. In order to ensure effective interconnection among individuals within the network coverage, the data should be located. Especially when a sensor node detects an emergency, its location information should be quickly and accurately identifiable [1,2]. Node location information is critical to the effectiveness of sensor networks applications. As the core support of the IoT, secure location technology is receiving more and more attention. In such a situation, designing a dependable localization algorithm for survivable belt-type sensor networks is a realistic challenge. It is also a meaningful work to improve the next-generation wireless technologies for the IoT [3]. The security location algorithm based on the non-ranging method is a research hotspot, which usually use the estimated distances between nodes to calculate the position [4]. At present, there are more research results for sensor networks deployment strategy in open and flat environments [5][6][7][8][9][10][11]. For an ideal environment without malicious interference, most algorithms can achieve good positioning performance. However, as the technology is constantly updated and the application scenes become more complicated, it is clear that the existing algorithms cannot maintain the capacity, especially for the situation which requires a higher security level. In recent years, there have been many studies and reports on sensor network positioning [12][13][14][15][16][17][18][19][20]. Numerous non-ranging algorithms are only available for simple environments without complex disturbances. As a kind of classical non-ranging positioning algorithm, the Distance Vector Hop (DV-Hop) algorithm has been widely used and studied in depth for node location [1,5,21]. However, there are relatively few studies on algorithms which are suitable for special environments such as belt-type, crossover, and annular areas. For special occasions, the node deployment and positioning requirements are different compared with open flat space [22,23]. Generally, any kind of security algorithm will consume part of the available resources, including the computing energy and communication costs. Therefore, for a specific positioning system, we need to consider the system application background, security requirements of attack models, resource conditions and other factors [24,25]. Since special node deployments have different effects on network connectivity, many existing algorithms are greatly limited by their capacity [26][27][28][29][30][31][32]. The motivation of this paper is that the nodes cannot always achieve correct and stable positioning information, which may trigger network security risk. In particular, the wireless transmission medium is susceptible to temperature, humidity and other environmental impacts. Any failures or errors may generate unpredictable security risks. In this scenario, the node location method which is applicable for the wide area cannot be used directly in the long zone. At the same time, the resources of a single node are limited and it is difficult to add additional devices for positioning. In this paper, we fully consider the special structure of the region to deploy the belt-type sensor networks. 
More importantly, the addition of extra hardware devices is avoided and the network infrastructure is utilized as much as possible to locate the target. If the non-ranging method is used directly, the positioning accuracy cannot meet the requirements well. In order to get more precise results, an iterative refinement mathematical treatment method is introduced during the process. The limited resource of a single node is an important prerequisite, which has been fully considered. For the contradiction between energy consumption and positioning accuracy, an acceptable balance is established. We make the following contributions: (1) We design a node deployment mechanism that could effectively save energy. (2) A hop-distance calculation method that can eliminate blurring is proposed. (3) We improve the accuracy of the proposed algorithm. The remainder of our paper is organized as follows: Section 2 presents the modeling of the belt-type topology and node deployment mechanism based on energy-efficiency. Section 3 describes the improvement measures of the proposed algorithm in different stages from three aspects. How to determine the security of location information is also discussed in this section. Section 4 illustrates our simulations and experimental results. Finally, the conclusions are presented in Section 5. Topological Modeling In this section, we analyze the application environment of the algorithm, and model the belt-type sensor networks. Model Requirements When deploying a wireless sensor network, if an aircraft is utilized to randomly distribute the anchor nodes, this may cause some nodes to be closely adjacent or overlapping, resulting in waste of resources and making many unknown nodes unable to be located. We use the packet structure model as the research basis, considering the characteristics of belt-type networks and the formation mechanism of the regional nodes when designing the topology. Grouping nodes in the network can improve the efficiency of communication channels and effectively control the resources occupied by the activity. The packet structure model is a common application object in sensor networks which is also the basis of an effective node deployment. The network topology is shown in Figure 1. In belt-type sensor networks, the normal nodes are usually placed along the edges of the coverage area, which makes it look like they are all in an extended line. For the sake of convenience, the nodes have the same information processing and communication capabilities in our model. Each type of node has its own sequence label (ID). The spatial distribution of nodes conforms to the following two principles: first, all nodes in the network are in the coverage of the whole detection area; secondly, in each sensor node group, the central node automatically becomes the group center, and the rest of the sensor nodes in each group can be covered by the central node of this group. The source node is a network node that acts as a sender to transmit the original packet. We name it "Source" for short. Sink, also known as "sink node", is responsible for connecting the sensor network to other networks. Normally, it can be considered as a gateway. The node with relay function is named as the "relay node". They are able to transmit data and information in a wireless, multi-hop manner. The beacon node is also known as an "anchor node" which belongs to a node that already knows its own position. 
In belt-type sensor networks, the communication between a sensor node and a monitoring base station should be executed through a relay node in a multi-hop way. Therefore, the closer the node is to the monitoring station, the more data it needs to forward. After operating for a long time, the network can easily form a "hot zone" that increases the burden of the whole network, and the energy consumption increases quickly as well [9,30]. Neighbor nodes near the base station are prone to the "empty energy" phenomenon. The emergence of the hot zone makes the number of nodes near the base station accumulate continuously, and their activity occupies a large amount of the communication bandwidth. The calculation ability of the node is severely reduced, which can cause the rate of packet loss to rise. Therefore, valid location information cannot be effectively delivered and verified, and the positioning accuracy is ultimately affected.
The nodes then lose communication and computing capacity, which can easily bring the network down and cause positioning failure. To study the problem more effectively, we assume that: (1) The sensor nodes are distributed in a rectangular area (length L and width M, L >> M). The central gateway node is located in the middle of the strip area, and the sink node is located at the left side. The communication between the sensor node and the gateway node adopts the multi-hop mode, and the communication radius of the node is R. The distances between most sensor nodes and base stations are greater than the R of the nodes themselves. The nodes in a group can communicate directly with the sink nodes, and the communication distances between the intergroup nodes are one hop. At this point, the communication radius of the sink node and the source node should be greater than the length of the group (shown in Figure 2). (2) Each sensor node has a unique ID, and they can carry out information perception and collection independently. They can also send their own information through the wireless channel to the gateway node. In a unit area, let the rate of data generation be λ and let the initial energy of the sensor node be e. Regional Energy Consumption The process of exchanging information between distant nodes consumes a lot of energy in belt-type sensor networks.
In a certain region, the life cycle of the sensor node can be approximately defined as the total energy of the region divided by the rate of energy consumption. To facilitate the analysis of energy consumption in the network area, we select a small area named "A". In order to maximize the life cycle of the sensor networks, the period of each sensor node should be set in a relatively close range. Therefore, the ratio of the energy consumption to the total energy in each region is set to a constant. According to the above analysis, the network horizontal range is set to [−L/2, +L/2], and the gateway nodes are arranged in symmetrical positions. The network area used to analyze energy consumption is shown in Figure 3.
Most of the energy consumption in sensor networks occurs during data transmission. According to the classical signal propagation theory [9], the energy consumption of the transmitted information E T (l, d) can be expressed as Equation (1), where l is the length of the packet, d is the transmission distance, E e (l) is the energy consumption of the circuit, and E amp (l, d) is the energy consumption parameter of the emitter. The energy consumption of the received information E R (l) can be expressed as Equation (2). From Equations (1) and (2), we can obtain the total energy loss rate of the whole region, as shown in Equation (3). The energy consumed by sending and receiving unit data is represented by E T and E R respectively, t is the communication distance of a single node, and r stands for the distance between the reference node and the base station. When r is not equal to t and r >> t, the node density of the region ρ(r) is approximately 2n(L − 2r)/L². In summary, if the sensor node is closer to the gateway node, its distribution density should be larger, because a node which is closer to the gateway node not only needs to send the collected perceptual data, but also has to forward the information collected from others. Theoretically, more nodes need to join the network to share the communication consumption, achieve a balanced load and extend the network survival time. Node Deployment Method Let the total number of sensor nodes in the network be N. These nodes are subdivided into n groups in a belt-type zone. According to the physical topology environment and the control strategy on node density described in Sections 2.1 and 2.2, the collection of each node is grouped and constructed. Both the effective communication radius of the node and the spacing among different groups are set to 50 m. At the same time, a group number is assigned to each node. The group number of the sink node is 0, and the first group of nodes is only one hop away from the sink node. For the neighbor nodes, the distance to the sink node increases gradually. For nodes at two, three and four hops from the sink node, their group numbers are set to 2, 3 and 4, respectively. All the other groups can be numbered with a similar method. In the range controlled by a gateway node, the source nodes are distributed from high to low density on both sides. Within the jurisdiction of a single gateway node, the distribution of each node is shown in Figure 4.
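The energy model of Equations (1)–(3) and the density rule that guides the deployment above can be summarized in a short numerical sketch. Since the equations themselves are not reproduced in this copy, the fragment below assumes the common first-order radio model (a per-bit circuit cost plus an amplifier cost proportional to d²); the constants E_ELEC and EPS_AMP are placeholder values, not parameters taken from the paper, while the density function follows the approximation ρ(r) ≈ 2n(L − 2r)/L² quoted in the text.

```python
# Minimal sketch of the regional energy model and node-density rule.
# E_ELEC and EPS_AMP are assumed radio parameters, not values from the paper.

E_ELEC = 50e-9      # J/bit, circuit energy (assumption)
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy, free-space exponent 2 (assumption)

def energy_tx(l_bits: int, d_m: float) -> float:
    """Transmit cost of an l-bit packet over distance d (first-order radio model)."""
    return l_bits * E_ELEC + l_bits * EPS_AMP * d_m ** 2

def energy_rx(l_bits: int) -> float:
    """Receive cost of an l-bit packet."""
    return l_bits * E_ELEC

def node_density(r: float, n: int, L: float) -> float:
    """Approximate target density rho(r) ~ 2n(L - 2r)/L^2 for r far from the gateway."""
    return 2 * n * (L - 2 * r) / L ** 2

if __name__ == "__main__":
    L, n = 1000.0, 200          # strip length (m) and node count, similar to the simulations
    print(f"E_tx(1 kbit, 50 m) = {energy_tx(1000, 50.0):.2e} J, "
          f"E_rx(1 kbit) = {energy_rx(1000):.2e} J")
    for r in (50.0, 200.0, 400.0):
        print(f"r = {r:5.0f} m  rho = {node_density(r, n, L):.4f} nodes/m^2")
    # Nodes closer to the gateway (small r) get a higher target density,
    # so the relaying load is spread over more devices.
```

With these placeholder numbers the density at r = 50 m is several times that at r = 400 m, which is the qualitative behaviour the deployment strategy relies on.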
Node Activation Mechanism In WSN, most member nodes should be dormant to save energy. However, the nodes which are located at each group boundary or which perform specific tasks should be in an operational state; that is, the network does not need all nodes in work mode. If a reasonable sleep/wake mechanism is used, we can greatly reduce the network energy costs. Most sensor networks adopt IEEE 802.15.4 as the Media Access Control (MAC) layer protocol, which already contains a periodic sleep/wake mechanism. However, in the band topology, if the wake cycle of each node in the network is exactly the same, it will easily cause the nodes within a group to generate large congestion when sending and receiving data information. As a result, the energy load of each node will generally increase. In order to adapt the network model based on the energy-efficient strategy, the algorithm should use a sleep/wake-up scheduling mechanism based on a dynamic reconfiguration strategy as follows: (1) When the target node enters the monitoring area, the sensor node starts searching and receiving the broadcast flood message "Request-MSG" issued by the target.
According to the chronological order, sink node records the minimum hop number information "Hop-count" and its own identification information "Node-ID". Subsequently, all information about anchor nodes in the neighbor area of the target node is stored and the initial positioning tree is constructed. (2) Based on the network topology of the target node and neighbor anchor nodes in the sensing area, the two sets of nodes which need to wake up or keep the sleeping (low power state) are estimated and sorted. Once they are determined, the message "Wakeup-MSG" is immediately sent to the target, wake up and activate anchor nodes with h hop from the target node, launch them into working state and holding. After all above actions are completed, the message "Prune-MSG" is sent to the target, and the anchor nodes are cleared once again when the locating tree is created. (3) According to the business type, we constantly and dynamically reconstruct the locating tree. As the target node moves and the sensing area continually changes, the anchor nodes that need to participate in the location continue to wake up or to remain dormant. Hop-Distance Estimation Correction In this section, we examine the connectivity among nodes and correct the hops-distance estimates based on the differences in connectivity. The non-ranging positioning algorithm can be described as follows: Firstly, adopting the network to perceive information such as (ID, location, hops count, etc.) between the unknown node and the anchor node. Secondly, using information fusion and mathematical calculation methods to estimate the distance between nodes. Finally, the location of the unknown node can be estimated. The basic steps are: (1) Using the protocol of distance vector exchange in sensor networks, the hop count h and distanced information between unknown nodes and anchor nodes are collected. In the network, packets containing location information are forwarded until all nodes are aware of the location of each anchor node. With data fusion technology, data in all packages is associated. (2) According to the location information of other anchor nodes received by the known node, the hop-distance conversion model is established. Using the distance formula, we can estimate the actual distance about per hop, and then broadcast it over the entire network. (3) We can obtain the estimated distance between the unknown target node and each anchor node. By applying the mathematical method (triangular method and maximum likelihood estimation method), we can further estimate the position coordinates of the target node and correct the calibration. Mechanism of Data Broadcasting The maximum communication radius of an anchor node is the range of information transmission, and the typical network protocol of distance vector exchange is used to send packets to the surrounding neighbor nodes in a broadcast manner. This protocol uses bellman-ford to calculate the path. In the distance-vector routing protocol, each router does not know the topology information of the entire network. They simply announce their distance to the other routers and receive similar notices from them. The packet contains its own location coordinate information, the current time information and accumulated hops information. The packet format can be expressed as: LocInf[(x i ,y i ),ID i ,T]. At this time, the location information is transmitted as a normal data message. The sink node initiates the "interest" for the detection target of the location. 
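A minimal sketch of the location packet and the distance-vector exchange in step (1) is given below. The packet mirrors the LocInf[(x i , y i ), ID i , T] format quoted above; the rule of keeping only the smallest hop count per anchor and bounding the flood by a maximum hop count follows the mechanism described below, while radio, timing and neighbour-discovery details are omitted. The value of H_MAX is an assumption made only for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocInf:
    """Anchor location packet: coordinates, anchor ID, timestamp, accumulated hops."""
    x: float
    y: float
    anchor_id: int
    t: float
    hops: int = 0

H_MAX = 8  # maximum forwarding hop count (assumed value, not from the paper)

def receive_locinf(hop_table: dict, pkt: LocInf):
    """Update a node's per-anchor table; return the packet to forward, or None.

    Only the minimum hop count per anchor is kept; packets whose hop count is
    not smaller than the stored one, or which exceed H_MAX, are treated as
    invalid and dropped.
    """
    if pkt.hops > H_MAX:
        return None
    best = hop_table.get(pkt.anchor_id)
    if best is not None and best.hops <= pkt.hops:
        return None                      # invalid/stale message, discard it
    hop_table[pkt.anchor_id] = pkt
    # Forward with the accumulated hop count increased by one.
    return LocInf(pkt.x, pkt.y, pkt.anchor_id, pkt.t, pkt.hops + 1)

if __name__ == "__main__":
    table = {}
    fwd = receive_locinf(table, LocInf(120.0, 7.5, anchor_id=3, t=0.0, hops=2))
    dup = receive_locinf(table, LocInf(120.0, 7.5, anchor_id=3, t=0.1, hops=4))
    print(table[3].hops, fwd.hops, dup)   # -> 2 3 None
```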
If an anchor node responds to this "interest" message, its own location information is immediately broadcast and forwarded as the collected data. As shown in Figure 5. The random movement of nodes causes the data packet to generate multiple collisions during the broadcast, resulting in the loss of transmission information [33,34]. In order to avoid the above situation and protect the network performance, we set the maximum data transmission hop value h max in the limited broadcast range. As shown in Figure 5, the sink node does not save the "location information" message after the "interest" message is broadcasted. Only the source node with the anchor function records the "Location interest" message and returns the collected data. At this point, the data which is passed back is the information contained in the anchor node. The special topology of the belt-type sensor networks determines that its internal nodes have different connectivity from other wireless networks. If we use this feature, we can directly identify the approximate orientation of the unknown node. A member node Sk in the network issues a "Neighboring_request" message as soon as it finds another node which is in the same group. Within the effective communication range of the node, the message spreads quickly along the route. When the neighbor node of Sk receives the message, it sends the reply message "Neighboring_act" at once. When its neighbor node receives the packet, it immediately checks the information in it, records the relevant information of the anchor node, and creates an anchor node to access the visitor list. In this process, the node is likely to receive multiple packets from the same anchor node. By checking and comparing, the nodes only retain the group of information which has the least number of hops in its grouping. Subsequently, the hop distances from each anchor node in the bounded area are accumulated and added "1" to their hop count. Lately the packet is forwarded to the surrounding node. The network repeats this process until all nodes in the sensor networks record their location coordinate of each anchor node and the information of corresponding cumulative hop count. Through the above mechanism, all the sensor nodes in the belt-type sensor networks system obtain the cumulative hop distance and the minimum hop count of each anchor node. For any node that receives a message from an anchor node, we use "hmax" as the restriction range parameter for the discrimination. After adding the maximum number of hops, the unknown node is restricted to the extent of the anchor node information. It reduces the range of errors that can occur during packet delivery. More importantly, the use of this parameter reduces the probability of collision with interference information. If the message from the corresponding anchor node has been recorded in the node and the hop count value "k" is less than or equal to the newly received message hops, the newly received message of the node may be considered as an invalid message. Once the node receives a message that is defined as invalid, it immediately discards it and does not proceed with further processing in subsequent stages. Correcting the Accumulated Values In the framework of standard DV-Hop algorithm, the estimated distances of the target node to the anchor nodes in the range of the surrounding hop are very easy to identify as the same, which makes it difficult to determine the distances between the adjacent nodes. 
A large number of measured statistical results show that if the cumulative number of hops is used to estimate the distances between the target nodes and the anchor nodes, the estimated results are greater than the actual physical distance between them [21,35]. Therefore, an appropriate correction method must be added to the algorithm framework when the distance estimation and coordinate calculation are carried out. In this way, it is possible to avoid the accumulation of errors between the anchor nodes and the positioning target caused by the accumulation of positioning information, and to avoid the loss of positioning accuracy caused by the blurring of the hop-distance estimation. Our purpose is to take into account both the distances between adjacent nodes (anchor nodes and neighbor nodes) and the network connectivity.
Improving the accuracy of the positioning estimation is our ultimate goal. For the above reasons, an auxiliary distance correction estimation method based on the optimization of connectivity among nodes is proposed, without adding additional communication overhead. Let the average hop distance of the network be expressed as d i , defined in Equation (4). In Equation (4), md ij represents the cumulative hop distance between the anchor and the target, and h ij represents the minimum number of direct hops between the two. According to the principle that the communication radius of nodes in the network is greater than the distance between nodes, we uniformly collect the average distance per hop of the most recent anchors received by the target node, and the weighted values are then normalized. Firstly, θ m is defined in Equation (5) as the average distance error between nodes m and n, where d est(m,n) represents the estimated distance between nodes m and n, and d r(m,n) represents the actual distance between them. Assuming that the target node receives the data sent by k anchor nodes at this time, the average hop-distance weighting coefficient of the m-th anchor node is expressed as Equation (6). In summary, when the member nodes of the network receive, through broadcasting, the distance value represented by the average distance per hop (h), the distance estimate to the corresponding anchor node is expressed as d est(i,j) , as shown in Equation (7). For simplicity, we only select the information of the last three anchor nodes, and λ m is normalized. In a network, a single node can send test packets (S) to determine the neighbor relationship. We define γ as the threshold for the packet rate, which needs to meet the condition 1/S ≤ γ ≤ 1. When node i receives the packets sent by node j, if the following relationship is determined to hold, nodes i and j can be regarded as neighbors. A significant increase in the reliability of the node's neighbor distances makes it possible to have a unique relationship between the different unknown nodes and their most recent anchor nodes. Based on this, we can eliminate some information from interfering nodes or false anchors. After removing the tampered beacon information and eliminating the interfering nodes, we can obtain relatively accurate node distances, so as to correctly compute the node location. The higher the accuracy of the neighbor nodes selected by a node, the closer the final position estimate is to the real one. We rely on the following two conclusions [36]: (I) the greater the connectivity difference between nodes, the greater the distance between neighbors, and (II) the greater the distance between neighbor nodes, the closer it is to the communication radius. When quantifying the connectivity difference characteristics between network nodes, we take the summation of the absolute values of the connectivity differences. For the unknown node i (target node) and its neighbor anchor node m, the connectivity difference between the two can be expressed as Equation (8). In Equation (8), h ij is defined as the hop count of the shortest connection path between the fixed anchor j and the unknown node i to be located; h mj is defined as the shortest hop count between the neighbor anchor node m and the anchor node j.
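Before moving on to the connectivity-difference correction, the hop-distance averaging and weighting of Equations (4)–(7) can be sketched as follows. The exact forms of Equations (5) and (6) are not reproduced in this copy, so the inverse-error weighting used below is an assumption made only for illustration; the averaging in avg_hop_distance follows the ratio of cumulative hop distance to minimum hop count described for Equation (4).

```python
def avg_hop_distance(cum_dists, hop_counts):
    """Per-anchor average hop distance d_i = sum(md_ij) / sum(h_ij), cf. Equation (4)."""
    return sum(cum_dists) / sum(hop_counts)

def hop_weights(errors):
    """Normalized weights lambda_m for the selected anchors.

    The paper weights anchors by their average hop-distance error theta_m
    (Equations (5)-(6)); the exact form is not given here, so an
    inverse-error weighting is assumed purely for illustration.
    """
    inv = [1.0 / max(e, 1e-9) for e in errors]
    s = sum(inv)
    return [w / s for w in inv]

def estimate_distance(hop_sizes, errors, hops_to_anchor):
    """Distance estimate d_est(i,j): weighted hop size times hop count, cf. Equation (7)."""
    lam = hop_weights(errors)
    weighted_hop = sum(l * h for l, h in zip(lam, hop_sizes))
    return weighted_hop * hops_to_anchor

if __name__ == "__main__":
    # Three most recent anchors, as suggested in the text.
    hop_sizes = [48.2, 51.7, 46.9]   # average metres per hop reported by each anchor
    errors = [2.1, 4.0, 1.5]         # their average hop-distance errors theta_m
    print(round(estimate_distance(hop_sizes, errors, hops_to_anchor=5), 1))
```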
Thus, the neighbor distance corresponding to the maximum connectivity difference for the undetermined target node i can be set to the communication radius of the node (R c ). The ratio of the node connectivity difference value C im obtained by Equation (8) to the maximum difference value C iMAX in the network is taken as the weight value. We can multiply this ratio by the node communication radius to obtain the actual distance d im from the target node i to the m-th anchor node, as given in Equation (9). We improve the method of estimating the hop count of the whole communication link and correct the distance value represented by each hop between the target node and the anchor node. This work reflects the true connectivity of nodes in sensor networks more accurately and lays the foundation for a further reduction of positioning errors. Locating the Node and Fixing the Result Once the target node obtains three or more distance parameters of the anchor nodes, the triangulation positioning method is run immediately. Let the coordinates of the n anchor nodes be (x 1 , y 1 ), ..., (x n , y n ), let the cumulative distance from the k-th anchor to the unknown node (target node) be d k , and let the coordinates of the unknown node be (x, y). According to the distance relationship between two points, the system of equations between the anchor nodes and the target node is established as shown in Equation (10): (x − x 1 )² + (y − y 1 )² = d 1 ², (x − x 2 )² + (y − y 2 )² = d 2 ², ..., (x − x n )² + (y − y n )² = d n ². According to the differences in the connectivity of the neighboring nodes, if the hop count is used directly to estimate the distance, the result is bound to be erroneous. When adopting the trilateral method to locate nodes, the process produces many circles with different radii. In essence, these circle equations with different radii are difficult to make converge precisely at one point, which may cause the system to have no solution. We can adopt the least squares method to obtain a set of approximate solutions for the unknown node. The expression after conversion is given by Equation (11), and the matrix forms of A and b are shown in Equations (12) and (13). The matrix form of the unknown node coordinates to be positioned is Equation (14). According to the standard minimum mean square error estimation method, we can calculate the result X = (AᵀA)⁻¹Aᵀb. In order to improve the distance estimation between nodes, to maximize the credibility of the distances to the neighbor anchor nodes and to improve the positioning accuracy of the ribbon network based on connectivity, the estimated position of the unknown target node needs coordinate calibration. The purpose is to make the estimated position constantly approach the true position. The algorithm can introduce a special mathematical control method according to the distance constraint relation between neighbor nodes with high reliability weights. It can improve the accuracy of the location results by making a finite number of iterative updates to the node coordinates after the three-side measurement. If we adopt the recursive form of a Taylor series expansion, we need to use the previous results as an initial value. The local solution of the measurement error is obtained by the least squares method, and the iteration value of the operation is obtained. The algorithm needs to correct the position of the node constantly, and the operation ends when the error reaches the pre-set threshold.
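A compact sketch of the trilateration and least-squares step (Equations (10)–(15)) is given below. The matrices A and b of Equations (12) and (13) are not legible in this copy, so the standard linearization (subtracting the last circle equation from the others) is used here; it is consistent with the closed-form result X = (AᵀA)⁻¹Aᵀb quoted above, but the exact row ordering in the paper may differ.

```python
import numpy as np

def trilaterate_ls(anchors, dists):
    """Least-squares position from anchor coordinates and estimated distances.

    Linearizes the circle equations of Equation (10) by subtracting the last
    one, builds A and b, and solves the normal equations (cf. Eqs. (11)-(15)).
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    xn, yn, dn = anchors[-1, 0], anchors[-1, 1], dists[-1]
    A = 2.0 * (anchors[:-1] - anchors[-1])            # rows: [2(x_k - x_n), 2(y_k - y_n)]
    b = (dn ** 2 - dists[:-1] ** 2
         + anchors[:-1, 0] ** 2 - xn ** 2
         + anchors[:-1, 1] ** 2 - yn ** 2)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)       # numerically stable LS solve
    return est

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (100.0, 0.0), (50.0, 15.0)]
    true_pos = np.array([40.0, 8.0])
    noise = (0.5, -0.8, 0.3)
    d = [np.hypot(*(true_pos - a)) + e for a, e in zip(np.array(anchors), noise)]
    print(np.round(trilaterate_ls(anchors, d), 2))    # close to [40. 8.]
```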
The Taylor series expansion method is suitable for the positioning systems with three or more base stations. It is necessary to ensure that the process is convergent. However, this method is more dependent on the selected initial value. Also, the computational complexity is higher and cannot run in a single node. In this paper, we use the gradient descent algorithm which is relatively simple to correct the estimation results. This method can improve the positioning accuracy by optimizing and improving the target position estimation. The gradient descent algorithm is a minimized optimization method based on the Newton principle [8]. According to the properties of the proposed Newtonian iteration, the magnitude of the gradient represents an approximation of an estimate and an optimal point. The smaller the gradient, the closer the estimate is to the best position. When the result of the algorithm converges to the least square solution, the normal anchor node will continue to search along the normal gradient direction rapidly, while the false anchor node will deviate from this gradient direction. This method can also eliminate some of the anchors that are interrupted or maliciously tampered with. The core principle of this algorithm is: to set up the equivalent of the equations to convert, to build a multivariate function of the minimum value for the problem, and then solve it. In terms of convergence speed, the convergence rate is faster when its initial value is far from the true value, and then approaches the initial value. Newton method convergence can basically meet the requirements of connectivity-based positioning system. However, when using it we must obtain the partial derivative of each function. In that way, it is not conducive to rapid solution, or meets the reality that the computing power of the sensor node is weak. In contrast, the computational process of the gradient descent algorithm is relatively simple and does not require high precision of the initial point. It can quickly adjust the initial value to the real value while ensuring the convergence of the algorithm. More importantly, it is faster than the overall iteration. So this method is very suitable for fast operation in a single node. Thus, according to its principle, we can construct a system of the equations representing the distance between two points as a modular function. First, it is rewritten as Equation (16) shown in the equivalent form: In that case, the modulo function can be used to optimize the position estimation. Its expression is shown in Equation (17): (17) At this point, the question is how to get the minimum value of Equation (17). According to the Equation (14), we construct an optimized modular function expression. As shown in Equation (18): The zero-point of the Equation (18) is obtained as the solution of the Equation (17). According to that principle, the function f 2 i (x, y, D ki ) can be regarded as a spatial surface in geometry, and the point tangent to the coordinate plane is the zero minimum point. For any point within the domain of the function definition, there is always a contour line passing through it. Starting from the point (x 0 , y 0 ), moving in the direction of the monotonic decline of the function, the solution to the problem can be obtained by reaching the zero minimum. According to the plane geometry definition, the gradient direction at a point is the normal of the contour of the point. Therefore, its negative direction is the fastest direction of the decline of functions. 
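The minimization just described can be illustrated with a short gradient-descent sketch; it parallels the step-by-step procedure given next, but uses the analytic gradient of F(x, y) = Σ f_i²(x, y, D_ki) with residuals f_i = sqrt((x − x_i)² + (y − y_i)²) − d_i, whereas the paper works with difference quotients. The step size, stopping tolerance and residual form are illustrative assumptions, not the paper's parameters.

```python
import math

def refine_position(x0, y0, anchors, dists, step=0.05, tol=1e-3, max_iter=200):
    """Gradient-descent refinement of a trilateration estimate.

    Minimizes F(x, y) = sum_i (sqrt((x - x_i)^2 + (y - y_i)^2) - d_i)^2,
    starting from the least-squares solution (x0, y0).
    """
    x, y = x0, y0
    for _ in range(max_iter):
        gx = gy = 0.0
        for (xi, yi), di in zip(anchors, dists):
            r = math.hypot(x - xi, y - yi)
            if r < 1e-9:
                continue
            common = 2.0 * (r - di) / r      # chain-rule factor of dF/dx, dF/dy
            gx += common * (x - xi)
            gy += common * (y - yi)
        if math.hypot(gx, gy) < tol:         # small gradient: close to the minimum
            break
        x -= step * gx                       # move against the gradient direction
        y -= step * gy
    return x, y

if __name__ == "__main__":
    anchors = [(0.0, 0.0), (100.0, 0.0), (50.0, 15.0)]
    dists = [41.0, 60.5, 12.5]
    print(tuple(round(v, 2) for v in refine_position(45.0, 5.0, anchors, dists)))
```

As in the text, the least-squares estimate supplies the initial point, so only a few inexpensive iterations are needed, which suits a single resource-constrained node.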
Specific steps are as follows: Step 1: Algorithm initialization: calculate the initial value, select the optimal step size, and find the gradient value. We assign values to (x^(0), y^(0)) and introduce the parameter λ to get a new point (x^(1), y^(1)), as in Equation (19). In Equation (19), x^(0) and y^(0) can be calculated by the least squares method, and by using Equation (9) we can get d kn ^(0). Step 4: According to Equation (21), the difference quotient (ΔF^(k)/Δx, ΔF^(k)/Δy) is calculated. Step 5: We calculate the one-step predictor, and then Equations (22) and (23) are established; in Equations (22) and (23), the relationship shown in Equation (24) holds. Step 6: If the required positioning accuracy has not yet been reached, jump back to Step 2 and repeat the iterative cycle; otherwise, the algorithm terminates. While adopting the gradient descent algorithm in the Gaussian channel environment, we should select and search for the optimal solution in a controllable range on the basis of the initial value [21]. At the same time, for the whole positioning process, we use the results of the least squares algorithm to provide more accurate initial information for the gradient descent algorithm. More importantly, it is not necessary to add additional communication overhead to obtain the other conditions required for the calculation, which is very useful for belt-type topologies with limited computing power. Simulation and Analysis In this section, we simulate and analyze the performance of the proposed algorithm. In order to verify the effectiveness of the improved algorithm, we use Matlab for validation. In accordance with the real scenario, we set the network environment to a narrow strip area (shown in Figure 6). The sensor nodes are deployed in the network, the horizontal distance L is set to 1000 m, and the longitudinal distance M is set to 15 m. The anchor nodes are arranged along the edge of the runway according to the strategy described in Section 3.3, and the target nodes to be positioned are randomly placed in the network coverage area. The overall network layout is shown in Figure 6. In this paper, the standard DV-Hop algorithm, the DV-Distance algorithm and the improved CDV-Hop algorithm are included in our experiment. Relative Error of Positioning Accuracy Error ave is the ratio of the target positioning error to the communication radius of the node itself. It is defined as the relative error of positioning accuracy and its expression is Equation (25). In Equation (25), the coordinates of the estimated position of the target node are denoted as (x i,est , y i,est ), the coordinates of the real position of the target node are denoted by (x i,real , y i,real ), N is the number of unknown target nodes, and R t is the communication radius of the normal node.
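The accuracy metric of Equation (25) is easy to reproduce; the sketch below uses the usual normalized form, i.e. the mean Euclidean positioning error over the N target nodes divided by the communication radius R t, which matches the symbols defined above.

```python
import math

def relative_error(est_positions, real_positions, r_t):
    """Error_ave: mean localization error over all target nodes,
    normalized by the node communication radius R_t (cf. Equation (25))."""
    total = sum(math.hypot(xe - xr, ye - yr)
                for (xe, ye), (xr, yr) in zip(est_positions, real_positions))
    return total / (len(real_positions) * r_t)

if __name__ == "__main__":
    est = [(40.2, 8.6), (112.9, 4.1)]
    real = [(40.0, 8.0), (110.0, 5.0)]
    print(f"{relative_error(est, real, r_t=50.0):.2%}")   # relative to R_t = 50 m
```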
It can be seen from Figure 7 that the average positioning accuracy error for the target is maintained at about 27% when the number of anchor nodes is about 11 and the other network conditions are unchanged. As the number of anchor nodes increases from 3 to 13, the positioning errors of the algorithms basically display a decreasing trend. The standard DV-Hop has the largest positioning error because it does not use any corrective measures. However, when the number of anchor nodes reaches 13 or 14, the trend of the positioning accuracy error is almost flat. As the band network continues to extend along the edge, the difference between adjacent anchor nodes narrows, and their correction to the distance estimation weakens. In addition, when the anchor nodes increase to a certain proportion, the network connectivity no longer changes significantly. Therefore, when the number of anchor nodes reaches 13 or so, the positioning error basically becomes stable. When the number of anchor nodes is increased further, the positioning accuracy of the DV-Distance algorithm decreases slightly. That is because the DV-Distance algorithm must use the RSSI signal as an auxiliary estimation parameter. When the signal transmission is within a certain distance range, its amplitude changes, resulting in increased signal fluctuations. A larger number of anchor nodes increases the sources of information available for locating, but on the other hand it also introduces more error sources to some extent. When such nodes reach a certain number, this can lead to an increase in cumulative errors and a relatively negative effect on the improvement of positioning accuracy.
The simulation data curves in Figure 7 show that the relative positioning accuracy errors of the DV-Distance and the proposed algorithm are significantly lower in general. When the number of anchor nodes is less than 13, the precision of the proposed algorithm is slightly higher than that of the DV-Distance algorithm. According to the simulation process, the proposed algorithm only needs two iterations, and the initial value approaches the real value quickly. It shows some advantages: the convergence speed is faster and the computation is small. The confidence interval of the simulation results is shown in Figure 8. It can be seen from Figure 9, from the perspective of qualitative analysis, that the communication radius of a node determines the number of its neighbors and the number of its connections. As the communication radius of the target node increases, it can search for and acquire the location information of more anchor nodes within its communication range. At the same time, as the radius of node communication increases, its own positioning coverage also increases. In that way, the node can obtain more adjacent node location information, so the estimation accuracy can be improved. The DV-Distance algorithm uses the RSSI signal as an auxiliary positioning parameter. If the distance between nodes is more than 20 m, the signal fading curve shows large fluctuations. At this point, the communication radius can no longer directly improve the positioning accuracy; the overall positioning accuracy is basically close to that of the proposed improved algorithm, but DV-Distance uses additional hardware that can measure distance, which greatly increases the communication consumption and reduces efficiency, so the positioning cost is also affected. The confidence interval of the simulation results is shown in Figure 10.
At a certain time, we gradually increase the number of unknown, randomly distributed target nodes. The relationship between the node number and the location accuracy is shown in Figure 11. We see that the error of the proposed algorithm is significantly smaller than those of the DV-Distance algorithm, the DV-Hop algorithm and the CDV-Hop algorithm. On the whole, a larger number of nodes is beneficial to improving the accuracy of the distance estimation between nodes. That is because the localization algorithm using network connectivity (hop number) is designed based on the relationship between the number of hops and the distances between nodes. The network connectivity directly influences the accuracy of the algorithm. The confidence interval of the simulation results is shown in Figure 12.
The relationship between the number of nodes and the location accuracy is shown in Figure 11. We see that the error of the proposed algorithm is significantly smaller than that of the DV-Distance, DV-Hop and CDV-Hop algorithms. On the whole, a larger number of nodes is beneficial to the accuracy of the distance estimation between nodes. That is because a localization algorithm using network connectivity (hop count) is designed around the relationship between the number of hops and the distances between nodes, so the network connectivity directly influences the accuracy of the algorithm. The confidence interval of the simulation results is shown in Figure 12. Network connectivity affects the accuracy of the algorithm directly: the better the network connectivity, the more accurate the per-hop distance estimate. Connectivity refers to the existence of at least one direct path between any two nodes in the network, enabling efficient communication [16]. According to the basic theory of graph theory [26,27], the minimum node degree of the network is expressed as D_min(G):

D_min(G) = min{D(v)}    (26)

In Equation (26), v represents a node in the network, and D(v) represents the number of neighbor nodes that can communicate with that node directly.
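To make these quantities concrete, the following minimal sketch (an illustration, not code from the paper; the node count, field size, and communication radius are arbitrary assumed values) computes D(v) for every node of a randomly deployed network and the minimum node degree D_min(G) of Equation (26):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed deployment: 100 nodes scattered over a 100 m x 100 m field,
# communication radius 20 m (illustrative values only).
num_nodes, field_size, comm_radius = 100, 100.0, 20.0
positions = rng.uniform(0.0, field_size, size=(num_nodes, 2))

# Pairwise distances; two nodes are neighbors if they lie within the
# communication radius of each other (and a node is not its own neighbor).
diff = positions[:, None, :] - positions[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
adjacency = (dist <= comm_radius) & ~np.eye(num_nodes, dtype=bool)

# D(v): number of neighbor nodes each node can communicate with directly.
degree = adjacency.sum(axis=1)

# D_min(G) = min over all nodes v of D(v), as in Equation (26).
d_min = degree.min()
print(f"mean degree = {degree.mean():.1f}, minimum node degree D_min(G) = {d_min}")
```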
In a network of N nodes, the connectivity between any node and the remaining N − 1 nodes can be expressed as K, and the relationship can be expressed as Equation (27). In Equation (27), p_x is the probability of any two nodes being connected when the communication distance is fixed. An increase in the connectivity index p_x indicates that the number of hops between nodes decreases, and the average positioning error decreases as the node density increases. The reason is that, although the node positions are relatively dispersed, their probability density function is continuous; when the node density increases, the discrete distribution approximates the continuous one. This makes the error smaller and thus improves the positioning accuracy. The better the connectivity, the closer each per-hop estimate is to the true value. When calculating the coordinates, the gradient method can be used to make the evaluation more accurate, so the estimation accuracy is higher. However, because the number of anchor nodes does not change while the proportion of other nodes increases, the whole error curve does not always show a downward trend.

The Time Required to Locate Nodes

To a certain extent, the time required to complete the positioning reflects the complexity of the positioning algorithm, and indirectly reflects how much energy is consumed in the positioning process. The relationship between the number of anchor nodes and the time required for positioning is shown in Table 1. It can be seen that the anchor nodes have a relatively large number of tasks, including collecting hop information and collating and forwarding packets, so their direct impact on time is most obvious. Compared with the standard DV-Hop, DV-Distance adds the process of calculating the signal attenuation value, CDV-Hop adds a coordinate calibration after estimation, and the improved algorithm adds correction and iterative refinement in the distance estimation and trilateration steps, so the computational complexity of these three algorithms is increased and the time required is relatively long. The CDV-Hop and DV-Hop algorithms are relatively simple, so their computational complexity is low, and they can be applied in settings that do not require high precision. Under normal circumstances, it can be seen from Table 1 that the time required for positioning increases as the number of nodes participating in the network increases. A growing number of network nodes means that the time consumed by information transmission and data acquisition also increases correspondingly; at the same time, the increase in data volume also increases the computation time. When the topology of the wireless sensor network presents a long band, the connectivity of the nodes changes to a certain extent, and the time required by different algorithms will differ. The relationship between the number of nodes in the network and the time required for positioning is shown in Table 2. It can be seen that when the number of nodes participating in the positioning increases from 10 to 50, the increase for DV-Distance and the proposed algorithm is within 500 ms to 600 ms, while the time for DV-Hop and CDV-Hop nearly doubles.

Conclusions

A reliable and secure positioning algorithm is a significant issue in the IoT. In this work, we focus on the problem of target location in belt-type sensor networks.
An energy-efficient strategy is adopted to deploy the IoT nodes, and a decision mechanism for the broadcast data is utilized to improve the security and reliability of the positioning information. We apply a new method to estimate the hop distances by relating the proximity of the neighbors to their connectivity difference. By weighting the average hop-distance error of the anchor nodes, the average hop distance is modified and more accurate neighborhood distances are calculated. Through the improved strategy, and compared with previous methods, the proposed algorithm effectively enhances the speed and precision of the positioning, reduces the possibility of the anchor nodes suffering abnormal interference, and meets the needs of energy efficiency. The main factors influencing the localization errors are analyzed by simulation, and the validity of the algorithm is verified as well. Theoretical analysis and validation results show that our location algorithm performs well in detecting false anchor nodes and resisting interference attacks. Compared with current non-ranging methods, our algorithm improves the positioning accuracy by nearly 19%. The results also reveal that the proposed localization algorithm is more convenient, demonstrating good characteristics in convergence speed, extensibility, and computational cost. With the development of wireless technology, the value of belt-type sensor networks will be further recognized and utilized in next-generation IoT systems. Our next step will continue to focus on improving the accuracy, security, and energy efficiency of positioning. More deployment experiments will be executed.
16,168.6
2017-11-29T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Mitochondrial Haplogroups Associated with Japanese Centenarians, Alzheimer’s Patients, Parkinson’s Patients, Type 2 Diabetes Patients, Healthy Non-Obese Young Males, and Obese Young Males

Abstract

There are strong connections among mitochondria, aging, and a wide variety of diseases. In this paper, the relations between eight classes of Japanese people-96 centenarians, 112 semi-supercentenarians (over 105 years old but less than 116 years old), 96 Alzheimer's disease (AD) patients, 96 Parkinson's disease (PD) patients, 96 type 2 diabetes (T2D) patients, 96 T2D patients with angiopathy, 96 healthy non-obese young males and 96 healthy obese young males-and their mitochondrial single nucleotide polymorphism (mtSNP) frequencies at individual mtDNA positions of the entire mitochondrial genome were examined using the radial basis function (RBF) network and a modified RBF method. New findings of mitochondrial haplogroups were obtained for individual classes. The centenarians were found to be associated with the haplogroups D4b2a, B5b, and M7b2; the semi-supercentenarians with B4c1a, F1, B4c1c1, B4c1b1, and M1; the AD patients with G2a and N9b1; the PD patients with N9a, G1a, B4e, and M7a1a; the T2D patients with D4b2b, M8a, and B5b; the T2D patients with angiopathy with N9a2, D4b1, and G2a; the healthy non-obese young males with D4g, N9a, D4b2b, and B4b/d/e; and the healthy obese young males with M7b2, D4b2b, B4c1, and M7a1a. These results are different from the previously reported haplogroup classifications. As the proposed analysis method can predict, on the basis of a person's mtSNP constitution, the probabilities of becoming a centenarian, AD patient, PD patient, or T2D patient, it may be useful in the initial diagnosis of various diseases or of longevity.

Introduction

Mitochondria are essential cytoplasmic organelles generating cellular energy in the form of adenosine triphosphate by oxidative phosphorylation. Because most cells contain hundreds of mitochondria, each having multiple copies of mitochondrial DNA (mtDNA), each cell contains several thousand mtDNA copies. The mutation rate for mtDNA is very high, and when mtDNA mutations occur, the cells contain a mixture of wild-type and mutant mtDNAs. As the mutations accumulate, the percentage of mutant mtDNAs increases and the amount of energy produced within the cell can decline until it falls below the level necessary for the cell to function normally. When this bioenergetic threshold is crossed, disease symptoms appear and become progressively worse. Mitochondrial diseases encompass an extraordinary assemblage of clinical problems, usually involving tissues that require large amounts of energy, such as heart, muscle, kidney, and endocrine tissues [1][2][3]. Although mtDNA mutations have been reported to be related to aging and a wide variety of diseases-such as Parkinson's disease (PD), Alzheimer's disease (AD), type 2 diabetes, and various kinds of cancer [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]-those reports were focused on the amino acid replacements caused by mtDNA mutations. Mitochondrial functions can of course be affected directly by amino acid replacements, but they can also be affected indirectly by mutations in mtDNA control regions. It is therefore important to examine the relations between all mtDNA mutations and centenarians or disease patients. In the present study the relations between each of eight classes of Japanese people and their mitochondrial single nucleotide polymorphism (mtSNP) frequencies at mtDNA positions throughout the mitochondrial genome were examined using a classification method based on the predictions of a radial basis function (RBF) network [21,22] and using a modified version of that classification method [23]. This examination revealed new mitochondrial haplogroups characteristic of these classes, and the relations between the haplogroups and classes differ from those reported previously [15,16,24,25].

Materials and Methods

mtSNPs for the eight classes of people

Tanaka et al. [26] sequenced the complete mitochondrial genomes of 672 Japanese individuals to construct an East Asian mitochondrial DNA (mtDNA) phylogeny [26]. Using these sequences and other published Asian sequences, they constructed the phylogenetic tree for macrohaplogroups M and N [26][27][28]. Kong et al. [29] recently corrected the above sequences by re-sequencing the dubious fragments and segments [29]. In the present study the mtSNPs in 112 Japanese semi-supercentenarians were obtained from the report by Bilal et al.
[25] and the mtSNPs in the other classes-96 Japanese centenarians, 96 Japanese Alzheimer's disease (AD) patients, 96 Japanese Parkinson's disease (PD) patients, 96 Japanese type 2 diabetes (T2D) patients, 96 Japanese T2D patients with angiopathy, 96 Japanese healthy non-obese young males, and 96 Japanese healthy obese young males-were obtained from the GiiB Human Mitochondrial Genome Polymorphism Database (http://mtsnp.tmig.or.jp/mtsnp).

mtSNP classification using a RBF network

A RBF network is an artificial network used for supervised learning problems such as regression, classification, and time series prediction. In supervised learning, a function is inferred from the examples (training set) that a teacher supplies. The elements in the training set are paired values of the independent (input) variable and dependent (output) variable. The RBF network shown in Figure 1 performs this supervised learning, and the mtSNP classification for the eight classes of people was carried out individually for each class. In the mtSNP classification for the centenarians, the mtSNPs of the centenarians were regarded as correct and the mtSNPs of the other seven classes of people (i.e., semi-supercentenarians, AD patients, PD patients, T2D patients, T2D patients with angiopathy, non-obese young males, and obese young males) were regarded as incorrect. The mtSNP classifications for the other seven classes were carried out in the same way as that for the centenarians (Figure 1). The mitochondrial genome sequences of the eight classes of people were divided into two sets, one of training data and the other of validation data, and the classifications were carried out in two phases: training and validation. The steps are described in detail elsewhere [30].
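As a rough illustration of this one-vs-rest setup, the sketch below builds a small Gaussian RBF network (centers from k-means, a linear read-out layer) and selects validation individuals whose predicted probability of belonging to the target class exceeds 50%. The synthetic binary mtSNP matrix, the number of centers, and the width parameter are assumptions for demonstration only and do not reproduce the study's actual network or data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: rows are individuals, columns are mtDNA positions
# encoded as 0/1 (reference / variant). Label 1 = target class (e.g. centenarians),
# label 0 = all other classes pooled together, mirroring the one-vs-rest scheme.
X = rng.integers(0, 2, size=(768, 200)).astype(float)
y = (rng.random(768) < 0.125).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Hidden layer: Gaussian radial basis functions centered on k-means centroids.
n_centers = 30
centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X_train).cluster_centers_
width = np.median(np.linalg.norm(X_train[:, None, :] - centers[None, :, :], axis=-1))

def rbf_features(X):
    # Squared distances to every center, passed through a Gaussian kernel.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Output layer: a linear classifier on the RBF activations, giving the
# predicted probability of belonging to the target class.
readout = LogisticRegression(max_iter=1000).fit(rbf_features(X_train), y_train)
probs = readout.predict_proba(rbf_features(X_val))[:, 1]

# Modified selection step: keep validation individuals whose predicted
# probability of belonging to the target class exceeds 50%.
selected = np.where(probs > 0.5)[0]
print(f"{len(selected)} of {len(probs)} validation individuals exceed the 50% threshold")
```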
Modified classification based on probabilities predicted by the RBF network

Since a RBF network can predict the probabilities that persons with certain mtSNPs belong to certain classes, these predicted probabilities were used to identify mtSNP features. By examining the relations between individual mtSNPs and the persons with high predicted probabilities of belonging to one of these classes, other mtSNPs useful for distinguishing between the members of different classes were identified. (Figure 1 schematic: input layer X_1, ..., X_TN; hidden layer; output layer.) The modified classification method based on the probabilities predicted by the RBF network was carried out in the following way [23]. 1) Select the target class to be analyzed. 2) Rank individuals according to their predicted probabilities of belonging to the target class. 3) Either select individuals whose probabilities are greater than a certain value, or select the desired number of individuals, and set them as a modified cluster.

Results and Discussion

Associations between Asian/Japanese haplogroups and mtSNPs for the eight classes of people

(Table 2 legend: O - haplogroups classified in the highest 15 individuals. Table 3: Haplogroup-class relations determined using the individuals whose predicted probabilities were greater than 50%; columns: Centenarians (69), Semi-supercentenarians (18), AD patients (65), PD patients (24), T2D patients (7), T2D patients with angiopathy (72), Non-obese young males (16), Obese young males (58).)

The predicted probabilities of the highest cluster (classification ID 1) for the eight classes of people were 66.7% for both the centenarians and the semi-supercentenarians, 88.3% for the AD patients, 60% for the PD patients, 50% for the T2D patients, 81.3% for the T2D patients with angiopathy, 57.1% for the healthy non-obese young males, and 62.5% for the healthy obese young males. As individual classes have different predicted probabilities, for each of the eight classes the 15 individuals with the highest predicted probabilities were selected to examine the relations between Asian/Japanese haplogroups and mtSNPs [26][27][28]. For the centenarians, the association between the haplogroups and mtSNPs based on the highest 15 individuals is shown in detailed form in Figure 2A. The relations between the haplogroups for the eight classes of people are listed in Table 2. The haplogroup M7a1a was common in PD patients and obese young males; M7b2 was common in centenarians and obese young males; G2a was common in AD patients and T2D patients with angiopathy; D4b2b was common in T2D patients, non-obese young males, and obese young males; B5b was common in centenarians and T2D patients; and N9a was common in PD patients and non-obese young males. Then the individuals whose probabilities predicted using the modified classification method were greater than 50% were selected, and their nucleotide distributions at individual mtDNA positions were examined. The individuals selected were 69 centenarians, 18 semi-supercentenarians, 65 AD patients, 24 PD patients, 7 T2D patients, 72 T2D patients with angiopathy, 16 healthy non-obese young males, and 58 healthy obese young males. The associations between the haplogroups and mtSNPs for the eight classes of people are shown in Figures 3A to H (provided in the Supplementary). The relations among the haplogroups for the eight classes of people are listed in Table 3.
In the mtSNP analysis for the individuals whose probabilities were greater than 50% there were 30 haplogroups for the eight classes, whereas in the analysis for the 15 individuals with the highest predicted probabilities there were only 21 haplogroups. As a result, the ratios of individual haplogroups for the individuals whose probabilities were greater than 50% tended to be lower than those for the 15 individuals with the highest predicted probabilities. In addition, the analysis based on the individuals with probabilities greater than 50% yielded 12 common haplogroups among the eight classes of people, whereas the analysis based on the 15 individuals with the highest predicted probabilities yielded only 6 common haplogroups among the eight classes. Although the semi-supercentenarians were classified into similar haplogroups in the two cases (case 1: the 15 individuals with the highest predicted probabilities; case 2: the individuals with probabilities greater than 50%), their ratios were lower in case 2. That is, in case 2 the ratios of B4c1a, F1, M1, B4c1b1, and B4c1c1 were respectively decreased from 40% to 33%, from 27% to 22%, from 7% to 6%, from 7% to 6%, and from 13% to 6%. In addition, a new haplogroup, M7b2 (6%), was classified in case 2. Although the PD patients were classified into the same haplogroups in the two cases, their ratios were different: the G1a ratio was 20% in case 1 and 13% in case 2, the N9a ratio was 20% in case 1 and 21% in case 2, the M7a1a ratio was 13% in case 1 and 17% in case 2, and the B4e ratio was 13% in case 1 and 8% in case 2. Although the T2D patients were classified into the same haplogroups in both cases, the haplogroup ratios were different in the two cases. That is, the D4b2b ratio was 40% in case 1 and 29% in case 2, the M8a ratio was 27% in case 1 and 14% in case 2, and the B5b ratio was 13% in case 1 and 14% in case 2.

Table 5. Differences between the standard statistical technique and the proposed method:
- Technique: relative relations between target and normal data (statistical technique) vs. supervised learning (RBF) using correct and incorrect data (proposed method).
- Analysis position: each locus of mtDNA polymorphisms (independent positions) vs. entire loci of mtDNA polymorphisms (successive positions).
- Input (required data): target (individual cases) and control (normal data) vs. correct (individual cases) and incorrect (all others except the correct class).
- Output (results): odds ratio or relative risk vs. clusters with predictions.
- Analysis: check the odds ratio or relative risk at each position vs. check individuals in clusters based on prediction probabilities.

The healthy non-obese young males were classified into the same haplogroups in both cases and their ratios were also nearly the same: the haplogroup D4g, N9a, D4b2b, and B4b/d/e ratios in cases 1 and 2 were respectively 33% and 31%, 27% and 25%, 13% and 12%, and 7% and 12%. This similarity of haplogroup ratios is due to the numbers of selected individuals being nearly the same in both cases: 15 in case 1 and 16 in case 2. Then the relations between the haplogroups of pairs of related classes of people-centenarians and semi-supercentenarians, AD patients and PD patients, T2D patients and T2D patients with angiopathy, and healthy non-obese young males and healthy obese young males-were examined in cases 1 and 2. Centenarians and semi-supercentenarians: although in case 1 these two classes of people had no common haplogroups, in case 2 they had two common haplogroups, M7b2 and F1. AD patients and PD patients: although AD and PD are both brain diseases, these patients had no common haplogroups in either case.
T2D patients and T2D patients with angiopathy: although these two classes of people had no common haplogroups in case 1, they had a common haplogroup, B5b, in case 2. Healthy non-obese young males and healthy obese young males: these two classes of people had a common haplogroup, D4b2b, in case 1 and two common haplogroups, D4g and D4b2b, in case 2. Common haplogroups were found more often in case 2 because more haplogroups were classified in that case. As there were 112 individuals in the class of semi-supercentenarians, changes in haplogroup classifications with changes in the number of highest-probability individuals selected were examined. As one sees in Table 4, the number of haplogroups classified increased from 6 for the 15 individuals with the highest predicted probabilities to 9 for the 30 with the highest predicted probabilities, to 11 for the 45, to 14 for the 60, to 15 for the 75, to 16 for the 90, and to 17 for all 112 semi-supercentenarians. The ratios of the haplogroups B4c1a and F1 were respectively 40% and 27% for the 15 individuals with the highest predicted probabilities, but they decreased as the number of selected individuals increased and finally became respectively 5% and 4% when all 112 individuals were used. Although the haplogroup D4a was not classified when the 15 individuals with the highest predicted probabilities were used, its ratio was 3% when the 30 highest-probability individuals were used and 29% when the 45 highest-probability individuals were used. This indicates that most of the semi-supercentenarians belonging to D4a were included in the range of predicted probabilities from 45% to 74%. Table 4 thus implies that the characteristic haplogroups of the semi-supercentenarians appear only when an appropriate number of selected individuals is used; in the case of the semi-supercentenarians, that number may be about 45.

Comparison with previous works

After analyzing the results of a large-scale study using hospital-based sampling data, Fuku et al. [16] reported that the mitochondrial haplogroup F in Japanese individuals is associated with a significantly increased risk of type 2 diabetes mellitus (T2DM) (odds ratio 1.53, P=0.0032) [16]. In the present study, on the other hand, the haplogroups (risks) of T2D patients and T2D patients with angiopathy were respectively D4b2b (40%), M8a (27%) and B5b (13%), and N9a (47%), D4b1 (20%) and G2a (13%) (Figures 2E and 2F, provided in the Supplementary). There were therefore big differences between the analysis of Fuku et al. [16] and the results of this study.

Differences between the statistical technique and the proposed method

Although the previously reported methods analyzed the relations between mtSNPs and Japanese T2D patients, centenarians, and semi-supercentenarians by using standard statistical techniques [15,25], the mutual relations among the other classes of people-AD patients, PD patients, healthy non-obese young males, and healthy obese young males-were not investigated. The differences between and mutual relations among the eight classes of people were described in this study.
In addition, the predicted probabilities of associations between mtSNPs and the eight classes of people cannot be obtained by the statistical techniques used in the previous methods, whereas the proposed method is able to compute them from the results obtained when learning the mtSNPs of individual classes. Although the previous methods used standard statistical techniques, a RBF network was used in the present study because the relations among individual mtSNPs for the eight classes of people should be analyzed as mutual mtSNP connections across the entire population of mtSNPs. The differences between the standard statistical technique and the proposed method are listed in Table 5. In the statistical technique, odds ratios or relative risks are analyzed on the basis of relative relations between target and control data at each polymorphic mtDNA locus. In the proposed method, on the other hand, clusters indicating predicted probabilities are examined on the basis of the RBF network using correct and incorrect data for the entire set of polymorphic mtDNA loci. The statistical technique determines characteristics of haplogroups by using independent mtDNA polymorphisms that indicate high odds ratios, whereas the proposed method determines them by checking individuals with high predicted probabilities. This means that the statistical technique uses the results of independent mutation positions, whereas the proposed method uses the results of all mutation positions. As there are differences between the two methods, which method is better will need to be determined in future research. Furthermore, the proposed method may be useful in the initial diagnosis of various diseases or of longevity on the basis of the individual predicted probabilities.
3,971.8
2011-05-27T00:00:00.000
[ "Biology", "Medicine" ]
Dynamical properties of dipolar Fermi gases

We investigate dynamical properties of a one-component Fermi gas with dipole-dipole interaction between particles. Using a variational function based on the Thomas-Fermi density distribution in phase space representation, the total energy is described by a function of deformation parameters in both real and momentum space. Various thermodynamic quantities of a uniform dipolar Fermi gas are derived, and then the instability of this system is discussed. For a trapped dipolar Fermi gas, the collective oscillation frequencies are derived with the energy-weighted sum rule method. The frequencies for the monopole and quadrupole modes are calculated, and softening against collapse is shown as the dipolar strength approaches the critical value. Finally, we investigate the effects of the dipolar interaction on the expansion dynamics of the Fermi gas and show how the dipolar effects manifest in an expanded cloud.

Introduction

In recent years, atomic quantum dipolar gases have received much interest, for the simple reason that the anisotropic and long-range nature of the dipole-dipole interaction gives rise to a rich spectrum of novel properties in such systems. The theoretical study of dipolar Bose-Einstein condensates started in 2000. Properties of the ground state [1,2], collective oscillations [3,4], and topological defects such as spin textures and vortex states [5,6] have been studied. Moreover, when confined in optical lattice potentials, various quantum phases, such as ferromagnetism [7] and the supersolid state [8,9], etc., are predicted. Theoretical studies of the dipolar Fermi gas have been carried out for the ground state [10], excitations [11], BCS superfluidity [12] and rotating properties [13]. A recent review of dipolar quantum gases can be found in Ref. [14]. In experiments, Bose-Einstein condensation of chromium atoms, which possess a magnetic dipole moment six times larger than that of alkali atoms, has been realized [15,16]. The effect of the dipole-dipole interaction in a 52Cr condensate was observed in its expansion dynamics [19]. Besides chromium, heteronuclear molecules [20,21,22,23,24,25] and Rydberg atoms [26,27,28] are also expected to interact via a strong dipole-dipole force due to their large electric dipole moments, and their experimental realization is under way in a number of groups. In Ref. [29], three of us studied the ground state properties of a dipolar Fermi gas by employing a variational Wigner function based on the Thomas-Fermi density of identical fermions. We showed that the dipole-dipole interaction induces a deformation of the momentum space distribution, and identified that such deformation arises from the Fock exchange term, which had not received particular attention in previous studies. The purpose of this paper is to extend the work of Ref. [29] and investigate the collective excitations and expansion dynamics of the dipolar Fermi gas. We want to emphasize that, due to the Pauli exclusion principle, the energy scales of a fermionic system are much larger than those of a Bose condensate. Consequently, the dipolar effects in a Fermi gas only become significant when the dipole moment is very large. Our calculations show that for heteronuclear molecules with a typical electric dipole moment on the order of one Debye, dipolar effects can be easily detected, while dipolar effects are usually negligible in atomic Fermi gases‡. The content of the paper is organized as follows.
In the next section, we present the model Hamiltonian and the total energy of the one-component dipolar Fermi gas under the Hartree-Fock approximation. In section 3, we derive the total energy function for a uniform system with a variational ansatz for the Fermi surface and compute various thermodynamic quantities of the system. Here we show how the Fock exchange interaction leads to Fermi surface deformation as well as to the instability of the system. In section 4, we turn our attention to a trapped system and investigate various modes of collective excitations using the sum-rule method, and show the softening of the excitation frequency as the interaction strength is increased towards a critical value. In section 5, we study the expansion dynamics of an initially trapped Fermi gas and show how the expanded cloud bears the signature of the underlying dipolar interaction. Finally, a summary is presented in section 6.

Total energy functional in phase space representation

We consider a single-component Fermi gas of atoms or molecules with dipole moments aligned along the axial axis of a cylindrical harmonic trap. The Hamiltonian of this system is given by Eq. (1), where m is the mass of the fermions, and ω_ρ and ω_z are the oscillation frequencies along the radial and axial axes, respectively. The dipole-dipole interaction, the last term in Eq. (1), is described by V_dd(r) = d²(1 − 3cos²θ)/r³, where θ is the angle between r and the dipole moment d. In the Hartree-Fock approximation, the total energy derived from Hamiltonian (1) can be written as the sum of the kinetic, trapping potential, Hartree direct and Fock exchange energies, Eqs. (2)-(6), where we have introduced the Wigner function f(r, k), defined by a transformation of the one-body density matrix n(r, r′) = Σ_α ψ_α(r)ψ*_α(r′), which is defined in terms of a complete set of single-particle wave functions {ψ_α(r)}. In Eq. (6), we have introduced the center-of-mass coordinate R = (r + r′)/2 and the relative coordinate s = r − r′. For the ground state, the summation over single-particle states α goes from the lowest one up to the Fermi energy. In our work, we do not calculate the Hartree-Fock energy represented by Eq. (2) in a fully self-consistent manner, which would be a quite complicated task. Instead, we adopt a much simpler semiclassical approach and calculate the total energy by employing a variational ansatz for the Wigner distribution function based on the Thomas-Fermi approximation, which assumes that the local Fermi surface has the same form as in the homogeneous case at each spatial point. The ground state is then obtained by optimizing the Wigner function that minimizes the total energy. The details of this calculation can be found in Ref. [29]. In the present paper, we will focus on dynamical properties such as the low-lying collective excitations and the expansion dynamics of the ground state.

Equilibrium properties of a homogeneous dipolar Fermi gas

It is instructive to first consider a homogeneous system (ω_ρ = ω_z = 0) in a large box of volume V (= ∫d³r) with number density n_f, as this will provide important insights into the trapped system to be studied later. We introduce the number-conserving variational ansatz of Eq. (8) for the Wigner function, where Θ(·) is Heaviside's step function, k_ρ² = k_x² + k_y², and k_F = (6π²n_f)^{1/3} corresponds to the Fermi momentum. The parameter α characterizes the deformation of the Fermi surface: α > 1 (< 1) corresponds to an oblate (prolate) Fermi surface.
The physical origin of the Fermi surface deformation can be attributed to the anisotropic nature of the dipolar interaction. Given the ansatz of Eq. (8), the total energy of the homogeneous system can be derived in the form of Eq. (9), where C_1 = 3(6π²)^{2/3}/10, C_dd = md²n_f^{1/3}/ħ² is the dimensionless dipolar interaction strength, and I(x) is the "deformation function" [29], illustrated in Figure 1. I(x) decreases monotonically from 4 to −2 as x increases from 0 to ∞, and passes through zero at x = 1. The first and the second term in the square bracket of Eq. (9) represent the kinetic and the Fock exchange energy, respectively. For the homogeneous system, the Hartree direct energy vanishes. Under this variational approach, the ground state is determined by the stationary condition for the total energy of Eq. (9) with respect to the parameter α: [dε/dα]_{α=α_0} = 0. The optimal value α_0 is shown in Figure 2 as a function of the dipolar strength C_dd. For free fermion systems, the momentum density distribution is spherical, i.e., α_0 = 1 at C_dd = 0. As the interaction strength increases, α_0 decreases, which means that the momentum density distribution becomes more prolate in shape. In other words, the Fermi surface is stretched along the direction of the dipoles. Once we have the energy of the system as represented by Eq. (9), we can easily obtain other important thermodynamic quantities; here we provide our calculations of the pressure P, the compressibility K and the chemical potential µ. These quantities are illustrated in Figure 3 (caption: chemical potential µ, pressure P, and inverse compressibility, or bulk modulus, 1/K as functions of C_dd; all quantities are normalized to their corresponding values in the non-interacting limit; the vertical line indicates the critical dipolar strength beyond which the system becomes unstable against collapse). One can see that P, 1/K and µ all monotonically decrease as the dipolar interaction strength increases. In particular, when the inverse compressibility (i.e., the bulk modulus) becomes negative, the system is no longer stable against collapse. Our calculation indicates that the critical dipolar strength is about C_dd = 3.23.

Collective oscillations of a trapped dipolar Fermi gas

Let us now turn our attention to the trapped dipolar Fermi gas. First, to obtain the total energy of Eq. (2), we introduce the ansatz of Eq. (10) for the Wigner function, where ρ² = x² + y², and a_ho = √(ħ/(mω)) with ω = (ω_ρ²ω_z)^{1/3}. The variables β and λ represent the deformation and compression of the spatial density distribution of the system, respectively. When we take α = 1, β = (ω_ρ/ω_z)^{2/3}, and λ = 1, this trial function is consistent with the Thomas-Fermi density of a free Fermi gas in the harmonic trap. The Fermi wave number k_F is related to the number of fermions through Eq. (11). Substituting Eq. (10) into Eqs. (3), (4), (5), and (6), we obtain the total energy in units of N^{4/3}ħω as Eq. (12) [29], where β_0 = (ω_ρ/ω_z)^{2/3} represents the trap aspect ratio and c_dd denotes the dimensionless dipolar interaction strength for the trapped system. The momentum space deformation parameter α, as in the homogeneous case, appears only in the kinetic and the exchange energy terms, both of which are independent of the spatial deformation parameter β. This indicates that the momentum space distribution of the trapped system will also be elongated along the direction of the dipoles, regardless of the geometry of the trapping potential. On the other hand, β appears only in the potential energy and the Hartree direct energy terms.
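The ground state is then fixed by requiring stationarity and local stability of this energy surface, as described next. The sketch below illustrates how such a search could be done numerically; note that the `energy` function is a purely illustrative stand-in with a single minimum at (1, 1, 1), not the paper's Eq. (12), and the procedure (minimization plus a finite-difference Hessian check) is the only part intended to mirror the text.

```python
import numpy as np
from scipy.optimize import minimize

def energy(params):
    """Toy stand-in for the dimensionless energy eps(alpha, beta, lambda).
    It merely provides a smooth surface with one minimum; it is NOT Eq. (12)."""
    alpha, beta, lam = params
    return (alpha + 1.0 / alpha) + (beta + 1.0 / beta) + (lam - 1.0) ** 2

# Stationary condition: minimize eps with respect to (alpha, beta, lambda).
res = minimize(energy, x0=[1.2, 0.8, 1.1], method="Nelder-Mead")
alpha0, beta0, lam0 = res.x

def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# Stability condition: the energy surface must be locally convex at the
# stationary point, i.e. the Hessian must have only positive eigenvalues.
eigs = np.linalg.eigvalsh(hessian(energy, res.x))
stable = bool(np.all(eigs > 0))
print(f"stationary point: alpha={alpha0:.3f}, beta={beta0:.3f}, lambda={lam0:.3f}, stable={stable}")
```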
The ground state is determined by the stationary condition for Eq. (12) with respect to the three variables α, β and λ: ∂ε/∂α = ∂ε/∂β = ∂ε/∂λ = 0. From the last condition, we can see that the energies of the dipolar Fermi gas satisfy the virial theorem. In addition, the ground state has to satisfy the stability condition: the energy surface in the coordinates (α, β, λ) has to be convex downward at the stationary point. If no values of (α, β, λ) can be found to satisfy both the stationary and the stability conditions, the system is considered to be unstable against collapse [29]. This procedure leads to the stability phase diagram shown in Figure 4(a). Just as in the homogeneous case, the trapped dipolar gas is only stable for dipolar interaction strengths below a critical value. In Figure 4(b), we show the different energy terms [Eqs. (13) and (14)] as functions of β_0 at N^{1/6}c_dd = 1.5. Several features are worth pointing out: (1) The exchange energy is always negative, as in the homogeneous case, regardless of the trap geometry, whereas the sign of the direct energy ε_d depends on the trap geometry: ε_d > 0 for β_0 < 1 (oblate trap) and ε_d < 0 for β_0 > 1 (prolate trap). (2) Both the kinetic and the trapping energies depend on the trap aspect ratio. By contrast, for the non-interacting system, when expressed in the same units, we have ε_kin = ε_ho = 3c_1 ≈ 0.68 independent of β_0. Next, we derive the collective oscillation frequencies for several low-lying excitation modes of the system using the sum rules in the present formulation [30,31]. In this approach, we represent the excitation frequency Ω using the first and third energy-weighted moments of the strength function for a given transition operator F̂ (Eqs. (15)-(17)), where |ν⟩ denotes the ν-th eigenstate of the Hamiltonian with eigenenergy E_ν. For our purpose, we choose the one-body operator as F̂ = Σ_i (ξρ_i² + ζz_i²) (Eq. (18)), where ξ and ζ are certain parameters. A collective oscillation is compressive when ξ and ζ have the same sign, and quadrupolar when they have opposite signs. The natural monopole and quadrupole operators correspond to ξ/ζ = 1 and ξ/ζ = −1/2, respectively. Using Eq. (18), the collective excitation frequency Ω of Eq. (15) in the present formulation can be shown to take the form of Eq. (19), where ε_hoρ = 2c_1β_0/(λβ) and ε_hoz = c_1β²/(λβ_0²) are the radial and axial components of the trapping energy, respectively [see Eq. (13)]. The excitation frequency Ω is calculated by substituting the variational parameters (α, β, λ) at the stationary point of the total energy (12). From Eq. (19) we can easily find the excitation frequencies of the monopole and quadrupole modes, Eqs. (21) and (22); the corresponding frequencies for the non-interacting system are recovered as Ω²_M0 = 12ω_ρ²ω_z²/(ω_ρ² + 2ω_z²) and Ω²_Q0 = 12ω_ρ²ω_z²/(2ω_ρ² + ω_z²). Figure 5 shows the excitation frequencies of the monopole mode Ω_M and the quadrupole mode Ω_Q. As can be seen in Figure 4(b), the total interaction energy (ε_ex + ε_d) is positive (in other words, the overall dipolar interaction is repulsive) for oblate traps (β_0 < 1), which makes the atomic cloud less compressible, hence Ω_M is increased compared to its non-interacting value. For prolate traps (β_0 > 1), the opposite is true. This is consistent with the result shown in Figure 5(a). The quadrupole mode frequency Ω_Q, on the other hand, exhibits a roughly opposite trend. To account for the hybridization of different modes, we parameterize ξ and ζ in Eq.
(18) as ξ = sin θ and ζ = cos θ, with 0 ≤ θ < π. We then investigate the minimum value of the excitation frequency Ω(θ) given by Eq. (19). The collective oscillation is dominated by the compression mode for 0 < θ < π/2 and by the quadrupolar mode for π/2 < θ < π. Moreover, θ = π/2 represents a radial mode, and θ = 0 an axial mode. The natural monopole and quadrupole operators correspond to θ = π/4 and θ = π − arctan(1/2) ≈ 0.85π, respectively. Figure 6(a) shows the minimum excitation frequency Ω_min as a function of the interaction strength N^{1/6}c_dd up to the critical value, while Figure 6(b) shows the angle θ that minimizes Ω(θ). For the spherical trap with β_0 = 1.0, the excitation frequency decreases monotonically as the interaction strength increases, and the minimum-energy mode is the monopole mode. For the prolate trap with β_0 = 10.0, the minimum-energy mode is dominated by the axial mode with θ ≈ 0, as the axial axis represents the direction of the soft confinement. Similarly, for the oblate trap with β_0 = 0.8, the minimum-energy mode is dominated by the radial mode with θ ≈ π/2, as the radial direction now becomes the soft axis. However, as the interaction strength increases towards the critical value, in both of these cases the minimum-energy mode shifts towards the monopole mode, and we clearly see the tendency of the softening of the collective mode, indicating the approach of the collapse instability. We note that, in particular for the case of β_0 = 0.8, Ω_min does not completely decrease to zero at the critical value. This could be due to the sum-rule method yielding only the average frequency of the collective oscillation. Deeper insights into the collective excitations may be obtained from microscopic approaches such as the random-phase approximation [32,33].

Expansion dynamics

We now turn to the expansion dynamics of an initially trapped dipolar Fermi gas. This study is important as, in most cold atom experiments, the atomic cloud is imaged after a period of free expansion. Furthermore, the expansion dynamics may bear the signature of the underlying interaction. The dipolar effects in a chromium condensate were first observed in the expansion dynamics [16,17,18]. Our starting point is the Boltzmann-Vlasov equation, Eq. (23), in which the effective potential U includes both the external harmonic trap potential U_ho and the mean-field potential due to the dipole-dipole interaction, Eq. (24), where V_dd(k) = (4πd²/3)(3k_z²/k² − 1) is the Fourier transform of V_dd(r). Note that the k-dependence of the effective potential U originates exclusively from the contribution of the exchange interaction, i.e., the last term on the r.h.s. of Eq. (24). To study the dynamics, we make use of a scaling transformation in which f_0 represents the equilibrium Wigner distribution function obtained in the previous section, whose form is given by Eq. (10), and the b_i are dimensionless scaling parameters. This scaling approach has been used previously to study the expansion of Fermi gases [34,35] and Bose-Fermi mixtures [36]. From the Boltzmann-Vlasov equation, we can derive the equations governing the scaling parameters b_j, Eq. (25), with γ_j = ω_j/ω, ε_dd = N^{1/6}c_dd and ⟨R_j²⟩ = ∫d³R R_j² n_0(R), where n_0 is the equilibrium density. The remaining terms in Eq. (25) describe the restoring force of the trap and the effects of the interactions. To describe the expansion after the trap is switched off, we solve for b_ρ(t) and b_z(t) using Eq. (25) with the restoring force term γ_j²b_j removed and with the initial conditions b_ρ(0) = b_z(0) = 1.
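For orientation, here is a minimal sketch of the interaction-free (ballistic) limit of such a scaling description, in which the equations decouple into b̈_j = ω_j²/b_j³ once the trap is released and admit the closed form b_j(t) = √(1 + ω_j²t²). The trap frequencies are arbitrary illustrative values, and the dipolar terms of the full Eq. (25) are deliberately omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative trap frequencies (rad/s); not values from the paper.
omega_rho, omega_z = 2 * np.pi * 100.0, 2 * np.pi * 20.0

def rhs(t, y):
    # y = (b_rho, b_z, db_rho/dt, db_z/dt); ballistic limit after trap release:
    # each scaling parameter obeys d^2 b_j / dt^2 = omega_j^2 / b_j^3.
    b_rho, b_z, v_rho, v_z = y
    return [v_rho, v_z, omega_rho**2 / b_rho**3, omega_z**2 / b_z**3]

t_eval = np.linspace(0.0, 20e-3, 200)  # 20 ms time of flight
sol = solve_ivp(rhs, (0.0, t_eval[-1]), [1.0, 1.0, 0.0, 0.0], t_eval=t_eval, rtol=1e-8)

b_rho, b_z = sol.y[0], sol.y[1]
# Cross-check against the known ballistic solution b_j(t) = sqrt(1 + omega_j^2 t^2).
assert np.allclose(b_rho, np.sqrt(1 + (omega_rho * t_eval) ** 2), rtol=1e-4)

# Real-space aspect ratio of the expanding cloud scales as (b_rho/b_z) times
# the in-trap aspect ratio R_rho(0)/R_z(0).
print("b_rho/b_z at release and after 20 ms:", b_rho[0] / b_z[0], b_rho[-1] / b_z[-1])
```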
Before presenting our results, we recall that when the exchange interaction is ignored, the direct dipolar interaction always tends to stretch the cloud along the direction of the dipole moments in both real and momentum space [35]. Figure 7 displays several examples of the cloud aspect ratio during time of flight for different trap geometries. As expected, asymptotically the aspect ratios in momentum and real space become equal to each other, i.e., κ_r(∞) = κ_p(∞) = κ_∞. A notable feature is that, regardless of the initial trap geometry, the shape of the expanding cloud eventually becomes prolate, as κ_∞ < 1. This result is in stark contrast to the expansion dynamics of a dipolar condensate, whose asymptotic aspect ratio is sensitive to the initial trap geometry [16,17,18]. Furthermore, the interaction effects during the time of flight are also evident from Figure 7: had interactions been ignored, the expansion would have become ballistic, with κ_p constant in time. Figure 7(b) indicates that the expansion is essentially ballistic for an initially spherical trapping potential, as for such traps the interaction energy is rather weak, as shown in Fig. 4(b). That the expanded cloud eventually becomes prolate in shape is also obtained in Ref. [35] when the exchange dipolar interaction is ignored, which indicates that the effect of the exchange interaction during the expansion is not very important. This is consistent with Fig. 4(b), which shows that, except for nearly spherical traps, the magnitude of the direct energy is in general much larger than that of the exchange energy. However, we want to emphasize that the exchange term is crucial for the equilibrium momentum distribution inside the trap: without the exchange term, the momentum distribution would be isotropic for any trap geometry. To get a closer look, we compare in Fig. 8 κ_∞ with the initial momentum space aspect ratio κ_p(0), which characterizes the momentum distribution of the ground state in the trap. (Figure 8 caption: (a) the dipolar-interaction-strength dependence of the asymptotic aspect ratio κ_∞ (solid lines) and the initial momentum space aspect ratio κ_p(0) (dashed lines) for various trap aspect ratios β_0; (b) the difference between the asymptotic aspect ratio and the initial momentum space aspect ratio for β_0 = 1.) The initial momentum distribution is always prolate in shape, as κ_p(0) < 1. In general, the effect of the interaction during the expansion, with the dominant contribution coming from the direct term, is to further enhance this anisotropy, such that κ_∞ < κ_p(0). Exceptions may occur for nearly spherical traps, for which one may have κ_∞ > κ_p(0), as shown in Fig. 8(b). However, this effect is very small since, as we have already mentioned, the total dipolar interaction is weak for such traps.

Summary

In summary, we have studied the properties of dipolar Fermi gases both in a homogeneous system and in a cylindrical harmonic trap with the dipole moments oriented along the symmetry axis. The total energy functional of this system is derived under the Hartree-Fock approximation. The one-body density matrix in the energy functional is obtained from a variational ansatz based on the Thomas-Fermi density distribution in the phase-space representation, which accounts for the interaction-induced deformation in both real and momentum space.
Our calculations show that the deformation of the spatial density distribution comes from the Hartree direct energy term, while the deformation of the momentum density distribution arises from the Fock exchange energy term. Note that the exchange term, a consequence of the anti-symmetry of the many-body fermionic wave function, does not appear in a Bose-Einstein condensate. We have calculated several thermodynamic quantities, such as the pressure, the compressibility and the chemical potential of the homogeneous system, and investigated the low-lying collective excitations of a trapped dipolar Fermi gas using the sum rule method for various trap geometries and interaction strengths. We observe a softening of the collective excitations as the interaction strength approaches the critical value for collapse. Finally, we have studied the expansion dynamics of the initially trapped system. We show that, in stark contrast to a dipolar condensate [16,17,18], the atomic Fermi gas will eventually become elongated along the direction of the dipoles regardless of the initial trap geometry. This feature makes it convenient to detect the dipolar effects in
5,290.6
2008-12-04T00:00:00.000
[ "Physics" ]
How Well Do Text Embedding Models Understand Syntax? Text embedding models have significantly contributed to advancements in natural language processing by adeptly capturing semantic properties of textual data. However, the ability of these models to generalize across a wide range of syntactic contexts remains under-explored. In this paper, we first develop an evaluation set, named \textbf{SR}, to scrutinize the capability for syntax understanding of text embedding models from two crucial syntactic aspects: Structural heuristics, and Relational understanding among concepts, as revealed by the performance gaps in previous studies. Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges, and such ineffectiveness becomes even more apparent when evaluated against existing benchmark datasets. Furthermore, we conduct rigorous analysis to unearth factors that lead to such limitations and examine why previous evaluations fail to detect such ineffectiveness. Lastly, we propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios. This study serves to highlight the hurdles associated with syntactic generalization and provides pragmatic guidance for boosting model performance across varied syntactic contexts. However, as we delve deeper into different hierarchies of language comprehension, the question that naturally arises is: how well do these text embedding models understand syntax? For example, could the state-of-the-art text embedding models understand and distinguish the difference between "Cats chase mice" and "Mice chase cats"? Syntax, constituted by a set of rules that define sentence structures, forms a pivotal aspect of natural language. It integrates both heuristics and compositional elements, establishing the bedrock for the expansive and intricate nature of human language (Manning and Schütze, 1999; Jurafsky and Martin, 2000). A thorough comprehension of syntax is essential for a text embedding model to effectively ascertain the relationships among words, thereby facilitating a level of language understanding that mirrors human cognitive processes. Moreover, as text embedding models are increasingly deployed in LLM-based agents and real-world applications, ensuring that they maintain a solid understanding of syntax is critical to guarantee their reliability and efficacy. In this study, we address this question by introducing a new evaluation set called SR, designed to probe the ability of text embedding models from two syntactic aspects (Partee, 1995; Gibson, 1998; Gildea and Jurafsky, 2000; Mitchell and Lapata, 2010; McCoy et al., 2019; Linzen and Baroni, 2020): 1) Structural heuristics: the rules and patterns that govern sentence structures, and 2) Relational understanding among concepts: the models' capability to infer relationships between different concepts in text. These dimensions represent the multifaceted nature of syntax.
We take this inquiry a step further by conducting an incisive analysis to uncover the underpinnings of these limitations. By examining various models and their responses to syntactic challenges, we delineate the factors that contribute to the manifestation of these limitations and the reasons why they have eluded detection in conventional benchmarks. Recognizing these challenges is a precursor to addressing them. Finally, we show that simply augmenting the training data with SR-like examples, which can be generated through ChatGPT with designed prompts, can significantly enhance the generalization capabilities of text embedding models in syntactically diverse settings. This research is aimed at shedding light on the challenges of syntactic generalization in text embedding models. By establishing a more rigorous evaluation set and proposing strategies for enhancement, we contribute to the advancement of text embedding models that are capable of nuanced understanding and high performance across varied syntactic contexts. Our work provides valuable guidance and sets the stage for future research aimed at achieving more syntactically aware and robust text embedding models. The SR benchmark and code are released1.

SR Benchmark

Typical evaluation sets have often been inadequate for rigorously assessing a model's understanding of syntax. They tend to focus on high-level performance metrics while overlooking the finer aspects of syntactic understanding. SR was designed to address these limitations and specifically evaluate text embedding models from two important syntactic aspects: Structural heuristics and Relational understanding. These dimensions were chosen due to their fundamental roles in natural language understanding. In this section, we introduce the construction process of the SR benchmark.

Foundation Corpus for SR Construction

For constructing the SR benchmark, it was vital to base it on rich and diverse foundational corpora that encompass different domains and compositional structures. These corpora include:

STS: We adopt the STS benchmark (Cer et al., 2017), which comprises a selection of STS tasks organized in the context of SemEval between 2012 and 2017. The dataset includes 1,379 sentence pairs from image captions, news headlines and user forums. They provide a range of sentences with varying complexity and structures, making them an ideal starting point for our SR benchmark.

SICK: SICK (Sentences Involving Compositional Knowledge) (Marelli et al., 2014) includes 9,927 sentence pairs that are rich in lexical and syntactic phenomena. They provide a range of sentences with diverse compositional structure for our SR benchmark.

CQADupStack: CQADupStack (Hoogeveen et al., 2015) is a dataset derived from Stack Exchange and is geared towards the identification of duplicate questions in community question-answering forums. It brings in diversity from real-world queries and their paraphrased versions.

Twitter: The Twitter dataset (Xu et al., 2015) consists of pairs of tweets together with a crowd-annotated score indicating whether the pair is a paraphrase. Including this dataset allows the SR benchmark to account for the informal and concise nature of social media text, which often involves unique linguistic constructs.

BIOSSES: BIOSSES (Sogancıoglu et al., 2017) is a dataset designed to evaluate biomedical semantic similarity with 100 testing pairs. By including BIOSSES, the SR benchmark encompasses the domain-specific language used in biomedical texts.
AskUbuntu: AskUbuntu (Lei et al., 2015) is a collection of user posts from the technical forum AskUbuntu. The inclusion of AskUbuntu allows the SR benchmark to encompass complex and technical questions typically encountered in community-driven platforms.

By utilizing these datasets, the SR benchmark is well-equipped to evaluate text embedding models' syntactic understanding across varying domains and compositional structures. More importantly, recent text embedding models have already demonstrated remarkable performance on these benchmarks. This configuration not only ensures a holistic evaluation but also effectively pinpoints the models' strengths and weaknesses in diverse settings.

Input: You are a semantic similarity scoring assistant. Here is the standard of scoring. The two sentences are completely equivalent, as they mean the same thing. Scoring 5. e.g. Sentence 1: The bird is bathing in the sink. Sentence 2: Birdie is washing itself in the water basin. ... The two sentences are completely dissimilar. Scoring 0. e.g. Sentence 1: The black dog is running through the snow. Sentence 2: A race car driver is driving his car through the mud. Follow the standard to score the similarity between Sentence 1 and Sentence 2.

The workflow begins with the selection of foundation datasets, followed by the generation of data probing structural heuristics and relational understanding among concepts. Subsequently, the generated data is annotated with a score to represent the semantic similarity of sentence pairs. Finally, the assembled SR benchmark is utilized to evaluate the syntactic understanding capabilities of text embedding models.

Assessing Structural Heuristics

To assess how well text embedding models comprehend structural heuristics, we focus on the transformation of sentences into different syntactic structures (Gibson, 1998). In particular, we emphasize the exchange between active and passive voice as well as the creation of inverted or partially inverted sentences.

Active and Passive Voice Exchange: In the active voice, the subject of the sentence performs the action, whereas in the passive voice, the subject is acted upon. We transform sentences from the foundation dataset into their passive or active counterparts. For instance, an active sentence such as "The chef cooked the meal" can be transformed into its passive equivalent, "The meal was cooked by the chef." Text embedding models should recognize that these sentences convey the same action but with a different focus and structure.

Inverted and Partially Inverted Sentences: Inverted sentences involve reversing the canonical subject-verb-object order, often for emphasis or stylistic purposes. Partial inversion involves only a segment of the sentence being inverted. Assessing text embedding models on their ability to understand these structural changes is critical for gauging how well they can adapt to varied syntactic constructs. For instance, a standard sentence like "The team has won the championship." can be converted into an inverted sentence, such as "Won the championship has the team.", while a partially inverted example might be, "The championship, the team has won.". The models should be able to understand that, despite the alteration in structure, the core information remains the same.
Assessing Relational Understanding The arrangement and relationship of concepts within a sentence contribute significantly to the meaning and interpretation of the text. Changing the order or relationship of these concepts may radically alter the sentence's meaning. Concept Order Manipulation: Concept order within sentences often plays a critical role in conveying meaning. We manipulate the order of concepts in sentences from the foundation dataset to create new sentences. The objective is to examine whether text embedding models can recognize how these manipulations impact the meaning of sentences. For example, consider the sentence: "Tom likes Taylor Swift.". By altering the order of the concepts, we get: "Taylor Swift likes Tom.". Though structurally similar, these sentences have completely different meanings. The models should recognize the shift in the subject and object and the corresponding change in the meaning. Conceptual Relationship Perturbations: Furthermore, we evaluate the models' ability to understand various relationships among concepts, such as cause-effect, part-whole, and synonymy. For instance, in a sentence like "Rain causes floods.", the model should identify the cause-effect relationship between "rain" and "floods". As another example, consider the sentence "He read the book because he was interested in history.". By changing the order, e.g., "Because he was interested in history, he read the book.", the meaning remains the same, but the structure has changed. The model should be able to recognize the constancy in the meaning despite the alteration in syntax. Annotation and Validation As presented in Figure 2, we first utilize ChatGPT to generate sentence pairs that probe the syntactic dimensions described above with specifically designed prompts. Afterwards, ChatGPT is also utilized to annotate the sentence pairs with semantic similarity scores. Through a process of deduplication and filtering of sentences that are too short, we assemble a dataset comprising 9,424 sentence pairs for each syntactic dimension in the SR benchmark. To verify the generation and annotation quality, we conduct additional validation by randomly sampling 800 sentence pairs from the SR benchmark. Any sentences that are inaccurately generated are corrected as needed. Table 1, Table 2, and Table 3 present the statistical breakdown of the revision rate for these corrected sentences. Moreover, we utilize ChatGPT to assign similarity scores to the standard STS-B evaluation set, and we compute the correlation between these scores and the STS-B standard annotated scores. Upon completion of these exercises, we find that the revision rate is no more than 3% and the correlation score between ChatGPT annotations and human annotations stands at 83.8%. This indicates that ChatGPT is proficient in generating syntactically varied sentences and annotations that closely resonate with human evaluations, thereby affirming the reliability and authenticity of the SR benchmark.
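A small sketch of the validation step described above: comparing ChatGPT-assigned scores with the human STS-B annotations via Spearman's correlation, and computing the revision rate from the manually checked sample. The score lists here are placeholders for the actual annotations.

```python
# Sketch of the annotation-quality checks; `chatgpt_scores` and `human_scores`
# are placeholder lists of similarity scores for the same STS-B pairs.
from scipy.stats import spearmanr

def annotation_agreement(chatgpt_scores, human_scores):
    """Spearman correlation (in %) between ChatGPT and human STS-B scores."""
    rho, _ = spearmanr(chatgpt_scores, human_scores)
    return 100.0 * rho  # the paper reports 83.8%

def revision_rate(num_revised, num_sampled=800):
    """Share of manually corrected pairs among the sampled ones (reported to be <= 3%)."""
    return 100.0 * num_revised / num_sampled
```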
Evaluating Text Embedding Models on SR In this section, we employ the SR benchmark to evaluate the syntactic understanding capabilities of five state-of-the-art text embedding models, including SentenceBERT (Reimers and Gurevych, 2019), SimCSE (Gao et al., 2021), Sentence-T5 (Ni et al., 2021), One Embedder (Su et al., 2023) and OpenAI Embedding (Neelakantan et al., 2022). Detailed information on these models can be found in their original papers. Following previous works, we employ Spearman's correlation as the evaluation metric to assess how well the cosine similarities of the sentence pairs correlate with the annotated scores. We present the evaluation results on the SR benchmark in Table 4. Though these models have previously exhibited remarkable performance on the foundational test datasets, they all perform poorly, with low correlations, on the SR benchmark. This suggests that existing text embedding models have not been optimized sufficiently to address the syntactic understanding challenge. Interestingly, we observe that, despite being trained solely on natural language inference datasets, SimCSE outperforms SBERT, Instructor and the OpenAI embedding models. These competing models often have more diverse supervised pairwise training datasets or a greater number of parameters, or both. For instance, SimCSE achieves an average Spearman's correlation of 44.16% on the benchmark for assessing relational understanding, while SBERT, Instructor and the OpenAI embedding models achieve average Spearman's correlations of 29.67%, 38.74% and 31.45%, respectively. This could indicate that training on natural language inference tasks may provide valuable syntactic understanding that isn't captured through mere parameter scale or breadth of training data. Sentence-T5, which is trained on a large scale of web-based question-answering datasets, demonstrates the most robust performance. This might suggest that the diversity and complexity found in web-based question-answering data could be beneficial for embedding models in capturing syntactic aspects. We hypothesize that web-based question-answering data, often encompassing a wide range of topics, styles, and structures, are likely to present a richer and more varied set of syntactic compositions compared to other data types.
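The evaluation protocol above reduces to a short script for models exposed through the sentence-transformers interface: encode both sentences of every SR pair, take the cosine similarity, and correlate it with the annotated scores. A minimal sketch, assuming an `sr_pairs` list of (sentence1, sentence2, gold_score) tuples as a placeholder for one SR subset:

```python
# Sketch of the SR evaluation loop for one embedding model.
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

def evaluate_on_sr(model_name: str, sr_pairs):
    """Return Spearman's correlation (in %) between cosine similarities and gold scores."""
    model = SentenceTransformer(model_name)
    s1, s2, gold = zip(*sr_pairs)
    e1 = model.encode(list(s1), convert_to_numpy=True, normalize_embeddings=True)
    e2 = model.encode(list(s2), convert_to_numpy=True, normalize_embeddings=True)
    cosine = (e1 * e2).sum(axis=1)  # dot product of unit vectors = cosine similarity
    rho, _ = spearmanr(cosine, gold)
    return 100.0 * rho

# Example: evaluate_on_sr("sentence-transformers/all-mpnet-base-v2", sr_pairs)
```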
We use the tree kernel (Collins and Duffy, 2002; Yu and Sun, 2022) as a measure of sentence structure diversity. The central idea of the tree kernel is to count the number of common subtrees between two constituency parse trees. We compare the syntactic diversity of two corpora: WebQA (233k sentences selected) and NLI (275k sentences selected). We use Stanford CoreNLP to obtain constituency parse trees of sentences. Then, we randomly select 1,000 parse trees and use the tree kernel to calculate their similarity. The results are shown in Table 5 (a simplified sketch of this computation is given below). Moreover, a noteworthy trend across all the text embedding models we assessed was the consistent pattern of comparatively better performance on the benchmark for relational understanding, as opposed to the benchmarks for structural heuristics. This suggests that the current text embedding models may be more adept at capturing relations among concepts in sentences than they are at grappling with the finer points of syntax, such as sentence structures. This superior performance in capturing relations could be a result of the training data, which often tends to focus on semantic relationships. However, it might also indicate that the models have an innate aptitude for discerning semantic relations as compared to understanding complex syntactic structures. Shortcomings of Traditional Evaluation Paradigms It is noteworthy that many text embedding models, despite exhibiting poor syntactic understanding, have consistently shown high performance on semantic embedding matching tasks. A primary reason for this is that these models tend to capture semantic content effectively, even when syntactic information is not properly encoded.
Figure 3: Retraining SBERT under syntactic perturbations such as randomized word order and exchanged relationships among concepts (e.g., "A man is watering a flower." vs. "A man is pruning a flower." as the original pair, with perturbed variants such as "Is man a flower a watering." and "A flower is watering a man."). Despite significant alterations in syntactic structure, the models continue to exhibit high similarity scores, highlighting their reliance on semantic content over syntactic understanding.
The rich semantic information contained in the large-scale corpora on which these models are trained allows them to make reasonably accurate predictions about the similarity between sentence pairs based on shared content words, irrespective of their syntactic structure. Traditional evaluation paradigms in semantic matching tasks often lack sensitivity to syntactic nuances. Specifically, they tend to reward models for correctly identifying semantic content overlap, but do not penalize them adequately for failing to recognize distinct syntactic constructions. As such, a model might receive a high similarity score for two sentences that share similar content but have different syntactic structures, thus effectively ignoring syntax.
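The tree-kernel diversity measure referenced above can be sketched as follows: a Collins-and-Duffy-style subset-tree kernel with the decay factor fixed to 1 and no pruning. Bracketed parses are assumed to come from a constituency parser such as Stanford CoreNLP; the toy trees at the bottom are only illustrations.

```python
# Simplified Collins-and-Duffy-style tree kernel over NLTK constituency trees.
from math import sqrt
from nltk import Tree

def production(node):
    """Label of a node plus the labels/tokens of its immediate children."""
    return (node.label(),
            tuple(c.label() if isinstance(c, Tree) else c for c in node))

def tree_kernel(t1, t2):
    """Count the common subtrees of two parse trees (decay factor fixed to 1)."""
    memo = {}
    def common(n1, n2):
        key = (id(n1), id(n2))
        if key in memo:
            return memo[key]
        if production(n1) != production(n2):
            result = 0
        elif all(not isinstance(c, Tree) for c in n1):  # pre-terminal node
            result = 1
        else:
            result = 1
            for c1, c2 in zip(n1, n2):
                if isinstance(c1, Tree) and isinstance(c2, Tree):
                    result *= 1 + common(c1, c2)
        memo[key] = result
        return result
    return sum(common(a, b) for a in t1.subtrees() for b in t2.subtrees())

def normalized_similarity(t1, t2):
    """Kernel value normalized to [0, 1]; lower values indicate more diverse structures."""
    return tree_kernel(t1, t2) / sqrt(tree_kernel(t1, t1) * tree_kernel(t2, t2))

# Example with toy bracketed parses:
a = Tree.fromstring("(S (NP (DT The) (NN team)) (VP (VBZ has) (VP (VBN won) (NP (DT the) (NN championship)))))")
b = Tree.fromstring("(S (NP (DT The) (NN chef)) (VP (VBD cooked) (NP (DT the) (NN meal))))")
print(normalized_similarity(a, b))
```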
To empirically demonstrate that models can achieve high performance on semantic embedding matching tasks without proper syntactic understanding, we conducted an experiment in which we manipulated the input data to the text embedding models to remove or alter syntactic information while preserving semantic content. The models' performance was then evaluated in terms of their ability to accurately measure semantic similarity on the manipulated data. As presented in Figure 3, we created two variants of perturbations on the STS dataset: one with randomized word order, and another with exchanged relationships among concepts. Both significantly transform the syntactic architecture of the sentences. Each entry in the STS training dataset consists of sentence 1, sentence 2, and a human-annotated semantic similarity score; we denote it as [S1, S2, score]. In this experiment, we fixed sentence 1 and applied the perturbation to sentence 2, so each sentence pair becomes [S1, Perturbed S2, score]. Remarkably, after retraining SBERT on the perturbed input while keeping the original annotations unaltered, we found that the text embedding models continued to yield high similarity scores on the standard STS evaluation set. To illustrate, the correlation score following relational perturbation stood at 85.2%, nearly on par with the original score of 86.2%. This experiment underscores the models' proclivity to latch onto semantic content, often overlooking syntactic structures when adjudicating the similarity between sentences. These findings prompt a more rigorous and discerning evaluation paradigm that factors in both semantic and syntactic elements, paving the way for more holistic and accurate assessments of text embedding models. Simple Solution: Enhancing Syntactic Understanding through Targeted Data Augmentation In light of the observation that contemporary text embedding models tend to exhibit weak syntactic understanding, one simple approach entails enriching the training data with examples that mirror those in the SR benchmark, which is specifically designed to probe syntactic understanding. The process involves creating additional training examples that emulate the two facets of the SR benchmark: structural heuristics and relational understanding among concepts. Such new sentences can be synthesized by altering the original sentences through ChatGPT with designed prompts. Similar to Section 4.1, we take each sentence 1, apply syntactic changes to it, and ask ChatGPT (gpt-3.5-turbo-0301) to score the semantic similarity between the original sentence 1 and its perturbed version, based on the similarity scores with explanations and English examples from (Cer et al., 2017). We manually verified the generation reliability and annotation rationality. Each sentence pair can be written as [S1, Perturbed S1, re-score].
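A minimal sketch of how the [S1, Perturbed S1, re-score] augmentation triples could be assembled. The `perturb` helper stands for the ChatGPT-based generation sketched earlier, and `score_pair` is a hypothetical wrapper around the rubric-based ChatGPT scoring prompt shown in the SR construction section; the word-order shuffle is only an offline stand-in for illustration.

```python
# Sketch of building SR-like augmentation triples from a list of STS sentences.
import random

def shuffle_word_order(sentence: str, rng: random.Random) -> str:
    """Purely local stand-in for the word-order perturbation (the paper uses ChatGPT)."""
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

def build_augmentation_triples(sentences, perturb, score_pair):
    """For each sentence, create a perturbed copy and re-score the pair.

    perturb:    callable str -> str   (e.g. a ChatGPT perturbation prompt)
    score_pair: callable (str, str) -> float   (ChatGPT similarity scoring, 0-5 scale)
    """
    triples = []
    for s1 in sentences:
        perturbed = perturb(s1)
        triples.append((s1, perturbed, score_pair(s1, perturbed)))
    return triples
```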
For our experiment, we adopted SBERT as the base model and conducted experiments by enhancing the STS training dataset with 10,000 SR-like examples for each of the syntactic dimensions (structural heuristics and relational understanding among concepts). We then retrained SBERT on the augmented dataset. We chose microsoft/mpnet-base as our raw model, which is also the base model of the best SBERT text embedding model, all-mpnet-base-v2. We follow the Sentence-BERT training settings (Reimers and Gurevych, 2019): we use the regression objective function, a batch size of 16, 4 training epochs, the Adam optimizer with learning rate 2e-5, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN. We save the best parameters according to the dev set at the end of each epoch. At evaluation time, we compute the cosine similarity between the sentence embeddings. Figure 4 presents the results. We can observe that the incorporation of the augmented training examples led to a substantial improvement in the performance on the SR evaluation set, especially in the aspect of relational understanding. Specifically, the Spearman's rank correlation coefficient for relational understanding witnessed a remarkable increase from 24.9% to 60.1%. This indicates that the augmented data effectively enabled the model to develop a better grasp of the syntactic relationships between concepts. However, while the performance improvement on the SR evaluation set is significant, it is also important to observe how this augmentation impacts the performance on the original STS test set (STS-B). We noticed a slight decline in the performance on the STS-B test set, with the score decreasing from 86.7% to 84.4% after data augmentation. This slight decrease suggests that while the model became more proficient in understanding complex syntactic structures, it may have marginally compromised on some aspects it had initially learned. This experiment demonstrates the feasibility and potential effectiveness of targeted data augmentation as a means to enhance the syntactic understanding of text embedding models. It also emphasizes the importance of careful data curation and balanced training to ensure that improvements in one area do not come at the expense of performance in another. Future work can focus on refining this approach to optimize the trade-offs and achieve improved performance across different dimensions of syntactic understanding.
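For completeness, a sketch of the retraining setup with the hyperparameters listed above (mean pooling over microsoft/mpnet-base, cosine-similarity regression loss, batch size 16, 4 epochs, learning rate 2e-5, 10% warm-up). It uses the sentence-transformers fit API; `train_rows`, the sequence length, and the output path are placeholders/assumptions rather than details taken from the paper.

```python
# Sketch of retraining SBERT on the augmented data with sentence-transformers.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses, InputExample

word_emb = models.Transformer("microsoft/mpnet-base", max_seq_length=128)
pooling = models.Pooling(word_emb.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word_emb, pooling])

# train_rows: iterable of (sentence1, sentence2, score) with score on a 0-5 scale
train_examples = [InputExample(texts=[s1, s2], label=score / 5.0)
                  for s1, s2, score in train_rows]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # the regression objective

epochs = 4
warmup_steps = int(0.1 * epochs * len(train_loader))  # linear warm-up over 10% of steps
model.fit(train_objectives=[(train_loader, train_loss)],
          epochs=epochs,
          warmup_steps=warmup_steps,
          optimizer_params={"lr": 2e-5},
          output_path="sbert-augmented")
```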
Related Work Text Embeddings Text representation learning is a fundamental task in natural language processing. In recent literature, contrastive learning (Hjelm et al., 2019; He et al., 2020; Chen et al., 2020) has emerged as the dominant paradigm for training text embeddings (Carlsson et al., 2020; Zhang et al., 2020; Gao et al., 2021; Yan et al., 2021; Ni et al., 2021; Chuang et al., 2022; Neelakantan et al., 2022; Su et al., 2023). These approaches typically involve the use of large pairwise pretraining datasets with rich compositional structures, wherein the model is optimized to distinguish between similar and dissimilar text samples. Contrastive pretraining is geared towards optimizing text embedding matching. As a result, most text embedding models (Reimers and Gurevych, 2019; Gao et al., 2021; Ni et al., 2021) are evaluated on the basis of their performance in semantic similarity matching tasks (Cer et al., 2017; Muennighoff et al., 2023). However, these tasks do not sufficiently encompass the complexity of syntax in natural language, which involves not only the arrangement of words but also their relationships and compositional semantics. Syntax Probing in NLP Syntactic probing seeks to understand to what extent the learned representations in NLP models capture syntactic information and how such information can be effectively extracted and analyzed. There have been numerous studies focused on this topic, each offering unique insights and methods for syntactic generalization (Dyer et al., 2016; Linzen et al., 2016; Hupkes and Zuidema, 2017; Conneau et al., 2018; Lin et al., 2019; McCoy et al., 2019; Shen et al., 2020; Newman et al., 2021). For example, Conneau et al. (2018) assess the syntactic generalization of modern sentence encoders through a series of selected downstream tasks. Their approach is more geared towards understanding what properties are encoded in the sentence embeddings and how these embeddings can be used for various NLP tasks. On the other hand, McCoy et al. (2019) developed the HANS dataset to specifically examine the performance of Natural Language Inference (NLI) models, particularly focusing on whether these models are adopting superficial syntactic heuristics over a deeper semantic understanding. Their work is especially relevant in the context of NLI tasks and aims to discern the methods that models use to arrive at decisions. In contrast, our work aims to construct a benchmark for directly evaluating the syntactic capabilities of text embedding models, without being confined to a specific task like NLI. The primary goal is to facilitate the selection of robust and effective text embedding models for training in various applications by providing a direct measure of their syntactic understanding. This benchmark thus serves as an essential tool for researchers and practitioners who are looking to employ text embedding models that are both syntactically sound and effective in real-world applications.
Conclusion This paper highlights the shortcomings of current text embedding models in understanding syntax. We introduced the SR benchmark to analyze models across two syntactic dimensions: structural heuristics and relational understanding. Our findings reveal that while these models are adept at semantic tasks, they struggle with syntax. We proposed a data augmentation technique using examples tailored for syntactic understanding, leading to notable performance gains on the SR benchmark. However, a slight performance dip was observed on the original test set. Future work could refine augmentation strategies to balance syntactic and semantic learning and create more holistic evaluation benchmarks. Limitation One limitation is that the SR benchmark may not comprehensively cover all the intricacies of natural language syntax. Real-world text data can be vastly more complex and varied. Also, the enhancement in syntactic understanding was primarily based on simple data augmentations, while a more refined augmentation strategy is needed to ensure that enhancing syntactic understanding does not come at the expense of semantic comprehension. A Lexical Compositionality We also conducted additional research on what we call Lexical Compositionality. It refers to how the individual meanings of words come together to form the composite meaning of the larger linguistic structure in which they are embedded. To evaluate text embedding models' adeptness in lexical compositionality, we create variants of the sentences in the foundation datasets through the insertion or replacement of different types of words: nouns, adjectives, and verbs. The modified sentences retain the basic structure of the originals but feature an additional or changed word, either a noun, adjective, or verb. This method of dataset generation aims to examine whether text embedding models can adapt to and accurately capture the semantic changes brought by these insertions and replacements. Noun: The insertion or replacement of a noun is anticipated to modify the subject matter or the entities referred to within the sentence. For instance, the original sentence, "The cat jumped over the fence," could be transformed into "The cat jumped over the garden fence," by inserting the noun "garden". Similarly, by replacing the noun "cat" with "dog", we rewrite the original sentence as "The dog jumped over the fence". This method allows us to gauge the text embedding models' ability to discern the introduction of new elements within the sentence's subject matter and evaluate their sensitivity and adaptability to these semantic alterations. Adjective: The insertion or replacement of an adjective adds or changes a descriptive aspect of the sentence, and it is imperative for a text embedding model to recognize how these changes affect the sentence's attributes or qualities. For instance, by modifying the sentence "The car is fast." to "The leading car is incredibly fast.", we can test the model's ability to comprehend the enhanced emphasis on the car's position and its speed, and thereby verify the robustness of embedding models. Verb: The insertion or replacement of a verb can modify the actions or states conveyed in the sentence, and altering the verb may radically transform its essence. Thus, it is crucial for text embedding models to detect how this influences the sentence's dynamics. For instance, changing "She reads books."
to "She reads and enjoys books."by inserting the verb "enjoys" examines the model's capability to understand the introduction of an additional action.In a similar way, when the sentence "Two kids are walking on a path in the woods." is transformed into "Two kids are fighting on a path in the woods," through verb replacement, it serves to test the model's proficiency in redirecting its focus to the altered semantic components of the sentence. We follow experiment settings in Section 3. The results are reported in Table 6. B Generated Example Table 7 showcases an example of sentence perturbation. Figure 2 : Figure2: Workflow of SR benchmark construction.The workflow begins with the selection of foundation datasets, followed by the generation of data probing structural heuristics and relational understanding among concepts.Subsequently, the generated data is annotated with a score to represent the semantic similarity of sentence pairs.Finally, the assembled SR benchmark is utilized to evaluate the syntactic understanding capabilities of text embedding models. Figure 4 : Figure 4: Enhancing syntactic proficiency of SBERT through strategic data augmentation. Table 1 : Statistical breakdown of the revision rate for modified sentences in STS. Table 2 : Statistical breakdown of the revision rate for modified sentences in Twitter. Table 3 : Statistical breakdown of the revision rate for modified sentences in CQADupStack. Table 4 : Results of five text embedding models on the SR Benchmark.Spearman's correlation is reported. Table 5 . The lower Table 5 : Comparison of tree kernel similarity between WebQA and NLI corpora.
6,252.4
2023-11-14T00:00:00.000
[ "Computer Science", "Linguistics" ]
Recognition of N-Glycoforms in Human Chorionic Gonadotropin by Monoclonal Antibodies and Their Interaction Motifs* Background: The N-linked oligosaccharides of human chorionic gonadotropin become more complicated in cancer. Results: MCA1024 alters its binding affinity against different human chorionic gonadotropin glycoforms. Conclusion: The N-glycosylations at Asn-13 and Asn-30 on the β subunit are crucial to the binding affinity of MCA1024. Significance: The aberrant glycosylation of cancer biomarkers might potentially be monitored by specific antibodies. The glycosylation of human chorionic gonadotropin (hCG) plays an important role in reproductive tumors. Detecting hCG N-glycosylation alteration may significantly improve the diagnostic accuracy and sensitivity of related cancers. However, developing an immunoassay directly against the N-linked oligosaccharides is unlikely because of the heterogeneity and low immunogenicity of carbohydrates. Here, we report a hydrogen/deuterium exchange and MS approach to investigate the effect of N-glycosylation on the binding of antibodies against different hCG glycoforms. Hyperglycosylated hCG was purified from the urine of invasive mole patients, and the structure of its N-linked oligosaccharides was confirmed to be more branched by MS. The binding kinetics of the anti-hCG antibodies MCA329 and MCA1024 against hCG and hyperglycosylated hCG were compared using biolayer interferometry. The binding affinity of MCA1024 changed significantly in response to the alteration of hCG N-linked oligosaccharides. Hydrogen/deuterium exchange-MS reveals that the peptide β65-83 of the hCG β subunit is the epitope for MCA1024. Site-specific N-glycosylation analysis suggests that N-linked oligosaccharides at Asn-13 and Asn-30 on the β subunit affect the binding affinity of MCA1024. These results prove that some antibodies are sensitive to the structural change of N-linked oligosaccharides, whereas others are not affected by N-glycosylation. It is promising to improve glycoprotein biomarker-based cancer diagnostics by developing combined immunoassays that can determine the level of protein and measure the degree of N-glycosylation simultaneously. Early diagnosis is critical in cancer treatment. Both therapy efficacy and survival rate can be significantly improved if specific and sensitive cancer biomarkers are available to facilitate the diagnosis and prognosis (1,2). As of 2013, the Food and Drug Administration (FDA) has approved approximately 15 biomarkers for monitoring drug response, performing surveillance, or monitoring the recurrence of cancers (3). The majority of these biomarkers are glycoproteins (4). The abnormal glycosylation of proteins has been revealed to participate in the occurrence and development of many diseases, including various cancers (5,6). For example, aberrant prostate specific antigen glycosylation is associated with prostate cancer (7). The fucosylation in α-fetoprotein N-linked oligosaccharides has been proven as an indicator for hepatocellular carcinoma (8). Simultaneously monitoring the level of protein biomarkers and their glycosylation modifications may significantly improve the accuracy of diagnostics. Clinically, the concentration of protein biomarkers in patients' serum or urine is usually measured by an in vitro immunoassay using mAbs (9,10). However, it is difficult to develop mAbs that can specifically bind to aberrant oligosaccharide structures due to the low immunogenicity and the heterogeneity of carbohydrates. hCG is a glycoprotein hormone mainly produced by the placental syncytiotrophoblast cells (11). It can also be secreted by several normal non-placental tissues and certain cancerous tumors (12). The hetero-dimer structure of hCG consists of an α subunit and a β subunit that are held together by non-covalent hydrophobic and ionic interactions (13). The α subunit of hCG consists of 92 amino acid residues with two N-glycosylation sites, Asn-52 and Asn-78. The β subunit consists of 145 amino acid residues with two N-glycosylation sites, Asn-13 and Asn-30, and four O-glycosylation sites, Ser-121, Ser-127, Ser-132, and Ser-138 (14). Additionally, Thr-54 of the α subunit is occasionally O-glycosylated (15). The carbohydrate moiety accounts for ~30% of the total molecular mass of hCG, which is ~36.7 kDa (16). hCG-H is a major glycosylation variant of hCG, in which oligosaccharide chains become more complicated than the normal form (17). hCG-H has been reported to contain larger fucosylated sialyl-N-acetyllactosamine tri-antennary oligosaccharides at the two N-glycosylation sites on the β subunit and abnormal tri-antennary oligosaccharides with the α1,3 antenna terminating in mannose at the two N-glycosylation sites on the α subunit (18). In addition, hCG-H has a particular double molecular size O-linked oligosaccharide when compared with the normal trisaccharide structures (19). The roles of hCG-H include promoting the invasion and growth of choriocarcinoma and reproductive cancer cells, as well as driving malignancy in these cancers (20).
Extensive efforts have been made to develop mAbs that can distinguish the level of hCG-H from the total hCG in human serum or urine to improve the diagnostic specificity and accuracy for various cancers, including choriocarcinoma, placental site trophoblastic tumors, and testicular germ cell tumors (21). The mAb B152, recognizing core 2 type O-glycosylation at Ser-132 of the hCG ␤ subunit, has been developed to detect this abnormal hCG glycosylation isomer in Down syndrome pregnancies and trophoblastic disease (22)(23)(24). The mAb CTP104 has been reported to recognize the O-linked oligosaccharides at Ser-138 in the C-terminal portion of the hCG ␤ subunit, binding to both the core 1 and the core 2 type glycoforms (22). Efforts to develop an anti-hCG mAb that can specifically bind to the aberrant N-linked oligosaccharides have been unsuccessful due to the low immunogenicity of carbohydrates. However, the more branched and elongated N-linked oligosaccharides may change the spatial structure of hCG-H and affect its binding affinity to certain mAbs, especially ones with epitopes adjacent to the N-glycosylation sites. In this study, the HDX coordinated with electrospray (ESI)-MS epitope mapping technique was applied to discovering the spatial relationship between the N-glycosylation sites of hCG and epitopes of antibodies with different binding affinities to normal and aberrant hCG N-glycoforms. We demonstrated that the binding affinity of mAb MCA1024 to hCG was significantly affected by the N-glycosylation patterns, whereas mAb MCA329 exhibited no obvious changes in binding affinity to hCG with different N-linked oligosaccharides. The structural motif of the distinct hCG antigen-antibody interaction patterns in relation to N-glycosylation was discussed. These findings have important implications for monitoring hyper-N-glycosylated hCG and diagnosing choriocarcinoma, germ cell tumors, and other diseases in which alterations to hCG N-glycosylation may be involved. Experimental Procedures Materials-hCG purified from the urine of pregnant women (purity Ͼ98%) was purchased from United States Biological (Salem, MA). The recombinant hCG ␤ subunit expressed in Pichia pastoris was purchased from Sigma-Aldrich. Peptide-Nglycosidase F (PNGase F) was from ProZyme (San Leandro, CA). Pepsin from porcine gastric mucosa and Trypsin Gold were purchased from Promega (Madison, WI). The porous graphitized carbon (PGC) solid-phase extraction columns were purchased from Alltech (Columbia, MD). The anti-hCG mAbs, MCA329 and MCA1024, were purchased from AbDSerotec. CNBr-activated Sepharose 4B was purchased from GE Healthcare. Acetonitrile (HPLC grade), methanol (HPLC grade), water (HPLC grade), and formic acid (certified ACS grade) were purchased from Fisher Scientific. Vinylpyridine, D 2 O (purity Ͼ99.9%), and tris(2-carboxyethyl) phosphine hydrochloride (purity Ն98%) were purchased from Sigma-Aldrich. Ultra-centrifugal membranes were obtained from Millipore (Billerica, MA). The protein silver stain kit was purchased from Cwbiotech. All other reagents and chemicals were of the highest quality available. Purification of hCG-H from Urine Samples of Patients-hCG-H was purified from four female patients between the ages of 23 and 40 years who were clinically diagnosed with invasive mole. Early morning urine (ϳ400 -600 ml from each patient) was collected in sterile wide-mouthed containers and stored in an ice bath. NaN 3 was immediately added to the urine samples to a final concentration of 0.5 g/liter. 
The samples were clarified using filter paper to remove precipitates and further filtered using a disposable filter unit with 0.22-m pore size. The protein fraction containing hCG was separated by centrifugation with ultra-centrifugal filtration units (molecular weight cut-off 10 kDa) and then purified by immuno-affinity chromatography, following the protocol described by Valmu et al. (25). Briefly, the mAb was immobilized onto the CNBr-activated Sepharose 4B according to the manufacturer's instructions. The column was equilibrated with 50 mM sodium phosphate (pH 7.4). The sample was applied to the affinity column and equilibrated at room temperature for 2 h. The column was washed with 3 volumes of 10 mM ammonium acetate (pH 4.5), and the bound protein was then eluted with 10 ml of 3 M acetic acid. The eluate was immediately neutralized with NH 4 OH aqueous solution, and the buffer was changed to PBS using ultra-centrifugal filtration units (molecular weight cut-off 10 kDa). SDS-PAGE Analysis-SDS-PAGE was performed using 12% (w/v) separating gel and 5% (w/v) stacking gel according to the protocol described by Laemmli (26). Each sample (ϳ1 g) was dissolved with 8 l of water and then mixed with 2 l of loading buffer containing 2% (v/v) ␤-mercaptoethanol as a reducing agent. The samples were boiled for 2 min. Electrophoresis was performed on a Mini-PROTEAN Tetra system (Bio-Rad). The initial voltage was kept at 80 V until the samples entered into the separating gel. Then, it was increased to a constant voltage of 120 V. The bands were visualized by silver stain. N-Linked Oligosaccharide Profiling of hCG Glycoforms-The N-linked oligosaccharides of hCG samples from normal preg-nant women and invasive mole patients were released by PNGase F. First, 15 g of each sample was mixed with 2.5 l of denaturation buffer, 2.5 l of Nonidet P-40, and 2 l of PNGase F solution (5 units/ml). The mixture was incubated at 37°C for 72 h. The released N-linked oligosaccharides were recovered using PGC columns and dried by vacuum centrifuge (Eppendorf, Hamburg, Germany). The N-linked oligosaccharide samples were then reconstituted with 10 l of 10 mM NH 4 HCO 3 solution. LC-MS analysis was performed on an 1100 capillary LC system (Agilent) coupled with an LTQ-Orbitrap Velos Pro mass spectrometer (Thermo Scientific). A Hypersil Hypercarb Kapp PGC column (Thermo Scientific, 0.32 mm ϫ 100 mm, 5 m) was used to separate the N-linked oligosaccharides. Mobile phase A was 10 mM NH 4 HCO 3 in 2% aqueous acetonitrile, and mobile phase B was 10 mM NH 4 HCO 3 in 85% aqueous acetonitrile. A step gradient of 0 -60% B, 0 -60 min; 60% B, 60 -75 min; 60 -100% B, 75-90 min; and 100% B, 90 -105 min was used. The flow rate was 6 l/min. The column temperature was maintained at 35°C. The mass spectrometer was operated in the positive ion mode with the following parameters: sheath gas flow rate, 15 arbitrary units; auxiliary gas flow rate, 5 arbitrary units; spray voltage, 4.5 kV; capillary temperature, 275°C; and S-lens level, 69%. The mass acquisition range was set at 500 -2000, and the resolution of the orbitrap was set at 60,000 (m/z ϭ 400). Kinetic Analysis of the Interaction between hCG Glycoforms and Anti-hCG mAbs-The binding affinity of hCG glycoforms against anti-hCG mAbs was measured using an Octet-RED 96 biolayer interferometer (FortéBio). MCA329 and MCA1024 were biotinylated and then immobilized on the streptavidincoated biosensors according to the manufacturer's instructions. The biosensors were pre-wetted with PBS buffer. 
A 60-s washing step was performed, followed by the association step, which was performed for 60 -150 s depending on the situation. Finally, the dissociation step was performed for 300 s. Data were generated and processed by the Octet-RED User Software (version 3.1). Epitope Mapping Using HDX-ESI-MS-The recombinant hCG ␤ subunit was used to determine the epitopes of MCA329 and MCA1024. The stock solutions of hCG ␤ subunit without mAbs or with equivalent moles of mAbs were prepared at the concentration of 40 M in 50 mM sodium phosphate buffer (pH 7.8). A 5-l aliquot from a stock solution was mixed with 45 l of 50 mM sodium phosphate in D 2 O (pH meter reading 7.8) to initiate each HDX period. A 5-l aliquot of hCG ␤ subunit solution was diluted with 45 l of 50 mM sodium phosphate in water (pH 7.8) as an unexchanged control. All HDX reactions were performed at 20°C for 10 min and then quenched by adding 25 l of 200 mM tris(2-carboxyethyl) phosphine hydrochloride solution in 1% formic acid. Then, 25 l of 1 g/l pepsin solution in 1% formic acid was added immediately, and proteolysis was performed at 20°C for 5 min. On-line ESI-MS analysis was performed on a Prominence LC-20A system (Shimadzu, Kyoto, Japan) coupled with an LTQ-Orbitrap Velos Pro mass spectrometer (Thermo Scientific). The proteolytic products were injected onto a Hypersil ODS2-C 18 column (Dalian Elite Analytical, 4.6 mm ϫ 20 mm, 5 m) and eluted with a step gradient of 5% mobile phase B for 1.5 min; from 5 to 45% B over a period of 6.5 min; from 45 to 85% B over a period of 0.5 min; and 85% B for 1.5 min. Mobile phases A and B were 1% formic acid in 2 and 98% aqueous acetonitrile, respectively. The flow rate was 0.6 ml/min, and one-sixth of the fluids flew into the MS interface, controlled by a post-column splitter. The operation was maintained at 0°C to prevent the back exchange of deuterium to hydrogen. ESI-MS analysis was set in the positive ion mode with the following parameters: source voltage at 4.5 kV, sheath gas flow rate at 12 arbitrary units, auxiliary gas flow at 3 arbitrary units, capillary temperature at 275°C, and S-lens level at 69.8%. The mass acquisition range was set at 350 -1500, and the resolution of the orbitrap was set at 100,000 (m/z ϭ 400). Data Processing-A database containing all theoretical pepsin-digested hCG ␤ subunit peptides was generated. The experimental monoisotopic m/z values corresponding to all peptides in unexchanged form were input into the database to look for matches. The deuterium levels of each assigned peptides were then determined using the HX-Express software (27). Site-specific N-Glycosylation Analysis-hCG, hCG-H #2, and hCG-H #3 were reduced and alkylated according to the method described by Toll et al. (28). Briefly, samples (ϳ10 g) were denatured with 6 M guanidine hydrochloride, 0.5 M Tris-HCl, and 2.75 mM EDTA and reduced with 0.01 M dithiothreitol for 2 h at 37°C. The samples were then alkylated by 0.04 M vinylpyridine for 30 min at 37°C. The buffer was subsequently changed to 25 mM NH 4 HCO 3 using ultra-centrifugal filtration units (molecular weight cut-off 3 kDa). Trypsin digestion was performed at 37°C overnight with a 25:1 protein-to-protease ratio. N-Deglycosylation of hCG-H in Native Condition-For this purpose, 60 g of hCG-H #1 (ϳ1 g/l) was digested with 1.5 l of PNGase F solution at room temperature for 5 h. 
Purification of hCG-H from Urine of Cancer Patients-hCG-H is the abnormal glycosylated variant of hCG, which can be found in the blood and urine of patients with a variety of reproductive cancers. Invasive mole disease is a type of reproductive cancer, and the patients are usually treated with chemotherapy. In this study, hCG-H samples were purified from the urine of invasive mole patients. The patients were diagnosed with invasive mole disease based on pathology tests showing placental villi. The villi had extensive stromal edema with central cisterns, and the trophoblast proliferated circumferentially. Three hCG-H samples were prepared. hCG-H #1 was prepared from the pooled urine of patient 1 and patient 2. hCG-H #2 and #3 were prepared from the urine of patient 3 and patient 4, respectively. The purity and glycosylation were evaluated by reducing SDS-PAGE, using the regular hCG from urine of normal pregnant women as a reference. As shown in Fig. 1, the hCG showed two major bands representing the α subunit (the lower band) and the β subunit (the upper band). The β subunit of hCG-H exhibited the same molecular masses but smeared to a much broader degree than in hCG, indicating that the glycosylation pattern of hCG-H was more complicated than the normal form. Characterization of N-Linked Oligosaccharides of hCG and hCG-H-It has been reported that hCG-H possesses a different N-glycosylation profile from normal hCG, as the percentage of more complicated tri- and tetra-antennary N-linked oligosaccharides is significantly larger (29). The overall N-linked oligosaccharide composition of hCG-H #1 was analyzed by LC-MS and compared with normal hCG. A total of 19 N-linked oligosaccharides were detected from hCG and hCG-H #1, eluting from approximately 30.0 to 50.0 min on the PGC column. The structures of the N-linked oligosaccharides of hCG have been reported previously (28). We used high-resolution MS to confirm the identities of these oligosaccharides and sorted them into bi-, tri-, and tetra-antennary structures. The LC-MS data from 30.0 to 50.0 min were averaged to one MS spectrum, and the monoisotopic peak intensity of each assignable ion was recorded. The proportion of each specific oligosaccharide was then calculated using the sum of signals of its monoisotopic peaks with all charge states divided by the total peak intensity of all monoisotopic peak ions related to the 19 N-linked oligosaccharides. The N-glycosylation profiles of hCG-H #1 and hCG were compared by plotting the logarithms (log2) of the ratios of the proportion of each oligosaccharide in hCG-H to the proportion in hCG (Fig. 2). Although the 19 N-linked oligosaccharides were detected in both hCG and hCG-H #1, their relative abundances were significantly different. The proportions of 9 N-linked oligosaccharides in hCG-H #1 were more than 2-fold larger than in hCG. Most of them belong to the complicated tri- and tetra-antennary structures. The result showed that the hCG-H #1 purified from the cancer patients was hyperglycosylated, which was consistent with previous studies (25). BLI Measurements of hCG Glycoforms and Anti-hCG mAb Interaction-Binding kinetics and affinity are key characteristics for evaluating antigen-antibody interaction. Label-free techniques, such as SPR and BLI, provide rapid and real-time approaches to characterizing protein-protein interaction.
The BLI technique has become increasingly popular because the samples can be easily recovered after measurement, which is of particular importance when limited amounts of biological samples are available, as in the case of the biomarker glycoproteins purified from the fluids of cancer patients. A number of biosensors were screened in preliminary studies, including the Anti-Mouse Fc Capture sensor, protein A sensor, and streptavidin sensor. The Anti-Mouse Fc Capture and protein A sensors capture mAbs through the Fc portions and require no derivatization of the mAbs, but significant nonspecific adsorption was observed at the initial baseline measurement step. Minimal nonspecific interaction was achieved using biotinylated mAbs bound to the streptavidin sensor. There are many commercially available anti-hCG mAbs against more than 17 different epitopes. The β2 and β4 epitopes are located at the β loop 3 region of hCG, where there are three N-glycosylation sites (Asn-52 of the α subunit, and Asn-13 and Asn-30 of the β subunit) nearby. MCA329 against the β2 epitope and MCA1024 against the β4 epitope were selected to interact with different hCG glycoforms. Both mAbs were biotinylated, immobilized onto streptavidin biosensors, and used to assay hCG and hCG-Hs (Fig. 3). The affinity constants (KD) were calculated using the FortéBio Octet QK software and are presented in Table 1. The binding affinities of MCA329 showed no obvious change from hCG (KD value 0.64 nM) to hCG-Hs (averaged KD value 1.08 nM). In contrast, the KD value of MCA1024 increased dramatically when the N-linked oligosaccharides of hCG were altered. Epitope Mapping by HDX-MS-The HDX-MS approach is very useful in investigating the peptide domains that participate in protein-protein interaction in aqueous solution. hCG, FSH, luteinizing hormone, and thyroid-stimulating hormone all belong to the hetero-dimeric glycoprotein hormone family. They share identical α subunits but are distinguished from each other by the β subunits. The HDX-MS analysis of the hCG-mAb complex was unsuccessful, mainly due to the resistance of the intact hCG to the protease. Instead, the recombinant hCG β subunit was used because the epitopes of both MCA329 and MCA1024 are located at the β subunit. To optimize the HDX-MS experiment, the pepsin digestion of the hCG β subunit in the absence of mAb was performed first. A peptide sequence coverage of 86.9% was achieved after the incubation, digestion, and MS conditions were carefully adjusted (Fig. 4). The entire β loop 3 region was recovered within the 12 mapped peptides. The deuterium exchange of hCG backbone amides in the presence of mAbs was then performed in the same environment. The proteolytic peptides were identified using in-house developed software, and the deuterium exchange rate was calculated using the HX-Express software. The ΔDeuterium exchange rate of each peptide was expressed using Equation 1, ΔDeuterium exchange rate = (D Complex - D Control)/H Backbone amides, where D Complex and D Control are the deuterium uptake values of each peptide derived from the antigen-antibody complexes and the antigen-only control, respectively, and H Backbone amides is the total number of backbone amides of the corresponding peptide. The ΔDeuterium exchange rates of all 12 peptides were plotted in Fig. 5. The majority of the peptides exhibited very small changes between the absence and presence of mAbs, as their ΔDeuterium exchange rates remain close to the horizontal axis.
However, ␤65-77, ␤78 -83, and ␤116 -133 showed a substantial ⌬Deuterium exchange rate decrease when the hCG ␤ subunit was incubated with MCA329. In the case of MCA1024, the ⌬Deuterium exchange rates of ␤65-77 and ␤78 -83 dropped significantly. JOURNAL OF BIOLOGICAL CHEMISTRY 22719 Site-specific N-Glycosylation Analysis-The BLI experiment showed that the binding affinity of anti-hCG mAb MCA329 and MCA1024 responded differently to hCG glycoforms. The HDX-MS experiment revealed that although the two mAbs shared the sequences ␤65-77 and ␤78 -83 as parts of their epitopes, they bound to hCG from different directions. Three N-glycosylation sites, Asn-52 on the ␣ subunit and Asn-13 and Asn-30 on the ␤ subunit, were spatially adjacent to the epitope peptides (30). Site-specific analysis revealed that these three sites possessed different N-glycosylation patterns. Asn-52 was not hyperglycosylated, whereas Asn-13 and Asn-30 had more tetra-antennary oligosaccharides than normal hCG (Fig. 6). Confirmation of the Involvement of N-Linked Oligosaccharides in the Interaction between hCG and Anti-hCG mAbs-To confirm that the N-linked oligosaccharides played a critical role in the interaction between hCG and anti-hCG mAbs, N-deglycosylated hCG-H (hCG-deN) was prepared by cleaving its N-linked oligosaccharides using PNGase F under mild conditions. Kinetic measurement of the interaction between hCG-deN and the two mAbs, MCA329 and MCA1024, was performed using the same procedure described under "Kinetic Analysis of the Interaction between hCG Glycoforms and Anti-hCG mAbs." The results are presented in Fig. 7, and all K D values are summarized in Table 1. The K D values of MCA329 against three different hCG glycoforms, hCG, hCG-H #1, and hCG-deN, were barely altered. In contrast, the binding affinity of MCA1024 was affected dramatically by the N-linked oligosaccharides. Its K D values were 0.001 nM against hCG and 0.28 nM against hCG-H #1. However, when the N-linked oligosaccharides were removed from hCG-H #1, the K D value decreased back to the equivalent level of normal glycosylated hCG. Discussion We have described herein the distinct responses of two anti-hCG mAbs to the different glycoforms of hCG. The binding affinities of mAbs MCA329 and MCA1024 to normal hCG, hCG-H, and hCG-deN were measured by BLI technology. Interestingly, the K D value increased significantly from hCG to hCG-H for MCA1024. After the removal of N-linked oligosaccharides from hCG-H, the K D value decreased accordingly. Meanwhile, MCA329 showed almost no change in its K D when bound to normal hCG, hCG-H, and hCG-deN. To elucidate the mechanism of the two different binding behaviors of MCA329 and MCA1024, their interacting epitopes were determined using HDX-MS technology. Seventeen different epitopes on hCG have been described as ␣1 to ␣6, ␤1 to ␤7, and c1 to c4 based on a mAb panel (31). The MCA329 was against the ␤2 epitope, whereas MCA1024 was against the ␤4 epitope, according to the manufacturer's specifications. However, the exact amino acid sequences comprising these epitopes were not yet clear. Early studies based on mutating specific amino acid residues showed that substitutions of Arg-68, Arg-74, Gly-75, and Val-78 completely abolished the binding of mAbs against the ␤2 and ␤4 epitopes. Moreover, the single substitution of Glu for Arg-68 also abated the binding affinity of mAbs against the ␤2 and ␤4 epitopes (32). 
These 4 amino acid residues most likely contribute to the epitope clusters β2 and β4, which is consistent with our HDX-MS results. The deuterium uptake of peptide β65-77, containing Arg-68, Arg-74, and Gly-75, and peptide β78-83, containing Val-78, decreased when MCA329 and MCA1024 were present. MCA329 and MCA1024 shared the epitopic sequence β65-83, but the deuterium uptake of peptide β116-133 decreased only when MCA329 was present. Previous studies showed that the nicked form of the hCG β subunit lacking the sequence β109-145 could still bind to mAbs against the β2 epitope (34), suggesting that peptide β116-133 was not a part of the epitope for MCA329. The peptide β116-133 was located at the C terminus of the hCG β subunit and was assumed to be a random, non-folded component (13,30,33). It is likely that MCA329 hindered the exposure of the random C-terminal tail to the deuterium solution when binding to hCG. Meanwhile, MCA1024 approached the antigen from a different direction and did not interfere in the movement of the C-terminal tail.
FIGURE 5. Epitope mapping and spatial relationship between epitope and N-glycosylation site. A, ΔDeuterium exchange rates of hCG β subunit-derived peptides. The blue line is from the hCG β subunit incubated with MCA329, and the red line is from the subunit incubated with MCA1024. B, the schematic diagram of MCA329 and MCA1024 interacting with hCG. MCA1024 was assumed to bind epitope loop 3 (amino acid residues 63-85, in red) from the same side of the N-linked oligosaccharides at Asn-13 and Asn-30 of the hCG β subunit, whereas MCA329 was assumed to bind the same epitope but on the opposite side from these two N-linked oligosaccharides. MCA329 also hindered the random movement of the unfolded C-terminal peptide because the ΔDeuterium exchange rate of peptide 116-133 (in yellow) was decreased. The hCG structure is from Protein Data Bank (PDB): 1HCN (13) and 1HRP (30).
FIGURE 6. Site-specific N-glycosylation analysis of hCG and two hCG-H samples. Site-specific N-linked oligosaccharides of hCG, hCG-H #2, and hCG-H #3 were characterized by C18 reverse phase nanoLC coupled with nanoESI-MS. A, the total ion chromatogram of hCG-H #2. Thirteen glycopeptides linked by tetra-antennary oligosaccharides were detected. B, the mass spectra of tetra-antennary N-linked oligosaccharide-attached glycopeptides in hCG-H #2. The glycopeptides were named by their attachment position and structures. For example, β13-N4,1,F represents the N-linked glycopeptide with a tetra-antennary, mono-sialylated, fucosylated N-glycan linked to the Asn-13 residue of the β subunit. C, the log2(Ratio hCG-H/hCG) was plotted. The tetra-antennary oligosaccharides linked to Asn-13 and Asn-30 of the β subunit in hCG-H #2 and hCG-H #3 increased significantly when compared with hCG. In contrast, the relative amount of tetra-antennary oligosaccharides linked to Asn-52 of the α subunit decreased.
The three-dimensional structure of hCG has been partially generated by different research groups using x-ray crystallography (13,30,33). Unfortunately, they were unable to crystallize natural hCG due to the heterogeneity of the eight carbohydrate side-chains. The four O-linked oligosaccharides accompanying the 34 amino acid residues (β112-145) from the C-terminal of the hCG β subunit had to be removed to successfully crystallize the complicated glycoprotein.
The x-ray crystallography structure of hCG revealed that three N-glycosylation sites were spatially close to the epitope loop ␤65-83 (30). Asn-52 of the hCG ␣ subunit was on one side of the loop, whereas Asn-13 and Asn-30 of the ␤ subunit were on the other side. The ␣ subunit is common among the hetero-dimeric glycoprotein hormone family, whereas the ␤ subunit is unique to hCG. Site-specific analysis also revealed that only the two N-glycosylation sites on the ␤ subunit were hyperglycosylated. MCA1024 was assumed to bind the epitope loop from the same side as the Asn-13 and Asn-30 linked oligosaccharides, as its binding affinity decreased significantly when the N-linked oligosaccharides became more complicated (Fig. 5). The antigen-antibody interaction might have been impeded by either steric hindrance from the more branched antennas of the N-linked oligosaccharides at Asn-13 and Asn-30 or stronger static repulsion of the sialic acid residues at the end of each antenna. This assumption was proved by the restoration of the binding affinity of MCA1024 after the N-linked oligosaccharides were removed from hCG-H. The structural motif of hCG antigen-antibody interaction and the influence of N-linked oligosaccharides on the binding affinities of hCG to different mAbs have not previously been studied in depth. In most cases, such as x-ray crystallography analysis, the heterogeneous carbohydrate moiety of hCG must be removed first. In this study, we demonstrated the use of HDX-MS to reveal the binding domains of two different mAbs on hCG in aqueous solution. We found that MCA1024 was sensitive to changes in hCG glycosylation status. In other words, it was able to recognize different glycoforms of hCG. The aberrant modification of hCG glycosylation is a critical indicator of many placental and germ cell-originated tropho-blastic tumors. Clinically, an immunoassay that can sensitively, accurately, and rapidly monitor the status of patients' hCG glycosylation is in high demand. Based on the findings described herein, it is conceptually feasible to develop an immunoassay combining two distinct anti-hCG mAbs. For example, MCA329 can be used to determine the level of total hCG regardless of glycosylation. Simultaneously, MCA1024 can be used to measure the degree of N-glycosylation modification. Moreover, the change in N-glycosylation is a common feature in the development and metastasis of many cancers, such as lung cancer, liver cancer, breast cancer, prostatic cancer, and so on. Thus, the methods described herein may broaden the application of many existing biomarkers for malignancy and improve their diagnostic power. The recognition of N-glycosylation alterations of specific glycoprotein biomarkers for these cancers using the strategy developed in this study is currently under investigation. Author Contributions-Lianli Chi and Q. Z. designed the study. D. L. performed all LC-MS and HDX-ESI-MS experiments. P. Z. selected the patients and prepared samples. F. L. performed the BLI experiment. Lequan Chi designed the data processing programs. D. Z. contributed to Fig. 5.
7,271.6
2015-08-03T00:00:00.000
[ "Biology" ]
Challenges and strategies for in situ endothelialization and long-term lumen patency of vascular grafts Vascular diseases are the most prevalent cause of ischemic necrosis of tissue and organ, which even result in dysfunction and death. Vascular regeneration or artificial vascular graft, as the conventional treatment modality, has received keen attentions. However, small-diameter (diameter < 4 mm) vascular grafts have a high risk of thrombosis and intimal hyperplasia (IH), which makes long-term lumen patency challengeable. Endothelial cells (ECs) form the inner endothelium layer, and are crucial for anti-coagulation and thrombogenesis. Thus, promoting in situ endothelialization in vascular graft remodeling takes top priority, which requires recruitment of endothelia progenitor cells (EPCs), migration, adhesion, proliferation and activation of EPCs and ECs. Chemotaxis aimed at ligands on EPC surface can be utilized for EPC homing, while nanofibrous structure, biocompatible surface and cell-capturing molecules on graft surface can be applied for cell adhesion. Moreover, cell orientation can be regulated by topography of scaffold, and cell bioactivity can be modulated by growth factors and therapeutic genes. Additionally, surface modification can also reduce thrombogenesis, and some drug release can inhibit IH. Considering the influence of macrophages on ECs and smooth muscle cells (SMCs), scaffolds loaded with drugs that can promote M2 polarization are alternative strategies. In conclusion, the advanced strategies for enhanced long-term lumen patency of vascular grafts are summarized in this review. Strategies for recruitment of EPCs, adhesion, proliferation and activation of EPCs and ECs, anti-thrombogenesis, anti-IH, and immunomodulation are discussed. Ideal vascular grafts with appropriate surface modification, loading and fabrication strategies are required in further studies. Introduction Vascular diseases are the most prevalent cause of ischemic necrosis of tissue and organ, which has attracted much attention [1]. Vascular defect caused by trauma or underlying diseases like diabetes can reduce oxygen and nutrients supply for tissues and organs, which may result in severe consequences, like claudication, sores, organ disfunctions, necrosis, or even death [2,3]. When long-segment defects occurred or the defects happened in vital organs like heart, artificial vascular grafts are required to restore blood supply for tissues. Synthetic vascular grafts have been widely utilized in clinics as conventional strategies for vascular impairment, like polyurethane, polyester, expanded polytetrafluoroethylene (ePFTE), and etc., with diameter greater than 6 mm [4]. However, these synthetic grafts have long-term risk since they are prone to intimal hyperplasia (IH) and thrombogenesis, and result in implantation failure [5], particularly for small diameter vascular grafts (diameters less than 4 mm) [6]. Hence, ideal vascular grafts are required to imitate the framework and constitution of native vessels, as well as inhibit protein deposition, blood coagulation, and immunological rejection [7,8]. To construct a biomimetic vascular graft, it is indispensable to figure out the critical factors and challenges in graft development. It has been widely recognized that endothelialization is critical for blood contacting devices [9,10]. 
The endothelium, the inner tunica consisting of a monolayer of endothelial cells (ECs) lining the vessel lumen, is in direct contact with blood and plays an important role in maintaining vascular hemostasis and patency by releasing regulatory molecules including nitric oxide (NO), heparins, and plasmin [9]. Loss of the endothelium layer may lead to a cascade of pathological reactions, such as thrombogenesis, inflammatory reactions, and smooth muscle cell (SMC) hyperplasia [11,12]. Thus, endothelium regeneration is crucial for vascular grafts. In conventional tissue-engineered vascular grafts (TEVGs), ECs are cultured and seeded on scaffolds prior to implantation, which is called in vitro endothelialization [13]. The proliferative ability of in vitro cultured ECs is limited, so stem cells with greater stemness have been applied, such as endothelial progenitor cells (EPCs), induced pluripotent stem cells (iPSCs), and mesenchymal stem cells (MSCs) [14][15][16][17]. However, the viability, bioactivity and stability of seeded cells after implantation cannot be guaranteed, and the clinical application of this strategy is limited by its poor effectiveness and practicality [18,19]. Moreover, in vitro cell culture consumes more time and cost and carries a greater risk of contamination. In situ endothelialization, in which a healthy endothelium is regenerated on the surface of the vascular graft directly after implantation, is more effective than in vitro endothelialization [20,21]. Early strategies focused on attracting cells from anastomotic regions, but the poor proliferative ability of ECs limits long-term expectations. Thus, the mobilization, recruitment and homing of EPCs from peripheral blood and bone marrow have attracted much attention [22,23]. Furthermore, ideal in situ endothelialization requires more attention to biomaterial type, surface modification and factor release to regulate cell performance, in which enhanced adhesion, orientation, proliferation and activation of ECs and EPCs on the graft surface are required. For long-term patency, thrombogenesis is the key factor leading to vascular occlusion. Vascular grafts, as foreign bodies directly exposed to blood flow in the vasculature, readily cause protein deposition and provoke thrombogenesis [24,25]. The aggregation of insoluble fibrin, platelets, and red cells induces the coagulation cascades and results in thrombus formation [26][27][28]. ECs serve as the first line of defense against thrombogenesis. ECs can release and control key molecules, including tissue plasminogen activator (tPA), anti-thrombins, and plasmin, to modulate the anti-thrombogenic process for vascular grafts, and these molecules are also potential therapeutic agents [24,29]. Another high-potential risk is IH. Platelets, inflammatory cells, and SMCs aggregate and release growth factors, causing SMCs to proliferate and migrate uncontrollably into the vascular intima, which adversely affects lumen patency [30]. Furthermore, inflammatory responses induced by vascular graft implantation are crucial in modulating graft development [31,32]. Biomaterial degradation products act as stimuli, activate toll-like receptors (TLRs), and thus induce initial inflammatory responses, which then cause white blood cells, including neutrophils, monocytes, and lymphocytes, to infiltrate from the blood into the implanted scaffold [33][34][35].
Factors released by macrophages, such as tumor necrosis factor-α (TNF-α), TNF-β, and interleukin-1 (IL-1), may then influence the biological behavior of ECs and SMCs, thereby modulating in situ endothelialization and the lumen patency of the vascular graft [36,37]. Maintaining long-term lumen patency for small-diameter vascular grafts is still challenging. Days after graft implantation, proliferation of ECs begins, and simultaneously blood cells, platelets and fibrin deposit onto the foreign graft. Weeks after implantation, more ECs proliferate and adhere onto the graft surface, but no intact EC layer forms, and coagulation cascades may be activated, resulting in thrombogenesis. Months after implantation, SMCs migrate from anastomotic sites and proliferate uncontrollably, thus leading to IH. Meanwhile, inflammatory cells, especially macrophages, can regulate EC and SMC behavior via inflammatory factors. The process is shown in Fig. 1. Multiple strategies can be adopted to address the challenges described above for enhanced long-term lumen patency of vascular grafts (Fig. 2). ECs form the inner endothelium layer and are crucial in preventing coagulation and thrombogenesis. Thus, promoting in situ endothelialization during vascular graft remodeling takes top priority, which requires recruitment of EPCs and the migration, adhesion, proliferation and activation of EPCs and ECs. Surface modification with heparin or hydrophilic polymers can reduce thrombogenesis, and the release of certain drugs can inhibit IH. Additionally, NO and macrophages also play a crucial role in regulating the biological behavior of ECs and SMCs. Thus, in this paper we review and summarize different strategies for promoting in situ endothelialization and inhibiting thrombogenesis and IH for long-term lumen patency of vascular grafts. Homing and adhesion of EPCs and ECs for enhanced in situ endothelialization EPCs, which reside in the bone marrow and circulate in low quantities in the peripheral blood, can differentiate into ECs [22,38]. EPCs have also been applied in cell therapy for the treatment of critical limb ischemia [39]. It has been recognized that EPCs play a critical role in the endothelialization of vascular grafts [40]. However, the quantity of EPCs homing to neovascularization sites is limited. For enhanced in situ endothelialization, the homing of EPCs and recruitment of ECs are vital, including chemotactic effects and the capture and adhesion of cells on the graft surface [41]. Multiple chemokines can be utilized for EPC chemotaxis, and strategies such as nanofibrous structures, biocompatible surfaces with bioactive binding sites and modification with specific molecules can be applied for cell adhesion (Fig. 3). Homing of EPCs by chemokines EPCs have multiple sub-populations and display different markers on their surface. Early EPCs display markers including CD34, CD133 and VEGFR2, which are reduced as cell maturity increases [39,42]. These EPC subpopulations exert synergistic effects on endothelialization [43,44]. Multiple chemokines that can be utilized for EPC homing are summarized in Table 1. The quantity of EPCs in the circulating blood is low, and EPCs from the bone marrow can be mobilized into the peripheral blood for enhanced endothelialization. Multiple growth factors play a role in the chemotaxis of EPCs homing to the region of neovascularization, but the underlying signaling pathways of EPC homing have not been fully elucidated.
Chemokines targeting the CXC families on the EPC surface Stromal cell-derived factor-1α (SDF-1α), which acts as a chemoattractant of the CXC chemokine family, has potential in EPC chemotaxis and recruitment [45][46][47]. SDF-1α can bind to CXCR4 on the hematopoietic stem cell (HSC) surface for stem cell homing, and it has also been reported that SDF-1α can bind to CXCR4 expressed on the EPC surface [48][49][50]. Yu et al. [51] immobilized SDF-1α on a vascular graft and found that SDF-1α immobilization could recruit EPCs and smooth muscle progenitor cells (SMPCs) simultaneously for enhanced in situ endothelialization. The in vivo results indicated that the lumen patency 12 weeks after implantation was 44% for the naked graft, 67% for the heparin-coated graft, and 89% for the SDF-1α/heparin-modified graft. Issa et al. [52] studied the performance of dickkopf-3 (Dkk3), and the results indicated that Dkk3 could interact with the cell surface receptor CXCR7, activate the ERK1/2 and PI3K/AKT signaling pathways, and thus enhance the recruitment and differentiation of EPCs. Chemokines can be directly immobilized on the graft surface; for example, Wang et al. [53] immobilized SDF-1α onto PLLA/PLGA/PLCL vascular scaffolds for EPC homing. Nanoparticles (NPs) are also an attractive carrier for bioactive molecules. He et al. [54] constructed chitosan/fucoidan NPs loaded with SDF-1α to induce EC migration for promoted in situ endothelialization. Chemokines targeting the integrin families on the EPC surface It has been reported that integrins play a role in the homing of EPCs. De Visscher et al. [50] constructed a synthetic graft coated with fibronectin (FN) and SDF-1α, and showed that FN could activate the α4-integrin-VCAM1/FN axis for EPC homing. In particular, integrin β2 has potential in regulating EPC recruitment to ischemic sites [55,56]. Chavakis et al. [55] found that integrin β2 not only induced the adherence of EPCs to ECs and ECM proteins, but also regulated the chemotaxis of EPCs to neovascularization sites. Furthermore, integrin β2 activated by a specific anti-β2-integrin antibody can effectively enhance the homing of EPCs for in situ endothelialization. Integrin-mediated migration of EPCs can be promoted by enhancing the activity of the GTPase Rap1, and Rap1 can be activated by Epac1 [57,58]. Carmona et al. [59] utilized 8-pCPT-2′-O-Me-cAMP to directly activate Rap1 for the recruitment of EPCs, providing a new strategy for promoted homing of EPCs. Moreover, some other molecules have also been found to promote the homing of EPCs, such as the ephrin-B2-Fc chimera [62] and HMGB-1 [60], which target specific receptors on the EPC surface. Markers such as CD34 and CD133 on the EPC surface are not specific and may also be displayed on the hematopoietic stem cell (HSC) surface. The lack of specific markers on the EPC surface for chemokines to target reduces homing efficiency. Moreover, the mechanism underlying chemokine-induced stem cell homing has not been clearly elucidated. Deeper exploration is needed to uncover this mechanism, and more specific chemokines are required to effectively promote the homing and recruitment of EPCs for endothelialization. Adhesion of EPCs and ECs on the graft surface After homing of EPCs, cell adhesion on the graft surface utilizing nanofibrous structures, biocompatible surfaces with bioactive binding sites or modification with specific molecules takes priority (Fig. 3).
Biomimetic nanofibrous scaffolds for enhanced cell adhesion The extracellular matrix (ECM), with its nanoscale construction, is the micro-environment for cell adhesion, proliferation and differentiation, and is essential for the maintenance of cell biological activity [63][64][65]. Biomimetic nanofibrous scaffolds, with a greater surface-to-volume ratio, provide more binding ligands for cell adhesion and biomolecule adsorption [66,67]. Multiple approaches can be applied to obtain nanoscale vascular grafts. Generally, there are three main approaches for nanofiber fabrication in vascular graft construction: self-assembly, phase separation, and electrospinning. Self-assembly is a fabrication process by which individual components spontaneously arrange into hierarchically organized structures supported by non-covalent interactions [68]. This process is ubiquitous in molecular biological behavior [3,69]. Peptide amphiphiles (PAs) have been widely applied in fabricating collagen-like, adaptable nanofibrous biomaterials utilizing the self-assembly strategy [70,71]. Nanofibrous structures developed by self-assembly can simulate the ECM at scales as small as 5-8 nm, but the synthesis process is difficult to control and the production efficiency is relatively low. Thus, the application of this strategy in vascular graft construction is limited [72]. Phase separation can also fabricate nanofibers at sizes similar to native ECM collagens, with various porous structures at the macroscale [73], which can effectively enhance cellular adhesion onto scaffold surfaces [74]. Various biodegradable aliphatic polyesters can be developed into nanofibers using phase separation, with fiber diameters ranging between 50 and 500 nm [72,75,76]. The porosity and pore sizes can be tuned by modulating parameters such as polymer concentration, porogen morphology, gelation temperature and freezing temperature [73]. Compared with self-assembly, phase separation is a relatively simple strategy that does not require specialized techniques. However, its application is restricted by low yield efficiency and a limited range of suitable polymers, making it inappropriate for industrial-scale manufacture [75]. Electrospinning has been considered an appealing strategy to simulate the native ECM because of its simplicity and scalability. To fabricate optimal electrospun scaffolds, fabrication parameters including voltage, solution concentration, distance to the collector, and collection approach can be tailored to obtain the desired fiber orientation, diameter, porosity and mechanical characteristics [77][78][79][80]. Compared with self-assembly and phase separation, a broader spectrum of biomaterials can be produced into nanofibers using electrospinning. Furthermore, convenient preparation methods, abundant biomaterials with electrospinnability and high yield efficiency make the electrospinning strategy accessible for scaffold fabrication on both laboratory and industrial scales [75,81]. Multiple biomaterials have been utilized in electrospinning, including synthetic polymers, natural polymers and hybrid biomaterials. Non-degradable synthetic polymers such as poly(ethylene terephthalate) (PET) [82,83], expanded poly(tetrafluoroethene) (ePTFE) [84][85][86][87], and polyurethane (PU) [5,88,89] have ideal electrospinnability and outstanding mechanical characteristics.
Non-degradable biomaterials can be implanted for long-term application, but an excellent tissue-engineered graft should possess appropriate biodegradability to minimize inflammatory reactions. Degradable biomaterials are favorable for the adhesion and proliferation of ECs [90,91]. Biodegradable synthetic polymers such as poly(ε-caprolactone) (PCL) [92,93], PGA [94], and PLGA [95], and natural polymers such as collagen [96,97], elastin [98], silk fibroin [99], and gelatin [100], are also applied for vascular graft fabrication. Natural polymers have ideal biocompatibility, but insufficient mechanical properties. An appealing approach to overcome these disadvantages is to blend natural polymers possessing outstanding biocompatibility with synthetic polymers possessing adequate mechanical properties, for example collagen and PEG [96], elastin and PCL [98], and gelatin and PVA [100]. For the construction of a biomimetic vessel structure, a layer-by-layer (LBL) strategy can be applied in vascular graft fabrication, for example silk fibroin as the inner layer, hydrogel as the medial layer, and TPU nanofibers as the outer layer, to simultaneously obtain mechanical strength and biological activity and to simulate the natural vessel structure [101]. Biocompatible surface with bioactive binding sites for enhanced cell adhesion Synthetic polymers can provide sufficient mechanical strength in vascular graft construction, but inadequate bioactivity, since they lack sufficient cellular recognition sites for cell adhesion [102]. Natural polymers with outstanding biocompatibility can provide enough cellular ligands [103]. Thus, surface modification of synthetic polymers with natural polymers is an effective strategy for enhanced cell adhesion in vascular graft fabrication. Gelatin possesses RGD cell-binding motifs that are favorable for cell adhesion. Gelatin has attracted much attention for surface modification because of its biocompatibility, biodegradability, and ease of modification. Cationized gelatin can be covalently grafted onto electrospun PLLA nanofibers for better surface bioactivity [104]. Merkle et al. [105] applied co-axial electrospinning to construct a core-shell structure, with PVA as the core to provide mechanical strength and gelatin as the shell to present a biocompatible surface. The Young's modulus of the core-shell structure was about 169 MPa and the tensile strength about 5.4 MPa; the mechanical properties were greatly enhanced compared with PVA or gelatin alone [105]. Blending gelatin and heparin can also be an alternative strategy. Wang et al. [106] fabricated a nano-coating of gelatin, heparin NPs, and polylysine using self-assembly to construct a biomimetic vascular structure. Collagen, a component of the natural ECM, possesses excellent biocompatibility and bioactivity [107]. Grus et al. [108] fabricated a polyester vascular scaffold coated with collagen to enhance biocompatibility and improve long-term lumen patency. Furthermore, the collagen-coated vascular graft Poly-Maille (Perouse Medical, France) is already available. Some other natural polymers, such as elastin [109] and silk fibroin [110,111], are also alternatives for vascular graft applications. These polymeric coatings provide non-specific binding sites not only for EC adhesion, but also for other blood cells such as platelets, white cells, and SMCs, which may induce thrombogenesis and IH. Thus, more specific binding coatings are required for in situ endothelialization.
Cell-capturing molecules on the surface for enhanced cell adhesion EPCs can differentiate into ECs and generate the inner endothelium layer on the graft surface. Hence, it is important to selectively capture circulating EPCs and ECs onto graft surfaces for enhanced in situ endothelialization. To promote the migration and adhesion of cells, antibodies, cell-adhesive peptides and cell-specific aptamers have been extensively investigated [23,112,113]. CD34 and VEGFR-2 are surface markers of EPCs [39]. Anti-CD34 antibodies (Abs) have been the most widely utilized for EPC targeting and capture [114,115]. However, anti-CD34 Abs have disadvantages: they are not specific for EPCs and can also capture other cells, some of which can even differentiate into SMCs and lead to thrombosis [116,117]. Clinical investigations indicated that anti-CD34 Ab-coated grafts did not reduce the risk of vascular occlusion, especially IH, compared with conventional graft surfaces [118,119]. To overcome this problem, it has been proposed that anti-CD34 Abs be combined with drugs such as sirolimus to reduce IH [120,121]. Anti-VEGFR-2 Abs are an alternative for EPC capture, and can effectively target and capture EPCs and ECs from flowing blood [122,123]. It has been reported that the specificity of VEGFR-2 is superior to that of CD34 and CD31. Despite this superior specificity, VEGFR-2 is also expressed on the surface of monocytes and macrophages, and the recruitment of immune cells will induce undesirable inflammatory responses. Specific markers for stem cell recognition remain unclear and need further exploration. Cell-adhesive peptides are also critical for biological recognition between the cell membrane and the relevant ligand for cell capture and adherence. Integrins on the cell surface are the dominant mediators of cell adhesion onto the ECM [124]. Multiple peptide sequences have been applied for surface modification to enhance the adhesion of EPCs and ECs, including RGD [125][126][127][128][129], CAG [130,131], REDV [132,133], and YIGSR [134,135]. Graft surfaces modified with these peptide sequences display specific affinity for ECs, enhancing the adhesion of ECs and inhibiting the adherence of platelets [136][137][138][139]. Aptamers, short oligonucleotide sequences, exhibit affinity for specific target molecules. Aptamer sequences can be obtained through cell-SELEX technology [140]. It was reported that an aptamer could capture porcine EPCs for enhanced in situ endothelialization [141], but aptamers have not been widely applied to EPC capture. The effects of aptamers on in vitro and in vivo cell performance and vascular patency still require more studies to verify, and the stability of aptamers in vivo is unclear. Moreover, some aptamers risk causing inflammatory responses, and the related immunomodulation is poorly understood [142]. Cell-adhesive peptides, which originate from the ECM, bind to integrins on the EC surface for cell capture; ECs show stronger specificity for peptides such as YIGSR and REDV than SMCs do. Peptides can promote cell adhesion, but with poor targeting. Antibodies can target relevant markers on the cell surface, but these markers are not specific to the EPC or EC surface. Aptamers, as oligonucleotide sequences, have high affinity for target cells; however, the immune responses to aptamers in vivo remain unclear.
Cell behavior regulation for enhanced in situ endothelialization An ideal vascular graft should also enhance the elongation, proliferation, activation and differentiation of EPCs and ECs. Circulating EPCs can be subdivided into two main categories, hematopoietic-lineage EPCs and non-hematopoietic-lineage EPCs. The hematopoietic EPCs originate from the bone marrow and represent a provasculogenic subpopulation of hematopoietic stem cells (HSCs), which play an indispensable role in vascular repair. Different stem cells have different degrees of stemness. Embryonic stem cells (ESCs) possess pluripotent differentiation potential, and mesenchymal stem cells (MSCs) are multipotent, capable of osteogenic, chondrogenic and adipogenic differentiation. In contrast, the stemness and differentiation potential of EPCs, which are somatic stem cells, are limited, without multi-lineage differentiation potential. Under normal physiological conditions in adults, stem cells are quiescent and maintain a dynamic balance between the growth and decay of tissues. However, under pathological conditions or external induction, their ability to differentiate, regenerate and renew can be activated. Thus, after circulating EPCs are recruited, the damaged vascular environment releases molecules that initiate the directed differentiation of EPCs into ECs. To further enhance the proliferation and directed differentiation of EPCs into ECs for promoted endothelialization, topography to regulate cell orientation, bioactive molecules and therapeutic genes can be applied. Regulated cell alignment and orientation Topographic cues on the vascular graft surface can exert significant effects on the biological behavior of ECs and on endothelialization [143,144]. Micro- and nano-scale topographic factors including aligned nanofibers and surface patterns can induce the formation of uniformly aligned ECs for intima construction [145,146]. Aligned nanofibers Electrospinning technology has been widely applied in nanofibrous vascular graft fabrication, and fiber orientation can be tuned by regulating spinning parameters. Aligned nanofibers can induce cell orientation and modulate cell morphology and biological performance [93]. Multiple studies have reported that axially oriented fibers can guide the morphology and alignment of ECs or MSCs for intima reconstruction [147,148]. Furthermore, aligned fibers can also provide tensile mechanical strength, enhance SMC alignment in the outer layer [149], and yield a higher lumen patency rate [150]. Surface micro- and nano-patterns Cell-surface interactions play a crucial role in enhancing endothelialization. Graft surfaces with micro-/nano-scale groove/ridge patterns can promote the construction of an intact EC layer spontaneously aligned along a preferred orientation, which effectively simulates the elongated structure of the endothelium layer [151,152]. Aligned ECs have been reported to inhibit leukocyte invasion and to be consistent with the elongated morphology of native ECs under blood flow [153,154]. Surface patterning has potential in promoting spontaneous in situ endothelialization [146]. Photolithography, electron beam lithography, and soft lithography can be utilized to develop micro-/nano-scale tube/groove/pillar patterns of different sizes [151,155,156]. These patterns can enhance EC elongation and endothelialization, as well as inhibit platelet adhesion and maintain the long-term patency rate [144,157-159]. Wang et al.
[160] constructed a biomimetic vascular graft modified with a nano-topographic lamellar structure utilizing a freeze-casting technique; the lamellae were 10 μm high and 200 nm thick (Fig. 4A). The results indicated that the nano-lamellar framework could prevent platelet activation and enhance EC orientation (Fig. 4C). The number of adherent platelets on the lamellar surface was about 3 times lower than on the non-lamellar surface. The HE staining images indicated that the lamellar structure could promote in situ endothelialization, whereas distinct thrombi formed in the non-lamellar group (Fig. 4D). Zorlutuna et al. [161] constructed a vascular graft with channel nanopatterns at a periodicity of 650 nm. The nanopatterns were placed on both sides of the surface to induce the orientation and proliferation of ECs and SMCs simultaneously. Different surface nanopatterns are effective strategies to tune cell orientation and vascular remodeling in a pattern-dependent way. The arrangement of adhesive peptides also influences the migration, morphology and proliferation of ECs [162]. Wang et al. [163] explored the effects of RGD nano-spacing at different nanoscale sizes (37-124 nm) on MSC performance, and the results indicated that RGD nano-spacing might have a role in modulating the differentiation of MSCs. Saux et al. [164] found that micro-scale pyramids could impede cell migration, while RGD spacing at a density of 6 × 10⁸ mm⁻² could enhance cell spreading and adhesion. Karimi et al. [165] compared random and nano-clustered RGD spacing on the vascular graft surface, and demonstrated that a nano-island pattern on the surface could promote the migration and adhesion of ECs for enhanced in situ endothelialization. RGD is specific for EC capture, and designing RGD patterns on the surface can further promote cell migration and proliferation. Promoted cell proliferation and activation To promote the proliferation and activation of ECs and EPCs after cell adhesion, bioactive molecules and therapeutic genes are promising approaches (Fig. 5). Microenvironment regulation for enhanced cell performance Multiple molecules possess the capability to stimulate the proliferation, differentiation and activation of ECs and EPCs, including growth factors, gases and microRNAs (Fig. 5). Vascular endothelial growth factor (VEGF) is vital in vascularization and plays a key role in regulating EC behavior [38,166,167]. Sustained VEGF release from a vascular graft can promote endothelialization by facilitating the differentiation of EPCs and enhancing the proliferation and activation of ECs [168,169]. VEGF can be loaded on vascular grafts through multiple strategies, such as NPs [170], coaxial electrospinning [171], direct blending electrospinning [172], and emulsion electrospinning [173]. Remarkably, VEGF can also serve as a chemoattractant for EPC homing by targeting the VEGFR1 and VEGFR2 receptors on the cell surface [174,175]. Fibroblast growth factor-2 (FGF-2), with potential in directing stem cell differentiation, has also been applied in vascular grafts [176,177]. Rajangam et al. [178] combined heparin and PAs to construct a self-assembled nanofibrous gel that captures VEGF and FGF-2 for enhanced angiogenesis. Furthermore, platelet-derived growth factor (PDGF) can promote the migration and proliferation of SMCs [179]. Han et al. [180] constructed a double-layer electrospun nanofibrous scaffold, with the inner layer loaded with VEGF for EC proliferation and the outer layer with PDGF for SMC proliferation, thus inducing the formation of an intact blood vessel.
Growth factors can play a direct role in vascular construction, but for protein delivery, maintaining protein bioactivity and concentration in vivo remains a concern. To enhance delivery efficiency and maintain protein activity and concentration in vivo, multiple nanoscale and microscale carriers are utilized, which are summarized in Table 2. (Fig. 5: Bioactive molecules and therapeutic genes for enhanced in situ endothelialization. Strategies including micro/nano particle loading, nanofiber embedment or graft surface coating can be utilized to deliver therapeutic factors for promoted cell proliferation and activation; furthermore, targeting molecules are used for more efficient gene delivery to targeted cells.) VEGF is efficient in promoting the proliferation and activation of ECs, but its immunogenicity and high cost make the clinical application of VEGF difficult. MicroRNAs (miRNAs), non-coding RNAs, have recently been reported to play a role in regulating vascularization by binding to the promoter regions of target genes [184,185]. It has been shown that miRNA-126 can regulate vascular development by modulating the response of ECs to VEGF and inhibiting the expression of Spred-1, which restrains angiogenic signaling pathways [186,187]. Electrospun nanofibers are a viable and convenient vehicle for loading biomolecules and have been widely utilized to carry miRNAs. Zhou et al. [173] utilized REDV-modified PEG-trimethyl chitosan to load miRNA-126 and delivered the miRNA to the targeted ECs. The miRNA complex was incorporated into electrospun polymers utilizing emulsion electrospinning to construct a vascular scaffold, and the miRNA was released in a sustained manner for enhanced EC performance. Cui et al. [188] loaded miRNA-126 in the inner electrospun fibers and miRNA-145 in the outer fibers to regulate the biological behavior of ECs and SMCs, respectively. Moreover, some other miRNAs have also been identified and their effects on vascularization explored. MiRNA-22 can prevent the apoptosis of SMCs by targeting the p38-MAPK pathway during vascular remodeling [189], and suppression of miRNA-21 restrains EC growth through the PTEN-dependent PI3K pathway [190]. Nevertheless, the effects of these miRNAs are still at a preliminary research stage and have not been widely confirmed. Moreover, NO can also stimulate the proliferation and activation of ECs, as well as the homing of EPCs [191,192], which makes NO donors attractive for vascularization. The biological functions of NO and the multiple NO donors applied in vascular grafts are introduced in the following sections. Therapeutic gene delivery for enhanced cell performance Delivered proteins such as VEGF and FGF can bind directly to receptors on the target cell surface and modulate the expression of angiogenesis-related genes via signaling pathways, but carry the risk of protein degradation, inactivation and gradual consumption. Gene therapy is a favorable approach for promoting endothelialization through transfection of ECs, since genes can be transfected into the nucleus and translated for protein release, with the potential to maintain a relatively high protein concentration in vivo (Fig. 5) [193]. VEGF, FGF and ZNF580 are promising genes for therapeutic gene delivery in vascular graft applications. Cell-adhesive peptides such as RGD [208,209] and REDV [210] can be utilized for targeted gene delivery, enabling the peptide-modified pDNA complex to bind to integrins on the EC surface. Wang et al. [210] constructed REDV-modified pZNF580 NP complexes utilizing a self-assembly strategy.
The REDV-mediated NPs could protect pZNF580 from DNase degradation, displayed better hemocompatibility, and enhanced delivery efficiency for improved in situ endothelialization. Kibria et al. [211] utilized RGD and PEG as a dual-ligand modification to improve targeted gene delivery efficiency. Compared with direct protein delivery, delivered genes can integrate into the target cell genome and stably express the transfected proteins over a long period. However, the transfection efficiency is a potential limitation, and the transferred genes cannot act in vivo as immediately as proteins. Preventing vascular incidents for long-term lumen patency Thrombogenesis, IH and calcification can reduce the lumen diameter and are major risks to maintaining long-term lumen patency after vascular implantation. Preventing thrombus formation, IH and calcification is crucial for the survival of a vascular graft. Hydrophilic surfaces and heparin coatings are effective for anti-thrombogenesis, and some drugs can inhibit IH. Moreover, NO can play a role in inhibiting coagulation and IH. Anti-thrombogenesis for enhanced long-term lumen patency Thrombogenesis is easily provoked when ECs are lacking on graft surfaces, and during the early healing process there are not yet adequate ECs lining the surface to release molecules for thrombus prevention. Thus, at the beginning of implantation, vascular grafts may be in direct contact with blood cells in the vasculature. Vascular grafts are implanted as foreign materials and easily lead to the adsorption of plasma proteins and blood cells, which then activates the coagulation cascades [212]. Graft surfaces with minimized protein adsorption, together with anti-coagulant drugs or gases, are effective strategies for thrombogenesis prevention. To inhibit thrombogenesis, minimizing the adsorption of plasma proteins on graft surfaces is crucial; thus, a biocompatible surface with minimized protein adsorption is required. A hydrophilic surface can effectively prevent protein adsorption. Some biocompatible and hydrophilic biomaterials have been utilized for the surface modification of vascular grafts, for example PEG [213] and zwitterionic polymers [214,215]. PEG and zwitterionic polymers or groups can be directly coated [216], blended [217,218] or covalently grafted [213,219] on the scaffold surfaces, and the resulting hydrophilic surfaces can effectively inhibit protein adsorption and platelet adherence [214,220]. (Table note: PELCL, poly(ethylene glycol)-b-poly(L-lactide-co-ε-caprolactone); PLGA, poly(L-lactide-co-glycolide).) Furthermore, heparin, a well-known anti-coagulative drug, can be coated or immobilized on the vascular graft surface, where it plays a role in anti-thrombogenesis [221][222][223]. Heparin can interact with antithrombin III (AT III) to restrain the function of thrombin and coagulation factor Xa. Heparin, with its carboxyl groups, can also be blended with gelatin to provide a biocompatible and anti-coagulative surface [88,224]. Liu et al. [225] constructed heparin/poly-L-lysine mixed NPs and immobilized these NPs on a dopamine-modified surface. They found that the heparin-modified surface could enhance biocompatibility, inhibit fibrin-induced platelet adherence, and prolong the thrombin time (TT) to 23.7-27.9 s. Moreover, LBL technology can be utilized to graft heparin on the graft surfaces [226]. Easton et al.
[227] constructed an LBL coating utilizing poly(acrylic acid) and polyethyleneimine to immobilize heparin on an electrospun nanofibrous scaffold, and the results indicated that the hemocompatibility of the scaffold was enhanced. Some other anti-coagulation drugs can also be utilized, for example clopidogrel, warfarin, and t-PA [228]. t-PA is the enzyme that converts plasminogen to plasmin, driving the fibrinolytic process [229]. Liu et al. [228] immobilized t-PA on electrospun PVA nanofibrous mats, aiming to mimic the natural fibrinolysis function and prevent thrombus formation. Furthermore, Gastrodin, which is applied in cardiovascular diseases, can lower blood viscosity and regulate inflammatory reactions. Zheng et al. [25] fabricated Gastrodin-loaded PU scaffolds and found that the Gastrodin-modified grafts showed greater potential in preventing thrombogenesis and inflammation. Apart from surface modification and drugs, one novel nanocomposite has been shown to possess the ability to prevent thrombogenesis and display ideal mechanical characteristics [230][231][232]. To inhibit the thrombogenesis of PU vascular graft scaffolds and improve the biostability of PU in vivo, Kannan et al. [233] attached polyhedral oligomeric silsesquioxane (POSS) nanoparticles to a poly(carbonate urethane) (PCU) scaffold by covalent bonding and developed a new POSS-PCU nanocomposite biomaterial. This POSS-PCU nanocomposite can be developed into small-diameter vascular grafts with enhanced mechanical properties and hemocompatibility, free from adverse events such as IH and calcification. A hydrophilic surface can reduce fibrin adherence and coagulation cascades, but can also decrease EC adhesion; for heparin modification, the bioactivity and coating density remain concerns. Hence, more studies are required to obtain an ideal graft surface with reduced platelet and fibrin adherence and enhanced EC adhesion. Anti-IH for enhanced long-term lumen patency IH is a frequent cause of long-term vascular occlusion several months after implantation and results from uncontrolled pathological SMC proliferation [234]. ECs, serving as the paramount defenders in vascularization, can release molecules that inhibit IH. Apart from promoting EC adhesion and activation, some agents such as E2F and MK2i also have potential in preventing IH by controlling SMC proliferation in a more direct way [235]. In the clinic, the E2F transcription factor was utilized in an approximately 30-min in vitro treatment of the vascular tissue; however, clinical trial results indicated that the E2F-treated grafts failed to prevent IH via inhibition of SMC proliferation after implantation [236,237]. The p38 MAPK signaling pathway can play a role in activating the proliferation of SMCs and then induce the downstream inflammatory and fibrotic cascades in IH [238][239][240]. To prevent IH, Evans et al. [241] utilized the MAPK inhibitory peptide MK2i and constructed MK2i NPs for graft modification to prevent the uncontrollable proliferation of SMCs. Role of NO in enhancing long-term lumen patency NO plays an important protective role in the vasculature and effectively influences relevant physiological functions (Fig. 6A) [191]. In the physiological environment, NO is secreted by endothelial cells and is synthesized through catalytic reactions in the presence of nitric oxide synthase (NOS) [242].
NO can inhibit the aggregation and activation of platelets to avoid thrombogenesis [243], restrain the proliferation of SMCs to avoid IH [242,244], and prohibit the recruitment and activation of inflammatory cells to avoid inflammation (Fig. 6A) [245]. Furthermore, NO plays a critical role in promoting the growth of endothelial cells (Fig. 6A) [246]. Insufficient ECs lead to deficient NO release, which may then trigger pathological processes [247]. Hence, abundant NO release is required to enhance EC growth, inhibit SMC proliferation, and provide an appropriate environment for the endothelialization of the vascular graft [248]. N-diazeniumdiolates (NONOates) and S-nitrosothiols (RSNOs) can serve as potential NO donors for application in biomaterials. One mole of NONOate [-N(O)=NO-] can be catalyzed to release 2 mol of NO via hydrolysis reactions under physiological conditions (37 °C, pH 7.4) [250]. In RSNOs, thiols (-SH) can react with nitrous acid (HNO₂) to release NO, generally under the catalysis of copper ions (Cu²⁺) [251,252]. Selenocystamine (SeCA), which has glutathione peroxidase (GPX)-like functions, is also a potential catalyst for NO generation by decomposition of RSNOs [249,253,254]. Multiple strategies have been utilized in vascular devices to catalyze NO generation, such as selenocystamine (SeCA) [249,254], metal-phenolic surfaces [253], and copper ions (Cu²⁺) [242,255]. Qiu et al. [249] utilized a SeCA coating to catalyze NO generation, and the results indicated that NO can promote EC proliferation and inhibit SMC growth (Fig. 6B and C). The number of ECs on the SeCA/heparin surface was about 1.5-2.0-fold higher than on the naked surface, and the EC migration distance and density were enhanced by 26% and 23%, respectively, on SeCA/heparin compared with the naked surface [249]. Nanoparticles (NPs) can serve as effective delivery vehicles for NO release. Their high surface-to-volume ratio offers more opportunity to load an optimized quantity of NO donors. Targeted NPs can deliver NO to localized areas by altering their surface chemical characteristics [256]. Various NPs have been studied as carriers for NO donors, including silica nanoparticles (SiNPs) [257][258][259], liposome nanoparticles [260,261], and metallic nanoparticles [255]. SiNPs are easily synthesized at the nanoscale and contain functional groups such as amine groups (-NH₂) on the surface [262]. Amine-functionalized SiNPs with sizes ranging from 20 to 500 nm were synthesized to serve as NO carriers by converting the -NH₂ groups into NONOate donors under high NO pressure [262]. To improve the NO storage and release characteristics, the same team proposed synthesizing SiNPs after preparing NONOate-modified aminosilanes [263]. Fumed silica (FS) particles can also serve as NO carriers and can be embedded into polymers to control NO release [258]. Zhang et al. [259] synthesized FS particles (200-300 nm) with NONOates formed on the surface and embedded them within PU films as an anti-thrombotic coating. Liposomes are competent carriers owing to their efficient encapsulation ability [264]. The influence of hydrophobic liposomes and surface micelles on NO release has been studied. Dinh et al. [261] found that anionic 1,2-dipalmitoyl-sn-glycero-3-[phospho-(1-glycerol)] sodium salt (DPPG) liposomes possessed greater NO-releasing catalytic efficiency than sodium dodecylsulfate (SDS) micelles. Dinh et al.
[265] further explored the influence of unilamellar (anionic and cationic) phospholipid vesicles on the dissociation of NO from NONOates, and found that anionic liposome NPs showed enhanced NO release. Thermosensitive liposome NPs [266] and photo-sensitive NO donors [267] have also been explored. The silica and liposome NPs described above can effectively load and liberate NO, but cannot deliver the gas to targeted tissues. Some metallic NPs are potential loading vehicles for NO delivery to targeted tissues, including gold (Au) NPs [268][269][270], platinum (Pt) NPs [271], and silver (Ag) NPs [272][273][274]. Furthermore, metal organic frameworks (MOFs), composed of metal ions as nodes and organic ligands as linkers, have been utilized to embed and release NO to promote re-endothelialization of vascular grafts [275]. Fan et al. [255] constructed nanoscale copper-based MOFs (Cu-MOFs) and demonstrated that Cu-MOFs performed as heterogeneous catalysts for NO generation from endogenous RSNOs. Simultaneous delivery of NO and Cu²⁺ could restrain restenosis and enhance endothelialization synergistically [275]. Moreover, NO gas can be combined with the growth factor VEGF and released simultaneously to promote endothelialization and inhibit thrombogenesis and IH [276]. Although multiple studies have demonstrated the influence of NO in in vitro and in vivo experiments, no clinical trials of the therapeutic effects of NO donors have been conducted. Furthermore, the therapeutic effects on diabetic wounds still require further attention [193]. Immunomodulation in vascular graft development Inflammatory responses, induced by graft implantation, are crucial in modulating graft development. Molecules released from immune cells influence the biological behavior of ECs and SMCs, thus modulating in situ endothelialization and the lumen patency of the vascular graft. Immunomodulation and in situ endothelialization Macrophages, as the key cells in innate immunity, can release multiple molecules that modulate the in situ endothelialization process [277,278]. It has been reported that TNF-α secreted by macrophages plays a role in regulating the migration and differentiation of EPCs and graft development. TNF-α can induce the differentiation of EPCs through activating TNF-α receptor 1 and the NF-κB signaling pathway [279]. Moreover, TNF-β1 can promote platelet-mediated EPC homing via integrin β3 on the cell surface [280]. The polarization of macrophages can be controlled to regulate the inflammatory microenvironment. Classically activated macrophages (M1) secrete inflammatory cytokines, while alternatively activated macrophages (M2) tend to release anti-inflammatory molecules and enhance tissue repair [281]. Acute inflammatory responses can be modulated by lipid mediators with anti-inflammatory potential, such as resolvins [282,283]. Shi et al. [284] incorporated aspirin-triggered resolvin D1 (AT-RvD1) into an electrospun PCL vascular graft and demonstrated that the incorporation of AT-RvD1 enhanced vessel tube formation in vitro through M2 macrophage polarization. Inflammatory cells play a complex role in regulating angiogenesis. VEGF-A is an important modulator of vascularization, and recently a subset of neutrophils (CD49d+ VEGFR1-high CXCR4-high) was identified [285]. Massena et al. [285] demonstrated that homing of this neutrophil subset to hypoxic neovascularization sites could promote angiogenesis.
The interaction between angiogenic cells and inflammatory cells is still not completely clarified and requires further exploration. Immunomodulation and long-term lumen patency Inflammatory cells also play a role in modulating long-term lumen patency after graft implantation, including IH and intimal calcification. Molecules secreted by macrophages and platelets, such as TGF-β, enhance the migration and proliferation of SMCs, and the unregulated SMC proliferation together with the influx of macrophages are the dominant factors leading to IH [286]. SMCs from anastomotic sites migrate towards the intima and transform from the quiescent phenotype into a dedifferentiated, proliferative type, thus resulting in IH [287][288][289]. Macrophages are the predominant inflammatory cells targeting SMCs and lead to the pathological proliferation of SMCs [290,291]. Thus, reducing IH by regulating macrophage behavior has attracted great attention. To attenuate chronic inflammation-induced IH, Ding et al. [292] constructed a resveratrol (RSV)-modified carbon nanotube (CNT) coating for tissue-engineered blood vessels. RSV has been reported to inhibit the inflammatory process by inducing M2 macrophage polarization [293]. The RSV-modified CNTs were taken up by macrophages and released RSV intracellularly, which effectively promoted the transformation of M1 into M2 and prevented IH [292]. Yang et al. [294] modified a decellularized vascular graft with a rapamycin (RM)-blended electrospun PCL coating, and the results indicated that the RM-loaded graft could enhance M2 polarization, inhibit IH, and promote endothelialization. Inflammation is also the leading factor contributing to atherosclerotic calcification [295]. Wei et al. [296] extracted small extracellular vesicles (sEVs) containing VEGF, miRNA126, and miRNA145 from MSCs and loaded the sEVs on a heparinized electrospun PCL scaffold. They found that the sEV-loaded graft exhibited immunomodulatory function and induced the transition of M1 into M2, which effectively reduced graft calcification and enhanced lumen patency for hyperlipidemic patients. Macrophages, the key cells of the innate immune system, can be induced by biomaterials, drugs or sEVs to polarize from the M1 type to the M2 type, regulating the behavior of ECs and SMCs and thus promoting endothelialization and inhibiting IH and calcification (Fig. 7). NK cells also play a role in vascular remodeling. BALB/c mice lacking NK function display distinctly less IH [297], but approaches to regulating NK cell function using bioactive scaffolds are not yet available. Generally, the roles of complement and cytokines in the innate immune system are understood. However, the behavior of the adaptive immune system, including T cells, B cells and mast cells, is less clear. For better modulation of immune function, more studies on the regulation of adaptive immune cell behavior are required. Conclusions and further perspectives The challenges for in situ endothelialization and long-term patency of small-diameter vascular grafts still exist. ECs form the inner endothelium layer, which plays a crucial role in maintaining vascular hemostasis and lumen patency, but the homing and capture of EPCs and ECs on conventional grafts are poor. A naked graft surface without a lining EC layer is easily deposited with blood cells, fibronectin, and platelets, thus inducing coagulation cascades and thrombogenesis.
Furthermore, the uncontrolled proliferation of SMCs migrating to the intimal layer may result in IH, and inflammatory cells also play a role in regulating the biological behavior of ECs, EPCs, and SMCs. Multiple strategies can be adopted for enhanced in situ endothelialization and long-term patency of vascular grafts (Fig. 8). Strategies for promoting in situ endothelialization, preventing thrombogenesis and IH, and immunomodulation in vascular graft remodeling are summarized as follows. (1) Strategies for enhanced in situ endothelialization: 1) Homing of EPCs: Chemokines targeting receptors of the CXC and integrin families on the EPC surface can be utilized for EPC homing. 2) Migration and adhesion of EPCs and ECs: Nanofibrous structures, biocompatible surfaces providing more binding sites (e.g., gelatin) and cell-capturing molecules on the graft surface, including antibodies, specific peptides and aptamers, can be applied for better cell adhesion. 3) Proliferation and activation of EPCs and ECs: Scaffold topography, such as aligned nanofibers, surface micro-/nano-patterns, and RGD patterns on the surface, can regulate cell orientation, while growth factors, microRNAs and therapeutic genes can modulate cell bioactivity. (2) Strategies for long-term patency: 1) Preventing thrombogenesis: Surface modification with heparin or hydrophilic polymers can inhibit the activation and adhesion of platelets, as well as activate AT III, thus reducing thrombogenesis. 2) Preventing IH: The release of drugs such as MK2i can inhibit IH by suppressing the proliferation of SMCs. 3) The role of NO: NO also plays a crucial role in vascular graft remodeling; it can inhibit the activation of thrombin, platelets and immune cells and the proliferation of SMCs, as well as promote the proliferation and activation of ECs. (3) Strategies for immunomodulation: Drugs such as AT-RvD1, RSV and RM promote M2 polarization and influence the behavior of ECs and SMCs, and sEVs can be utilized for M2 polarization and the prevention of calcification. Multiple strategies have been explored to promote in situ endothelialization and inhibit thrombogenesis and IH, but these approaches remain limited to experimental research. Further studies concerning toxicity, mechanical properties, degradation rate and delivery efficiency should be conducted for further application in the clinic. Author contributions Y.Z., C.Z., and M.C. collected references, summarized perspectives and wrote the manuscript. J.H. and Q.L. collected references. G.Y. gave suggestions and helped with the revision of this review. K.L. and Y.H. conceived the concept of this review. All authors discussed and commented on the manuscript. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
10,690.4
2020-12-05T00:00:00.000
[ "Medicine", "Engineering" ]
Quantifying the Impact of Chronic Ischemic Injury on Clinical Outcomes in Acute Stroke With Machine Learning Acute stroke is often superimposed on chronic damage from previous cerebrovascular events. This background will inevitably modulate the impact of acute injury on clinical outcomes to an extent that will depend on the precise anatomical pattern of damage. Previous attempts to quantify such modulation have employed only reductive models that ignore anatomical detail. The combination of automated image processing, large-scale data, and machine learning now enables us to quantify this modulation with high-dimensional multivariate models sensitive to individual variations in the detailed anatomical pattern. We introduce and validate a new automated chronic lesion segmentation routine for use with non-contrast CT brain scans, combining a non-parametric outlier-detection score, Zeta, with an unsupervised 3-dimensional maximum-flow, minimum-cut algorithm. The routine was then applied to a dataset of 1,704 stroke patient scans, obtained at their presentation to a hyper-acute stroke unit (St George's Hospital, London, UK), and used to train a support vector machine (SVM) model to discriminate between low (0-2) and high (3-6) pre-admission and discharge modified Rankin Scale (mRS) scores, quantifying performance by the area under the receiver operating curve (AUROC). In this single center retrospective observational study, our SVM models were able to differentiate between low (0-2) and high (3-6) pre-admission and discharge mRS scores with an AUROC of 0.77 (95% confidence interval of 0.74-0.79) and 0.76 (0.74-0.78), respectively. The chronic lesion segmentation routine achieved a mean (standard deviation) sensitivity, specificity and Dice similarity coefficient of 0.746 (0.069), 0.999 (0.001), and 0.717 (0.091), respectively. We have demonstrated that machine learning models capable of capturing the high-dimensional features of chronic injuries are able to stratify patients, at the time of presentation, by pre-admission and discharge mRS scores. Our fully automated chronic stroke lesion segmentation routine simplifies this process and utilizes routinely collected CT head scans, thereby facilitating future large-scale studies to develop supportive clinical decision tools. INTRODUCTION The functional organization of the brain is highly complex. The clinical consequences of focal brain injury therefore depend not merely on the volume of damaged tissue but also on its anatomical location (1,2). In stroke, a multiplicity of locations is commonly affected, forming a complex anatomical pattern shaped by the vascular supply to the brain. Optimal prediction of clinical outcomes in stroke then depends on understanding the relation between the patterns of injury and the underlying functional anatomy. This relation is commonly oversimplified, treating most of the resultant variability as noise, with interventional studies usually modeling only the volume of the lesion, or its gross anatomical location, and prognostic studies usually identifying a small number of variables, such as the proportion of the corticospinal tract affected (3-6). The unmodeled variability degrades the sensitivity for detecting interventional effects (7) and limits the predictive power of prognostic measures.
Given sufficient data, machine learning-enabled, high-dimensional models drawing on thousands of anatomical variables taken together can capture the underlying complexity, with potentially dramatic impact on inferential and predictive performance (8-10). These considerations apply not only to acute stroke, but also to the background ischemic damage frequently superimposed on it, with an estimated 10 additional silent infarcts for every symptomatic stroke (11). Such infarcts unsurprisingly have been shown to worsen prognosis (12,13), confirming the need to model their impact on the acute outcome. We seek to quantify the predictive value of high-dimensional modeling of background ischemic damage in stroke, and to enable such modeling at large scale, within the current clinical routine. Patients In this single center retrospective observational study, we evaluated all patients presenting to St George's Hospital, London, UK between January 2015 and December 2016, recorded in the locally collected Sentinel Stroke National Audit Programme (SSNAP) database, managed within the hyper-acute thrombolysis pathway, and imaged with a computed tomography (CT) head scan on admission. Three hundred and eighty-seven patients who presented with intracerebral hemorrhage were excluded. Also excluded were 403 patients whose images were corrupted by motion or metal artifact or were acquired at external centers. We included all patients with a diagnosis of acute ischemic stroke (n = 1,704), a randomly drawn subset of patients without evidence of acute or chronic stroke on CT (n = 78), and another randomly drawn subset of patients with only chronic injury (n = 50). This study was approved by the Health Research Authority, Local Research Ethics Committee (London-Camden & King's Cross REC). Study Outcomes and Predictors All patients underwent a plain CT head scan on admission, typically within the first hour of assessment and 3 h of estimated symptom onset. All imaging was performed on a Siemens SOMATOM Definition Flash CT scanner and consisted of axially acquired 512 × 512 volumes with a typical in-plane resolution of 0.3 × 0.3 mm and a z-plane resolution between 3 and 5 mm. The following demographic and clinical information was extracted from the routine SSNAP record: age, sex, hypertension, diabetes, congestive heart failure, atrial fibrillation, pre-admission mRS (pre-mRS), discharge mRS (dis-mRS), and NIHSS scores. Fifteen patients did not have dis-mRS scores and were excluded from the respective analyses. Outcome measures were dichotomized to enable the application of classification models. For pre-mRS and dis-mRS, the two categories were low (0-2) vs. high (3-6); for patients who received intravenous (IV) thrombolysis therapy, those with an increase in NIHSS score of more than 2 points after 24 h vs. those without; and patient sex. High Dimensional Modeling and Model Evaluation For each of the 1,704 patients with ischemic stroke, our novel lesion and tissue segmentation and registration routine described below was applied to the admission CT brain scan. This yielded a series of derived volumetric maps projected in standard stereotactic space (Montreal Neurological Institute [MNI]) at a resolution of 4 × 4 × 4 mm³.
The maps included a binary "lesion mask" where voxels falling within injured tissue are distinguished from all others; an "anomaly map" where voxels are labeled by a statistical measure, zeta (14), of their degree of abnormality; and the gray and white matter tissue probability maps. We trained a series of SVM models based on LibSVM (15) using the demographic, co-morbidities and neuroimaging data. The models in the series are hierarchically organized to incorporate progressively more information as it naturally becomes available during a patient's admission. We thus first examined models using age only, and incrementally increased the complexity by adding history and examination features followed by neuroimaging. Neuroimaging features included total lesion volume, voxel-level lesion mask, and the combined voxellevel lesion mask and zeta map. Radial basis function (rbf) kernels were used to train the SVM models and were evaluated using a k-fold (k = 10) cross-validation technique (16). The optimal parameters were identified using a grid search with the performance for each parameter combination being the area under the receiver operator curve (AUROC) averaged across the 10-folds. Model AUROC values were compared (17), and a bootstrapping technique (n = 1,000) was applied to obtain the 95% confidence intervals (CI) for each of the optimal models. Background Lesion Segmentation All CT image pre-processing was performed in SPM12 (18) and in-house developed software written in MATLAB 2016 (19). Preprocessing of the CT image involved affine alignment to the midsagittal plane, and a signal intensity transformation using the method described by Rorden et al. (20), to emphasize the tissuecerebrospinal fluid contrast. SPM12's combined segmentationnormalization routine (21) was then used to generate the transformation parameters to warp the CT image between MNI and the CT scan's native space. We address the problem of lesion segmentation by first creating 3 binary maps describing regions that are non-lesion (healthy tissue, sulci and ventricles). The image was thresholded at 100 Hounsfield units (HU) to remove bone, while all voxels below zero were clamped to zero, and the image filtered using a Total Variation (TV) algorithm (22) to improve the tissue-CSF contrast. For the healthy tissue map the TV processed image was passed through a top-hat filter. For the sulci map, the TV image was clustered into 3 classes based on signal intensity (14,24, and 34 HU) using a 3-dimensional Maximum Flow Minimum Cut (23-25) (MFMC) algorithm. The probability map based on the guide signal value of 14 HU was then thresholded. Third, for the ventricular system, the TV image was passed to the MFMC algorithm, but this time clustered into two classes (20,27 HU). The probability map for the lower guide signal was thresholded to reveal the ventricular system. As the ventricular system exhibits symmetry across the mid-sagittal plane, each voxel was assessed to determine whether its mirror-pair was similarly labeled, and incongruent voxel pairs removed. Finally all three maps were then processed individually with a 2-dimensional watershed transform (26) to cluster similar voxels together. A map identifying lesioned regions is created from the TV image, by using the MFMC algorithm and specifying a signal range with a higher sensitivity for lesion voxels. From this lesion map, the 3 non-lesion maps are subtracted. 
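The subtraction step just described reduces to simple boolean operations on co-registered volumes. The sketch below is illustrative only and is not the authors' implementation; the function and argument names are hypothetical, and all maps are assumed to be boolean numpy arrays on the same voxel grid.

```python
# Illustrative sketch (not the authors' code): subtract the three non-lesion maps
# (healthy tissue, sulci, ventricles) from the candidate lesion map produced by the
# MFMC step, leaving a first-pass binary lesion mask.
import numpy as np

def candidate_lesion_mask(lesion_map, healthy, sulci, ventricles):
    """Keep only voxels not explained by any of the non-lesion maps."""
    non_lesion = healthy | sulci | ventricles
    return lesion_map & ~non_lesion
```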
Clusters residing along the medial margins of the lateral ventricles that extend across the mid-sagittal plane were removed. This forms the first binary lesion mask. By spatially normalizing a set of healthy brains, we can then index each voxel in our test brain according to how different it is from the reference population, thereby creating a map defining abnormal regions based on location and signal value (27). Here we use the zeta anomaly score, using a method detailed and validated in Mah et al. (28), to identify the lesioned regions. A second binary lesion mask is created by subtracting the sulci and ventricle binary maps from the zeta map and thresholding the resultant image. The two binary lesion masks are combined to form a single binary mask, and then clustered using a 2-dimensional watershed transform created from the CT image. Only watershed regions with a minimum occupancy of 50% are preserved. Finally, a noise reduction step is performed to remove small clusters and very large clusters that only span a single plane. A flow diagram of the process is available in the Supplementary Material. To validate the accuracy of our automated background lesion segmentation routine, 50 lesion masks of chronic stroke lesions were manually segmented from the axial CT scans in native space with ITK-SNAP (29) by a neurologist experienced in the task (YM) and reviewed by an experienced neuroradiologist (ADM). These manual lesion masks represented the "ground truth" for the chronic lesion parameters against which the automated segmentation masks were compared. The performance of the automated routine was quantified against these manual masks by voxel-wise sensitivity, specificity, and the Dice similarity coefficient. RESULTS The mean (standard deviation, SD) age of the acute ischemic stroke dataset was 73 (15) years, and that of the 50 patients with only chronic lesions was 78 (49) years. The reference dataset of CT scans without evidence of acute or chronic lesions was significantly younger [mean 49 (16) years] compared against the other two datasets (p < 0.001 for both). Comparisons of co-morbidities and sex did not reach statistical significance (Table 1). The relevant patient characteristics and clinical information for the acute ischemic stroke dataset and its subsequent dichotomization are shown in Table 2. No significant difference was found between the dichotomized groups except for age, where patients were older in the high pre-mRS and dis-mRS groups. The prevalence of identified background injury was 1,520/1,704, in the context of a distribution of pre-mRS scores consistent with previous studies (31). The anatomical pattern of injury across the population shows the highest density in the anterior and posterior watershed territories of the middle cerebral artery and the posterior thalamus, with lower densities around the cerebellum and posterior fossa (Figure 1). Pre-admission mRS: 0-2 vs. 3-5 Classification models trained to discriminate between the low and high pre-mRS groups showed increasing performance with the incremental addition of clinical and imaging features (Figure 2). Models based on clinical features alone performed worst, exhibiting an AUROC of <0.60. The addition of lesion volume to clinical features increased this to 0.70 (95% CI 0.67-0.73). In comparison, the combined lesion mask and zeta map alone was significantly different at 0.76 (95% CI 0.73-0.79, p = 0.008). The addition of clinical features did improve the AUROC to 0.77 (95% CI 0.74-0.79), whose receiver operating curve is shown in Figure 3A, but the improvement did not reach significance (p = 0.405). Discharge-mRS Score: 0-2 vs.
3-6 Classification models trained to discriminate between the low and high dis-mRS score groups showed a similar pattern of performance with increasing clinical features. The model using the pre-mRS score alone achieved an AUROC of 0.71 (95% CI 0.69-0.74) which was not significantly different to the model using the combined lesion mask and zeta map (AUROC 0.72, 95% CI 0.70-0.74, p = 0.340). The optimal model incorporated the clinical features, pre-mRS and imaging (in the form of the combined lesion mask and zeta map) achieving an AUROC of 0.76 (95% CI 0.74-0.78) ( Figure 3B). However, it did not perform significantly better than the same model with imaging information excluded (AUROC 0.74, 95% CI 0.71-0.76, p = 0.055). Increase in NIHSS Score of More Than 2 Points Following Thrombolytic Therapy There were 305 patients who received IV thrombolysis therapy with available admission and 24-h post thrombolysis NIHSS scores recorded. Increments in modeled clinical features (age, co-morbidities, and admission NIHSS score) accompanied an increase in AUROC, with the optimal model using all these features (AUROC 0.75, CI 0.64-0.84). The addition of imaging information did not result in a significant difference in predicting future patient deterioration (Figure 4). Sex Prediction The gray and white matter probability maps for the 1,704 patients were extracted from the normalized plain CT scans and used to train an SVM model to determine the sex of the patient, as an internal quality control of the image segmentation process and subsequent modeling. The linear kernel model using gray matter maps performed the best, achieving an AUROC of 0.95 (95% CI 0.94-0.96), shown in Figure 4. DISCUSSION With the digitization of neuroimaging and its extensive use in stroke medicine, there is an opportunity to capitalize on recent advances in machine learning to develop predictive models of sufficient individual-level accuracy to support clinical decisions. We have shown that complex, high-dimensional models of brain injury have better predictive power compared with simple, lowdimensional ones incorporating only age and lesion volume. Our fully automated chronic lesion segmentation routine simplifies the necessary image pre-processing, facilitating the interrogation of the large datasets high-dimensional modeling requires. We have shown that incremental additions of clinical and imaging features improve the performance of predictive models estimating a patient's pre-mRS and dis-mRS scores. Although the high mRS groups for both pre-mRS and dis-mRS analyses were significantly older (p < 0.001), the models using age alone, or in combination with co-morbidities failed to exceed an AUROC of 0.68. This result is likely due to the variability in functional level for a specific age, with co-morbidities and age probably acting synergistically as a surrogate marker of the general health of the patient. Past studies have demonstrated that modeling the acute lesion can predict a patient's future functional ability, therefore intuitively, modeling chronic lesions should provide an insight into a patient's pre-admission level of function, which is reflected in our results. We also found that high-dimensional neuroimaging models that use each voxel as a separate dimension achieved better AUROC values, suggesting that there is an additional benefit in modeling the pattern of damage. 
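The AUROC values quoted above were produced by the evaluation scheme described in the methods: RBF-kernel SVMs tuned by grid search under 10-fold cross-validation, with bootstrapped 95% confidence intervals. A minimal sketch of that scheme is given below; it is not the authors' code, it uses scikit-learn (whose SVC wraps LibSVM), and the feature matrix, labels and hyperparameter grid are placeholders.

```python
# A minimal sketch, not the authors' code, of the evaluation scheme described in the
# methods: an RBF-kernel SVM tuned by grid search under 10-fold cross-validation and
# scored by AUROC, plus a bootstrap estimate of the 95% CI. X (e.g., clinical features
# plus flattened voxel-level lesion mask and zeta map) and binary labels y are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

def evaluate_svm(X, y, n_boot=1000, seed=0):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]}  # illustrative
    search = GridSearchCV(SVC(kernel="rbf"), grid, scoring="roc_auc", cv=cv)
    search.fit(X, y)
    # Out-of-fold decision values for the selected model, then bootstrap the AUROC.
    scores = cross_val_predict(search.best_estimator_, X, y, cv=cv,
                               method="decision_function")
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:      # skip resamples with a single class
            continue
        aucs.append(roc_auc_score(y[idx], scores[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y, scores), (lo, hi)
```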
Focal injuries are believed to influence the structure of the brain distant to the original lesion, with diaschisis playing a critical role in stroke recovery (32,33). This phenomenon would be consistent with the observed improvement in performance following the addition of the zeta map to the lesion mask, with a significant increase in AUROC (0.66 vs. 0.76, p < 0.0001), as the zeta map encodes changes in the brain distant to the chronic lesion, such as atrophy. The SSNAP database from which the comorbidity features were retrieved may not reflect the information available at presentation but instead that accrued over the patient's admission. This improves accuracy and minimizes missing data but may positively bias the observed predictive performance of the model using the clinical features to estimate pre-mRS scores. Nevertheless, the addition of clinical features to the combined lesion mask and zeta map did not significantly increase the AUROC. In the dis-mRS prediction models, the combined lesion mask and zeta map model achieved a similar performance to the model using pre-mRS scores. The optimal dis-mRS model combined all clinical information and neuroimaging into a single model. This was significantly better than using the pre-mRS scores alone, but failed to reach significance when compared against the model using all clinical information (p = 0.055). Early prognostication is a difficult task, especially in the hyperacute phase of admission, when information relies on human recall or patient interaction. The THRIVE score (34) used age, stroke severity (as determined with the NIHSS score) and comorbidities (hypertension, diabetes and atrial fibrillation) to estimate the likelihood of an mRS score of <3 at 90 days. When applied to patients who received endovascular treatment, their AUROC was 0.71, and 0.29 in those who received IV thrombolysis therapy (35). These results are comparable to our model using age and co-morbidities (AUROC 0.68); however, their analysis did not include a significant proportion of patients who did not receive recanalization therapy, limiting the clinical application of the THRIVE score. In comparison, the ASTRAL score (36) is an integer-based method to estimate the 90-day level of function, based on a dichotomized mRS score of 0-2 vs. 3-6, in the emergency room. It achieved an impressive AUROC of 0.90, and 0.79 when externally validated on the VISTA cohort (37). The ASTRAL score predominantly used information pertaining to the acute injury, and included an assessment of the visual fields, which can be very challenging in an aphasic or somnolent patient. In contrast, our models only used information from the past, with the neuroimaging focusing on the pattern of chronic injury, to estimate the functional independence at discharge rather than at 30 days. (Figure legend: Area under the receiver operator curve results for different support vector machine models trained to predict sex and deterioration in NIHSS score. The sex discrimination models (red) were trained using linear kernels, while the increase of more than 2 points in the NIHSS score (yellow) used radial basis function kernels. 95% confidence intervals are shown as error bars. The comorbidities included in the models were diabetes, hypertension, atrial fibrillation, and congestive cardiac failure.) By using the information contained within the admission CT head scan, our model achieved an AUROC of 0.72, thereby minimizing the reliance on operator and patient co-operation.
Future work incorporating both the pattern of chronic and acute injury, may improve the performance further. Our models demonstrate the potential of applying machine learning to neuroimaging to produce clinical tools of value in the clinical management of stroke patients. At the hyperacute stage, one study has examined the development of patient selection tools to help identify suitable candidates for IV thrombolysis (38) and endovascular clot retrieval (39). The pre-mRS score was identified as an independent predictor of outcome at 90 days, as was a previous history of stroke. Though the interaction with treatment was small, a patient's history of stroke was treated as a binary measure, ignoring the burden of chronic damage, still less its anatomical pattern. Clearly, different patterns of injury will have different functional consequences; a reductive approach ignores valuable information which may shed light on the general health of the patient's brain and its ability to recover from the acute insult. Sex determination was used as an internal quality control technique for both the image segmentation and subsequent modeling. Comparable work based on MR volumetric imaging has shown sex differences in gray and white matter patterns (40), and are able to differentiate the sex of the subject with an accuracy of 89% (41). Our models based on the gray and white matter probability maps derived from CT scans performed similarly, with an AUROC of 0.95 and 0.90, respectively, and suggests our technique preserves the individual's tissue class differences present in the non-contrast CT scan. Although this study has shown parameterizing the complex pattern of chronic injury improves our ability to predict the level of functional independence in patients, our work presents some limitations. First, the optimal model to predict a deterioration in NIHSS score after IV thrombolysis therapy achieved an AUROC of 0.75 (95% CI 0.64-0.84). However, unlike the mRS models, increases in model complexity did not demonstrate a significant improvement in performance. The unfavorable ratio of number of patients to number of modeled features means these models may have struggled to capture the high-dimensional signals in the data and would be expected to perform better with a larger dataset. Second, the lesion segmentation routine has been designed to extract the pattern of chronic injury, there may be details relating to the acute lesion in the imaging, as the zeta map is a whole brain representation of anomaly, thus subtle reductions in CT density from the acute lesion may be conveyed numerically to the SVM algorithm. However, the median delay from symptom onset to CT scan time was <3 h, therefore features of the acute lesion are not expected to be readily visible on a plain CT scan. Although we can presume information relating to the acute lesion to be small, we cannot exclude it entirely. Nevertheless, this result suggests the presence of chronic lesions interacts with the presenting acute lesion, and the pattern of injury confers meaningful information for predicting a patient's future course. Third, this retrospective study uses information routinely gathered during clinical practice over 2 years and collected as part of the SSNAP initiative in the UK. The modified Rankin Scale score is a general summary measure of patients' function, which is subject to moderate variability, particularly in the clinical setting (31,42). 
Although converting the score to a binary outcome may obviate some of this variability, both aspects will impact on our model's potential to accurately classify patients. Fourth, endovascular clot retrieval (thrombectomy) is becoming more prevalent in routine clinical practice, either alone or in combination with thrombolysis. While thrombectomy is the preferred hyperacute intervention, it will only be possible where the thrombus is of sufficient size to be accessed and desirable where thrombolysis is not comparably effective. Currently, the proportion of eligible patients is estimated to be 5-9% (43) and projected to increase to around 22-25% (44). Therefore, the prospect of thrombolysis being wholly superseded by thrombectomy is unlikely, with three quarters of patients with acute ischemic stroke continuing to receive either thrombolysis or no hyperacute treatment. Finally, our study was a single center retrospective observational project, exploring the impact of parameterizing the pattern of chronic injury. Further validation studies are required to assess the generalizability of the models before formal introduction into clinical practice. Our proposed method facilitates the analysis of large datasets that can power the development of high-dimensional models to address this issue with greater individual-level precision. Our method of combining neuroimaging information can be adapted to incorporate specialized sequences, such as intracranial CT angiography or MR imaging. The substantial logistical difficulties of sufficiently rapid MR, especially in patients with abnormal consciousness or potential contraindications, mean that CT will remain the first-line modality of choice in most stroke units for the foreseeable future. Where both CT and MR are performed, the former to guide initial management and the latter for subsequent decision-making, it is possible to combine information from both scans, maximizing the intelligence drawn from the available data. Regardless of the modality, the clinical application of the quantified extent of damage will be to capture additional variability in clinical outcome parameters of any kind for prognostic or prescriptive purposes. Our results further open the possibility of reducing clinical reliance on the patient's recall of his or her history, always a potential problem where dysfunction of any part of the brain may exist. The detailed characteristics of a stroke patient's CT brain scan contain information about the patient's functional status and risk of deterioration. We have demonstrated that by increasing the number of features used to parameterize the spatial patterns of brain injury on CT, we are able to stratify patients, at the time of presentation, by pre-admission and discharge mRS scores, and to estimate which patients are likely to have further deterioration in their NIHSS score following IV thrombolysis treatment. Our image segmentation routine exhibits excellent agreement with manual segmentation, and extracts this information in an automated fashion, thereby placing minimal demands on the operator. Our approach enables processing of routinely collected CT scans for training high-dimensional models that can support clinical and service decision making, especially during a time-critical and challenging period of the patient's admission. DATA AVAILABILITY STATEMENT The datasets generated for this study will not be made publicly available; they are subject to ethical clearance and may be requested from the corresponding author.
ETHICS STATEMENT The studies involving human participants were reviewed and approved by Health Research Authority, Local Research Ethics Committee. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS Y-HM and PN contributed to the conception and design of the study. Y-HM and AM contributed to the acquisition and analysis of the data. All authors contributed to the drafting or critical revision of the manuscript for important intellectual content. FUNDING This work was supported by funding from the Academy of Medical Sciences (Grant No. SGL015\1015). PN is funded by a Wellcome Trust grant (213038/Z/18/Z) and the UCLH NIHR Biomedical Research Centre.
5,954.2
2020-01-24T00:00:00.000
[ "Medicine", "Computer Science" ]
Loop-Shaping Based Control for Switch Mode DC-DC Power Converters Renewable energy sources require switching regulators as an interface to a load with high efficiency, small size, proper output regulation, and fast transient response. Moreover, due to the nonlinear behavior and switching nature of DC-DC power electronic converters, there is a need for high-performance control strategies. This work summarized the dynamic behavior of the three basic switch-mode DC-DC power converters operating in continuous conduction mode, i.e. buck, boost, and buck-boost. A controller was designed using loop-shaping based on current-mode control, which consists of two feedback loops. A high-gain compensator with wide bandwidth was used in the inner current loop for fast transient response. A proportional-integral controller was used in the outer voltage loop for regulation purposes. A procedure was proposed for selecting the controller parameters that ensures closed-loop stability and output voltage regulation. The design-oriented analysis was applied to the three basic switch-mode DC-DC power converters. Experimental results were obtained for a switching regulator with a boost converter of 150 W, which exhibits non-minimum phase behavior. The performance of the controller was tested for voltage regulation by applying large load changes. Introduction Switch-Mode Power Supplies (SMPS) were first developed for aerospace applications in the 60's. The interest in DC-DC switched power converters has increased due to the many DC electrical energy applications from renewable energy sources like fuel-cell stacks, photovoltaic arrays, or wind power [1] [2]. These sources require an interface to condition and regulate their output voltage before being connected to the grid. Many non-isolated topologies have been proposed using the three basic DC-DC converters, namely buck, boost, and buck-boost. In recent years, interesting topologies have been proposed to obtain high step-up or step-down voltage conversions [3]. The dynamical study of these converters is crucial for control purposes [4] [5]. Furthermore, the design of a suitable control strategy for SMPS output voltage regulation represents a major challenge due to the existence of undesirable effects such as large unpredictable loads, parameter variations, nonlinearities, disturbances, and measurement noise. Moreover, the loop stability and performance are affected by the converter parameters; therefore, they should be considered in the analysis and controller design. In this sense, several control methods have been proposed to operate SMPS properly. For instance, linearized models around an operating point have been proposed to describe the dynamic behavior of switching regulators [6] [7]. Regarding the schemes used to control these converters, the most widely used are voltage-mode control and current-mode control. To develop a suitable controller, several techniques based on numerical state-space modeling, sliding-modes, H∞, linear-quadratic-regulator, fuzzy, optimal, and nonlinear control have been proposed in the open literature [8]-[15]. However, the physical implementation of some of the above techniques may increase the complexity, since microcontrollers are required. For this reason, it is more advisable to use linear techniques that yield general expressions for implementing the controllers in a simple way, so that they can be built using low-cost operational amplifiers.
In this paper, a discussion of the steady state and dynamic behavior is given for the three basic converters, i.e. buck, boost and buck-boost. Additionally, a current-mode controller design is given with the corresponding parameter selection criteria to ensure stability and performance for these converters. A physical implementation is shown based on low-cost operational amplifiers. To show the performance of the controller, an experimental 150 W boost prototype was built to validate the results given within, where step changes are applied to the output load. This work is organized as follows. In Section 2, the analysis, description and dynamic behavior of the three basic switch-mode PWM converters are discussed. Section 3 details the design of a current-mode controller with the corresponding parameter selection process. Experimental results validating the controller performance are shown in Section 4, and finally some concluding remarks are addressed in Section 5. The Basic Topologies of Power Converters The demand for high-performance PWM power converters has increased due to the use of DC electric renewable sources. The primary role of power conversion equipment is to facilitate the transfer of power from the source to a specified load by adjusting the voltages and currents from one form to another. This equipment must be energy-efficient and reliable with a high power density, thus reducing its size, weight, and cost. To fulfill the above requirements, it is essential to understand the converter topology's steady state and transient behavior. In this sense, the operation modes widely used are continuous conduction mode (CCM) and discontinuous conduction mode (DCM), associated with high and low power density applications. In CCM, the inductor current should never cross zero during one switching cycle. In DCM, the inductor current ripple is large enough to cause the inductor current to fall to zero. Now, the description and dynamic operation of the three basic power DC-DC converters in CCM are highlighted. Buck Converter The growth of renewable energy source integration has brought new applications for the buck converter topology, e.g. battery chargers in photovoltaic and wind energy systems [16] [17] [18]. The basic topology of this converter is shown in Figure 1(a), where a power MOSFET M is used as an active switch, E is the input voltage source, D1 the diode, L the filter inductor, C the filter capacitor and R the load resistance. The duty cycle D is computed by D = t_ON/T, where t_ON is the interval during which M is conducting and T is the switching period; therefore, D may take values between 0 and 1. Alternatively, a synchronous implementation [19] can be used, as depicted in Figure 1, where the diode D1 is replaced by a power MOSFET M2. In steady state CCM operation, the voltage and current ripples due to the switching are, for the ideal buck, ΔI_L = D(1 − D)E/(L f_S) and ΔV_O = ΔI_L/(8 C f_S). In this work, state-space averaging is used, which is a modeling technique widely used to approximate the behavior of switching converters [7] [20] [21]. This technique requires that the LC output filter corner frequency f_C be smaller than the switching frequency, such that f_C < f_S. For the buck converter, the average dynamical model is represented by L di_L/dt = dE − v_O and C dv_O/dt = i_L − v_O/R, where the state variables are i_L and v_O, and the control input is the duty cycle d. Assuming that each state variable and the control signal are the sum of DC and AC components, they can be decomposed as i_L = I_L + ĩ_L, v_O = V_O + ṽ_O and d = D + d̃, where the tilde (~) stands for the AC variables. Note that the AC terms are equal to zero in steady state.
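To make the averaged buck model above concrete, the short sketch below integrates it numerically and checks that the state settles at the expected DC operating point; the component values are illustrative and are not taken from the paper.

```python
# Minimal sketch (illustrative values, not the paper's design): integrate the averaged
# buck model L di_L/dt = d*E - v_O and C dv_O/dt = i_L - v_O/R and confirm that the
# state converges to the DC operating point.
from scipy.integrate import solve_ivp

E, D, R = 12.0, 0.5, 10.0            # input voltage [V], duty cycle, load [ohm]
L, C = 100e-6, 100e-6                # filter inductor [H] and capacitor [F]

def averaged_buck(t, x, d=D):
    iL, vO = x                       # state: inductor current and output voltage
    return [(d * E - vO) / L, (iL - vO / R) / C]

sol = solve_ivp(averaged_buck, (0.0, 20e-3), [0.0, 0.0], max_step=1e-5)
print(f"settled state: iL = {sol.y[0, -1]:.3f} A, vO = {sol.y[1, -1]:.3f} V "
      f"(expected {D * E / R:.3f} A and {D * E:.1f} V)")
```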
The steady state values for the average output voltage and average inductor current are V_O = DE and I_L = V_O/R. Consequently, taking into account the AC terms, the linear average small-signal model for the buck converter is obtained. The model shown in (5) describes the behavior for frequencies up to f_S/2; thus, the corresponding transfer functions result as ṽ_O(s)/d̃(s) = E/(LCs² + (L/R)s + 1) and ĩ_L(s)/d̃(s) = E(Cs + 1/R)/(LCs² + (L/R)s + 1), where both transfer functions have a minimum phase behavior, i.e. there are no right-half plane (RHP) zeros; therefore, control design is easy to carry out. Boost Converter The step-up power converter, commonly known as a boost converter, is shown in Figure 2. It has an input power source E, connected in series with a filter inductor L, an active switch M, a diode D1, an output capacitor C and the load R. The main characteristic of this converter is that, in steady state, the average output voltage V_O is greater than the input E; hence the name boost. Due to its nature, this type of converter is used in applications where the source voltage needs to be stepped up to higher levels, e.g. the front-end stage of photovoltaic systems. If a higher power is required, an interleaved converter with two paths can be used. In steady state CCM operation, the voltage and current ripples for the boost converter due to the switching action are ΔI_L = DE/(L f_S) and ΔV_O = DV_O/(R C f_S); additionally, to ensure CCM, the inductor value must satisfy L > D(1 − D)²R/(2 f_S). Notice that the inductor current is equal to the source current. In contrast to the buck converter, this topology needs a larger filter capacitor C to limit the output voltage ripple. In the boost converter, it is possible to average the dynamical behavior by neglecting the ripple phenomena. Thus, applying Kirchhoff's laws when M is ON/OFF, the average continuous nonlinear model is obtained as L di_L/dt = E − (1 − d)v_O and C dv_O/dt = (1 − d)i_L − v_O/R. The nonlinear differential equations in (8) are bilinear, since the control input d multiplies the state variables. Consequently, taking into account the AC terms defined previously in (3), the linear average small-signal model for the boost converter is obtained, where the bilinearity has been eliminated and the resulting matrix has only constant values. The resulting transfer functions of (10) show that the transfer function from the control signal to the output voltage exhibits a non-minimum phase behavior since it has an RHP zero. Buck-Boost Converter This converter is depicted in Figure 3. In the buck-boost converter, the diode D1 and inductor L have been interchanged compared to the buck converter given in Figure 1(a). The main feature here is that the average output voltage V_O is negative. Its magnitude can be either greater than, equal to (when D = 0.5), or smaller than the input voltage; hence the name buck-boost has been coined. Thus, the output voltage and inductor ripple take the same form as stated in (2). In steady state CCM operation, the voltage and current ripples for the buck-boost converter due to the switching action can be computed analogously; additionally, to ensure CCM, the inductor value must be selected above the corresponding minimum. It is evident that similarities exist between the boost and buck-boost converters, where the only remarkable difference is in the current through the capacitor. Considering the average approach, a nonlinear dynamical model for the buck-boost converter is obtained in (14). In fact, model (14) is bilinear, since the control input d multiplies both state variables.
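The non-minimum phase behavior noted for the boost converter can be illustrated with the textbook small-signal control-to-output transfer function of an ideal CCM boost; the sketch below is not the paper's equation reproduced verbatim, and the component values are illustrative.

```python
# Sketch of the RHP zero of an ideal CCM boost converter (illustrative values), using
# the textbook control-to-output transfer function
#   Gvd(s) = (E/Dp^2) * (1 - s*L/(Dp^2*R)) / (1 + s*L/(Dp^2*R) + s^2*L*C/Dp^2),
# where Dp = 1 - D.
import numpy as np
from scipy import signal

E, D, R, L, C = 12.0, 0.5, 10.0, 100e-6, 100e-6
Dp = 1.0 - D
num = [-E / Dp**2 * L / (Dp**2 * R), E / Dp**2]   # numerator carries the RHP zero
den = [L * C / Dp**2, L / (Dp**2 * R), 1.0]
Gvd = signal.TransferFunction(num, den)

w_rhp = Dp**2 * R / L                             # RHP zero location [rad/s]
w, mag, phase = signal.bode(Gvd, w=np.logspace(2, 6, 400))
print(f"DC gain = {mag[0]:.1f} dB, RHP zero at {w_rhp / (2 * np.pi):.0f} Hz, "
      f"phase at 10 kHz ~ {np.interp(2 * np.pi * 1e4, w, phase):.0f} deg")
```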
Again, using the decomposition of DC and AC components as in (3), the DC relationships for the average output voltage and inductor current are obtained, with V_O = −DE/(1 − D). On the other hand, the small-signal linear model for the AC component is given in (16). Note that this dynamical model is no longer bilinear, because the matrix has only constant parameters. For feedback design purposes, the aim is to get the frequency domain representation of (16); thus, the transfer functions from the control signal to each state variable are obtained. Similarly to the output voltage-to-control signal transfer function of the boost converter, there is an RHP zero; thus, the control task becomes difficult. Loop Shaping for Current-Mode Control Two of the most widely used control techniques are voltage-mode control and current-mode control. The voltage-mode control (VMC) scheme uses just one feedback loop and usually includes two elements: a voltage error amplifier and a voltage comparator [20]. Transfer functions like the loop gain, closed-loop input impedance, and closed-loop audio susceptibility can be derived from the small-signal model. This control scheme is easy to design and implement due to its single control loop, but it has two major drawbacks: 1) the PI controller used for output voltage regulation introduces a slow dynamic response, and 2) stability is difficult to achieve for a non-minimum phase transfer function. Current-mode control (CMC) contains two nested feedback loops [22]. The outer loop measures the output voltage, and the inner loop measures the current flowing through the inductor. This scheme has several advantages over voltage-mode control. The first one is that the active switch (MOSFET, IGBT, or bipolar transistor) is turned OFF when the inductor current reaches a threshold level, and consequently, there is no current overload through the converter. The second advantage is that several switching converters can operate in parallel without a load-sharing problem. All the switching converters are provided with the same PWM signal from the control circuitry. Finally, it is well known that the inductor current's feedback action greatly improves the dynamic performance of the overall closed loop. In current-mode control, the inductor current acts as a current source. The output LC filter acts as a voltage-controlled current source that supplies current to the output capacitor and the load; thus, the order of the system is reduced by one and the feedback compensation is considerably simplified. There are two basic schemes reported in the open literature for current-mode control; one is referred to as peak current-mode control (PCMC) [23] and the other as average current-mode control (ACMC) [24]. In PCMC, the inductor or active switch current is directly fed back without a low-pass filter; then, a wide bandwidth current loop is obtained. A major drawback of PCMC is its instability, since oscillations may occur when the duty cycle exceeds 50%. However, this instability can be eliminated by the addition of an artificial ramp. On the other hand, ACMC is an effective control method that improves current regulation accuracy and provides a fast dynamic response. In this technique, the inductor current is measured and averaged by a compensation network to obtain its DC component.
ACMC has the following advantages over PCMC: 1) there is no need for an external compensation ramp, 2) it has a high gain at DC and low frequencies, 3) noise immunity of the measured current signal, and 4) over-current protection for each cycle of the PWM. It has been shown that a modified version of ACMC can also be applied to high-gain converters, i.e. quadratic and cascade converters [25] [26] [27]. Current-mode control has been widely adopted as a useful technique for easing the design and improving the dynamic performance of regulators with switch-mode converters. Early references have discussed the basic principles and advantages of this technique [22]. A methodology is now given to properly select the controller gains for the boost converter. When the inductor current is used for output voltage regulation, a faster response is obtained when step changes are applied to the load. Sensing the inductor current can also be used for preventing overload current through the converter. To derive the controller expressions, a proposed configuration for this technique is shown in Figure 4. Notice that this scheme applies in general to the buck, boost, and buck-boost converters. As can be seen, this scheme has current and voltage loops. For the current loop, N is the current sensor gain, G(s) a high-gain compensator, F(s) a low-pass filter, and finally V_P the peak magnitude of the ramp used to generate the control pulses. For the voltage loop, H stands for the voltage sensor gain, V_R the desired output voltage, and K(s) the transfer function of the PI controller. The overall controller design procedure for this scheme is a twofold problem: 1) shaping the gain of the current loop L_1(s), i.e. the product of the transfer functions around the inner loop, and 2) shaping the gain of the outer voltage loop. In both cases the following requirements have to be satisfied: 1) for relative stability, the slope at or near the cross-over frequency must be no more than −20 dB/dec; 2) to improve steady-state accuracy, the gain at low frequencies should be high; and 3) for robust stability, appropriate gain and phase margins are required [28]. In the following, an easy-to-use procedure is given to ensure good loop gain characteristics of the closed loops. The poles and zeros for the current-mode controller are set mainly from the converter's operating switching frequency. Current Loop Control As can be seen, the transfer function from the control signal to the inductor current has two complex poles and a left-half plane zero. When damping is added through the gain N, the behavior looks like a gain and a single pole. Then, the high-gain compensator and low-pass filter are added, with transfer functions G(s) and F(s) respectively, where G_P is the compensator gain, ω_Z stands for the location of the compensator zero and ω_P for the location of the filter pole. Notice that both transfer functions can be implemented using a single operational amplifier, as shown in Figure 5. Then, the corresponding control law for d̃ is formed from G(s), F(s), the sensed inductor current Ni_L and the ramp amplitude V_P. The design procedure is given now. The high-gain compensator zero ω_Z should be placed at least a decade below half of the PWM switching frequency. Here C_b is the capacitor of the current loop circuit. The compensator gain G_P is set by the resistance values of the circuit, which must be carefully selected so that the current-loop gain meets the requirements above. Voltage Loop Control Once the current loop has been tuned, the voltage loop gain has to be designed. The outer loop is designed to provide a suitable steady-state correction of the output voltage using a PI controller.
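A small sketch of the placement rule just stated is given below. The explicit forms assumed for G(s) and F(s) are a common single-zero compensator and first-order low-pass filter, not necessarily the paper's exact circuit, and the component values (including the filter-pole location and the R_b/C_b realization of the zero) are hypothetical.

```python
# Sketch of the current-loop placement rule above, assuming (as a common choice, not
# necessarily the paper's circuit) G(s) = GP*(1 + wZ/s) and F(s) = 1/(1 + s/wP).
# All numeric values are illustrative.
import numpy as np

fS = 75e3                             # switching frequency [Hz]
wZ = 2 * np.pi * (fS / 2) / 10        # compensator zero: a decade below fS/2
wP = 2 * np.pi * (fS / 2)             # filter pole: assumed near fS/2 (assumption)

# One hypothetical way to realize the zero with the current-loop RC network:
Cb = 10e-9                            # current-loop capacitor [F]
Rb = 1.0 / (wZ * Cb)                  # resistor placing the zero at wZ = 1/(Rb*Cb)
print(f"wZ at {wZ / (2 * np.pi):.0f} Hz, wP at {wP / (2 * np.pi):.0f} Hz, "
      f"Rb ~ {Rb / 1e3:.1f} kOhm for Cb = {Cb * 1e9:.0f} nF")
```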
The transfer function for the PI controller is given by K(s) = K_p(1 + 1/(T_i s)), where K_p is the proportional gain and T_i the integral time. The resulting reference current for this loop is given in (24). The design procedure is as follows. The proportional gain is selected to satisfy the corresponding design bound. Finally, the integral time is computed by T_i = R_C C_C, where R_C and C_C are the resistance and capacitance values of the PI controller, which must be selected such that 1/T_i is placed at least one decade below f_S. In this sequel, for the sake of a clear and straightforward exposition, the attention has been focused on a boost converter, since its output voltage-to-control transfer function (14) has a non-minimum phase behavior; thus, a controller is more difficult to design. However, without loss of generality, the proposed control scheme can be extended to both buck and buck-boost converters. The resulting expressions are shown in Table 1. As the buck converter has a minimum phase behavior, voltage-mode control can be used, where the output voltage is the only feedback variable. Experimental Results A boost converter with the corresponding current-mode controller is shown in Figure 5, where the procedure outlined in Section 3 has been used. The converter parameters are shown in Table 2. The inductor current is measured by an LA50-P transducer from LEM with a current gain of N = 0.07. An input source of E = 12 V is provided and a nominal duty cycle of D = 0.5 is selected, which, according to the boost steady-state relation, yields a nominal output voltage of 24 V. Finally, the switching frequency is selected to be f_S = 75 kHz, the peak magnitude of the ramp is V_P = 5 V, and the voltage sensor gain is H = 0.033. The remaining design values follow from (7). Open- and closed-loop experimental tests were performed considering nominal and step changes in the load resistance through the switch M_1. These variations range from 3.8 Ω to 38.5 Ω, that is, from full load to 10% of load, at a frequency of 10 Hz, which evidently modifies the load current profile. Open Loop Test Frequency responses of the theoretical transfer functions and the corresponding experimental measurements from the prototype are shown in Figure 6 and the following figures. Closed-Loop Test The experimental frequency responses in closed loop for the current and voltage loop gains are shown in Figure 10 and Figure 11 at the nominal load. In the first case, the current loop gain has a −20 dB/dec slope and a 70 degree phase margin when the magnitude reaches 0 dB; therefore, internal stability is ensured. The PI controller dominates the experimental frequency response of the voltage loop gain. At the 0 dB crossing, the slope is −20 dB/dec with an 88 degree phase margin, which ensures robust stability. Furthermore, both loop gain responses show high gains at low frequencies and a 4 kHz bandwidth. The output voltage regulation is shown in Figure 12, when load changes are applied at a frequency of 10 Hz. It is noticeable that the output voltage remains constant despite changes from full to 10% load. The resulting control signal v_CON to be compared with the ramp signal is shown in Figure 13, where it is clear that a change in the load modifies the duty cycle. Conclusion This paper deals with a practical methodology for output voltage regulation for the three basic switch-mode converters. The scheme feeds back the inductor current to implement an inner control loop using a high-gain compensator and a low-pass filter. The sensed current can also be used for over-current load protection.
Afterward, the voltage loop is designed by implementing a PI controller for steady-state error regulation. Furthermore, a well-defined and straightforward procedure for the selection of the controller parameters is given; the simplicity of the approach is of significant value, since the analytic results can be used to make design choices and tradeoffs between the inner and outer loops. This methodology was explicitly implemented in a current-mode controller for a boost converter, but it can be easily extended to the buck and buck-boost converters. This procedure can also be extended to other kinds of converters, like the quadratic or cascaded buck or boost converters widely used to step up or step down voltages from renewable energy sources.
4,827.2
2020-10-16T00:00:00.000
[ "Engineering" ]
Local Motion and Contrast Priors Driven Deep Network for Infrared Small Target Super-Resolution Infrared small target super-resolution (SR) aims to recover reliable and detailed high-resolution image with high-contrast targets from its low-resolution counterparts. Since the infrared small target lacks color and fine structure information, it is significant to exploit the supplementary information among sequence images to enhance the target. In this paper, we propose the first infrared small target SR method named local motion and contrast prior driven deep network (MoCoPnet) to integrate the domain knowledge of infrared small target into deep network, which can mitigate the intrinsic feature scarcity of infrared small targets. Specifically, motivated by the local motion prior in the spatio-temporal dimension, we propose a local spatio-temporal attention module to perform implicit frame alignment and incorporate the local spatio-temporal information to enhance the local features (especially for small targets). Motivated by the local contrast prior in the spatial dimension, we propose a central difference residual group to incorporate the central difference convolution into the feature extraction backbone, which can achieve center-oriented gradient-aware feature extraction to further improve the target contrast. Extensive experiments have demonstrated that our method can recover accurate spatial dependency and improve the target contrast. Comparative results show that MoCoPnet can outperform the state-of-the-art video SR and single image SR methods in terms of both SR performance and target enhancement. Based on the SR results, we further investigate the influence of SR on infrared small target detection and the experimental results demonstrate that MoCoPnet promotes the detection performance. The code is available at https://github.com/XinyiYing/MoCoPnet. I. INTRODUCTION I NFRARED imaging system is all-weather in day and night and has high penetrability, sensitivity and concealment. Infrared imaging system is widely used in security monitoring, remote sensing investigation, aerospace offense-defense and other military mission. Recently, low-resolution (LR) infrared images cannot meet the high requirements of practical military mission. Therefore, it is necessary to improve the resolution of infrared images. A straightforward way to obtain high-resolution (HR) infrared images is to increase the size of infrared sensor arrays. However, due to the technical limitations of sensors and the high cost of large infrared sensor arrays, it is necessary and important to develop practical, lowcost and highly reliable infrared image super-resolution (SR) algorithms. Note that, modern autonomous driving technology requires the infrared imaging system to detect the target in a fairly long distance. Therefore, the target only occupies a very small proportion of the whole image, and is susceptible to noise and clutters. In this paper, we mainly focus on infrared small target SR task and investigate its influence on infrared small target detection. The special imaging mechanism and military application of infrared imaging system put forward the following requirements for infrared small target SR: 1) High fidelity of superresolved images. Noise and false contours should be avoided as much as possible. 2) High contrast of super-resolved targets. The target contrast in the super-resolved images should be strengthened to boost the subsequent tasks. 3) High robustness to complex scenes and noise. 
Small objects are sometimes submerged in clutter and thus of low local contrast to the background. SR algorithms should be robust to various complex scenes and imaging noise. 4) High generalization to insufficient datasets. The lack of infrared image datasets requires that SR algorithms should achieve stable results with a relative small dataset. The motivations of our method come from data analysis, and can be summarized as: 1) The target occupies a small proportion of the whole infrared image (generally less than 0.12% [1]) and lacks color and fine structure information (e.g., contour, shape and texture). Few information is available for SR within a single image. Therefore, we perform SR on image sequences to use the supplementary information among the temporal dimension to improve the SR performance and the target contrast. 2) Due to the long distance between the target and the imaging system, the mobility of the targets on the imaging plane is limited, leading to small motion of the target between neighborhood frames (i.e., local motion prior [2], [3] in spatio-temporal dimension). Therefore, we design a local spatio-temporal attention (LSTA) module to perform implicit frame alignment and exploit the supplementary information in the local spatio-temporal neighborhood to enhance the local features (especially for small targets). 3) Compared with the background clutter, the contrast and gradient between the target and the background in the local neighborhood are high in all directions (i.e., local contrast prior [4], [5] in spatial dimension). Therefore, we design a center difference residual group (CD-RG) to achieve center-oriented gradient-aware feature extraction, which can encode the local contrast prior to further improve the target contrast. Based on the above observations, we propose a local mo-tion and contrast prior driven deep network (MoCoPnet) for infrared small target SR. The main contributions can be summarized as follows: 1) We propose the first infrared small target SR method named local motion and contrast prior driven deep network (MoCoPnet) and summarize the definition and requirements of this task. The proposed modules (i.e., central difference residual group and local spatio-temporal attention module) of MoCoPnet integrate the domain knowledge (i.e., local contrast prior and local motion prior) of infrared small targets into deep networks, which can mitigate the intrinsic feature scarcity of data-driven approaches [5]. 2) The experimental results demonstrate that MoCoPnet can achieve state-of-the-art SR performance and effectively improve the target contrast. 3) Based on the SR results, we further investigate the influence of SR on infrared small target detection. The experimental results show that MoCoPnet can promote the detection performance to achieve high signal-to-noise ratio gain (SNRG), signal-to-clutter ratio gain (SCRG), contrast gain (CG) scores and improved receiver operating characteristic curve (ROC) results. II. RELATED WORK A. Single Image SR Image SR is an inherently ill-posed optimization problem and has been investigated for decades. In literature, researchers have proposed a variety of classic single image SR (SISR) methods, including prediction-based methods [6], [7], edgebased methods [8], [9], statistics-based methods [10], [11], patch-based methods [9], [12] and sparse representation methods [13], [14]. 
However, most of the aforementioned traditional methods use handicraft features to reconstruct HR images, which cannot formulate the complex SR process and thus limits the SR performance. Recently, due to the powerful feature representation capability, convolutional neural networks (CNNs) have been widely used in single image SR task and achieve the state-of-the-art performance [15], [16]. Dong et al. [17] proposed the pioneering CNN-based work SRCNN to recover an HR image from its LR counterpart. Kim et al. [18] deepened the network to 20 convolutional layers (i.e., VDSR) and achieved improved SR performance by increasing model complexity. Moreover, various increasingly deep and complex architectures (e.g., residual networks [19], recursive networks [20]- [23], densely connected networks [24]- [26], attentionbased networks [15], [27]) have also been applied to SISR for performance improvement. Other than tackling image average distortion by norm loss, generative adversarial image SR networks [28], [29] employed the perceptual loss for perceptual quality improvement. B. Video SR Existing video SR methods commonly follow a three-step pipeline, including feature extraction, motion compensation and reconstruction [30]. Traditional video SR methods [31], [32] employ handcrafted models to estimate motion, noise and blur kernel and reconstruct HR video sequences. Recent deep learning-based video SR methods are better in exploiting spatio-temporal information by its powerful feature representation capability and can achieve the state-of-the-art performance. Liao et al. [33] proposed the pioneering CNNbased video SR method to perform motion compensation by optical flow and then ensembled the compensated drafts via CNN. Afterwards, A series of optical flow-based video SR algorithms [34], [35] emerged to explicitly perform motion estimation and frame alignment, resulting in vague and duplication [36]. To avoid the aforementioned problem, deformable convolution [37], [38] has been employed to perform motion compensation explicitly in a unified step [39], [40] through extra offsets. Apart from these explicit motion compensation methods, implicit approaches (e.g., 3D convolution networks [41], [42], recursive networks [43], [44], non-local networks [40], [45]) have also been applied to video SR for performance improvement. C. Infrared Image SR With the increased demands of high-resolution infrared images, some researchers perform image SR on infrared images. Traditional methods [46] consider SR as sparse signal reconstruction in compressive sensing. Based on the previous studies, Zhang et al. [47] combined compressive sensing and deep learning to achieve improved SR performance with low computational cost. Han et al. [48] proposed to employ CNNs to recover high-frequency components with upscaled LR images to generate the SR results. He et al. [49] proposed a cascaded deep network with multiple receptive fields for large scale factor (×8) infrared image SR. Liu et al. [50] proposed to use generative adversarial network and perceptual loss to reconstruct the texture details of infrared images. Chen et al. [51] employed an iterative error reconstruction mechanism to perform SR in a coarse-to-fine manner. Huang et al. [52] proposed a progressive super-resolution generative adversarial network and employed the multistage transfer learning strategy to improve the SR performance from small samples. Prajapati et al. 
[53] proposed channel splitting-based convolutional neural network to eliminate the redundant features for efficient inference. Yang et al. [54] proposed a visible-assisted training strategy to promote details preservation. D. Attention Mechanism Since the importance of each spatial location and channel is not uniform, Hu et al. [55] proposed SeNet for classification, which consists of selection units to control the switch of passed data. Zhang et al. [15] proposed a channel attention mechanism to calculate the importance along the channel dimension for channel selection. Anwar et al. [56] proposed feature attention to urge the network to pay more attention to the high frequency region. Dai et al. [27] proposed second-order attention to adaptively readjust features for powerful feature correlation learning. Wang et al. [57] explored the sparsity in SR task and proposed sparse masks for efficient inference. The spatial mask and channel mask calculate the importance along both the spatial dimension and the channel dimension to prune the redundant computations. The aforementioned studies only consider the global importance on spatial and channel dimension. Since small targets only occupy a small portion in the whole image and have high contrast with the local neighborhood, we design a local attention mechanism which can better characterize the small targets. E. Sequence Image Infrared Small Target Detection Sequence image infrared small target detection is significant for long-range precision strikes, aerospace offensive-defensive countermeasures and remote sensing intelligence reconnaissance. According to whether the sequential information is used, sequence image infrared small target detection methods can be divided into two categories: detect before track (DBT) methods and track before detect (TBD) methods. Based on the results of single image infrared small target detection [5], [58]- [61], DBT methods employed the motion trajectory of targets through sequence image projection to eliminate the false targets and reduce the false alarm rate. DBT methods have low computational cost and are easy to implement. However, the performance drops rapidly with low SNR. TBD methods [62]- [64] commonly follow a three-step pipeline, including background suppression, region of interest extraction and target detection. TBD methods are robust to images with low SNR but have high computational cost, which cannot meet the requirements of real-time detection. It is challenging to achieve high detection rate and low false alarm rate in real-time due to the lack of target information, the complex background noise, the insufficient public datasets and the explosion of data amount and the computational cost. Therefore, it is necessary to recover reliable image details and enhance the contrast between target and background for detection. III. METHODOLOGY In this section, we introduce our method in details. Specifically, Section III-A introduces the overall framework of our network. Section III-B-III-C introduce the two modules which integrate local contrast prior and local motion prior of infrared small target into deep networks. A. Overall Framework The overall framework of our MoCoPnet is shown in Fig. 1. Specifically, an image sequence with 5 frames LR t+i (i = [−2, 2]) is first sent to a convolutional layer to generate the initial features F t+i 0 (i = [−2, 2]), which are then sent to the central difference residual group (CD-RG) to achieve centeroriented gradient-aware feature extraction. 
Then, each neighborhood feature F_{CD-RG}^{t+i} (i = -2, -1, 1, 2) is paired with the reference feature F_{CD-RG}^t and sent to two local spatio-temporal attention (LSTA) modules to achieve motion compensation and enhance the local features. Next, the reference feature F_{CD-RG}^t is concatenated with the two compensated neighborhood features F_{LSTA2}^{t+k}, F_{LSTA2}^{t-k} (k = 1, 2) and then sent to a residual group (RG) and a convolutional layer for coarse fusion. Afterwards, the two fused features are concatenated and sent to an RG and a convolutional layer for fine fusion. Then, the fused feature is processed by an RG, a sub-pixel layer and a convolutional layer for SR reconstruction and upsampling. Finally, the SR reference frame is obtained by adding the bicubically upsampled LR reference frame to accelerate the training convergence. Note that the number of input frames is set to 7 in this paper; Fig. 1(a) illustrates the same process with 5 frames. We use the mean square error (MSE) between the SR reference frame and the groundtruth reference frame as the loss function of our network.

B. Central Difference Residual Group

The central difference residual group (CD-RG) incorporates central difference convolution (CD-Conv [65], [66]) into the residual group (RG [15], [26]) to achieve center-oriented gradient-aware feature extraction, which can utilize the spatial local salient prior to strengthen the contrast of small targets. Note that we employ RG as the backbone of our MoCoPnet for the following reasons: RG can generate features with a large receptive field and dense sampling rate, which promotes information exploitation, and the reuse of hierarchical features not only improves the SR performance [67] but also maintains the information of small targets [1], [61], [68].

The architecture of the central difference residual group (CD-RG) is shown in Fig. 1(b). The input feature F_0^{t+i} is first fed to D central difference residual dense blocks [69] (CD-RDBs) to extract hierarchical features. Then, the hierarchical features are concatenated and fed to a 1×1 convolutional layer to generate the output feature F_{CD-RG}^{t+i}. As shown in Fig. 1(b1), 1 CD-Conv and K-1 Convs with a growth rate of G are used within each CD-RDB to achieve dense feature representation. The architecture of CD-Conv is shown in Fig. 1(b2). CD-Conv aggregates the center-oriented gradient information, which echoes the spatial local saliency prior of infrared small targets. As shown in Fig. 2, different from the handcrafted dilated local contrast measure (DLCM [5]), which can only preserve the contrast information in one direction, CD-Conv is a learnable measure and can improve the contrast of small targets while maintaining the background information. In conclusion, CD-Conv is more in line with the task of infrared small target SR (i.e., recovering reliable and detailed high-resolution images with high-contrast targets). DLCM and CD-Conv can be formulated as f(x, y) and g(x, y), where S_{x,y} represents the value of a specific location (x, y) in the feature map, (i, j) is the direction index, ω_{i,j} is a learnable weight that continuously optimizes the local contrast measure, and θ ∈ [0, 1] is a hyperparameter that balances the contribution between gradient-level detailed information and intensity-level semantic information. Note that θ is set to 0.7 [65] in our paper.
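To make the center-oriented gradient aggregation of CD-Conv concrete, the following is a minimal PyTorch sketch of a central difference convolution in the spirit of [65]: the vanilla convolution carries intensity-level information, the central-difference term carries gradient-level information, and θ balances the two. The class and parameter names are ours and the layer sizes are illustrative rather than the exact MoCoPnet configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CDConv2d(nn.Module):
    # Central difference convolution (sketch): theta * gradient-level term
    # plus (1 - theta) * intensity-level (vanilla) term.
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out_vanilla = self.conv(x)  # sum_n w(p_n) * S(p_0 + p_n)
        # Central-difference term: sum_n w(p_n) * (S(p_0 + p_n) - S(p_0))
        #                        = out_vanilla - S(p_0) * sum_n w(p_n)
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)  # 1x1 kernel
        out_center = F.conv2d(x, kernel_sum)
        return self.theta * (out_vanilla - out_center) + (1 - self.theta) * out_vanilla

With θ = 0 the layer reduces to a vanilla convolution, and with θ = 1 it responds only to local intensity differences around the center pixel, which is why intermediate values such as 0.7 trade detail recovery against target contrast enhancement.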
C. Local Spatio-Temporal Attention Module

The local spatio-temporal attention (LSTA) module calculates the local response between the neighborhood frame and the reference frame and uses the local spatio-temporal information to enhance the local features of the reference frame. The inputs of LSTA are the reference frame and one neighborhood frame. For a sequence with 7 frames, the operation needs to be repeated 6 times.

The architecture of LSTA is shown in Fig. 1(c). The reference feature F_{CD-RG}^t ∈ R^{H×W×C} (shown in red) and the neighborhood feature F_{CD-RG}^{t-1} ∈ R^{H×W×C} (shown in blue) are first fed to 1×1 convolutional layers conv_q and conv_k for channel dimension compression to generate F_0, F_1 ∈ R^{H×W×C/cr}, where cr is the compression ratio and is set to 8 in our paper; H_{conv_q} and H_{conv_k} denote these 1×1 convolutions. Then, we calculate the response between each location p_0 in F_0 and the corresponding neighborhood (centered at p_0) in F_1. Afterwards, the responses are summed along the channel dimension and passed through a softmax to generate the attention map M, where p_n represents the n-th value of the local neighborhood centered at p_0 with kernel size kern and dilation rate dila. The purple 3×3 grid in Fig. 1(c) is the local attention feature map with parameters (kern=3, dila=1). Note that, as shown in Figs. 3(c) and (d), dila can be an integer larger than 1 to enlarge the receptive field without additional computational cost. As shown in Figs. 3(e) and (f), dila can also be fractional to capture the sub-pixel motion between frames, and we employ bilinear interpolation to generate the exact corresponding values. Finally, a dot product is performed between the local neighborhood feature F_{CD-RG}^{t-1}(p_n) centered at p_0 and the corresponding attention map M(p_n) to generate the value of location p_0 in the output feature F_{LSTA}^{t-1}(p_0).

LSTA first calculates the response between the reference frame and its adjacent frames to generate the attention map, and then calculates a weighted summation of these frames using the generated attention maps. In this way, the neighborhood frames can be implicitly aligned and the complementary temporal information can be incorporated to enhance the features of small targets.
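As a concrete illustration of the LSTA computation described above (channel compression, local response, softmax attention and weighted aggregation), the following is a simplified PyTorch sketch. It assumes an integer dilation rate (the fractional, sub-pixel variant with bilinear interpolation is omitted), and the module and variable names are ours rather than the exact MoCoPnet implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTA(nn.Module):
    # Local spatio-temporal attention (sketch): aligns one neighborhood frame
    # to the reference frame with a local (kern x kern, dilated) attention.
    def __init__(self, channels, cr=8, kern=3, dila=1):
        super().__init__()
        self.conv_q = nn.Conv2d(channels, channels // cr, 1)  # compresses the reference feature
        self.conv_k = nn.Conv2d(channels, channels // cr, 1)  # compresses the neighborhood feature
        self.kern, self.dila = kern, dila
        self.pad = dila * (kern - 1) // 2

    def forward(self, f_ref, f_nbr):
        b, c, h, w = f_ref.shape
        q = self.conv_q(f_ref)                               # B x C' x H x W
        k = self.conv_k(f_nbr)                               # B x C' x H x W
        # Gather the kern*kern (dilated) neighborhood of every location p_0.
        k_unf = F.unfold(k, self.kern, dilation=self.dila, padding=self.pad)
        k_unf = k_unf.view(b, -1, self.kern ** 2, h * w)     # B x C' x K^2 x HW
        # Response between q(p_0) and each neighbor, summed over channels.
        resp = (q.view(b, -1, 1, h * w) * k_unf).sum(dim=1)  # B x K^2 x HW
        attn = torch.softmax(resp, dim=1)                    # attention map M
        # Weighted sum of the original neighborhood features F_nbr(p_n).
        v_unf = F.unfold(f_nbr, self.kern, dilation=self.dila, padding=self.pad)
        v_unf = v_unf.view(b, c, self.kern ** 2, h * w)
        out = (v_unf * attn.unsqueeze(1)).sum(dim=2)         # B x C x HW
        return out.view(b, c, h, w)

Because the attention is restricted to a small local window around each position, the cost grows only with kern^2 rather than with the full spatial resolution, which matches the module's motivation of exploiting the local motion prior of small targets.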
IV. EXPERIMENTS

In this section, we first introduce the experiment settings and then conduct ablation studies to validate our method. Next, we compare our network to several state-of-the-art SISR and video SR methods. Finally, we investigate the influence of SR on infrared small target detection.

A. Experiment Settings

In this subsection, we sequentially introduce the datasets, the evaluation metrics, the network parameters and the training details.

1) Datasets: The Anti-UAV dataset [72] releases 250 high-quality infrared video sequences with multi-scale UAV targets. In this paper, we employ the 1st-50th sequences with target annotations of SAITD as the test datasets and the remaining 300 sequences as the training datasets. In addition, we employ Hui and Anti-UAV as test datasets to evaluate the robustness of our MoCoPnet in real scenes. In the Anti-UAV dataset, only the sequences with infrared small targets [1] (21 sequences in total) are selected as the test set. Note that we only use the first 100 images of each sequence for testing to balance computational/time cost and generalization performance.

2) Evaluation Metrics: We employ peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) to evaluate the SR performance. In addition, we introduce signal-to-noise ratio (SNR) and contrast ratio (CR) in the local background neighborhood [58] of targets to evaluate the performance of recovering small targets. As shown in Fig. 4(a), the size of the target area is a × b, and the local background neighborhood is extended from the target area by d in both width and height. Note that the parameters of the local background neighborhood (a, b, d) in HR images are set to (7, 7, 30), (11, 11, 50) and (21, 21, 100) in SAITD, Hui and Anti-UAV, respectively. When 4× SR is performed on HR images, the parameters (a, b, d) are set to (29, 29, 120), (45, 45, 200) and (85, 85, 400). When 4× downsampling is performed on HR images, the parameters are set to (3, 3, 10), (3, 3, 10) and (5, 5, 20).

To further evaluate the impact of SR algorithms on infrared small target detection, we adopt SNR gain (SNRG), background suppression factor (BSF), signal-to-clutter ratio gain (SCRG), contrast gain (CG) and the receiver operating characteristic curve (ROC) for comprehensive evaluation. Note that the common detection evaluation metrics calculate the ratio of the statistics in the local background neighborhood before and after detection. Since we first super-resolve the LR image and then perform detection, the inputs of the detection algorithms, which are the outputs of different SR algorithms, are different. Therefore, directly using the common detection evaluation metrics cannot accurately evaluate the impact of SR on detection. To eliminate the influence of different inputs, we modify the first four metrics to calculate the ratio of the statistics in the local background neighborhood between the LR image before SR and the HR target image after detection. The modified evaluation metrics are illustrated in Fig. 4(b). We then introduce the aforementioned evaluation metrics in detail.

SNRG is used to measure the SNR improvement of detection algorithms, where [·]_in and [·]_out represent the metrics in the local background neighborhood of the LR images and the HR target images, respectively, and P_t and P_b are the maximum values of the target area and the background area, respectively. BSF is used to measure the background suppression effect, where σ_b is the standard deviation of the background area. SCRG is used to measure the SCR improvement of detection algorithms, where µ_t and µ_b are the mean values of the target area and the background area, respectively. CG is used to measure the improvement of contrast between targets and background. Note that, in order to avoid values of "Inf" (i.e., the denominator is zero) and "NaN" (i.e., the numerator and denominator are both zero), we add a small constant ε to each denominator in Equations (6)-(9) to prevent it from being zero; ε is set to 1e-10 in our paper. ROC is used to measure the trend between the detection probability P_d and the false alarm probability F_a, where TD and FD are the numbers of true detections and false detections, and AT and NP are the number of targets and the number of image pixels. Note that the criterion for judging a true detection is that the distance between the detected location and the groundtruth location is less than a threshold τ, which is set to 10 pixels [71] in our paper.
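Since the exact expressions of Equations (6)-(9) are not reproduced above, the following Python sketch only illustrates one common way such gain-style metrics are computed from the target area and its local background neighborhood; the specific definitions used in this paper may differ in detail (e.g., some of the gains may be reported in dB).

import numpy as np

EPS = 1e-10  # small constant added to every denominator, as in the paper

def snr(target, background):
    # One common SNR definition in a local background neighborhood (assumption).
    return (target.max() - background.mean()) / (background.std() + EPS)

def scr(target, background):
    # Signal-to-clutter ratio (assumption).
    return abs(target.mean() - background.mean()) / (background.std() + EPS)

def detection_gains(t_in, b_in, t_out, b_out):
    # Gain-style metrics between the LR input (before SR) and the HR target image
    # produced by the detection algorithm, following the modified protocol of Fig. 4(b).
    snrg = snr(t_out, b_out) / (snr(t_in, b_in) + EPS)      # SNR gain
    bsf = (b_in.std() + EPS) / (b_out.std() + EPS)          # background suppression factor
    scrg = scr(t_out, b_out) / (scr(t_in, b_in) + EPS)      # SCR gain
    cg = (t_out.mean() / (b_out.mean() + EPS)) / (t_in.mean() / (b_in.mean() + EPS) + EPS)  # contrast gain
    return snrg, bsf, scrg, cg

def roc_point(true_det, false_det, n_targets, n_pixels):
    # Detection probability P_d = TD / AT and false alarm probability F_a = FD / NP.
    return true_det / n_targets, false_det / n_pixels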
All experiments were implemented on a PC with an Nvidia RTX 3090 GPU. The networks were optimized using the Adam method [73] with β1 = 0.9, β2 = 0.999, and the batch size was set to 12. The learning rate was initially set to 1e-3 and halved at 10K, 20K and 60K iterations. We trained our network from scratch for 100K iterations.

B. Ablation Study

In this subsection, we conduct ablation experiments to validate our design choices.

1) Central Difference Residual Group: To demonstrate the effectiveness of our central difference residual group (CD-RG), we replace all the CD-Convs in CD-RG with Convs (i.e., yielding a plain residual group) and retrain the network from scratch. The experimental results in Table I show that CD-RG (i.e., CD-Convs) can introduce 0.12 dB/0.004 gains on PSNR/SSIM and 0.06/0.09 gains on SNR/CR. This demonstrates that CD-RG can exploit the spatial local contrast prior to effectively improve the SR performance and the target contrast. In addition, we visualize the feature maps generated by the residual group (RG) and CD-RG with a toy example in Fig. 5. Note that the visualization maps are the L2 norm results along the channel dimension [61], [74], and the red and blue boxes represent target areas and edge areas, respectively. As illustrated in Fig. 5(a), the input frame of the image sequence consists of a target of size 3×3 (i.e., the white cube at the top) and clutter (i.e., the white area at the bottom). It can be observed from Figs. 5(b) and (c) that the target contrast in the feature map extracted by CD-RG is higher than that of RG. This demonstrates that CD-RG can enhance the target contrast (from 7.41 to 13.55). In addition, CD-RG can also improve the contrast between high-frequency edges and background (from 6.64 to 13.59). This is because CD-RG aggregates the gradient-level information to concentrate more on the high-frequency edge information, thus improving the SR performance and target contrast simultaneously.

Moreover, we conduct ablation experiments in which all the CD-Convs in MoCoPnet are replaced by DLCMs. Note that the training process of MoCoPnet with DLCMs is unstable, with sudden loss divergence due to gradient fracture. By contrast, CD-Conv preserves the image feature information to update all pixels, which ensures the continuity of gradient propagation. The ablation results in Table I show that CD-Conv introduces a significant performance gain on PSNR/SSIM (i.e., 1.01/0.039 on average) and further improves the contrast of small targets (i.e., 0.024/0.022 SNR/CR gain on average).

2) Local Spatio-Temporal Attention Module: In MoCoPnet, two cascaded LSTAs with parameters (kern=3, dila=3) and (kern=3, dila=1) are employed to achieve coarse-to-fine alignment. The ablation results of the LSTA module on the average of the SAITD, Hui and Anti-UAV datasets are summarized in Table II, where LSTA1 validates the effectiveness of the module and LSTA2-5 investigate the impact of its parameters, numbers and sub-pixel information. Note that we visualize the feature maps and attention maps generated by LSTA3 (i.e., an LSTA with kernel size of 3 and dilation rate of 1) with a toy example in Fig. 6, and the visualized feature maps are the L2 norm results along the channel dimension [61], [74]. As illustrated in Fig. 6(a1), the target with size 1×1 (i.e., the white cube) is in the middle of the red reference frame. In Fig. 6(a2), the target is in the top left of the blue neighborhood frame. The corresponding features before LSTA are shown in Figs. 6(b1) and (b2). The aligned feature after LSTA is shown in Fig. 6(b3).
It can be observed that LSTA can effectively perform frame alignment to achieve motion compensation. In addition, the attention maps are shown in Figs. 6(c1)-(c9), and the position of each attention map corresponds to the spatial arrangement in Fig. 3(b). It can be observed that Fig. 6(c1) has the highest intensity (more than 90% of the values are 1) and represents the top-left motion, which demonstrates that LSTA can effectively capture the target motion to perform frame alignment.

Finally, we replace the LSTAs in MoCoPnet with an optical-flow module (OFM) and a deformable alignment module (DAM) to compare our LSTA with the widely used optical flow and deformable alignment techniques. The experimental results are listed in Table II. It can be observed that the PSNR/SSIM/SNR/CR scores of MoCoPnet with LSTAs are higher than those of MoCoPnet with OFM and DAM by 0.11 dB/0.004/0.015/0.009 and 0.06 dB/0.002/0.005/0.006, respectively. Meanwhile, the number of parameters and FLOPs of MoCoPnet with LSTA modules are lower than those of MoCoPnet with OFM and DAM by 0.11M/2.70G and 0.19M/3.80G, respectively. This demonstrates that LSTA is superior in exploiting the information among frames to improve the SR performance and the target contrast with lower computational cost. This is because, on the one hand, LSTA can directly learn motion compensation through the attention mechanism without optical flow estimation and warping, which can introduce ambiguous and duplicated results [36], [78]. On the other hand, compared with DAM, LSTA can better incorporate the local prior to achieve improved SR performance, and the training process of LSTA converges more stably to a good result.

In addition, we visualize the feature maps generated by OFM, DAM and LSTAs with a toy example in Fig. 7. Note that the visualization maps are the L2 norm results along the channel dimension. As illustrated in Fig. 7(a), the input image sequence consists of a consistent random movement of a target with size 3×3 (i.e., the white cube) in the background (i.e., the black area). The feature maps before OFM, DAM and LSTAs are shown in Figs. 7(b), (d) and (f). It can be observed that the target positions in the extracted feature maps are close to the blue dots (i.e., the groundtruth position of the target in the current feature). Then OFM, DAM and LSTAs perform feature alignment on the extracted features. As illustrated in Fig. 7(c), the target positions in the feature maps generated by OFM are close to the blue dots. In Fig. 7(e), the blue dots and the red dots (i.e., the groundtruth position of the target in the reference feature) are both highlighted, which demonstrates that DAM does not perform frame alignment but highlights all the possible positions. The feature maps generated by LSTA1 (kern=3, dila=3) and LSTA2 (kern=3, dila=1) are shown in Figs. 7(e) and (f). As illustrated in Fig. 7(f), all the target positions in the feature maps generated by LSTA2 are closer to the red dot than those of OFM. This demonstrates that LSTA is superior in motion compensation. Note that it can be observed from Figs. 7(e) and (f) that LSTA1 and LSTA2 achieve coarse-to-fine alignment to highlight the aligned target. This demonstrates the effectiveness and superiority of our coarse-to-fine alignment strategy.
C. Comparative Evaluation

In this subsection, we compare our MoCoPnet with 1 top-performing single image SR method RCAN [15], 5 video SR methods VSRnet [75], VESPCN [34], SOF-VSR [35], TDAN [39] and D3Dnet [30], and 3 infrared image SR methods IERN [51], PSRGAN [52] and ChaSNet [53]. For fair comparison, we retrain all the compared methods on the infrared small target dataset [70] and exclude the first and the last 2 frames of the video sequences for performance evaluation.

1) SR on Synthetic Images: The PSNR and SSIM scores are listed in Table III. The SNR and CR scores calculated in the local background neighborhood are listed in the 2nd-9th columns of Table IV. It can be observed that MoCoPnet achieves the highest PSNR and SSIM scores and outperforms most of the compared algorithms on the SNR and CR scores. The above scores demonstrate that our network can effectively recover accurate details and improve the target contrast. That is because LSTA performs implicit motion compensation and CD-RG incorporates the center-oriented gradient information to effectively improve the SR performance and the target contrast. Note that we also analyze the running time of different methods and the results are shown in Table III. The running time is the total time tested on 100 consecutive HR frames with a resolution of 256×256 and is averaged over 20 runs. It can be observed that our MoCoPnet achieves better SR performance with a reasonable increase in running time.

Qualitative results are shown in Fig. 8. For SR performance, it can be observed from the blue zoom-in regions that MoCoPnet can recover more accurate details (e.g., the sharp edges of buildings, and lighthouse details closer to the groundtruth HR image). For target enhancement, it can be observed from the red zoom-in regions that, in the first row, MoCoPnet can further improve the contrast of a target that is almost invisible in the other compared methods. In the second row, MoCoPnet is more robust to the large motion caused by turntable collections [70] (e.g., artifacts appear in the zoom-in region of D3Dnet). In the third row, MoCoPnet can effectively improve the target contrast to be even higher than that of the HR images (i.e., 1.82 vs. 1.75).

2) SR on Real Images: The SNR and CR scores calculated in the local background neighborhood of super-resolved HR images are listed in the 10th-17th columns of Table IV. It can be observed that MoCoPnet achieves the best SNR score and the second best CR score on the average of the test datasets under real-world degradation. This demonstrates the superiority of our method in improving the contrast between targets and background. Qualitative results are shown in Fig. 9. It can be observed that MoCoPnet can recover finer details and achieve better visual quality, such as the edges of the building and the window. In addition, MoCoPnet can further improve the intensity and the contour details of the targets.

D. Effect on Infrared Small Target Detection Algorithms

In this subsection, we select three typical infrared small target detection algorithms (Top-hat [76], ILCM [77], IPI [58]) to perform detection on super-resolved infrared images. The parameters of the three infrared small target detection algorithms are shown in Table V. When 4× SR is performed on HR images, the filter sizes, the block size and stride, as well as the true detection threshold τ are enlarged by 4 times. When 4× downsampling is performed on HR images, the filter sizes of Top-hat and ILCM are set to 3×3, the block size and the stride of IPI are set to 15×15 and 3, and the true detection threshold τ is set to 3.0.
For simplicity, we only use the two best super-resolved results, i.e., those of D3Dnet and MoCoPnet, to perform detection. We also introduce bicubically upsampled (Bicubic) images and HR images as the baseline results.

1) Detection on Synthetic Images: The quantitative detection results of super-resolved LR images are listed in Table VI. It can be observed that the SNRG, SCRG and CG of the super-resolved images are generally higher than those of the Bicubic images. This demonstrates that SR algorithms can effectively improve the contrast between the target and the background, thus promoting the detection performance. It is worth noting that the SNRG, SCRG and CG scores of D3Dnet and MoCoPnet can even surpass those of HR. This is because SR algorithms can perform better on the high-frequency small targets than on the low-frequency local background, thus achieving improved target contrast compared with HR images. In addition, Bicubic can achieve the highest BSF score in most cases. This is because SR algorithms act on the entire image, which enhances targets and background simultaneously, while detection algorithms have better filtering performance on smoothly changing background. Note that the BSF of MoCoPnet is generally higher than that of D3Dnet. This is because MoCoPnet focuses on recovering the local salient features in the image and further improves the contrast between targets and background, which benefits the detection performance.

The qualitative results of super-resolved LR images and the detection results are shown in Fig. 10. In the LR images, the target intensities are very low (e.g., the targets in SAITD and Anti-UAV are almost invisible). In the super-resolved images, the target intensities are higher and closer to those of the HR images. This is because SR algorithms can effectively use the spatio-temporal information to enhance the target contrast. Note that our MoCoPnet is more robust to the large motion caused by turntable collections [70] (i.e., artifacts appear in the zoom-in region of D3Dnet in the Hui dataset). In addition, the neighborhood noise in the HR image is suppressed by downsampling followed by super-resolution (e.g., point noise is absent in the zoom-in regions of the Hui and Anti-UAV datasets). Then, we perform detection on the super-resolved images. It can be observed in Fig. 10 that all the detection algorithms have poor performance on the Bicubic images (e.g., the target intensity in the target image is very low and almost invisible in all detection results). This is because bicubic interpolation cannot introduce additional information. However, the target intensities in the target images of the super-resolved images are higher than those of the Bicubic images. Among the super-resolved images, MoCoPnet is superior to D3Dnet in improving the target contrast due to the center-oriented gradient-aware feature extraction of CD-RG and the effective spatio-temporal information exploitation of LSTA.

To evaluate the detection performance comprehensively, we further calculate the ROC results, which are shown in Fig. 11. Note that the ROC results on LR and HR images are used as the baseline results. The targets in HR images have the highest intensity. Therefore, a high detection probability and a low false alarm probability can be obtained, and the detection probability reaches 1 faster (e.g., the ROC results reach 1 the fastest in the SAITD and Hui datasets). Downsampling leads to target intensity reduction, thus reducing the detection probability and increasing the false alarm probability.
Bicubic introduces no additional image prior information; therefore, LR and Bicubic have the worst detection performance and their ROC results are significantly lower than those of the other algorithms (e.g., the ROC results of LR are the lowest and those of Bicubic are the second lowest, except for the ROC of Top-hat in the SAITD dataset). SR algorithms can introduce prior information to improve the contrast between targets and background, thus achieving improved detection accuracy (e.g., the ROC results of MoCoPnet and D3Dnet are higher than those of Bicubic in the SAITD and Hui datasets and even higher than those of HR in the Anti-UAV dataset). Note that the false alarm rates of LR and Bicubic can only reach a relatively low value. This is because IPI achieves detection by sparse and low-rank recovery, which significantly decreases the false alarm rate compared with Top-hat and ILCM. On the other hand, IPI suffers from a low detection rate for low-contrast targets. Therefore, the ROC curves of the Bicubic and LR images are shorter than those of the HR and super-resolved images. The above experimental results show that SR algorithms can recover high-contrast targets, thus improving the detection performance.

2) Detection on Real Images: The quantitative detection results of super-resolved HR images are listed in Table VII. It can be observed that the detection performance of the SR algorithms is superior to that of Bicubic. This demonstrates that MoCoPnet and D3Dnet can effectively improve the contrast between targets and background, resulting in a performance gain for detection. Among the SR algorithms, due to the superior SR and target enhancement performance of our well-designed modules, MoCoPnet achieves the best SNRG, SCRG and CG scores in most cases. Note that the SNRG and SCRG scores (achieved by IPI) of MoCoPnet on the Anti-UAV dataset are 7-8 orders of magnitude lower than those of Bicubic and D3Dnet. First of all, MoCoPnet achieves the highest CG scores, which demonstrates that the target intensity can be further and effectively enhanced by MoCoPnet. The differences therefore come from the background suppression performance. Since MoCoPnet achieves higher SR performance scores than Bicubic and D3Dnet, the local backgrounds of Bicubic and D3Dnet are smoother and the detection algorithms can achieve better suppression performance on them. IPI is superior in suppressing background clutter; therefore, the local backgrounds in the target images of Bicubic and D3Dnet are sometimes completely zero. Since we add ε to each denominator in Equations (6)-(9) to prevent it from being zero, the SNRG and SCRG scores can become very large when the background is completely suppressed. In addition, bicubic interpolation suppresses the high-frequency components to a certain extent, resulting in the best BSF values.

The qualitative results of super-resolved HR images and the detection results are shown in Fig. 12. It can be observed that the targets in the Bicubic images are blurred, while SR can enhance the intensity of the targets (e.g., the highlighted and sharpened targets). After processing by the SR algorithms, we perform detection on the super-resolved images. Note that SR algorithms can effectively improve the intensity of targets and their contrast against the background, resulting in better detection performance. To evaluate the detection performance comprehensively, we further present the ROC results in Fig. 13. Note that the ROC results on HR images are used as the baseline results. It can be observed that SR algorithms can improve the detection probability and reduce the false alarm probability in most cases.
Compared with D3Dnet, MoCoPnet can further improve the target contrast, thus promoting the detection performance. Note that the false alarm rates of Bicubic can only reach a relatively low value. This is because IPI achieves detection by sparse and low-rank recovery, which significantly decreases the false alarm rate compared with Top-hat and ILCM. In other words, IPI suffers from a low detection rate for low-contrast targets.

E. Limitation

The proposed method fails when the image sequence contains rapidly moving targets (Fig. 14(a)) or sudden changes (Fig. 14(b)) caused by turntable collections. As we do not have a specific design for handling large motion and sudden changes, the motion compensation by LSTAs in these cases can be wrong and our approach may not be able to effectively recover the targets. In future work, we aim to improve the robustness of our method to large motion and sudden changes.

V. CONCLUSION

In this paper, we propose a local motion and contrast prior driven deep network (MoCoPnet) for infrared small target super-resolution. Experimental results show that MoCoPnet can effectively recover the image details and enhance the contrast between targets and background. Based on the super-resolved images, we further investigate the effect of SR algorithms on detection performance. Experimental results show that MoCoPnet can improve the performance of infrared small target detection.
Transcranial Ultrasonic Focusing by a Phased Array Based on Micro-CT Images In this paper, we utilize micro-computed tomography (micro-CT) to obtain micro-CT images with a resolution of 60 μm and establish a micro-CT model based on the k-wave toolbox, which can visualize the microstructures in trabecular bone, including pores and bone layers. The transcranial ultrasound phased array focusing field characteristics in the micro-CT model are investigated. The ultrasonic waves are multiply scattered in skull and time delays calculations from the transducer to the focusing point are difficult. For this reason, we adopt the pulse compression method and the linear frequency modulation Barker code to compute the time delay and implement phased array focusing in the micro-CT model. It is shown by the simulation results that ultrasonic loss is mainly caused by scattering from the microstructures of the trabecular bone. The ratio of main and side lobes of the cross-correlation calculation is improved by 5.53 dB using the pulse compression method. The focusing quality and the calculation accuracy of time delay are improved. Meanwhile, the beamwidth at the focal point and the sound pressure amplitude decrease with the increase in the signal frequency. Focusing at different depths indicates that the beamwidth broadens with the increase in the focusing depth, and beam deflection focusing maintains good consistency in the focusing effect at a distance of 9 mm from the focal point. This indicates that the phased-array method has good focusing results and focus tunability in deep cranial brain. In addition, the sound pressure at the focal point can be increased by 8.2% through amplitude regulation, thereby enhancing focusing efficiency. The preliminary experiment verification is conducted with an ex vivo skull. It is shown by the experimental results that the phased array focusing method using pulse compression to calculate the time delay can significantly improve the sound field focusing effect and is a very effective transcranial ultrasound focusing method. Introduction Transcranial ultrasonic focusing is a noninvasive energy focusing method and has a variety of application scenarios in brain neuromodulation and tumor therapy.Currently, transcranial ultrasonic focusing is efficacious in the noninvasive treatment of malignant gliomas [1][2][3], neuromodulation of the function of specific brain regions [4,5], and neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease [3,6,7].However, due to the inhomogeneity of the skull structure and the heterogeneity of the material, ultrasonic wave transmissions through the skull are tremendously attenuated and distorted [8], which requires effective numerical model to simulate the ultrasonic field characteristics of transcranial ultrasonic focusing. Time-reversal methods and phased-array methods have been primarily used in previous transcranial ultrasound focusing researches.The time reversal technique can realize Sensors 2023, 23, 9702 2 of 21 good focusing effects in transcranial ultrasound [9,10], but its complicated emission waveform brings inconvenience to practical application.Ultrasound phased array technology with a simple emission waveform and flexible beam control is very popular in practical applications.There are large numbers of experimental and theoretical studies on transcranial ultrasound phased array focusing.Hynynen et al. 
[11][12][13] proposed the implanted hydrophone method in which a hydrophone was placed at the focal point in an isolated skull.The hydrophone was used to measure, invert, and applied the phase shift induced by the presence of the skull to all the elements of the array.A 64-element 0.664 MHz hemispherical array was used to focus at a distance of 10 mm on the other side of the skull to coagulate tissue.Further, Aubry [14] proposed a method for calculating the acoustic properties of the skull from CT scans.The numerical simulation model of the skull based on CT data allowed the calculation of the transducer-to-focus time delay used for the experiment.Based on the modeling of the CT data, Clement and Hynynen [15] introduced projection algorithms to calculate the excitation phase of each transducer element.The calculated excitation phase was applied to 0.74 MHz, 320-element hemispherical array emitted through the human skull with the focus shift within 1 mm.Marquet [16,17] performed noninvasive ultrasound tissue ablation based on time delays calculated from numerical models.In vitro experiments on monkey and human skull samples using an array of 300 emitters centered at 1 MHz revealed localization errors of less than 0.7 mm.Also, numerical modeling methods for sound field computation were well discussed and compared.Jing [18] compared the k-space method with the finite-difference time-domain (FDTD) method.The results showed that the two methods match well in model calculations with more than 10 grid points per wavelength.However, when the grid spacing was increased, the k-space method produces much smaller numerical errors.Jiang [19] compared the phase calculation results of the k-space, FDTD, and ray-tracing methods in transcranial ultrasound phase array focusing.He found that the phase errors of the k-space, FDTD, and ray-tracing methods were 0.7%, 1.2%, and 5.35%, respectively.This shows that the k-space method is a very effective calculation method in transcranial ultrasonic simulation. The structure of the skull is relatively complex, including the inner and outer layers of cortical bone and the middle layer of trabecular bone.The cortical bone is dense bone tissue characterized by low porosity, whereas trabecular bone is highly porous and fluid filled.The numerical model of transcranial ultrasonic focusing can be obtained by CT images [20].However, the general clinical CT images have a lower resolution (is usually 0.488 mm-0.625 mm) [21].The average size of trabecular bone ranges from 50 µm to 150 µm, and the spacing of the bone layers range from 0.5 mm to 2 mm [22].The pores and bone layer structures in trabecular bone are missing and trabecular bone is approximately a low-velocity homogeneous medium layer.Therefore, it is difficult to accurately estimate ultrasound propagation and attenuation characteristics in transcranial ultrasound models constructed by the clinical CT images. 
In order to display the structure of pores and bone layers in trabecular bone, Bossy [23] used micro-computed tomography (micro-CT) to numerically model trabecular bone.His research results indicated that the microstructures of trabecular bone caused multiple scattering, which greatly attenuated ultrasonic waves.Pinton [24] verified this conclusion with experiments and showed that only a small portion of the loss in transcranial ultrasonic focusing was caused by bone absorption.Robertson [25] revealed that ignoring the microstructures of trabecular bone in transcranial skull modeling led to errors in the calculation of attenuation and propagation time of ultrasonic wave.Robertson's research showed that the relative error between the ultrasound amplitude calculated by the clinical CT model for the transmitted skull was over 60% from the actual amplitude, and the time-of-flight error for the transmitted skull was over 0.3 µs.In summary, transcranial ultrasound focusing is able to achieve a focusing error of approximately 1 mm based on a numerical model established by clinical CT.However, the clinical CT model calculations ignore the scattering from bone trabecular structures and cannot accurately estimate the range of the focusing focal spot.Therefore, in order to accurately calculate the loss and propagation of ultrasound within the skull, transcranial ultrasound modeling should take into account the effect of trabecular structures in model calculations.However, there are few studies and discussions on transcranial ultrasound focusing for micro-CT modeling. In this paper, we use micro-CT images to reconstruct micro-CT models with trabecular bone pores and bone layer structures.In order to distinguish between bone trabecular structures with consideration of issues such as computational efficiency, we chose the resolution for micro-CT as 60 µm, which is sufficient for micro-CT images to show the intracranial foramina and bone layer structure.Based on the micro-CT model, we investigate the effect of bone trabecular microstructure on ultrasound propagation characteristics.Since the microstructure of the skull causes distortion of the ultrasound waveform as it transmits through the skull, we apply the pulse compression to the calculation of micro-CT models, which has been used in medical ultrasound to improve the accuracy of delay calculations [26].The pulse compression method calculates the time delay from the transducer array to the focusing point with high computational accuracy.The delay setting calculated by the pulse compression method makes phased array focusing possible in the micro-CT model.The effects of signal frequency, depth of focus, beam deflection and amplitude adjustment on the phased array focusing effect are investigated. 
Establishment of the Transcranial Model

The micro-CT model is based on the CT scan of an ex vivo skull. The scanning equipment is a YXLON FF85, a high-power, high-precision computed tomography system. The tube voltage of the scan is 210 kV, and the tube current is 112.0 µA. The X-rays used for the scan are cone beams, and the scan is rotated using the circular trajectory method. The reconstruction algorithm for the CT images is Feldkamp. Figure 1a shows a schematic of the skull; the skull is scanned with the occipital bone padded 50 mm high. The anterior-posterior range of the scan is 183 mm, the left-right range is 183 mm, and the vertical height is 160 mm, with a resolution of 60 µm in each direction. We take one slice in the middle of the skull, as shown in Figure 2b. The acoustic window is set at the occipital bone of the slice. We establish a Cartesian coordinate system (x, y) using the junction of the inner skull and brain as the coordinate origin. The positive x-axis points toward the deep brain. The transducer array with 128 elements is placed along the y-axis, and the coordinate of the transducer center is (-20 mm, 0 mm). The range of the transducer array is (-20 mm, -38.4 mm ~ 38.4 mm) and the spacing of the elements is 0.6 mm. The simulation environment is a skull placed in pure water, as shown in Figure 1, where the pores of the skull are filled with aqueous medium. It can be seen that the skull is composed of inner and outer cortical bone and middle trabecular bone, with complex bone layers and pores of various sizes forming the trabecular bone microstructures.

The micro-CT image provides the distribution of Hounsfield values, from which the acoustic parameters of the skull can be obtained. First, the porosity φ of the skull is calculated from the Hounsfield value H of each pixel of the CT image by Equation (1). Then, the density ρ, sound velocity c and attenuation factor α at each grid point of the skull model are calculated as in Equation (2) [14,19,27,28], where α_b is the maximum absorption attenuation value in the skull. In the micro-CT model, the skull is considered as a porous medium, including the bone medium and the water in the pores. Therefore, the most strongly attenuating medium is the bone medium, with an α_b value of 2.7 dB/cm/MHz as measured in the literature [24] for cortical bone absorption. The acoustic parameters of the skull can be calculated by Equation (2).

Figure 2 gives the bone structure and ultrasonic parameters obtained by Equation (2). In order to investigate the effect of missing trabecular bone microstructures in the transcranial model, we eliminate the porosity and bone layer structure of the trabecular bone by reducing the resolution and homogenizing the micro-CT model, and establish a model similar to a clinical CT image. We refer to this model as the clinical CT model. The bone attenuation coefficient α_b for the clinical CT model is 16 dB/cm/MHz [19], and the sound velocity and density are calculated in the same way as for the micro-CT model. The bone structure and acoustic parameters of the clinical CT model are also shown in Figure 2. It can be seen that the skull has a complex structure but the attenuation factor of the bone is low in the micro-CT model, as shown in Figure 2a,b. The skull is missing the trabecular bone structure while the attenuation factor of the bone is set higher in the clinical CT model, as shown in Figure 2c,d.
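Since Equations (1) and (2) are not reproduced above, the following sketch illustrates the widely used CT-based mapping of Aubry et al. [14] from Hounsfield values to porosity and acoustic parameters. The reference constants and the exact porosity dependence of the attenuation are assumptions made for illustration and may differ from the values and expressions used in this paper.

import numpy as np

# Reference values for water and cortical bone (assumptions for this sketch).
RHO_W, RHO_B = 1000.0, 2100.0   # density [kg/m^3]
C_W, C_B = 1500.0, 2900.0       # sound speed [m/s]
ALPHA_W, ALPHA_B = 0.0, 2.7     # attenuation [dB/cm/MHz]

def hounsfield_to_acoustic(H, beta=0.5):
    # Porosity from the Hounsfield value, then mixing of the acoustic
    # parameters between water and bone, following the CT-based mapping of [14].
    phi = np.clip(1.0 - H / 1000.0, 0.0, 1.0)            # porosity
    rho = phi * RHO_W + (1.0 - phi) * RHO_B              # density
    c = C_W + (C_B - C_W) * (1.0 - phi)                  # longitudinal sound speed
    alpha = ALPHA_W + (ALPHA_B - ALPHA_W) * phi ** beta  # absorption (porosity-dependent, assumed)
    return phi, rho, c, alpha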
Calculation of the Ultrasonic Field

The pseudospectral approach [29,30], the finite difference method [31], the angular spectrum method [32], and other computational methods can be used to calculate the transcranial ultrasound model. Aubry [33] compared various modeling and calculation methods, and the results showed that the above calculation methods have good consistency. The calculation method based on the k-wave toolbox was used as the benchmark for validation. The pseudospectral approach and the first-order fluid coupling equations are the bases for the model computation in this article, which also uses the k-wave toolbox. The governing equations are given in Equation (3) [30,34], where v is the particle velocity, ρ_0 is the static density, ρ is the density of the medium, c_0 is the isentropic sound speed, and u is the particle displacement vector. τ and η are the absorption and dispersion scaling factors, with τ = -2α c_0^{y-1} and η = 2α c_0^{y} tan(πy/2). These two terms control the frequency dependence of the ultrasonic attenuation and the dispersion of the waveform, respectively. Since the dispersion of sound waves is not considered in transcranial ultrasound simulation [35], y is set to a value close to 1 in the calculation to eliminate waveform dispersion, and is taken here as y = 1.05 [36].

In the calculation, the time step is reduced to ensure the convergence of the calculation results, and the time precision is set based on the Courant-Friedrichs-Lewy (CFL) condition number. The time step ∆t is calculated from the maximum speed of sound in the medium c_max, the spatial grid resolution ∆r and the CFL condition number N_CFL. The grid resolution in the micro-CT model is 60 µm, while the grid resolution in the clinical CT model is 0.3 mm. Both computational models satisfy the requirement of 3 grid points per wavelength for the k-wave method. To ensure that the calculation results converge accurately, we determine the converged time step values for the two model calculations by iteratively refining the time step. The converged time steps are ∆t = 2.5 ns in the micro-CT model and ∆t = 8 ns in the clinical CT model, and the actual CFL numbers of the corresponding models are N_CFL = 0.15 and N_CFL = 0.1, respectively. The time step is calculated as in [37].

Shear waves are not considered in the calculation of the model in this paper. In previous studies, the skull was considered to be an isotropic solid medium [38][39][40][41]. In fact, however, the excitation of shear waves requires a large incidence angle: shear waves require an incidence angle greater than 35° for effective excitation, and their excitation amplitude exceeds that of longitudinal waves only at incidence angles greater than 40° [42,43]. In the 128-element line array used in the computational model of this paper, the incidence angle of the outermost array element is 35°. Therefore, during the focusing process, the acoustic waves in the skull are dominated by longitudinal waves. The attenuation of shear waves in the skull is much higher than that of compressional waves, so shear waves are usually not used as focusing waves, although including shear waves in the skull model can more accurately estimate the energy at the focal point [44]. In the micro-CT model, the pores in the bone structure are fluid-saturated pores with greater shear wave propagation attenuation as well as more complex waveform conversion. It is assumed in this study that the shear wave is greatly dissipated during transmission through the cranial bone and does not affect the focusing quality at the focal point. Meanwhile, the nonlinear setting in the transcranial model calculation does not affect the calculation of the time delay or the time-reversal focusing effect [45], so the nonlinear term is ignored in the calculation.
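A minimal sketch of the CFL relation between the time step, the grid resolution and the maximum sound speed described earlier in this section is given below. The maximum sound speed of 3600 m/s is an assumed value chosen only so that the quoted time steps reproduce the quoted N_CFL figures; it is not stated explicitly in the text.

def cfl_number(dt, dx, c_max):
    # Courant number implied by a chosen time step: N_CFL = c_max * dt / dx.
    return c_max * dt / dx

C_MAX = 3600.0  # assumed maximum sound speed in the skull [m/s]
n_cfl_micro = cfl_number(2.5e-9, 60e-6, C_MAX)      # ~0.15 (micro-CT model)
n_cfl_clinical = cfl_number(8.0e-9, 0.3e-3, C_MAX)  # ~0.10 (clinical CT model)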
Influence of Microstructure of Trabecular Bone on Ultrasonic Loss Calculations

In order to calculate the ultrasonic loss in the skull, we excite ultrasonic signals through the transducer array shown in Figure 1. We use one-dimensional line arrays because they are easy to fabricate and achieve good focus variability. The line arrays are located above the skull and each array element is excited sequentially. High-frequency signals used in transcranial ultrasound focusing suffer higher attenuation but provide better focusing, and higher-frequency transducer arrays have a smaller size and better flexibility. Therefore, compared with the 500 kHz frequency used for clinical transcranial focusing, we choose a 1 MHz frequency by considering the focusing effect, focusing efficiency and array size. The excitation signals are four-period cosine envelope (CE) signals with a frequency of 1 MHz. The skull and transducer array are placed in water, and the received signal at the focusing point is calculated from Equation (3).

Based on the above settings, the maximum amplitude of the received signal at the focusing point is A_b, while the maximum amplitude of the received signal at the focusing point in the absence of the skull is A_w. The ultrasonic loss in the skull is α_l = 20 lg(A_b/A_w). The ultrasonic loss from each array element to the focusing point is calculated by the above method.
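A minimal sketch of the per-element insertion-loss calculation α_l = 20 lg(A_b/A_w) described above; the function name is ours.

import numpy as np

def insertion_loss_db(received_with_skull, received_water_only):
    # Per-element ultrasonic loss: alpha_l = 20 * lg(A_b / A_w).
    a_b = np.max(np.abs(received_with_skull))
    a_w = np.max(np.abs(received_water_only))
    return 20.0 * np.log10(a_b / a_w)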
The ultrasonic losses from each array element of the transducer to the focusing point are calculated in the micro-CT model and the clinical CT model using the above procedure, and the results are shown in Figure 3. The ultrasonic losses are -(16~20) dB in the micro-CT model and -(12~28) dB in the clinical CT model. It can be seen that the ultrasonic losses in the micro-CT model come from the scattering of the trabecular bone pores and the bone layers. In contrast, due to the lack of the microstructure of the trabecular bone, the clinical CT model is set with a higher attenuation value to fit the attenuation caused by scattering.

Laurent [46] measured the ultrasonic attenuation in six isolated skulls in the frequency range of 800 kHz-1.3 MHz. The results show that the ultrasonic attenuation of a 1 MHz signal transmitted through the skull is -(10~16) dB. The average ultrasonic attenuation of a 1 MHz signal through the skull computed by the micro-CT model is -16 dB with a variance of 2.0. However, if the clinical CT model is used for the calculation, the attenuation at 1 MHz is -17 dB with a variance of 4.3, which is more than twice that of the micro-CT model.
The distortion of the waveform is caused by several factors.The bone structure leads to an increase in the nonuniformity of the propagation medium.The increase in the resolution of the modeled computational grid and the increase in the differences in acoustic parameters between grid points lead to distortions in the shape of the waveform.The bone layer in the trabecular bone scatters the transcranial ultrasonic waves many times [22,47].The received signal at the focusing point becomes the superposition of multiple scattered waves.Therefore, the transcranial waveforms in the micro-CT model are more complex than those in the clinical CT model, and have larger aberrations and scattering. tortion of the received waveform, but also increases the time domain length of the signal. In the clinical CT model, the received ultrasonic waveform at the focal point is more complete, without strong distortion and obvious trailing. Figure 5 shows the characteristics of the ultrasonic signal at the focusing point in the micro-CT model and clinical CT model.Figure 5a shows the waveform comparison between the transmitted skull ultrasonic signal and the excitation signal.Figure 5b shows the spectral comparison of the transmitted skull ultrasonic signal and the excitation signal.The distortion of the waveform is caused by several factors.The bone structure leads to an increase in the nonuniformity of the propagation medium.The increase in the resolution of the modeled computational grid and the increase in the differences in acoustic parameters between grid points lead to distortions in the shape of the waveform.The bone layer in the trabecular bone scatters the transcranial ultrasonic waves many times [22,47].The received signal at the focusing point becomes the superposition of multiple scattered waves.Therefore, the transcranial waveforms in the micro-CT model are more complex than those in the clinical CT model, and have larger aberrations and scattering. Furthermore, the computational model takes into account the positive frequency dependent attenuation specific to biological tissues.That is, the attenuation coefficient has a high attenuation value in the high-frequency part of the signal [23,48], which leads to a Furthermore, the computational model takes into account the positive frequency dependent attenuation specific to biological tissues.That is, the attenuation coefficient has a high attenuation value in the high-frequency part of the signal [23,48], which leads to a frequency shift in the center of the signal.There are spectral aberrations in the transcranial signals of both the micro-CT model and the clinical CT model. Application of Pulse Compression in Transcranial Phased Array Focusing Ultrasonic phased array focusing controls the time delay of the ultrasonic signal of the array element so that the ultrasonic signal reaches the focusing point at the same time [49].Ultrasonic phased array can focus ultrasonic waves in various complex media [50].For uniform media, the time delay of each array element of the transducer can be obtained from the distance and velocity.For nonuniform media, ultrasonic phased array can calculate the time delay from the transducer to the focusing point by the cross-correlation of the excitation signal of the transducer and the received signal at the focusing point. 
However, it is very difficult to accurately calculate the time delay from the transducer to the focusing point in the transmitted skull ultrasonic waveform shown in Figure 5a.Since the ultrasonic shape is distorted and scattered, it is difficult to obtain an accurate time delay by the cross-correlation based time delay method.Therefore, we employ the pulse compression method to improve the accuracy of this time delay calculation.The pulse compression method changes the autocorrelation function of the original signal.The autocorrelation function of the conventional unmodulated CE signal is still the envelope shape, and the peak of the cross-correlation is the maximum of the envelope.Pulse compression changes the autocorrelation function of the signal to peak pulse.The autocorrelation function of the spikes is effective in resisting frequency shifts, while the phase encoding in the pulse compression method eliminates the effect of the scattered wave cross-correlation peaks.Moreover, the large time-width and bandwidth signal modulated by the pulse compression method can obtain the signal-to-noise ratio of the matched filter and improve the robustness of the cross-correlation. The Principle of Pulse Compression The pulse compression method modulates the excitation ultrasound signal into a long-duration waveform by frequency modulation and phase coding, and decouples the received ultrasound signal into a narrow-band pulse with a high signal-to-noise ratio by using cross-correlation at the focusing point [51][52][53][54]. The pulse compression method constitutes the matching filter of the excitation ultrasonic signal and the received signal at the focusing point.The output gain of the matching filter is equal to the time bandwidth product of the excitation ultrasonic signal.Therefore, the pulse compression method actually increases the output gain of the matched filter by increasing the time bandwidth product of the excitation ultrasonic signal and thus further increases the accuracy of the time delay calculation. The time bandwidth product of a monofrequency ultrasonic signal is a fixed value due to the fact that the signal is periodically broadened in the time domain and its bandwidth in the frequency domain decreases [55].The time bandwidth product of an ultrasonic signal can only be increased by frequency modulation and phase coding.Frequency modulation and phase coding can increase the root-mean-square signal bandwidth β and the rootmean-square signal duration δ of the signal, respectively, consequently increasing the time bandwidth product T B of the excitation signal p(t) with the following expressions [56]: In the above equations, P( f ) is the Fourier transform of p(t) and E is the energy of the signal p(t).For a matching filtering process based on the cross-correlation calculation, the signal-to-noise ratio gain G SNR is expressed as [55]: In the matched filter expression of the above equation, N 0 is the noise power spectrum density, S is the average power of the excitation signal, T is the total duration of the excitation signal, and B is the bandwidth of the excitation signal.The time bandwidth product T B of the excitation signal is often used in the literature to describe the G SNR of the matched filtered output. 
Design of the Linear Frequency Modulation Barker (LFMB) Code
A linear frequency modulation (FM) signal is insensitive to the Doppler frequency deviation of the target, but it has the disadvantage of strong time-delay/frequency-deviation coupling, and therefore cannot by itself resolve the effect of scattered waves in the signal. The Barker code is a nonperiodic sequence with good autocorrelation properties, which can reduce the computational error caused by scattered waves. Therefore, phase-modulating the signal with the Barker code, with each code element then modulated onto a linear FM chirp, makes the computation robust to the superposition and frequency shift of scattered waves in the transcranial signal.
Figure 6 shows the unmodulated CE signal and the frequency-modulated signal together with their ambiguity functions. The ambiguity function of the excitation signal represents the output response of this signal in matched filtering, with the expression:
Here, χ(τ, f_d) is the ambiguity function of the excitation signal p(t), τ is the time delay and f_d is the frequency offset. The ambiguity function of the unmodulated CE signal in Figure 6a shows that the maximum output value is obtained at f_d = 0, τ = 0. However, when the received signal with time delay τ also contains scattered components with frequency offset f_m and additional delay τ_m, the matched-filter output is the superposition of terms χ(τ + τ_m, f_m); because the ambiguity function of the CE signal is broad, these contributions can exceed the response at the true delay, and the time delay obtained by the cross-correlation calculation is then erroneous. Since the transcranial ultrasonic signal is a superposition of waveforms that have been scattered many times, and a frequency-modulated signal alone would be similarly distorted by scattering, we use the frequency-modulated ultrasound signal as the code element of a binary phase encoding. Phase coding further narrows the pulse width of the frequency-modulated signal at the matched-filter output.
The 13-bit Barker code is used as the binary code set for phase encoding. A frequency-modulated signal with 8 µs length, 1 MHz center frequency and 2 MHz bandwidth is used as a single code element to constitute the linear frequency modulation Barker (LFMB) code. The total length of the LFMB code is 104 µs, and the LFMB code is used in the simulation to improve the accuracy of the time delay calculations. It is expressed as [57]
p(t) = p_1(t) ⊗ p_2(t),
where the LFMB code is computed by convolving the linear frequency modulation signal p_1(t) with the Barker code p_2(t). ⊗ is the convolution symbol, f_c is the center frequency, B is the bandwidth, t_L is the length of the linear frequency modulation signal, µ = B/t_L is the frequency modulation slope, and t_B is the length of the LFMB code. C_m is the m-th element of the 13-bit Barker code.
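A minimal sketch of how such an LFMB excitation could be generated from the chirp element length, center frequency and bandwidth quoted above; the sampling rate and the exact chirp convention are assumptions, not taken from [57].

```python
import numpy as np
from scipy.signal import chirp

fs   = 50e6                    # sampling rate in Hz (assumed)
t_L  = 8e-6                    # length of one chirp code element
f_c  = 1e6                     # center frequency
B    = 2e6                     # bandwidth
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])  # 13-bit Barker code

t = np.arange(0, t_L, 1 / fs)
# linear FM element sweeping from f_c - B/2 to f_c + B/2 over t_L (slope mu = B / t_L)
p1 = chirp(t, f0=f_c - B / 2, t1=t_L, f1=f_c + B / 2, method='linear')

# LFMB code: each Barker symbol flips the phase of one chirp element, which is
# equivalent to convolving the chirp with a train of +/-1 impulses spaced t_L apart
lfmb = np.concatenate([c * p1 for c in barker13])
print(lfmb.size / fs * 1e6)    # total duration -> 104 us
```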
Figure 6b shows the ambiguity function of the excitation signal after frequency modulation. The gain in the signal time-bandwidth product appears as the increase in the ambiguity function at χ(0, 0). Frequency modulation also changes the shape of the ambiguity function: the ambiguity function of the frequency-modulated signal couples the phase shift and the frequency offset, and this coupling reduces the computational error caused by the frequency offset. The ambiguity function with coupling coefficient κ can be expressed as the following equation.
Figure 7a shows the time-domain waveform of the LFMB code and Figure 7b shows its ambiguity function. It can be seen that phase coding discretizes the coupling of the frequency offset and the phase shift, which reduces the computational error and at the same time further increases the time-bandwidth product.
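The ambiguity-function comparison in Figures 6 and 7 can be reproduced numerically. A possible sketch is given below; it uses the standard narrowband ambiguity function evaluated on a delay/Doppler grid, and the circular shift and normalization are simplifying assumptions.

```python
import numpy as np

def ambiguity(p, fs, delays, dopplers):
    """|chi(tau, f_d)| = |sum_t p(t) conj(p(t - tau)) exp(j*2*pi*f_d*t)| on a grid."""
    n = p.size
    t = np.arange(n) / fs
    amb = np.zeros((len(dopplers), len(delays)))
    for i, fd in enumerate(dopplers):
        pm = p * np.exp(2j * np.pi * fd * t)           # Doppler-shifted copy of p
        for k, tau in enumerate(delays):
            shift = int(round(tau * fs))
            q = np.roll(p, shift)                      # delayed copy (circular; assumed fine for |tau| << T)
            amb[i, k] = np.abs(np.vdot(q, pm))
    return amb / np.abs(np.vdot(p, p))                 # normalized so that chi(0, 0) = 1

# e.g. evaluate the LFMB code from the previous sketch over +/-5 us and +/-200 kHz:
# delays   = np.linspace(-5e-6, 5e-6, 201)
# dopplers = np.linspace(-2e5, 2e5, 101)
# A = ambiguity(lfmb.astype(complex), fs, delays, dopplers)
```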
Transcranial Ultrasound Phased Array Focusing Based on the Pulse Compression Method
In the simulation micro-CT model, we transmit the excitation ultrasonic signal p(t) at each array element of the transducer. The excitation signal p(t) is the LFMB code calculated from Equation (9). For convenience, the excitation signal of the jth element is denoted p_j(t). Under the excitation of this signal, the ultrasonic field generated at the focal point is denoted p_o(t), which can be calculated according to Equation (3). The waveform of p_o(t) is distorted when the ultrasonic wave is transmitted through the skull. However, the signal p_o(t) can be decoupled into a single pulse by calculating the cross-correlation of p_j(t) and p_o(t), as shown in the following equation
R_j(τ_j) is the cross-correlation function of the jth element, and the time delay τ_j from the jth array element to the focusing point can be obtained from the peak value of the decoupled pulse. In actual transcranial ultrasound phased array focusing, the transducer array uses the time delay settings calculated in the above model so that the emitted focusing ultrasonic waves reach the focusing point simultaneously. The excitation signal p_j(t) used in the model calculations is a long-duration modulated signal and is difficult to emit with the actual transducer. Therefore, in phased array focusing, the excitation signal of the jth array element adopts a single-frequency signal f_j(t) with two cycles.
We perform phased array focusing based on the time delay calculated by Equation (10). Each element of the transducer emits the signal f_j(t) with the time delay given by Equation (10); that is, the excitation signal of the jth array element is f_j(t − τ_j). When all array elements emit this signal f_j(t − τ_j) simultaneously, the ultrasonic field is focused at the focal point, as shown in Figure 8.
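A sketch of the delay-extraction step described above: the received field p_o(t) is cross-correlated with the element's LFMB excitation p_j(t), and the lag of the compressed correlation peak is taken as τ_j. Function and variable names are illustrative, not from the simulation code used in this work.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def element_delay(p_j, p_o, fs):
    """Estimate the travel time tau_j (s) from element j to the focal point.

    p_j : excitation (LFMB) signal of element j
    p_o : signal received at the focal point under excitation of element j
    fs  : sampling rate in Hz
    """
    r = correlate(p_o, p_j, mode='full')                # R_j(tau) = sum_t p_o(t) p_j(t - tau)
    lags = correlation_lags(len(p_o), len(p_j), mode='full')
    return lags[np.argmax(np.abs(r))] / fs              # lag of the compressed peak

# delays = np.array([element_delay(p_j, received[j], fs) for j, p_j in enumerate(excitations)])
# firing_delays = delays.max() - delays                # slowest path fires first, so arrivals coincide
```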
Comparison of Time Delay Calculations
Following the method shown in Figure 8, in the micro-CT model we excite the unmodulated CE signal and the LFMB code on each array element of the transducer. The time delay from each array element to the focusing point is calculated according to Equation (10).
Figure 9a shows the time delays calculated with the unmodulated CE signal and with the LFMB code, respectively. Some of the time delays calculated by the two excitation signals are equal, while most of them show a large deviation between the calculated values, with deviations ranging from 0 to 10 µs. Figure 9b shows the difference between the time delays calculated for the two excitation signals. When the received waveform is not distorted by the frequency offset and the superposition of scattered waveforms, the calculated time delays of the two excitation signals are the same. Where the cross-correlation calculation produces an error due to the frequency offset, the difference obtained from the calculation is between 0.5 and 1 µs; and when the received waveform peak lags due to distortion and scattered-wave superposition, the calculated difference is greater than 1 µs.
To show the computational error caused by transcranial ultrasound distortion more visually, Figure 10a shows the received waveforms at the focusing point for the unmodulated CE signal and the LFMB code excited at the same array element. The 64th array element is located at the center of the array, and its excitation signal is close to vertical incidence on the outer surface of the skull. It can be seen in Figure 10a that both excitation signals are distorted by scattering after transmission through the skull. The cross-correlation results of the two signals are shown in Figure 10b; the calculated time delays are 29.53 µs for the CE signal and 26.09 µs for the LFMB code, a difference of 3.44 µs. In Figure 10b, the correlation peak of the CE signal is 0.2045, while the correlation peak of the LFMB code is 26.21, which indicates that the time-bandwidth product increased by frequency modulation and phase coding raises the peak of the correlation computation. We measure the accuracy of the time delay calculation by the peak sidelobe ratio of the cross-correlation calculation. The peak sidelobe ratio is the ratio of the highest peak to the second highest value in the cross-correlation; the higher the peak sidelobe ratio, the higher the calculation accuracy.
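The peak-sidelobe-ratio figure of merit used here can be computed directly from the cross-correlation. A sketch follows; the sidelobe is taken as the largest value outside a main-lobe exclusion window, and the width of that window is an assumption.

```python
import numpy as np

def peak_sidelobe_ratio_db(r, mainlobe_halfwidth):
    """PSLR in dB: 20*log10(highest peak / highest value outside the main lobe)."""
    r = np.abs(np.asarray(r, dtype=float))
    k_peak = int(np.argmax(r))
    peak = r[k_peak]
    mask = np.ones_like(r, dtype=bool)
    lo = max(0, k_peak - mainlobe_halfwidth)
    hi = min(r.size, k_peak + mainlobe_halfwidth + 1)
    mask[lo:hi] = False                      # exclude the main lobe around the peak
    sidelobe = r[mask].max()
    return 20.0 * np.log10(peak / sidelobe)

# With the values quoted in the text:
# 20*log10(0.2054/0.2004) ~ 0.21 dB (CE signal), 20*log10(26.21/13.53) ~ 5.74 dB (LFMB code)
```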
The second highest correlation peak of the CE signal is 0.2004, and that of the LFMB code is 13.53. The peak sidelobe ratio of the CE signal is 20 × log10(0.2054/0.2004) = 0.21 dB, and the peak sidelobe ratio of the LFMB code is 20 × log10(26.21/13.53) = 5.74 dB. Pulse compression coding therefore increases the peak sidelobe ratio by 5.53 dB.
Phased Array Focusing with Different Time Delays
Phased array focusing is performed with the time delay settings in Figure 9a; the signal used for focusing is a 1 MHz pulse wave. The sound pressure of the pulse wave excited on each array element is 1 MPa, and the depth of focus is 30 mm below the skull. Figure 11 shows the map of the maximum sound pressure at each point of the sound field around the focusing point. The x-axis direction in the figure is the depth direction, and a larger positive x value indicates a focusing point farther away from the skull surface, i.e., deeper. The y-axis direction is the transverse direction, and the line array is symmetric about the y-axis. The origin in Figure 11 is located 50 mm below the central array element, i.e., 30 mm below the skull. Figure 11a shows phased array focusing in water in the absence of the skull. Figure 11b,c show the focusing distributions with the skull present in water. Figure 11b shows the distribution of the phased array focusing sound field with the time delays obtained from the unmodulated CE signal, and no effective focus is formed. Figure 11c shows the phased array focusing sound field distribution with the time delays obtained from the LFMB code. When the time delay is calculated accurately, the phased-array method can achieve an effective focus in the micro-CT model.
This shows that phased array focusing can still be realized in the micro-CT model of a complex skull structure, provided the delay settings are precise. Taking the −3 dB width of the peak sound pressure at the focal point as the effective range of the focused beam, the −3 dB widths in the x-axis and y-axis directions at the focal point in Figure 11a are 4.0 mm and 0.8 mm, respectively, while in Figure 11c they are 6.3 mm and 1 mm, respectively. The beam transmitted through the skull is therefore broadened.
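A sketch of how a −3 dB beamwidth can be read off a simulated peak-pressure profile along one axis; the half-power level in pressure amplitude and the linear interpolation at the crossings are the assumptions made here.

```python
import numpy as np

def width_minus3db(axis_mm, pressure):
    """Full width of the main lobe at -3 dB (pressure / sqrt(2)), in the units of axis_mm."""
    p = np.abs(np.asarray(pressure, dtype=float))
    k = int(np.argmax(p))
    level = p[k] / np.sqrt(2.0)                  # -3 dB in pressure amplitude
    # walk left from the peak until the profile drops below the level, then interpolate
    i = k
    while i > 0 and p[i] >= level:
        i -= 1
    left = np.interp(level, [p[i], p[i + 1]], [axis_mm[i], axis_mm[i + 1]])
    # same to the right of the peak
    j = k
    while j < p.size - 1 and p[j] >= level:
        j += 1
    right = np.interp(level, [p[j], p[j - 1]], [axis_mm[j], axis_mm[j - 1]])
    return right - left
```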
Effect of Signal Frequency on Focusing
Figure 12 shows the variation in the focused beam at different frequencies. The beam width decreases significantly along both the y-axis and the x-axis as the frequency increases, while the sound pressure at the focusing point decreases with increasing frequency; the attenuation is already significant at 1 MHz. As analyzed above, the bone medium of the micro-CT model has low intrinsic attenuation, and an increase in frequency does not significantly increase absorption. The decrease in the sound pressure values is therefore due to the enhanced scattering of the signal in the trabecular bone as the wavelength is reduced [10]. When the focusing frequency is increased from 0.5 MHz to 1 MHz, the sound pressure at the focal point decreases by 3.2 dB, the x-axis −3 dB beamwidth decreases from 9.7 mm to 6.3 mm, and the y-axis −3 dB beamwidth decreases from 2.3 mm to 1 mm. When the focusing frequency is increased to 1.5 MHz, the x-axis −3 dB beamwidth is 4.4 mm, the y-axis −3 dB beamwidth is 0.6 mm, and the focal sound pressure decreases by 6.4 dB compared with that at 0.5 MHz.
Focus Depth Modulation and Beam Deflection
Ultrasound phased array technology can adjust the depth of the focusing point by setting the time delays, or shift the position of the focusing point by beam deflection. The time delays for the different focusing points are likewise calculated from the LFMB code.
Figure 13 shows the focusing effect at different depths below the skull achieved by the phased array in the micro-CT model. The depths of the focusing point are 18 mm, 27 mm and 39 mm, respectively. Figure 13a–c shows the focused beams of the phased array at the different depths, with the red circle marking the location of the focusing point. Figure 13d,e show the x-axis and y-axis beamwidths of the focal points at the different depths, respectively. Focusing at the shallow depth of 18 mm results in a focus shift, because the incidence angles from the transducer elements to a shallow focal point are larger. At the same time, as the focal depth increases, the beam width at the focus widens and the peak sound pressure at the focus increases. This suggests that the ultrasonic phased array is better suited to focusing in the deep brain than in the shallow cerebral cortex.
Figure 14 shows the transverse shift of the focus along the y-axis controlled by beam deflection; the offset distance is 9 mm. Figure 14a–c shows the focused sound fields with the focus offset by ±9 mm along the transverse y-axis, relative to the reference position 27 mm below the skull. Figure 14d,e show the x-axis and y-axis widths of the focused beam, respectively. We use the delay settings to make the transducer array achieve beam-deflection focusing. In deflection focusing, the focal points with a deflection distance of ±9 mm all form an effective focus. The peak sound pressures at the different deflection distances are close to each other, which indicates that the focal point in transcranial phased array focusing is controllable.
Effect of Amplitude Regulation
Because the trabecular bone structure differs at different positions of the skull, the attenuation of the ultrasonic waves transmitted through the skull also varies. The path of phased array focusing can therefore be optimized by amplitude-weighting the transducer array elements. The amplitude weight of each array element can be obtained from the cross-correlation results calculated in Equation (10). The amplitude weight A_j of the jth element is calculated as [58]:
N is the number of array elements, and the amplitude modulation consists of multiplying the initial amplitude of the jth element by the amplitude weight A_j. Figure 15a shows the amplitude regulation value of each array element. As before, we set the focal point 30 mm below the skull and 50 mm below the central array element of the transducer array; the frequency of the focused ultrasonic waves is 1 MHz. The peak amplitude of the sound pressure at the focal point is 7.26 × 10^4 Pa, as shown in Figure 15b. The peak amplitudes of the sound pressure at the focal point in Figure 11a,c are 8.39 × 10^5 Pa and 6.71 × 10^4 Pa, respectively. Comparing Figure 11a,c and Figure 15b, the peak sound pressure at the focal point is attenuated by 20 lg(8.39 × 10^5 / 6.71 × 10^4) = 21.9 dB without amplitude modulation and by 20 lg(8.39 × 10^5 / 7.26 × 10^4) = 21.2 dB with amplitude modulation. The amplitude modulation method therefore improves the focusing effect: the peak sound pressure amplitude is improved by 0.7 dB, i.e., the sound pressure amplitude is increased by 7.26 × 10^4 − 6.71 × 10^4 = 5.5 × 10^3 Pa.
The amplitude regulation of the transducer array can thus improve the focusing efficiency to some extent. Amplitude regulation effectively adjusts each transducer element's contribution to the focusing point according to the quality of its focusing path, which tends to be related to the porosity of the trabecular bone and the complexity of the bone structure along that path.
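The expression for A_j from [58] did not survive extraction. A common choice is to weight each element in proportion to its cross-correlation peak (i.e., to the transmission efficiency of its path), normalized so that the mean weight is one; the sketch below implements that assumed normalization and should not be read as the exact formula of [58].

```python
import numpy as np

def amplitude_weights(corr_peaks):
    """Assumed weighting: drive amplitude of element j scaled by its path efficiency.

    corr_peaks : (N,) peak values of R_j from Equation (10), one per array element.
    Returns A_j normalized so that mean(A_j) = 1 (total drive level roughly preserved).
    """
    r = np.asarray(corr_peaks, dtype=float)
    return r * (r.size / r.sum())

# drive_amplitude[j] = initial_amplitude[j] * amplitude_weights(peaks)[j]
```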
Preliminary Experimental Verification
To validate our methodology and conclusions, we prepare the skull and phased array for preliminary experimental validation. Initially, we use micro-CT images to locate the position of the micro-CT model slices. The CT data are imported into the Mimics 21.0 software. We determine the number of CT slices corresponding to specific structures of the skull (holes and gaps) based on the CT images. The location of the CT slices in the actual skull is determined by calculating the distance between the CT slices, and the position of the acoustic window is determined in the same way. The transducer array is placed at the position set by the micro-CT model, symmetric about the acoustic window. The center of the acoustic window on the skull is marked to confirm the position of the central element of the array. The skull is first placed close to the central element of the transducer, and the relative distance between the array and the skull is controlled by measuring the movement of the platform. The tilt angle and position of the skull are adjusted to ensure that the relative positions of the acoustic window, the skull and the array are consistent with those in the micro-CT model. In the experiment, we use a 32-element line phased array with a wafer size of 1.5 mm × 20 mm, an array spacing of 1.5 mm, and a center frequency of 2 MHz. The focusing point is 50 mm away from the central array element, and a hydrophone is placed at the position of the focusing point. Prior to the experiment, the skull is placed in water for 2 days to ensure that the pores are filled with water. The experiment is performed in a water tank in which the water has been allowed to rest for more than two weeks, as shown in Figure 16a.
Simulation calculations are set up for the phased array, the skull and the distance to the focusing point; the CE signal and the LFMB code are used in the simulation to calculate the time delay from each array element of the transducer to the focusing point. The time delay calculated in the simulation model is the absolute delay from the array element to the focusing point, and the delays of all elements are shown in Figure 16b. In the experiment, we use the time delays obtained from the simulation calculation for transcranial ultrasound focusing, exciting the phased array with the time delays calculated by the LFMB code, the time delays calculated by the CE signal, and no time delay setting, respectively. The transmit signal used for transcranial ultrasound focusing in the experiment is a 4-cycle, 2 MHz sine wave, and the signal received by the hydrophone at the focusing point is shown in Figure 16c. It can be seen that the waveforms of the signals received by the hydrophone are all distorted and scattered. The peak focusing voltage is 11.2 mV with the time delays calculated using the LFMB code, 2.7 mV with the time delays calculated using the CE signal, and 7.4 mV with no time delay. The strong distortion of the transcranial CE signal leads to errors in the time delay calculation and reduces the received signal voltage at the focusing point, whereas the LFMB code improves the accuracy of the time delay calculation, so the received signal voltage at the focusing point is 8.5 mV higher than with the time delay setting calculated from the CE signal and 3.8 mV higher than with no delay setting.
The above experimental results indicate that the method proposed in this paper is effective and feasible.
Conclusions
In this paper, we use micro-computed tomography to obtain micro-CT images with a resolution of 60 µm. Based on these images, we construct the micro-CT model, which restores the pore and bone layer structure of the trabecular bone. Model calculations verify that the scattering of ultrasonic waves by the trabecular bone pores leads to the high attenuation characteristics of the skull. Moreover, ultrasonic waves transmitted through the skull undergo frequency shifts and the superposition of scattered components.
Therefore, we adopt the pulse compression method and design the LFMB code to refine the calculation of the time delay. The combination of the pulse compression method and the cross-correlation calculation effectively improves the accuracy of the time delay obtained from a complex received signal. Using the time delays obtained by this method, the acoustic field characteristics of phased array focusing are investigated. We evaluate the effects of the signal frequency, beam deflection and amplitude modulation on transcranial phased array focusing.
In previous studies, the skull has been treated as a highly attenuating medium layer with positively frequency-dependent attenuation. Our micro-CT model study shows that the bone material itself does not exhibit high attenuation; rather, it is the pore and bone-layer structure of the trabecular bone that strongly attenuates the ultrasonic waves transmitted through the skull. The attenuation caused by scattering is further enhanced as the frequency of the focused ultrasonic waves increases. The trabecular bone structure of the skull varies greatly between locations, which indicates that transcranial ultrasonic focusing can improve its efficiency by using regulation methods. Amplitude regulation is a simple method for optimizing focusing efficiency, which optimizes the ultrasound focusing path by weighting the amplitudes of the array elements. Meanwhile, we investigate the feasibility of beam-deflection focusing of linear arrays by time delay setting. With the accurate time delays calculated from the LFMB code, the transducer array can be focused at different positions and depths inside the skull.
In implementations of transcranial focused ultrasound therapy using phase aberration correction, the skull is often treated simply as a three-layer nonuniform medium; in this idealization the fine structure of the trabecular bone is reduced to a uniform medium, and refraction-induced deflections of the ray paths inside the skull are ignored. This leads to a computationally incorrect estimation of the focused acoustic field as well as an actual focal shift. The cranial microstructural model developed in this paper helps to analyze the unpredictable bending of wave trajectories induced by the trabecular layer, thus providing further insight into the main factors contributing to transcranial ultrasound attenuation as well as beam scattering and deflection. In addition, the pulse compression method is universally applicable to transcranial ultrasound computational models, and works well to improve the accuracy of time delay calculations in complex media. The pulse compression method provides a reference for fast and accurate phase estimation for more precise transcranial ultrasound focusing and treatment. Preliminary experimental validation has been presented in this paper; we will further refine the experiments and demonstrate the applicability of the method in transcranial ultrasound focusing.
Equation (2) parameters: ρ_w and c_w are the density and sound velocity of water, ρ_b and c_b are the density and sound velocity of the skull, and α_b is the attenuation factor of the skull. In the calculations, ρ_w = 1000 kg/m³, ρ_b = 1900 kg/m³, c_w = 1500 m/s, and c_b = 2900 m/s.
The micro-CT image provides the distribution of Hounsfield values, from which the acoustic parameters of the skull are obtained; the porosity φ of the skull is first calculated from the Hounsfield value H of each pixel point in the CT image.
The working frequency of 1 MHz is chosen by considering the focusing effect, focusing efficiency and array size. The excitation signals are four-period cosine envelope (CE) signals with a frequency of 1 MHz. The skull and transducer array are placed in water, and the received signal at the focusing point is calculated from Equation (3). With these settings, the maximum amplitude of the received signal at the focusing point is A_b, while the maximum amplitude in the absence of the skull is A_w; the ultrasonic loss in the skull is 20 lg(A_w/A_b).
Figure captions
Figure 1. (a) Schematic representation of the skull scanning region, the acoustic window and the position of the phased array. (b) Skull slice images, acoustic window and line array coordinate positions.
Figure 2. Bone structure and acoustic parameters in the models. (a,b) The micro-CT model, (c,d) the clinical CT model; (a,c) sound velocity c and (b,d) attenuation factor α.
Figure 3. Ultrasonic loss from each array element to the focusing point.
Figure 4. Transmission skull ultrasonic waveforms received at the focusing point for the micro-CT model and the clinical CT model.
Figure 5. Comparison between the transcranial skull signal received at the focusing point and the excitation source signal of the transducer in the micro-CT model and the clinical CT model. (a) Signal waveforms and (b) signal spectra.
Figure 6. The ambiguity function of the excitation signal. (a) The unmodulated CE signal; (b) the frequency-modulated signal.
Figure 7. Signal characteristics of the LFMB code. (a) The time-domain signal; (b) the ambiguity function.
Figure 8. Transcranial ultrasound phased array focusing based on the pulse compression method.
Figure 9. Time delay of each array element computed by the unmodulated CE signal and the LFMB code, and the difference between the computed time delays of the two signals. (a) Time delay of each array element; (b) difference between the computed time delays of the two signals.
Figure 10. Received signal waveforms of the excitation signal at the focusing point and the results of the cross-correlation calculation of the received signals. (a) Received signal waveforms at the focusing point of the unmodulated CE signal and the LFMB code; (b) results of the cross-correlation calculation of the received signals of the unmodulated CE signal and the LFMB code.
Figure 11. Maximum sound pressure field maps for phased array focusing based on different time delay settings, with the origin of the plot located 50 mm below the central array element of the transducer array. The frequency of the focusing ultrasonic waves is 1 MHz. (a) Phased array focusing in water in the absence of the skull, (b) focusing with the time delay calculated from the unmodulated CE signal, and (c) focusing with the time delay calculated from the LFMB code.
Figure 12. Focused beams of different frequency focusing signals at the time delay setting of the LFMB code calculation. (a) X-axis beamwidth; (b) y-axis beamwidth.
Figure 13. Focusing at different depth positions (red circles) by the phased-array method. (a–c) The two-dimensional focusing ultrasonic field distribution with a focus depth of 18 mm, 27 mm, and 39 mm, respectively. (d,e) The x-axis and y-axis ultrasonic beamwidths at the different depth positions.
Figure 15. Amplitude weighting for each array element and the focused sound field after amplitude modulation, with the origin of the plot located 50 mm below the central array element of the transducer array. The frequency of the focused ultrasonic waves is 1 MHz. (a) Amplitude weighting of each array element; (b) focused ultrasonic field after amplitude modulation.
Figure 16. Focusing with the time delay settings of the 32-element phased array. The hydrophone receives signals transmitted through the skull on the other side of the skull at the focusing point, and the received signals are displayed on an oscilloscope. (a) Positions of the line phased array, skull, focusing point and hydrophone, with the focusing point 50 mm directly below the central array element of the line array. (b) Time delay calculated by the micro-CT simulation model. (c) Hydrophone received signal shown on an oscilloscope.
17,760.4
2023-12-01T00:00:00.000
[ "Engineering", "Medicine", "Physics" ]
First-order Three-Point BVPs at Resonance (II) This paper deals with the existence of solutions to three-point BVPs in perturbed systems of first-order ordinary differential equations at resonance. An existence theorem is established by using the Theorem of Borsuk, and some examples are given to illustrate it. A result for computing the local degree of polynomials whose terms of highest order have no common real linear factors is also presented.
Introduction
In this paper, we consider
where M, N and R are constant square matrices of order n, A(t) is an n × n matrix with continuous entries, E : [0, 1] → R^n is continuous, F : [0, 1] × R^n × (−ε_0, ε_0) → R^n is a continuous function, ε ∈ R with |ε| < ε_0, and η ∈ (0, 1).
The work is motivated by Cronin [6,7], who considered the problem of finding periodic solutions of perturbed systems. We adapt her approach to study three-point BVPs with linear boundary conditions using the methods and results of Cronin [6,7]. The three-point BVP (1), (2) is called resonant or degenerate when the rank of the matrix L is n − r with 0 < n − r < n, that is, when the matrix L = M + N Y_0(η) + R Y_0(1) is singular, where M, N and R are the constant n × n matrices given in (1), Y(t) is a fundamental matrix of the linear system x′ = A(t)x, and Y_0(t) = Y(t) Y^{-1}(0). In studying the resonant case, we will use a finite-dimensional version of the Lyapunov–Schmidt procedure (see [7]).
Recently, Mohamed et al. [30] established the existence of solutions at resonance for a system with nonlinear boundary conditions, where M, N and R are constant square matrices of order n and A(t) is an n × n matrix with continuous entries.
In this paper, we make use of the Theorem of Borsuk to show the existence of solutions of the BVP (1), (2) under suitable assumptions on the coefficients. We obtain the existence of solutions of three-point BVPs at resonance for general boundary conditions. We also present a result for computing the degree of ψ_0(c) = (ψ_0^1(c_1, c_2), ψ_0^2(c_1, c_2)) at (0, 0), where ψ_0^1 and ψ_0^2 are polynomials whose terms of highest order have no common real linear factors; see Cronin [7], pp. 296–297. This result is for homogeneous polynomials in two variables which need not be odd functions, while Borsuk's Theorem holds for continuous odd functions in any dimension.
These results generalize the degenerate case of periodic BVPs considered by Cronin [6,7], and also the degenerate case of three-point BVPs [13,30].
Preliminaries
Lemma 2.1. Consider the system
where A(t) is an n × n matrix with continuous entries on the interval [0, 1]. Let Y(t) be a fundamental matrix of (5). Then the solution of (5) which satisfies the initial condition
Lemma 2.2. [30] Let Y(t) be a fundamental matrix of (5). Then any solution of (1) and (6) can be written as
The solution of (1) satisfies the boundary conditions (2) if and only if
where
and x(t, c, ε) is the solution of (1) given x(0) = c. The system (8), which must be solved for the components of c, is sometimes called the branching equations.
Next we suppose that L is a singular matrix. This is sometimes called the resonance case or degenerate case. Now we consider the case rank L = n − r, 0 < n − r < n. Let E_r denote the null space of L and let E_{n−r} denote its complement in R^n. Let P_r be the matrix projection onto Ker L = E_r, and P_{n−r} = I − P_r, where I is the identity matrix. Thus P_{n−r} is a projection onto the complementary space E_{n−r} of E_r, and P_r^2 = P_r, P_{n−r}^2 = P_{n−r} and P_{n−r} P_r = P_r P_{n−r} = 0.
Without loss of generality, we may assume
We will identify P_r c with c_r = (c_1, ..., c_r), and it is convenient to do so. Let H be a nonsingular n × n matrix satisfying
The matrix H can be computed easily (see Cronin [7]). The nature of the solutions of the branching equations depends heavily on the rank of the matrix L. Next we give a necessary and sufficient condition for the existence of solutions x(t, c, ε) of three-point BVPs for ε > 0 such that the solution satisfies x(0) = c, where c = c(ε) for suitable c(ε).
We need to solve (8) for c when ε is sufficiently small. The problem of finding solutions to (1) and (2) is reduced to that of solving the branching equations (8) for c as a function of ε, which is equivalent to
Multiplying (8) by the matrix H and using (11), we have
where
and
Since the matrix H is nonsingular, solving (8) for c is equivalent to solving (12) for c. The following theorem, due to Cronin [6,7], gives a necessary condition for the existence of solutions to the BVP (1) and (2).
Theorem 2.4. A necessary condition that (12) can be solved for c, with |ε| < ε_0, for some
where c_{n−r}(c_r, ε) = c_{n−r} is a differentiable function of c_r and ε, and P_r H N is interpreted accordingly. Similarly, we will sometimes identify P_{n−r} c and c_{n−r}. Setting ε = 0, we have
where c_{n−r}(c_r, 0) = P_{n−r} H d; note that from the context in which c_{n−r}(c_r, 0) = P_{n−r} H d is interpreted, it follows that the matrix H is the identity matrix. Thus we define a continuous mapping
Main Results
Now we state the well-known Theorem of Borsuk (see, for example, Piccinini, Stampacchia and Vidossich [31], p. 211).
Theorem 3.1. Let B_k ⊆ R^n be a bounded open set that is symmetric with respect to the origin (that is, B_k = −B_k) and contains the origin. If Φ_0 : B̄_k → R^n is continuous, antipodal on the boundary (that is, Φ_0(−c) = −Φ_0(c) for c ∈ ∂B_k) and nonzero on ∂B_k, then the degree d(Φ_0, B_k, 0) is an odd number (and thus nonzero).
Next we introduce the computation of the topological degree of a mapping in Euclidean 2-space defined by homogeneous polynomials. The methods and notation described below come from Cronin [7,8]. Let
where C_1, C_2 are constants. (We include the possibility that some a_i = ∞ or some b_j = ∞; equivalently, that the factor y − a_i x is equal to −x or the factor y − b_j x is equal to −x.) The topological degree of (Φ_0^1, Φ_0^2) is resolved by examining the changes of sign of Φ_0^1(c_1, c_2) and Φ_0^2(c_1, c_2) as (c_1, c_2) varies over the boundary of the ball B_k with centre at the origin and arbitrary radius. We may omit the following factors, since none of them affects the degree of (Φ_0^1, Φ_0^2) on B_k at 0:
1. Factors where a_i and b_j have complex conjugates in Φ_0^1, respectively Φ_0^2.
2. Factors which appear with even exponents, where a_i and b_j are real.
3. Factors (c_1 − a_i c_2) and (c_1 − a_{i+1} c_2), if there exists a pair a_i, a_{i+1} (i < i + 1) such that no b_j lies between them (i.e., there is no b_j such that a_i < b_j < a_{i+1}); similarly for pairs b_j, b_{j+1}.
4. Factors (c_1 − a_r c_2) and (c_1 − a_s c_2), if a_r and a_s are the smallest and largest of the array of numbers a_1, ..., a_n, b_1, ..., b_m; similarly, factors (c_1 − b_r c_2) and (c_1 − b_s c_2), if b_r and b_s are the smallest and largest of that array of numbers.
If there are no remaining factors in Φ_0^1 or Φ_0^2, then the topological degree is zero.
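The sign-change procedure above can be cross-checked numerically: for a continuous map of the plane that does not vanish on the boundary circle, the Brouwer degree at 0 equals the winding number of the image of that circle. A small sketch follows; the example map is purely illustrative and is not taken from the paper.

```python
import numpy as np

def degree_at_origin(phi, radius=1.0, samples=4001):
    """Brouwer degree of phi: R^2 -> R^2 at 0 over the disc of the given radius,
    computed as the winding number of phi restricted to the boundary circle
    (phi must be nonzero everywhere on that circle)."""
    t = np.linspace(0.0, 2.0 * np.pi, samples)          # closed loop: last point = first point
    c1, c2 = radius * np.cos(t), radius * np.sin(t)
    u, v = phi(c1, c2)
    ang = np.unwrap(np.arctan2(v, u))                   # continuous argument of phi along the loop
    return int(round((ang[-1] - ang[0]) / (2.0 * np.pi)))

# e.g. the odd map (c1, c2) -> (c1^3 - 3 c1 c2^2, 3 c1^2 c2 - c2^3), i.e. z -> z^3,
# has degree 3 at the origin, odd and nonzero as Borsuk's theorem requires.
print(degree_at_origin(lambda x, y: (x**3 - 3*x*y**2, 3*x**2*y - y**3)))   # -> 3
```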
conditions 1, 2, 3, and 4 above, then the topological degree of (Φ_0^1, Φ_0^2) at 0 is p or −p for some integer p ≤ min{m, n}, the sign depending on the relative arrangement of the remaining factors of Φ_0^1 and Φ_0^2. Hence the degree can be computed on B_k, a ball with centre at the origin and sufficiently large radius; when it is nonzero, for sufficiently small ε there is a solution x(t, c, ε) of the BVP (1), (2). Remark 3.3. In this paper, we find that an arbitrarily small change in A(t) will affect the structure of the set of solutions, and the value of the local degree will depend on how the function f(t, y, y′, ε) is changed. Applications and Examples In this section, we apply our results from the previous section. We start by considering the degenerate case for α = √2 on the interval [0, 2π], with rank L(α = √2) = 1 < 2. Then we study the totally degenerate case, rank L = 0, for general boundary conditions, and give an example where Borsuk's Theorem or Theorem 3.2 applies. We will use the following fact in solving the examples: ∫_0^1 sin^n(2πs) cos^m(2πs) ds ≠ 0 (20) if and only if both n and m are even. In the first example, rank L(α = √2) = 1 < 2 with α = √2 and y′(0) = 0; since Φ_0(c_1) is odd, the local degree is odd and therefore nonzero, and the conclusion follows for a ball of sufficiently large radius. We apply Borsuk's Theorem in Example 1, and then Theorem 3.2 in the totally degenerate case rank L = 0.
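As a worked illustration of the factor-counting rule behind Theorem 3.2, consider the following example of our own (it is not one of the paper's examples), written in LaTeX:

\[
\Phi_0^1(c_1, c_2) = c_1^2 - c_2^2 = (c_1 - c_2)(c_1 + c_2), \qquad
\Phi_0^2(c_1, c_2) = c_1 c_2 ,
\]
so that $a_1 = 1$, $a_2 = -1$, $b_1 = 0$ and $b_2 = \infty$ (the factor $c_2$ corresponds to $b_j = \infty$). The two polynomials have no common real linear factor, and none of the omission rules 1-4 removes a factor, because the numbers $-1, 0, 1, \infty$ alternate between the $a_i$ and the $b_j$. Theorem 3.2 then gives degree $\pm p$ with $p = 2 = \min\{m, n\}$. The sign can be checked directly: writing $z = c_1 + i c_2$ gives $z^2 = (c_1^2 - c_2^2) + i\,(2 c_1 c_2)$, so $(\Phi_0^1, \Phi_0^2)$ is, up to a positive rescaling of the second component, the real form of $z \mapsto z^2$, whose local degree at $0$ is $+2$.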
2,648.8
2011-08-01T00:00:00.000
[ "Mathematics" ]
Alkali activated cements mix design for concrete applications in highly corrosive conditions. This paper presents the development of corrosion resistant fly ash alkali-activated cements based on regulating the phase composition of the hydration products by changing the alkali content and the content of calcium-containing cement constituents, resulting in increased strength and density of the cement stone. The results suggest that cement compositions whose hydration products are dominated by weakly soluble low-basic calcium hydrosilicates, hydrogarnets and minerals similar to natural hydroaluminates exhibit the highest corrosion resistance. The comparison shows that the alkali-activated cements of Types APC III-400 and ACC V-400, classified according to the National Ukrainian Standard DSTU B V.2.7, have high corrosion resistance compared to OPC, which allows the developed cements to be recommended for concretes intended for use in aggressive environments, including sodium and magnesium sulphate solutions. Coefficients of corrosion resistance of the concretes remain higher than 1 even after 42 months. Introduction With increasing volumes of construction, a first-priority issue is how to improve the durability of concrete structures. Durability of concrete structures depends on service conditions and, among many other factors, on the characteristics of the cement used for making the concrete. Of special interest is the durability of corrosion resistant concretes for special applications. Until now, there has been no unified solution for concrete mix design that would meet all needs with regard to corrosion resistant concretes. The most important characteristics that determine the corrosion resistance of concretes are: the type of cement and its mineralogical composition, the composition of the hydration products, the water-to-cement ratio (W/C), the type of admixtures, additives and aggregates, and the pore structure. In corrosion resistant concrete mix design, the service conditions and aggressive exposure conditions should also be taken into account. Alkali activated cements based on granulated blast-furnace slags and fly ashes, developed by the scientific school headed by Professor V. D. Glukhovsky, can be successfully used for making corrosion resistant concretes [1-3]. Alkali activated cement concretes have high strength (40-120 MPa and higher), frost resistance (1000 cycles), water penetration resistance (W10-W50), and high corrosion resistance in various mineral and organic environments [4,5]. Further research allowed the development of fly ash alkali activated cements in which the fly ash content reached 90% [6-9]. These fly ash alkali activated cements can be an important alternative to portland cements, which contain at most 30% fly ash, since their physical-mechanical properties are similar to those of portland cements, while the fly ash alkali activated cement stone is characterized by higher weather, frost and corrosion resistance. Raw materials and methods of examination The composition of the concrete without admixtures was constant in all experiments. Low-calcium class F fly ash (per ASTM C 618), ground to a specific surface of 800 m2/kg (Blaine), with sodium carbonate and sodium metasilicate pentahydrate as alkaline activators, were used as the basic raw materials.
Ordinary portland cement (OPC) Type I Grade 52.5 (a commercial product with a specific surface of 380 m2/kg) and blast-furnace slag (specific surface 450 m2/kg) were taken as additional cement constituents. Chemical composition of the raw materials is shown in Table 1. The fly ash alkali activated cements under study, namely alkali activated pozzolanic cement APC III and alkali activated composite cement ACC V [13], were prepared by mixing the separately pre-ground fly ash and slag with OPC, alkaline activator and plasticizer in a ball mill. Slag portland cement M400 (OPC III/A), according to the Ukrainian classification [14], was taken as the reference. The composition of the fly ash alkali activated cements under study is given in Table 2. Corrosion resistance of the fly ash alkali activated cements under study was determined according to the recommendations given in [10] by measuring strength variations of specimens placed in aggressive environments (10% and 5% solutions of sodium sulfate (Na2SO4), 4% and 2% solutions of magnesium sulfate (MgSO4), and concentrated sea water). The specimens were kept for 3 days under normal conditions and then for 25 days in technical-grade water. The test ages were 30, 60, 90, 180 and 1300 days. Corrosion resistance was expressed by a coefficient of corrosion resistance, calculated as the ratio between the flexural strength of a specimen after 12 months of storage in the aggressive environment and the flexural strength of a specimen after 12 months of storage in water. According to the above recommendations, the cement stone is considered corrosion resistant if the coefficient of corrosion resistance after 12 months of storage is equal to or higher than 0.8. Results and discussion At the first stage of the study, the main physical-mechanical characteristics of the fly ash alkali activated cements were determined (Fig. 1). The test results show that the developed cements meet the compressive strength requirements for the alkali activated cements APC and ACC of strength class M400 (their compressive strength was 38.4-41.3 MPa). The results of the corrosion resistance study (Fig. 2, a-f) show that, at the age of 30 days, all cements under study had increased their strength after storage in the aggressive environments compared to the reference cement. This can be attributed to the deepening of the hydration process. However, at the age of 2-3 months the strength of most of the cements under study tended to decrease. By the age of 6 months, the cement OPC III/A-400 had rapidly lost strength compared to the fly ash alkali activated cements APC and ACC. Thus, the cement OPC III/A-400 cannot be considered a corrosion resistant cement (Fig. 3). The calculated coefficients of corrosion resistance (Table 3) show that the developed fly ash alkali activated cements (APC and ACC) have much better resistance against aggressive environments than OPC. At the age of 9 months, the coefficient of corrosion resistance of these cements did not fall below 1, and at the age of 12 months the developed composite cement ACC retained good flexural strength (coefficient of corrosion resistance of 1.0 and higher), while the coefficient of corrosion resistance of the developed cement APC declined below the critical level only in sea water and in the 4% magnesium sulfate solution.
This result can be explained by the constructive character of the combined alkaline-sulfate activation of the fly ashes [11]. Analysis of the coefficients of corrosion resistance (Table 3) suggests that the corrosion resistance of the developed fly ash alkali activated cements is directly related to the quantity of calcium-containing cement constituents: with regard to the calcium-containing cement constituents, the corrosion resistance decreases in the order ACC, APC, OPC III/A-400. With regard to the different aggressive environments, the corrosion resistance decreases in the order sea water, sodium sulfate, magnesium sulfate. Conclusion The results of the study show that the developed fly ash alkali activated cements, namely the alkali activated pozzolanic and alkali activated composite cements, exhibit higher corrosion resistance than the slag portland cements and can be successfully used for making corrosion resistant concretes for use in severely aggressive environments. Concretes based on the fly ash alkali activated cements APC and ACC exhibit high corrosion resistance (Ks = 1.04-1.39 after 12 months of storage in aggressive environments), with the concrete based on the developed composite cement ACC showing the higher values. At the age of 42 months the slag OPC concretes were totally destroyed, whereas the alkali activated concretes still showed high corrosion resistance.
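To make the corrosion-resistance criterion used above explicit, here is a minimal Python sketch of our own (the strength values are hypothetical, for illustration only) showing how the coefficient of corrosion resistance Ks is obtained as the ratio of flexural strengths after equal storage times in the aggressive environment and in water, with Ks >= 0.8 taken as the threshold:

# Minimal sketch (our own illustration): coefficient of corrosion resistance Ks,
# computed as described above:
#   Ks = (flexural strength after storage in the aggressive medium)
#        / (flexural strength after the same storage time in water).
# All numerical values below are hypothetical.

def corrosion_resistance_coefficient(strength_aggressive_mpa, strength_water_mpa):
    """Return Ks, the ratio of flexural strengths (MPa) at the same test age."""
    return strength_aggressive_mpa / strength_water_mpa

def is_corrosion_resistant(ks, threshold=0.8):
    """Per the cited recommendations, the cement stone counts as corrosion resistant if Ks >= 0.8."""
    return ks >= threshold

# Hypothetical 12-month flexural strengths (MPa): (aggressive medium, water).
samples = {
    "ACC V-400 (hypothetical values)": (8.9, 8.2),
    "APC III-400 (hypothetical values)": (7.6, 7.4),
    "OPC III/A-400 (hypothetical values)": (4.1, 6.3),
}

for name, (s_aggressive, s_water) in samples.items():
    ks = corrosion_resistance_coefficient(s_aggressive, s_water)
    print(f"{name}: Ks = {ks:.2f}, corrosion resistant: {is_corrosion_resistant(ks)}")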
1,794.8
2018-01-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Platelets—Disarmed guardians in the fight against the plague Platelets represent the first line of defense upon injury, acting as a large army of anucleated cells that wall up damaged vessels to prevent blood loss and to protect the host against invading pathogens. At the same time platelets, via release of soluble factors, sound the alarm to recruit leukocytes and put the immune-defense troops on standby.1 The multiplicity of platelets, their sensitive nature, and their quick response to changes in the environment turn them into a very dangerous opponent for any invader. Therefore, enemies must find a way to outflank platelets to successfully complete their sinister missions. More than 6000 years ago, one such powerful invader, Yersinia pestis, emerged from enteric bacterial ancestors. Known as the Black Death, the plague caused by Y pestis spread quickly for centuries, killing millions. It is one of the deadliest diseases, and plague outbreaks represent the most notorious epidemics in history.2 Transmitted by fleas from rodent reservoirs, Y pestis represents a highly sophisticated pathogenic bacterium with an impressive repertoire of combat gear. It possesses a complex set of virulence determinants, including classical pathogen-associated molecular patterns, the Yersinia outer-membrane proteins (Yops), the broad-range protease Pla, and iron capture systems. They all play critical roles in the molecular strategies of Y pestis warfare to subvert the human immune system, allowing unrestricted bacterial dissemination and replication.2 Moreover, like other infamous invaders, including Chlamydia, Pseudomonas, Salmonella, Shigella, and Vibrio, Y pestis is equipped with a special weapon, the type III secretion system (T3SS), which is essential for its pathogenicity. The T3SS represents an ingenious protein nano-syringe to hijack eukaryotic cells via injection of virulence effectors directly into the host cytosol. Contact of the needle with a host cell triggers the T3SS to start secreting.3 However, the trigger and the exact way in which effectors enter the host remain largely a mystery. At the end of the day, however, all this fancy equipment becomes useless when the first line of defense can withstand the attack. Therefore, not only the immune system but also the hemostatic system needs to be outwitted to favor intracellular spread of the bacterium. In this issue, Palace et al elegantly demonstrate how Y pestis found a way to escape entrapment in thrombi (schematic overview in Figure 1).4 They found that the presence of Y pestis inhibits platelet aggregation in response to thrombin and renders thrombi unstable, leading to their disaggregation. The inhibitory effects on platelet aggregation were dependent on functional Pla protease, which is also responsible for the fibrinolytic properties of Y pestis.
This surface-exposed, transmembrane β-barrel protease exhibits a complex array of interactions with the hemostatic system: it favors fibrinolysis by direct plasminogen and urokinase activation while inactivating the serpins plasminogen activator inhibitor-1 and α2-antiplasmin as well as the thrombin-activatable fibrinolysis inhibitor.5 Bacterial escape from thrombi is further helped by the T3SS, which, in contrast to Pla, not only prevents platelet aggregation but also promotes destabilization and disaggregation of already formed thrombi.
Among the Yop effectors injected via the T3SS, YopH has similarities to eukaryotic protein tyrosine phosphatases, primarily targeting focal adhesion kinase and p130Cas, a docking protein that plays a central coordinating role for tyrosine kinase-based signalling related to cell adhesion.7 Following its injection into the host cell, YopH rapidly dephosphorylates these cell adhesion-promoting molecules that had become activated during the initial contact between the host cell and Yersinia. YopE acts as a GTPase-activating protein toward the RhoA family of GTPases (RhoA, Rac, and Cdc42),6 thus accounting for the cytoskeletal collapse observed in Yersinia-infected cells. Both proteins lead to disturbed adhesion and cytoskeleton rearrangements and therefore also contribute to the antiphagocytic activity of this pathogen. The authors demonstrate that YopH and YopE change platelet morphology with formation of elongated, but barely branched, lamellipodia, indicating that the T3SS destabilizes thrombi by interfering with platelet cytoskeleton remodelling, which is in line with findings from other cellular systems. In experiments with different Y pestis mutants, Palace et al discovered that platelet-trapped bacteria were primarily killed by neutrophils and not by platelets, which also bear antimicrobial capacity. They further showed that neutrophils were less efficient in killing bacteria when the bacteria possess T3SS activity. The authors identified that the T3SS enhances bacterial survival by enabling neutrophil extracellular trap (NET) formation, another host defence mechanism against tricky invaders. These insoluble NETs, consisting of expelled nuclear DNA and proteins, have one clear mission: to capture and kill bacteria.8 Platelets augment neutrophil NET formation, and NETs in turn are highly pro-thrombotic, highlighting the tight interplay between the hemostatic and the immune system.1,9,10 However, NETs also recruit and activate platelets by presenting histones to platelet toll-like receptors (TLRs).11,12 Platelet TLRs mediate platelet activation in response to pathogen interactions, which in turn fosters the formation of NETs, as direct interaction of platelets with neutrophils leads to neutrophil activation and induces migration and NET formation. However, Y pestis lipopolysaccharide manages to evade recognition via TLR4,13 which represents another witty way to deceive the host. (Figure 1. They are escaping! Y pestis interferes with platelet activation and diminishes thrombi, thereby escaping entrapment and killing by neutrophils.) This is the first demonstration that a bacterial T3SS can interfere with platelet function and thrombus stability. It will be exciting to learn whether other pathogens that also express a T3SS can diminish platelet responses as efficiently. Moreover, in vivo studies are warranted to fully unravel the consequences of these platelet-bacteria interactions and to translate the findings to the clinical situation. Understanding the underlying mechanisms of bacteria-mediated platelet dysregulation could ultimately help to understand and control hemostatic imbalance in septic patients.
CONFLICT OF INTEREST The author declares no conflict of interest.
1,990.8
2020-12-01T00:00:00.000
[ "Biology" ]